
STS 1

Module 012 – Biotechnology and
Biocolonialism
The module contains the following topics:
1. Biotechnology and Environmental Politics in Agriculture
2. Biocolonialism
Biotechnology
Although the concept of biotechnology generally brings to mind genetic engineering, it can
be conceived of very broadly. Definition (Biotechnology): Any use of biological organisms or
processes in industrial, medical, agricultural and environmental engineering.
• In this way, we can trace the history of biotechnology from the beginning of
scientific agriculture and fermentation at the end of the 19th century.
• Throughout the 20th century, there was both much hope for, and much
disappointment in, the development of biotechnology.
• By the last decades of the 20th century, biotech had become a major component of
the R&D programs of most developed nations.
Zymotechnology is the old term for the study of the processes of fermentation in yeast and
bacteria in the production of foods and beverages such as bread, cheese, tofu, beer, wine,
sake, natto, etc.
Of course, these practices go back to ancient times; however, in the 19th century, with the
rise of big industries, particularly in Britain and Germany, technoscientists began to isolate
the microorganisms involved and to study them.
With the techniques of scientific biology of the 19th century, it became possible to isolate
pure strains of the various yeasts and molds involved in these processes, so as to
standardize the mass production of these products.
In this regard, at the end of the 19th century, various industrial and governmental
laboratories and teaching institutions were established.
In the early part of the 20th century, technoscientists began to see zymotechnology as
included in the applied sciences, analogously to chemistry. They established institutions for
collecting microorganisms.
The concept of zymotechnology was broadened to a general concept of biological chemistry,
involving the use of biological molecules such as amino acids, proteins and enzymes in
industrial production.
The word ‘biotechnology’ was coined by Karl Ereky (1878–1952), in Hungary in 1919, to
describe general processes of converting raw materials into useful products, such as on
industrial farms.
In Britain, Chaim Weizmann (1874–1952) developed bacterial fermentation processes for
producing organic chemicals such as acetone, used in cordite propellants. During WWII, he
worked on synthetic rubber and high-octane fuel.
Engineering nature
During the interwar period, philosophers, sociologists and public intellectuals began
to reflect on the growing link between biology and technology.
They put forward the idea that biotechnology could be used to change human nature,
and by changing human nature to change society.
The Austrian Raoul Francé (1874–1943), for example, claimed that we could regard
life as a series of technical problems, for which living organisms acted as optimal
solutions.
In Britain, biotechnology was conceived of as a possible solution to the damages of
the industrial revolution.
Patrick Geddes (1854–1932), the Scottish biologist, divided the history of technology
into three stages: paleotechnic (1st industrial revolution), neotechnic (2nd industrial
revolution) and biotechnic (future industrial revolution).
Raoul Francé’s vision of harmonious engineering:
R. Francé, Plants as Inventors, 1920: “It was my thesis that we can conquer not only
by the destruction of disturbing influences, but by compensation in harmony with
the world. Only compensation and harmony can be the optimal solutions; for that
end the wheels of the world turn. To attain its aim, life: to overcome obstacles, the
organism — plant, animal, man, or unicellular body — shifts and changes. It swims,
flies, defends itself and invents a thousand new forms and apparatuses. If you follow
my thought, you will see where I am leading, what is the deepest meaning of the
biotechnical tokens. It portends a deliverance from many obstacles, a redemption, a
straining for the solution of many problems in harmony with the forces of the world.”
Institutionalizing the engineering of nature
After WWII, technoscientists began to institutionalize biology and biotechnology in
various ways; that is, to establish departments, institutes and ministries.
During the war, a number of countries had used biotechnological means to
supplement their shortages. These labs were now institutionalized.
Cybernetics and general systems theory began to explore the parallel structures of
machines and biological systems. That is, they began to explore the general
theoretical similarities between biological and technological systems.
At MIT there was already a department of biological engineering (1936). The first
department of biotechnology was founded at UCLA in 1944, and, in the 1950s-60s,
became widely respected for its work on man-machine interfaces.
The promise of a green technology
In the early Cold War period, biotechnology was considered as an alternative to a list
of earth-destroying technologies developed by the “military-industrial complex.” It
was hoped that it might solve major social problems, such as energy and food
shortages.
• Imitation rhizobia: There were projects to try to develop bacterial fertilizers
that could convert nitrogen to ammonia like the rhizobia bacteria in beans.
• Biogas and gasohol: In rural countries like China and India, there were
projects to convert biomass into fuel. In 1974, Brazil began a massive project
to convert sugar cane to gasohol.
• Single-cell protein: During WWII, the Germans grew single-cell (fungal)
protein for animal fodder. In the 1950s, the oil companies developed
processes for growing bacteria on oil. In 1968, the Japanese produced 110
tonnes of single-cell protein bacteria.
Early biotech policy: Japan
• Japan’s long history of the use of fermentation processes gave Japanese
technoscientists a broad conception of biotechnology.
• In the 1970s, Japan became a world leader in biotech policy.
o By the end of the 1960s there were serious pollution problems, and this
led to the idea that biotechnology could be used to make
environmentally sound technologies.
• In the 1970s, the Ministry of International Trade and Industry put special
emphasis on life sciences and biotechnology.
o White Paper on S&T, 1977: “Life Science, in particular, is for the study
of phenomena of life and biological functions that will be made useful
for industrial, medical, agricultural and environmental purposes, and
so this area of science is expected to set the pace for the next round of
technical progress.”
Early biotech policy: Germany and Britain
• In the 1960s the Germans also became concerned with environmental
protection (Umweltschutz) and began to put emphasis on a new mode of
development.
o Symposium of Industrial Microbiology, 1969: “A future aim should
therefore be to close the gaps by suitable training, to rise above
classical fermentation technology, and to build up a modern science of
biochemical-microbiological engineering.”
• In Britain, chemical engineering, the antibiotics industry and applied
microbiology developed as rapidly as in the U.S.
• In 1979, a government report outlined the country’s policy on biotechnology,
which it defined as “the application of biological organisms, systems or
processes to manufacturing and service industries.”
o The British generally followed the Japanese and German policies;
however, they put more emphasis on genetic engineering.
Contrary to its name, biotechnology is not a single technology. Rather, it is a group of
technologies that share two common characteristics: working with living cells and their
molecules, and having a wide range of practical uses that can improve our lives.
Biotechnology can be broadly defined as "using organisms or their products for commercial
purposes." As such, traditional biotechnology has been practiced since the beginning of
recorded history. It has been used to bake bread, brew alcoholic beverages, and breed
food crops or domestic animals (2). But recent developments in molecular biology have
given biotechnology new meaning, new prominence, and new potential. It is modern
biotechnology that has captured the attention of the public. Modern biotechnology can
have a dramatic effect on the world economy and society (3).
One example of modern biotechnology is genetic engineering. Genetic engineering is the
process of transferring individual genes between organisms or modifying the genes in an
organism to remove or add a desired trait or characteristic. Examples of genetic
engineering are described later in this document. Through genetic engineering, genetically
modified crops or organisms are formed. These GM crops or GMOs are used to produce
biotech-derived foods. It is this specific type of modern biotechnology, genetic engineering,
that seems to generate the most attention and concern by consumers and consumer
groups. What is interesting is that modern biotechnology is far more precise than
traditional forms of biotechnology and so is viewed by some as being far safer.
Biotechnology for the 21st century
Experts in the United States anticipate the world’s population in 2050 to be approximately 8.7
billion persons. The world’s population is growing, but its surface area is not. Compounding
the effects of population growth is the fact that most of the earth’s ideal farming land is
already being utilized. To avoid damaging environmentally sensitive areas, such as rain
forests, we need to increase crop yields for land currently in use. By increasing crop yields
through the use of biotechnology, the constant need to clear more land for growing food is
reduced.
Countries in Asia, Africa, and elsewhere are grappling with how to continue feeding a
growing population. They are also trying to benefit more from their existing resources.
Biotechnology holds the key to increasing the yield of staple crops by allowing farmers to
reap bigger harvests from currently cultivated land, while preserving the land’s ability to
support continued farming.
Malnutrition in underdeveloped countries is also being combated with biotechnology. The
Rockefeller Foundation is sponsoring research on “golden rice”, a crop designed to improve
nutrition in the developing world. Rice breeders are using biotechnology to build Vitamin A
into the rice. Vitamin A deficiency is a common problem in poor countries. A second phase
of the project will increase the iron content in rice to combat anemia, which is a widespread
problem among women and children in underdeveloped countries. Golden rice, expected to
be for sale in Asia in less than five years, will offer dramatic improvements in nutrition and
health for millions of people, at little additional cost to consumers.
Similar initiatives using genetic manipulation are aimed at making crops more productive
by reducing their dependence on pesticides, fertilizers and irrigation, or by increasing their
resistance to plant diseases (14).
Increased crop yield, greater flexibility in growing environments, less use of chemical
pesticides and improved nutritional content make agricultural biotechnology, quite
literally, the future of the world’s food supply.
Industrial Biotechnology
Industrial biotechnology applies the techniques of modern molecular biology to
improve the efficiency and reduce the environmental impacts of industrial
processes like textile, paper and pulp, and chemical manufacturing. For example,
industrial biotechnology companies develop biocatalysts, such as enzymes, to
synthesize chemicals. Enzymes are proteins produced by all organisms. Using
biotechnology, the desired enzyme can be manufactured in commercial quantities.
Commodity chemicals (e.g., polymer-grade acrylamide) and specialty chemicals can
be produced using biotech applications. Traditional chemical synthesis involves
large amounts of energy and often produces undesirable byproducts, such as HCl.
Using biocatalysts, the same chemicals can be produced more economically and in
a more environmentally friendly manner. An example would be the substitution of
protease in detergents for other cleaning compounds. Detergent proteases, which
remove protein impurities, are essential components of modern detergents. They are used
to break down protein, starch, and fatty acids present on items being washed.
Protease production results in a biomass that in turn yields a useful byproduct: an
organic fertilizer. Biotechnology is also used in the textile industry for the finishing
of fabrics and garments. Biotechnology also produces biotech-derived cotton that is
warmer and stronger, with improved dye uptake and retention, enhanced absorbency,
and wrinkle and shrink resistance.
Some agricultural crops, such as corn, can be used in place of petroleum to produce
chemicals. The crop’s sugar can be fermented to acid, which can then be used as an
intermediate to produce other chemical feedstocks for various products. It has been
projected that 30% of the world’s chemical and fuel needs could be supplied by such
renewable resources in the first half of the next century. It has been demonstrated,
at test scale, that biopulping reduces the electrical energy required for the wood-pulping
process by 30% (11).
Environmental Biotechnology
Environmental biotechnology is used in waste treatment and pollution
prevention. Environmental biotechnology can clean up many wastes more
efficiently than conventional methods and greatly reduce our dependence on
methods for land-based disposal.
Every organism ingests nutrients to live and produces by-products as a result.
Different organisms need different types of nutrients. Some bacteria thrive on the
chemical components of waste products. Environmental engineers use
bioremediation, the broadest application of environmental biotechnology, in two
basic ways. They introduce nutrients to stimulate the activity of bacteria already
present in the soil at a waste site, or add new bacteria to the soil. The bacteria
digest the waste at the site and turn it into harmless byproducts. After the bacteria
consume the waste materials, they die off or return to their normal population
levels in the environment.
Bioremediation is an area of increasing interest. Through application of
biotechnical methods, enzyme bioreactors are being developed that will pretreat
some industrial waste and food waste components and allow their removal through
the sewage system rather than through solid waste disposal mechanisms. Waste can
also be converted to biofuel to run generators. Microbes can be induced to produce
enzymes needed to convert plant and vegetable materials into building blocks for
biodegradable plastics (7).
In some cases, the byproducts of the pollution-fighting microorganisms are
themselves useful. For example, methane can be derived from a form of bacteria
that degrades sulfur liquor, a waste product of paper manufacturing. This methane
can then be used as a fuel or in other industrial processes.
Biocolonialism
Biocolonialism is, to put it most simply,
“the commandeering of knowledge and biological resources
from an indigenous people without compensation.”
Laurie Ann Whitt explained it as “if colonialism encompasses the interlocking array of
policies and practices (economic, social, political and legal) that a dominant culture can
draw on to maintain and extend its control over other peoples and lands, then
biocolonialism emphasizes the role of science policy … where valued genetic resources
and information are actively sought, ‘discovered’, and removed to the microworlds of
biotechnoscience. There they are legally transformed into the private intellectual property
of corporations, universities and individuals, rendered as commodities, and placed for sale
in genetic marketplaces such as the American Type Culture Collection.”
References and Supplementary Materials
Online Supplementary Reading Materials
1. Biotechnology and its Applications;
https://fbns.ncsu.edu/extension_program/documents/biotech_applications.pdf;
November 9, 2017
2. A History of Biotechnology; http://www.f.waseda.jp/sidoli/STS_Intro_10.pdf;
November 9, 2017
3. Biocolonialism and its Effects on Indigenous Populations;
http://nativeamerasure.leadr.msu.edu/2015/12/09/biocolonialism-and-its-effectson-indigenous-peoples/; November 9, 2017
Module 011 – Emerging Technology in Medicine
This module contains the following topics:
1. Genomic Sequencing and the Emerging Medical Practice
2. The Digitization of Health Records
Genomic sequencing
The development of massively parallel sequencing (or next-generation sequencing) has
facilitated a rapid implementation of genomic sequencing in clinical medicine. Genomic
sequencing (GS) is now an essential tool for evaluating rare disorders, identifying
therapeutic targets in neoplasms, and screening for prenatal aneuploidy. Emerging
applications, such as GS for preconception carrier screening and predisposition screening
in healthy individuals, are being explored in research settings and utilized by members of
the public eager to incorporate genomic information into their health management. The
rapid pace of adoption has created challenges for all stakeholders in clinical GS, from
standardizing variant interpretation approaches in clinical molecular laboratories to
ensuring that nongeneticist clinicians are prepared for new types of clinical information.
Clinical GS faces a pivotal moment, as the vast potential of new quantities and types of data
enables further clinical innovation while complicated implementation questions continue to
be resolved.
Current and emerging clinical applications of GS
Diagnostic sequencing
To date, the diagnosis of rare Mendelian disease has been the primary clinical
application of sequencing the genomes of individual patients. Thousands of
pathogenic mutations identified through GS have been reported in recent years, and
novel gene-disease associations are proliferating. Early reports on clinical GS
demonstrated that identification of a causative mutation through GS can help to
formulate a treatment plan and in other cases offer new opportunities for
reproductive planning, as in the first publication reporting a successful diagnosis via
GS, which resulted in effective management for a severe autoimmune illness in a
young boy. Diagnostic GS is indicated for the detection of diagnostic genetic variants
in patients with suspected monogenic disorders after known single-gene candidates
have been eliminated from consideration or when a multigene testing panel has not
yielded a diagnosis. The vast majority of diagnostic GS to date has been performed in
children. However, patients can be of any age and presentations of Mendelian
disorders in adulthood are probably underrecognized.
The breadth of possible results from GS requires that thorough counseling and
evaluation be performed before ordering GS to ensure proper interpretation of
genomic variants, as well as careful clinical contextualization of the results. This
process should include gathering detailed family history information, systematically
evaluating the patient's and/or family's phenotype, reviewing medical literature and
databases for possible overlap with known syndromes or implicated biochemical
pathways, and obtaining informed consent. Individuals who consent to clinical GS
should be aware that they may learn about disease risks that may also affect their
relatives.
Whereas many clinical molecular laboratories in academic medical centers and
commercial laboratories now offer exome sequencing, Baylor College of Medicine
and the University of California Los Angeles (UCLA) in the United States have
reported on the largest number of clinical sequencing cases and have estimated that
they find a causative mutation in 25% to 26% of cases overall, with lower diagnostic
rates for adults than for children. Of solved cases, a surprising percentage (4.6%)
appears to result from blended phenotypes of two separate Mendelian disorders, each
associated with distinct pathogenic variants. The combined impact of two distinct
Mendelian disease variants often leads to a hybrid phenotype that appears unique
and challenging to diagnose.
The application of GS to rare disease has understandably been of intense research
interest. In the United States, several National Institutes of Health (NIH) grant
programs, including the Clinical Sequencing Exploratory Research (CSER)
Consortium, the Centers for Mendelian Genomics, and the Undiagnosed Diseases
Program and Network, have been funded to investigate the application of GS to the
diagnosis of rare diseases. The scope of these efforts is broad and includes
establishing technical standards for GS and interpretative pipelines (i.e., variant
filtration algorithms and interpretation protocols), developing and implementing
reporting mechanisms, and evaluating the clinical, behavioral, legal, and ethical
impacts of GS on clinical practice.
Emerging application: preconception carrier screening
Although targeted carrier screening is well established (e.g., focused carrier
screening for conditions such as Gaucher, Tay-Sachs, and Canavan disease in
individuals of Ashkenazi Jewish descent), genomic technologies offer the opportunity
for broader, more comprehensive screening. Preconception screening for carrier
variants associated with rare, recessive disorders has been increasingly available in
recent years via targeted multiplex genotyping that screens for known mutations in
dozens of genes. These tests do not necessarily detect extremely rare or novel
genetic variants that an unaffected individual may carry, and therefore a “residual
risk” of being a carrier remains after negative testing.
Several companies now offer GS for preconception screening. GS affords the
opportunity to go beyond a selected subset of recessive disorders to evaluate and
report on genes associated with extremely rare recessive conditions. Preliminary
data from the MedSeq Project, which reports results on carrier variants in any gene
associated with known autosomal recessive disorders, suggest that approximately
90% of individuals in the general population are carriers for at least one recessive
disorder and that most carry two to four carrier variants. Due to imperfect coverage
of some genes and the low sensitivity of GS for certain types of genetic variation
(reviewed below), a negative result on GS does not eliminate the post-test
probability of being a carrier, though it generally improves upon the existing residual
risk of mutation panel-based approaches.
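To make the notion of residual risk concrete, here is a minimal Python sketch applying Bayes'
rule to a negative carrier screen. The 1-in-25 prior carrier frequency and 90% detection rate
are illustrative assumptions, not figures from the module.

    def residual_carrier_risk(prior, detection_rate):
        # Bayes' rule for the probability of still being a carrier after a
        # negative screen; assumes (hypothetically) no false positives.
        missed = prior * (1 - detection_rate)  # carrier whose variant was missed
        non_carrier = 1 - prior                # true non-carrier, tests negative
        return missed / (missed + non_carrier)

    # Assumed inputs for illustration: 1-in-25 prior carrier frequency,
    # 90% detection rate for the screened gene.
    print(residual_carrier_risk(1 / 25, 0.90))  # ~0.0041, about 1 in 240

The negative screen thus shrinks the carrier probability roughly tenfold here, but never to
zero, which is exactly the "residual risk" the text describes.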
Discovering that reproductive partners are each carriers for a severe recessive
condition enables preimplantation genetic diagnosis (PGD). PGD allows for testing of
embryos for a specific genetic variant (or variants, in the case of recessive diseases).
Embryos lacking the targeted genetic variants are then implanted, preventing
transmission of the genetic disease to offspring. PGD is a complicated and
controversial topic both technically and ethically, and has been reviewed thoroughly
elsewhere.
Emerging application: genetic predisposition screening
Several research studies and personal genomics companies have begun to report a
broad range of predispositional Mendelian variants to individuals. The general goal
of these initiatives is to provide genetically informed predictions of disease risk and
medication safety and efficacy, thereby enabling participants to make personalized
decisions for disease prevention. Although preliminary data has not demonstrated
significant risk of harm, benefits have not been systematically evaluated, and many
experts and professional organizations call for caution before adopting GS for
generally healthy individuals. To this end, the PeopleSeq (Personal Genome
Sequencing Outcomes) Consortium has been formed as the first systematic, large-scale longitudinal study of outcomes of predisposition sequencing and will seek to
collect short- and long-term data on participants in GS projects.
Monogenic variants for Mendelian syndromes that confer a significant risk for a
condition, such as the breast cancer susceptibility gene 1 and 2 (BRCA1/2) variants
associated with breast and ovarian cancer, may be revealed in GS of persons without
a personal or family history. In current clinical practice, these findings are discovered
secondary to diagnostic sequencing and are routinely reported for selected genes
believed to be clinically actionable. However, in predisposition screening, these
variants are a primary finding. Using strict variant-filtering criteria and all genes
associated with human disease, the MedSeq Project identified a monogenic variant in
21 out of 100 participants. Identification of these variants has enabled MedSeq
physicians to perform deep phenotyping (targeted medical examination and
assessment for manifestations of the associated conditions) of asymptomatic
individuals with monogenic variants.
GS can identify common genetic variants that have been associated with risk for
complex phenotypes, such as coronary artery disease and type 2 diabetes, in
genome-wide association studies (GWAS). Millions of individuals have undergone
genotyping for such variants via direct-to-consumer services such as 23andMe,
which have utilized chip arrays that identify genotypes at specific single-nucleotide
polymorphisms (SNPs). Because many variants identified in GWAS reside outside of
exons (protein-coding regions of the genome), such SNPs would not be detectable by
exome sequencing. Therefore, with regard to utilizing GS to identify these variants,
whole-genome sequencing, instead of exome sequencing, is required. Despite the
availability of relevant data from GS and the broad reporting of common disease
risks by personal genomic testing companies, there is limited evidence for the clinical
validity or utility of risk assessments from common genetic variation. GWAS variants
account for a small proportion of variability in the risk of multifactorial phenotypes,
known as the “missing heritability” problem (i.e., other as yet unidentified genetic
factors or interactions between genetic variants must contribute to the heritability of
diseases). Additionally, risk-assessment methodologies to combine multiple variants
remain in flux, and reclassification of individuals from higher risk to average or lower
risk is expected to occur in most phenotypes as additional data accrue. Nevertheless,
some studies have shown that individuals make positive lifestyle changes and
become more engaged in their care after receiving such risk predictions.
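As an illustration of how multiple variants can be combined, one widely used approach is an
additive polygenic score: risk-allele counts weighted by per-allele effect sizes (log odds
ratios) from GWAS. The sketch below uses hypothetical SNP identifiers and weights.

    import math

    # Hypothetical per-allele log odds ratios; these SNP names and weights
    # are placeholders, not real GWAS associations.
    WEIGHTS = {"rs0000001": 0.12, "rs0000002": 0.08, "rs0000003": -0.05}

    def polygenic_score(allele_counts):
        # Additive score: sum of (risk-allele count x log odds ratio) per SNP.
        return sum(WEIGHTS[snp] * n for snp, n in allele_counts.items())

    person = {"rs0000001": 2, "rs0000002": 1, "rs0000003": 0}  # 0-2 copies each
    score = polygenic_score(person)
    print(score, math.exp(score))  # log-odds shift (~0.32) and odds ratio (~1.38)

Because the weights change as GWAS data accrue, the same person's score, and hence their
risk category, can be reclassified over time, as noted above.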
Utilizing known associations between genetic variants and blood group and antigen
subtypes, GS can be used to predict clinically relevant hematological data, such as
blood group and platelet antigen types. Antigen subgroup status has potential
relevance for individuals who require multiple transfusions secondary to a chronic
medical condition, as well as for identifying potential donors who have rare blood
group antigens. The analytical algorithms have been developed and validated as part
of the MedSeq Project.
Finally, GS is a powerful tool to screen for multiple pharmacogenomic variants
simultaneously, creating the opportunity for personalized medication selection and
dosing regimens based on an individual's genotype or haplotype (group of genes
inherited together). Pharmacogenomic data offer the opportunity for querying
genomic data at the point of care as patients are prescribed medications for the first
time and new associations among drugs, genetic variants, and dosing requirements
or side effect risks are discovered and validated. The topic of pharmacogenomics will
be explored more comprehensively in two companion articles in this special issue.
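A minimal sketch of such a point-of-care pharmacogenomic query follows. The
CYP2C19/clopidogrel pairing is a well-known example, but the lookup table here is
deliberately simplified and the function is hypothetical, not any particular system's API.

    # Simplified CYP2C19 phenotype table; real guidelines (e.g., CPIC)
    # cover many more star alleles and clinical nuances.
    CYP2C19_PHENOTYPE = {
        ("*1", "*1"): "normal metabolizer",
        ("*1", "*2"): "intermediate metabolizer",
        ("*2", "*2"): "poor metabolizer",
    }

    def clopidogrel_alert(diplotype):
        # Sort alleles so ("*2", "*1") and ("*1", "*2") hit the same entry.
        phenotype = CYP2C19_PHENOTYPE.get(tuple(sorted(diplotype)), "unknown")
        if phenotype in ("intermediate metabolizer", "poor metabolizer"):
            return "alert: " + phenotype + "; consider alternative therapy"
        return "no alert: " + phenotype

    print(clopidogrel_alert(("*2", "*1")))  # alert: intermediate metabolizer; ...

The design point is that the genotype is stored once and queried each time a relevant drug
is prescribed, rather than ordering a new test per prescription.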
The Digitization of Health Records
In medicine, the first information technology wave to hit the art and science of
healing was the digitization of medical files, now known as electronic health records
(EHRs). The data contained in EHRs in combination with other sources have the
potential to transform medical practice by leveraging data, technologies, and
healthcare delivery to improve the overall efficiency and quality of care at a
reasonable cost.
The widespread adoption of EHRs has generated large sets of data. The creative
merging of datasets collected from patients and physicians could be a viable avenue
to strengthen healthcare delivery. These massive datasets are currently understood
as a byproduct of medical practice instead of utilizable assets that could play pivotal
roles in patient care. Currently, for instance, most EHRs collect quantitative,
qualitative, and transactional data, all of which could be collated, analyzed, and
presented using sophisticated procedures and techniques that are now available to
make use of text-based documents containing disparate and unstructured data.
The purposeful use of data is not a mystery to medical practice. Since their humble
beginnings, evidence-based undertakings have been grounded in the principle that
questions answered through the scientific method were superior to anecdotes,
expert opinion, panels, and testimonials. In terms of acknowledging the value of data
and information in guiding a rational and logical decision-making process, medicine
has been at the forefront of adapting to modernity. However, physicians, nurses, and
healthcare facilities have been slow to embrace the newest methods to fully use the
data contained in EHRs. Let us examine four hidden benefits of EHRs.
EHRs may augment the attainment of new knowledge through the automated and
systematic analysis of unstructured data by applying advanced computational
techniques that enable comprehensive data collection. The acquisition of structured
data to answer emerging clinical questions is onerous. Narrow and automatic
searches within EHRs using natural language processing (NLP) may be a less
expensive alternative. In fact, a 2011 study suggests that the automated identification
of postoperative complications within electronic medical records using NLP is far
superior to patient safety indicators linked to discharge coding. EHRs may assist in
the dissemination of new knowledge. As clinical practice evolves to incorporate the
latest evidence and facts guiding medical care, physicians encounter the daunting
task of sorting through large volumes of information to craft adequate and safe
treatment options for patients with diverse chronic illnesses. Tinkering with EHRs
can generate on-screen dashboards that can guide medical care decisions. Physicians
could receive pop-up messages informing them about clinical presentation,
diagnostic work, and therapeutic choices made by clinicians facing similar case
profiles. It appears that data-driven healthcare decision-support tools aid the
standardization of care and result in cost savings.
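The following toy sketch conveys the flavor of such NLP-based case finding: scanning note
sentences for complication terms while screening out simple negations. Production systems,
including the one in the cited 2011 study, use far more sophisticated language processing.

    import re

    COMPLICATIONS = ["pneumonia", "sepsis", "deep vein thrombosis"]  # sample terms
    # Crude negation cue: a negating word before the term within the sentence.
    NEGATION = re.compile(r"\b(no|denies|without|negative for)\b[^.]*$")

    def flag_note(note):
        # Return complication terms mentioned affirmatively in a clinical note.
        hits = []
        for sentence in note.lower().split("."):
            for term in COMPLICATIONS:
                if term in sentence and not NEGATION.search(sentence.split(term)[0]):
                    hits.append(term)
        return hits

    print(flag_note("Postop day 3: chest x-ray consistent with pneumonia. "
                    "No evidence of sepsis."))  # -> ['pneumonia']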
EHRs may help to blend medical practice with personalized clinical initiatives by
facilitating opportunities to utilize analytical methods and techniques that can
holistically integrate biology-based interdisciplinary fields of study (e.g.,
metabolomics, phenomics) with EHR datasets to streamline genomics research and
create a rich culture of cooperation.
EHRs may empower patients to play more active roles in caring for their health by
directly delivering information to these individuals. Patients not only can know
specific details about their health parameters and illnesses but also can present
medical records to other healthcare professionals when needed. The benefits of this
approach are twofold: information can be readily accessed without filling out forms
or having to interview patients with long questionnaires, and traditional health-related data can be linked to other details associated with personal data, such as diet,
education, exercise, habits, hobbies, income, and military service.
There will surely be problems along the way. Current EHR systems and health
information exchange platforms are diverse and fragmented and have limited
interoperability. Privacy issues will very likely emerge as a concern, especially for the
protection of confidential information. Ultimately, interconnections between
technology and medicine are inevitable, which explains why medical informatics
plays a central role in healthcare.
An article by Palgon further describes the benefits of electronic health records:
In the early 1960s, when most hospitals and health systems were steeped in paper, a
handful of highly progressive healthcare providers embarked on a journey to
implement a computer-based patient record. Envisioning the benefits of electronic
health records to reduce, or even eliminate, paper in medical record keeping for
healthcare providers of all sizes and specialties, their bold steps forever changed the
way clinicians gather, document and review patient information.
In 1972, the first electronic medical record system emerged, only to be shunned by
physicians due to its high cost. It was mainly used by government hospitals and a few
forward-thinking institutions.
Fast forward to 2017, and the benefits of electronic health records (EHR) are widely
recognized among healthcare providers. In fact, 98 percent of all hospitals now
demonstrate meaningful use and have adopted an EHR. On the ambulatory side,
the global EHR market is expected to see 5.8% growth by 2021, growth that is fueled by
government mandates, the need to reduce costs and growing consumer demands to
enhance healthcare quality and delivery.
Growing Value for Providers, Patients
Despite growing use of electronic health records, the healthcare industry is nowhere
close to realizing the full benefits of the digitized record. While most providers
acknowledge the benefits and vision for the future, the growing pains created by
varying standards and the challenges of data exchange due to different electronic
formats remain a hurdle.
EHRs deliver advantages to healthcare providers and patients by enabling better
collection, storage and sharing of health information for the purpose of coordinated
care delivery. Electronic data storage and retrieval reduces the risk of lost patient
records and test results and offers more secure access than their paper predecessors,
which easily could be left on a desk and viewed by anyone walking by. This is an
important advantage and brings records into better alignment with HIPAA compliance
requirements.
Another benefit of EHR technology is that it supports greater accuracy in records, as
healthcare providers are prompted to complete required data fields, use standard
terminology and check predefined boxes, not to mention the fact that the EHR has
purged the patient record of illegible physician notes. One specific benefit of
electronic health record technology is the speed with which clinicians can gain access to
critical test results and progress notes, eliminating delays in treatment caused by
missing data. Finally, electronic health records support enhanced patient safety by
collecting more complete data and providing secure access throughout the care
continuum.
On the other hand, electronic health records are not without their own challenges.
One of the biggest and perhaps most visible risks of electronic health records is data
security, as brought to light by the recent WannaCry ransomware cyber attack which
affected 16 National Health Service hospitals in the UK. This massive hack effectively
took the hospitals offline, forcing suspension of services. In this attack, as in previous
ones, cyber criminals disrupted care and business operations by making personal
and clinical data contained in the electronic health records unavailable at the point of
care.
The negative impacts of cyber attacks are two-fold: risk to patient care and safety
and risk to patients’ financial health as other personal information is exposed to
unauthorized individuals with malicious intent. While data is potentially more secure
inside the four walls of the health system, the ability to share data with those who need
it to deliver care beyond those walls also carries the risk of unintended information
exchange on a mass scale. Therefore, health systems need a comprehensive approach
to data security that includes all aspects of their operations, including the EHR.
Efficiency Supports Better Care
The benefits of electronic medical records are spread between healthcare providers
and patients and support the ultimate goal of effective exchange of data
(information) between providers caring for the same patient. In addition, electronic
health records can help physicians practice more efficiently by saving time with
electronic prescription, lab and imaging ordering and faster test result transactions.
The end goal is improved patient care and outcomes through better health and
disease management.
Enabling data integration into a single electronic medical record or single view, EHRs
make data accessible to the right person at the right time in the care delivery
process. But on a broader scale, health systems, like Accountable Care Organizations
(ACO) and highly integrated delivery systems that embrace EHR technology, are able
to integrate, aggregate and harmonize data across specialties, multiple EHRs in acute
and ambulatory settings, and financial, operational and claims data sources. This
allows providers to effectively collaborate and establish appropriate metrics to
support the overarching goal of coordinated, high quality care.
Hidden Data Provides Insight
While the benefits of electronic health records to store, manage and exchange patient
information are enormous, the advantages of using the EHR as a data source to
provide insight beyond individual patient care are immeasurable. However, a recent
survey showed that we still have a way to go. The survey noted that only 31 percent
of healthcare providers use their EHR analytics capabilities while another third
utilized a combination of the EHR capabilities and an outside vendor to analyze data.
Demonstrating the underutilization of this important aspect of the EHR, 11 percent
of respondents said they didn’t analyze EHR data at all.
For the greater patient (or population) good, health systems more than ever need to
understand and utilize the comprehensive set of data that the EHR can provide,
especially in combination with other EHRs and other data sources. ACOs know that
this integrated approach to data management and exchange can improve care. They
understand the benefits of using the collective data in electronic health records to
analyze specific patient populations, distinguish risk factors, identify trends in
disease treatment and predict future outcomes, all of which improve patient care,
outcomes and the cost of care.
To unlock this hidden benefit of the EHR, healthcare organizations need a flexible
and scalable platform that allows management and integration of complex data
across and, in some cases, beyond the enterprise. In many organizations, internal IT
resources do not have the time or ability to manage the increasing volume and
integration complexities of new and expanding sources of data. Choosing cloud-based technologies and a trusted partner to supplement internal IT resources helps
create a comprehensive data set in a secure and compliant manner.
The success of data management can be measured by the quality of the business
decisions and outcomes that are derived from the data. This requires moving beyond
simple data collection to a strategy and tools that are designed to improve data
integration, data exchange, and overall data management along with care and
business outcomes. A good place to start is analyzing the data that exists in the EHR
and leveraging that data for continual improvement.
References and Supplementary Materials
Online Supplementary Reading Materials
1. Genomic sequencing in clinical practice: applications, challenges, and opportunities;
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5067147/; November 9, 2017
2. Future Opportunities for Genome Sequencing and Beyond: A Planning Workshop for
the National Human Genome Research Institute;
https://www.genome.gov/27559219/future-opportunities-for-genome-sequencingand-beyond/; November 9, 2017
3. Deploying whole genome sequencing in clinical practice and public health: Meeting
the challenge one bin at a time;
https://www.nature.com/gim/journal/v13/n6/full/gim9201182a.html; November 9,
2017
4. Electronic health records: beyond the digitization of medical files;
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3752637/; November 9, 2017
5. Digitizing Delivers: The Benefits of Electronic Health Records;
https://www.liaison.com/blog/2017/05/19/digitizing-delivers-benefits-electronichealth-records-ehr/; November 9, 2017
6. The Digitization of Medical Records; https://rctom.hbs.org/submission/thedigitization-of-medical-records/; November 9, 2017
7. Digitizing Healthcare: Why Having an Electronic Medical Record Matters;
http://possiblehealth.org/blog/electronic-medical-record/; November 9, 2017
Module 010 – Sociology of Science
This module contains the following topic:
1. Sociology of Science
The Sociology of Science
Merton’s “The Sociology of Science: Theoretical and Empirical Investigations” states
the following regarding the Sociology of Science:
Science, like any other activity involving social collaboration, is subject to shifting fortunes.
Difficult as the notion may appear to those reared in a culture that grants science a
prominent if not a commanding place in the scheme of things, it is evident that science is
not immune from attack, restraint and repression. Writing a little while ago, Veblen could
observe that the faith of western culture in science was unbounded, unquestioned,
unrivaled. The revolt from science which then appeared so improbable as to concern only
the timid academician who would ponder all contingencies, however remote, has now been
forced upon the attention of scientist and layman alike. Local contagions of anti-intellectualism threaten to become epidemic.
Science and Society
Attacks on the integrity of science have led scientists to recognize their dependence
on particular types of social structure. An institution under attack must reexamine its
foundations, restate its objectives, seek out its rationale. Crisis invites self-appraisal.
Scientists have been jarred into a state of acute self-consciousness: consciousness of
self as an integral element of society with corresponding obligations and interests.
Scientists are compelled to vindicate the ways of science to man. They have come full
circle to the point of the reemergence of science in the modern world. Centuries ago,
when the institution of science could claim little independent warrant for social
support, natural philosophers were likewise led to justify science as a means to the
culturally validated ends of economic utility and the glorification of God. The pursuit
of science was then no self-evident value. With the unending flow of achievement,
however, the instrumental was transformed into the terminal, the means into the
end. Thus fortified, the scientist came to regard himself as independent of society and
to consider science as a self-validating enterprise which was in society but not of it.
Science refers to a variety of distinct though interrelated items. It is commonly used
to denote (1) a set of characteristic methods by means of which knowledge is
certified; (2) a stock of accumulated knowledge stemming from the application of
these methods; (3) a set of cultural values and mores governing the activities termed
scientific; or (4) any combination of the foregoing. For examining sociology of
science, we shall consider the mores with which the methods of science are hedged
about.

The Ethos of Science
The ethos of science is that affectively toned complex of values and norms
which is held to be binding on the man of science. These norms are expressed
in the form of prescriptions, proscriptions, preferences, and permissions.
They are legitimatized in terms of institutional values. These imperatives,
transmitted by precept and example and re-enforced by sanctions, are in
varying degrees internalized by the scientist, thus fashioning his scientific
conscience or, if one prefers the latter-day phrase, his superego. Although the
ethos of science has not been codified, it can be inferred from the moral
consensus of scientists as expressed in use and wont, in countless writings on
the scientific spirit and in moral indignation directed toward contraventions
of the ethos.
The institutional goal of science is the extension of certified knowledge. The
technical methods employed toward this end provide the relevant definition
of knowledge: empirically confirmed and logically consistent statements of
regularities (which are, in effect, predictions). The institutional imperatives
(mores) derive from the goal and the methods. The entire structure of
technical and moral norms implements the final objective. The technical norm
of empirical evidence, adequate and reliable, is a prerequisite for sustained
true prediction; the technical norm of logical consistency, a prerequisite for
systematic and valid prediction. The mores of science possess a methodologic
rationale but they are binding, not only because they are procedurally
efficient, but because they are believed right and good. They are moral as well
as technical prescriptions. Four sets of institutional imperatives—
universalism, communism, disinterestedness, organized skepticism—are
taken to comprise the ethos of modern science.
o Universalism
Universalism finds immediate expression in the canon that truth-claims, whatever their source, are to be subjected to pre-established
impersonal criteria: consonant with observation and with previously
confirmed knowledge. The acceptance or rejection of claims entering
the lists of science is not to depend on the personal or social attributes
of their protagonist; his race, nationality, religion, class, and personal
qualities are as such irrelevant. Objectivity precludes particularism.
The circumstance that scientifically verified formulations refer in that
specific sense to objective sequences and correlations militates against
all efforts to impose particularistic criteria of validity.
However, the institution of science is part of a larger social structure
with which it is not always integrated. When the larger culture opposes
universalism, the ethos of science is subjected to serious strain.
Ethnocentrism is not compatible with universalism. Particularly in
times of international conflict, when the dominant definition of the
situation is such as to emphasize national loyalties, the man of science
is subjected to the conflicting imperatives of scientific universalism
and of ethnocentric particularism. The structure of the situation in
which he finds himself determines the social role that is called into
play. The man of science may be converted into a man of war—and act
accordingly.
However inadequately it may be put into practice, the ethos of
democracy includes universalism as a dominant guiding principle.
Democratization is tantamount to the progressive elimination of
restraints upon the exercise and development of socially valued
capacities. Impersonal criteria of accomplishment and not fixation of
status characterize the open democratic society. Insofar as such
restraints do persist, they are viewed as obstacles in the path of full
democratization. Thus, insofar as laissez-faire democracy permits the
accumulation of differential advantages for certain segments of the
population, differentials that are not bound up with demonstrated
differences in capacity, the democratic process leads to increasing
regulation by political authority. Under changing conditions, new
technical forms of organization must be introduced to preserve and
extend equality of opportunity. The political apparatus may be
required to put democratic values into practice and to maintain
universalistic standards.
o Communism
"Communism," in the nontechnical and extended sense of common
ownership of goods, is a second integral element of the scientific ethos.
The substantive findings of science are a product of social
collaboration and are assigned to the community. They constitute a
common heritage in which the equity of the individual producer is
severely limited. An eponymous law or theory does not enter into the
exclusive possession of the discoverer and his heirs, nor do the mores
bestow upon them special rights of use and disposition. Property rights
in science are whittled down to a bare minimum by the rationale of the
scientific ethic. The scientist's claim to "his" intellectual "property" is
limited to that of recognition and esteem which, if the institution
functions with a modicum of efficiency, is roughly commensurate with
the significance of the increments brought to the common fund of
knowledge. Eponymy—for example, the Copernican system, Boyle's
law—is thus at once a mnemonic and a commemorative device.
Given such institutional emphasis upon recognition and esteem as the
sole property right of the scientist in his discoveries, the concern with
scientific priority becomes a "normal" response. Those controversies
over priority which punctuate the history of modern science are
generated by the institutional accent on originality. There issues
competitive cooperation. The products of competition are communized
and esteem accrues to the producer. Nations take up claims to priority,
and fresh entries into the commonwealth of science are tagged with
the names of nationals: witness the controversy raging over the rival
claims of Newton and Leibniz to the differential calculus. But all this
does not challenge the status of scientific knowledge as common
property.
The communism of the scientific ethos is incompatible with the
definition of technology as "private property" in a capitalistic economy.
Current writings on the "frustration of science" reflect this conflict.
Patents proclaim exclusive rights of use and, often, nonuse. The
suppression of invention denies the rationale of scientific production
and diffusion, as may be seen from the court's decision in the case of
U.S. v. American Bell Telephone Co.: "The inventor is one who has
discovered something of value. It is his absolute property. He may
withhold the knowledge of it from the public." Responses to this
conflict-situation have varied. As a defensive measure, some scientists
have come to patent their work to ensure its being made available for
public use. Einstein, Millikan, Compton, Langmuir have taken out
patents. Scientists have been urged to become promoters of new
economic enterprises. Others seek to resolve the conflict by advocating
socialism. These proposals—both those which demand economic
returns for scientific discoveries and those which demand a change in
the social system to let science get on with the job—reflect
discrepancies in the conception of intellectual property.
o Disinterestedness
Science, as is the case with the professions in general, includes
disinterestedness as a basic institutional element; care must be taken not to
confuse institutional and motivational levels of analysis. A passion for knowledge, idle curiosity,
altruistic concern with the benefit to humanity, and a host of other
special motives have been attributed to the scientist. The quest for
distinctive motives appears to have been misdirected. It is rather a
distinctive pattern of institutional control of a wide range of motives
which characterizes the behavior of scientists. For once the institution
enjoins disinterested activity, it is to the interest of scientists to
conform on pain of sanctions and, insofar as the norm has been
internalized, on pain of psychological conflict.
It is probable that the reputability of science and its lofty ethical status
in the estimate of the layman is in no small measure due to
technological achievements. Every new technology bears witness to
the integrity of the scientist. Science realizes its claims. However, its
authority can be and is appropriated for interested purposes, precisely
because the laity is often in no position to distinguish spurious from
genuine claims to such authority. The presumably scientific
pronouncements of totalitarian spokesmen on race or economy or
history are for the uninstructed laity of the same order as newspaper
reports of an expanding universe or wave mechanics. In both
instances, they cannot be checked by the man-in-the-street and in both
instances, they may run counter to common sense. If anything, the
myths will seem more plausible and are certainly more
comprehensible to the general public than accredited scientific
theories, since they are closer to common-sense experience and to
cultural bias. Partly as a result of scientific achievements, therefore, the
population at large becomes susceptible to new mysticisms expressed
in apparently scientific terms. The borrowed authority of science
bestows prestige on the unscientific doctrine.
o Organized Skepticism
As we have seen in the preceding chapter, organized skepticism is
variously interrelated with the other elements of the scientific ethos. It
is both a methodological and an institutional mandate. The temporary
suspension of judgment and the detached scrutiny of beliefs in terms of
empirical and logical criteria have periodically involved science in
conflict with other institutions. Science which asks questions of fact,
including potentialities, concerning every aspect of nature and society
may come into conflict with other attitudes toward these same data
which have been crystallized and often ritualized by other institutions.
The scientific investigator does not preserve the cleavage between the
sacred and the profane, between that which requires uncritical respect
and that which can be objectively analyzed.
As we have noted, this appears to be the source of revolts against the
so-called intrusion of science into other spheres. Such resistance on the
part of organized religion has become less significant as compared
with that of economic and political groups. The opposition may exist
quite apart from the introduction of specific scientific discoveries
which appear to invalidate particular dogmas of church, economy, or
state. It is rather a diffuse, frequently vague, apprehension that
skepticism threatens the current distribution of power. Conflict
becomes accentuated whenever science extends its research to new
areas toward which there are institutionalized attitudes or whenever
other institutions extend their control over science. In modern
totalitarian society, anti-rationalism and the centralization of
institutional control both serve to limit the scope provided for
scientific activity.
References and Supplementary Materials
Online Supplementary Reading Materials
1. The Sociology of Science: Theoretical and Empirical Investigations;
http://www.collier.sts.vt.edu/5424/pdfs/merton_1973.pdf; November 9, 2017
2. Sociology of Science;
http://students.ecs.soton.ac.uk/mwra1g13/msc/comp6037/pdfs/Sociology_of_Science.pdf;
November 9, 2017
Module 009 – Ethical, Social and Policy Issues of
Nanotechnology
This module contains the following topics:
1. Ethical and Social Issues and Implications of Nanotechnology
2. Policy Issues and Implications for Nanotechnology Commercialization
3. Risk Management of Nanotechnology
Ethical and social implications of nanotechnology
As we design systems on a nanoscale, we develop the capability to redesign the structure of
all materials, natural and synthetic, along with rethinking the new possibilities of the
reconstruction of any and all materials. Such increases in design power present significant
social and ethical questions. To support sustainable, ethical, and economic
nanotechnological development, it is imperative that we educate all nanotechnology
stakeholders about the short-term and long-term benefits, limitations, and risks of
nanotechnology. Nanotechnology, like its predecessor technologies, will have an impact on
all areas. For example, in healthcare it is very likely that nanotechnology in the area of
medicine will include automated diagnosis. This in turn will translate into fewer patients
requiring physical evaluation, less time needed to make a diagnosis, less human error, and
wider access to health care facilities. And, with nanomedicines, if the average human life
span increases, the larger number of elderly persons requiring medical attention will likely
result in increased health expenditures. It is essential for nanotechnology stakeholders to
strive to achieve four social objectives: (1) developing a strong understanding of local and
global forces and issues that affect people and societies, (2) guiding local/global societies to
appropriate uses of technology, (3) alerting societies to technological risks and failures,
and (4) developing informed and ethical personal decision-making and leadership to solve
problems in a technological world. Advances in nanotechnology also present numerous
challenges and risks in health and environmental areas. Nanotechnology risk assessment
methods and protocols need to be developed and implemented by the regulatory bodies.
Eric Drexler, author of Engines of Creation, has identified four challenges in dealing with
the development, impact, and effects of nanotechnology on society.
(1) The Challenge of Technological Development (control over the structure of
matter)
(2) The Challenge of Technological Foresight (sense of the lower bounds of future
possibilities)
(3) The Challenge of Credibility and Understanding (clearer understanding of what
these technological possibilities are)
(4) The Challenge of Formulating Public Policy (formulating policies based on
understanding)
Lewenstein identified the following social and ethical issues in nanotechnology:
The list of social, ethical, legal, and cultural implications includes such issues as privacy,
avoiding a ‘nano-divide’, unintended consequences, university/industry relationships and
potential conflicts of interest, research ethics, and so on. It is widely acknowledged that,
precisely because the applications of nanotechnology are not yet clear, neither are the
ethical issues clear. And yet, many argue, the nano community must begin to address these
issues now, before they overwhelm nanotechnology and derail potential benefits.
Read the full document of his report here: http://www.hyle.org/journal/issues/11-1/lewenstein.pdf
In addition, Wolfson had this to say:
Nanotechnology has an enormous potential to do good in society. However, like many
technologies, its introduction and implementation raise serious societal and ethical issues,
both for the scientists who are developing this technology and for the members of the
public who may benefit from or be exposed to it. The purpose of this paper is to explore
some of these societal and ethical issues. The purpose is not to take policy positions or to
suggest solutions but merely to raise some of the important social issues. In this way, it is
hoped that this paper can form the basis of a discussion on the public policy ramifications
of nanotechnology, from which positions and solutions can begin to emerge.
Many of the social and ethical issues are the same as those that affect a wide range of other
high technologies. That is, while the technology is new, the issues it raises have been faced
before by researchers and society. We need to remind ourselves about the lessons we have
already learned about social and ethical issues that were raised by biotechnology (such as
from regulatory failures in gene therapy), from the development of nuclear technologies,
and from computer technologies.
The Risk Management Model in Nanotechnology
Goudarzi et al. have suggested a 10-step model for nanotechnology risk management in
related projects and believe that these considerations could considerably control the
hazardous effects of the materials in workplaces and the environment.
Step 1: A basic knowledge of the work is essential for doing an adequate assessment.
Therefore, workplace personnel who have extensive knowledge of the field should always
be involved.
Consultation between managers and employees benefits the assessment and will help in
providing information about the substances used, how the work is performed, exposure to
nanomaterials and commitment to quality control.
Project managers might do the assessments themselves when working in a small
workplace, or it might be necessary to establish a team in a larger workplace. The
assessment team should have abilities to understand the information in the protocols and
labels, inspect the conditions of work and forecast potential problems. They also should
communicate efficiently with employees and possible stakeholders for making valid
conclusions about exposures and risks and finally report the findings accurately.
Step 2: To be able to make a thorough risk assessment, divide the work into sections,
subsections and tasks or process-units according to Work Breakdown Structure (WBS).
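To make Step 2 concrete, the following minimal Python sketch shows one way such a work
breakdown might be represented in code. The sections, subsections and task names are
hypothetical examples, not part of the model itself.

# Illustrative only: a Step 2 work breakdown as a nested mapping of
# sections -> subsections -> tasks. All names below are hypothetical.
wbs = {
    "Synthesis lab": {
        "Nanoparticle production": ["weigh precursors", "run reactor", "collect powder"],
        "Quality control": ["sample product", "run particle-size analysis"],
    },
    "Packaging area": {
        "Handling": ["transfer powder", "seal containers"],
    },
}

# Each task listed here becomes one unit of assessment in Steps 3-9.
for section, subsections in wbs.items():
    for subsection, tasks in subsections.items():
        for task in tasks:
            print(f"{section} / {subsection} / {task}")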
Step 3: Identify all nanoparticles that are, or will be, used or produced in every work unit
and process. A nanoparticle might be produced in the form of a powder, liquid, gel, vapor,
dust, mist or fume in the workplace.
Step 4: Identify the type of nanoparticle. Nanoparticles can be found in several forms, from
relatively safe such as engineered insoluble nanoparticles in a matrix to more hazardous
forms such as free nanoparticles.
Step 5: The supplier should provide information about the nanomaterials. However, for
most nanomaterials, Material Safety Data Sheets (MSDS) are not available, so it will be
necessary to obtain adequate information from other sources such as textbooks, provided
standards, technical reference sources, scientific papers, reports, trade journals, electronic
online databases or experience from a previous use of similar substances or processes.
Step 6: How are hazardous nanoparticles released into the work area? Are persons exposed
to hazardous nanoparticles through respiration, skin, ingestion or eye contact, or is there a
possibility of accidental injection into the body? A ‘walk through’ inspection will provide
information about each of the work units. It is important to talk to the employees at each
location and ensure that all persons that could be exposed to nanomaterials are covered. If
a new assignment, process or work unit is being planned but not yet in operation,
evaluation of the relevant work process, plan or design is needed.
• Is there nanoparticle exposure?
• How much and how long are the personnel exposed?
• Is the exposure intermittent or continuous?
• Is the exposure frequent?
• What kind of control measures could be used or proposed?
• Are the existing controls sufficient?
• Are there any risks related to the storage and transport of nanomaterials?
Step 7: A significant risk involves serious health effects to people in the workplace, for
instance by inhalation of nanoparticles or working with highly toxic nanoparticles (e.g.
nano-based anticancer drugs). Consider the nature and severity of the hazard and the
degree of exposure of the people involved in the process. For summarizing the evaluation
process, four decisions could be made (a minimal decision-logic sketch in code follows the
list):
Decision 1: There are no significant risks at the moment and they are not likely to
increase in the future. Executions: Go to step 9 and end the current assessment, but
review the assessment if the situation changes.
Decision 2: There are significant risks but they have already been effectively
controlled. There might be a possible increase in the future. Executions: Maintain
control procedures and minimize chances of higher exposure occurring. Establish
additional control procedures (see step 8) if a high-risk event occurs despite
previous precautions. Review assessment steps if the situation changes.
Decision 3: There are significant risks present and they are not adequately
controlled. Executions: Determine and implement actions for preventing or
controlling exposure immediately. Investigate and implement a possible stop in the
production. Begin reviewing if more controls are required. Evaluate the exposures
again if the upgraded control procedures are used. Establish employee-training
programs.
Decision 4: There are uncertainties about the risks involved – not enough
information or uncertainty about the degree of exposure. Executions: Find more
information or conduct a more detailed assessment. Request specialist advice if
necessary and decide using the suitable actions presented in Decisions 1, 2 or 3.
Apply good practice to minimize exposure in the meantime.
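The branching among these four decisions reduces to two questions: is the risk significant,
and is it adequately controlled? The minimal Python sketch below maps those questions to
the decisions described above; the function name, wording and structure are illustrative
assumptions, not part of the published model.

# A minimal sketch of the Step 7 decision logic. None stands for
# "not enough information about significance". Illustrative only.
from typing import Optional

def step7_decision(significant: Optional[bool], controlled: bool = False) -> str:
    if significant is None:
        return "Decision 4: uncertain - gather information, seek specialist advice"
    if not significant:
        return "Decision 1: no significant risk - end assessment, review on change"
    if controlled:
        return "Decision 2: controlled - maintain procedures, minimize exposure"
    return "Decision 3: uncontrolled - act immediately, consider halting production"

# Example: free nanoparticles handled without adequate controls
print(step7_decision(significant=True, controlled=False))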
Step 8: If the assessment shows that there are significant risks to health, then besides the
executions mentioned in step 7, the following further actions should be taken if needed:
• Complementary employee training
• More precise monitoring procedures
• Health surveillance system
• First aid and emergency facilities
Step 9: The record should be concise and should include a description of the work unit,
name of assessor or assessment team personnel, date, time and a list of hazardous
nanomaterials used or produced in the project unit. It should also include a summary of the
process containing a description of normal operations in the project unit, with a note of any
changes observed or anticipated which might affect accuracy of assessment; risk
identification, including possible routes of exposure; procedure for assessment of
exposure; the degree of exposure and existing control procedures. The above-mentioned
record should be saved either on paper or electronically in a permanent format.
Step 10: Review and revision of the assessment is required if:
• There are significant changes in project products, work, materials, processes or
control procedures
• Nanoparticle-related intoxication is reported
• Inadequate control procedures are reported
• New evidence about risks of nanoparticles emerges from recent publications
In these circumstances using a new or improved control method becomes reasonable.
Read the full report through this link: http://www.hypothesisjournal.com/wp-content/uploads/2014/01/HJ330.pdf
References and Supplementary Materials
Online Supplementary Reading Materials
1. Ethical and social implications of nanotechnology;
http://www.qscience.com/doi/pdf/10.5339/qproc.2015.elc2014.57; November 9,
2017
2. What Counts as a ‘Social and Ethical Issue’ in Nanotechnology?;
http://www.hyle.org/journal/issues/11-1/lewenstein.pdf; November 9, 2017
3. Social and Ethical Issues in Nanotechnology: Lessons from Biotechnology and other
High Technologies;
https://www.blankrome.com/siteFiles/Publications/5B17637895210814D3535F12
76C22B89.pdf; November 9, 2017
4. Nanotechnology risks: A 10-step risk management model in nanotechnology projects;
http://www.hypothesisjournal.com/wp-content/uploads/2014/01/HJ330.pdf;
November 9, 2017
Module 008 – Nanotechnology
commercialization and convergence with other
technologies
This module contains the following topics:
1. Commercialization of Nanotechnology
2. Convergence of Nanoscience with other technologies
Commercialization of Nanotechnology
Discoveries in nanotechnology have continued to increase as technologies have advanced
and commercialization strategies have become better implemented. In 2013, for example,
the number of patents issued under the nanotechnology classification, as defined by the
U.S. Patent and Trademark Office (USPTO), was 1,130. In fact, the last eight years
(2006-2013) showed steady growth in the number of patents issued, with approximately
four times as many issued in 2013 as in 2006.
A variety of industries manufacture products incorporating nanotechnology including
biomedical devices, home appliances, batteries, industrial lubricants, computers, cameras,
food and beverage, clothing, cosmetics, fashion and manufacturing. To appropriately
measure nanotechnology’s commercial successes, it is essential to first define what it is
exactly. The National Nanotechnology Initiative defines nanotechnology as “the
understanding and control of matter at dimensions between approximately 1 and 100
nanometers, where unique phenomena enable novel applications.” The United States
Patent and Trademark Office (USPTO) applies a similar definition of nanotechnology
(Patent Classification 977) and further specifies more than 250 subclassifications including
nanostructures with biological material component (subclass 702), carbon nanotubes
(subclass 742), atomic force probe (subclass 863) and specified use of nanostructures for
medical, immunological, body treatment, or diagnosis (subclass 904), gene therapy
(subclass 916), dental (subclass 919) and carrying or transporting (subclass 963).
Commercialization Strategies
There are two basic commercialization strategies for nanotechnology: product
innovation or process innovation.
• Product Innovation
Changes and advances in nanotechnology have resulted in commercial
successes in a variety of different industries. In most instances,
nanotechnology is used to facilitate a product innovation, often in response
to anticipated and/or actual demand for specific product characteristics. For
example, “a tennis racket made from a composite material which includes
CNTs to improve its mechanical properties is an attempt to create a
differentiated and improved product to gain market share” or a nanofiber
that, when used in conjunction with other materials, yields stronger and
lighter bicycle frames. In these examples, much as in real life, nanotechnology
is used to augment current technologies to enhance products and/or
processes that already exist. Indeed, considered in this light, it often is
easier to identify nanotechnology as a process rather than a product.
Nanotechnology provides the means by which a desired characteristic can be
achieved within a product market that already exists. In such cases, the use of
nanotechnology becomes almost an incremental decision – one that allows
for the achievement of a requisite characteristic already valued by the
market. The numerous other characteristics also included in the technology
also are valued and thus the potential for royalty stacking comes into play.
• Process Innovation
By contrast, process innovations are more embedded, but potentially more
radical. These tend to be much broader, focusing on developing new
technologies and thus new markets. For example, consider a hypothetical
self-repairing nanomachine in which demand is driven by the entirety of the
product.
Funding
Research and development spending and commercialization costs represent
significant barriers to entry for firms wanting to enter the nanotechnology
market. Development and manufacturing of equipment can be cost
prohibitive for firms with limited access to capital. Further, it also is
necessary to develop and maintain sufficient levels of human capital. As with
most other industries, access to capital markets for funding is vital to success.
For nanotechnology, the single largest share of investment funds comes from
corporations. In 2010, worldwide corporate funding amounted to
approximately $9 billion while the second largest share of investment funds,
federal funding, was just over $1 billion.
Nanosciences and its Convergence with other Technologies
Nanosciences and nanotechnologies are a rapidly growing field that already generates
many hopes within the scientific and technological community of future discoveries,
developments, and solutions to a number of societal problems. Simultaneously, fears of
possible negative and uncontrolled impacts on humans and the environment are also
developing steadily. In this paper, we propose a typology to classify these fears, which are
shown to be associated with images, metaphors, and symbols deeply rooted in the Western
religious tradition. However, we think that it is necessary, and urgent, to discern between
the hype, notably due to the media coverage of the field, and reality. Strangely enough, the
idea that there might be a problem with nanotechnologies first emerged amongst the
community of experts and promoters of this field, at a time when the general public was
not even aware of the existence/emergence of a nanoworld. Is it only initially a media
phenomenon?
Whatever the answer, we may have the opportunity, perhaps for the first time in the
history of science and technology, to consider simultaneously the developments of new
scientific knowledge and engineering capabilities with its impact on society and the
environment and, thus, to take in time appropriate decisions ‘to keep everything under
control’. In a potentially controversial context, political decision-makers have the
responsibility, with the active participation of scientists and engineers, to initiate,
stimulate, and organize the public debate. Their objective should be to clarify the actual
issues at stake, putting aside purely imaginary ones which rather belong to science fiction,
as well as to identify methodologies to tackle these issues and to implement regulations,
where necessary, to ‘master’ the development of nanotechnologies.
The difficulty of this task stems from the wide variety of (nano)objects, topics, and issues
associated with the expressions ‘nanosciences’ and ‘nanotechnologies’. Indeed,
nanoparticles, molecular robots, radiofrequency identification devices, etc., raise different
questions and call for specific solutions. The possible toxicity of nanoparticles, which may
be released massively in the environment, poses a different problem than the wide
commercial diffusion of RFIDs, which may endanger the privacy of personal information,
even in a democratic society.
The convergence of bio, nano, and information technology
Nature has seen the evolution of extremely intelligent and complex adaptive systems
to drive the biological processes found in everyday life. For example, a cell can fuse
information-rich genetic processes with nanometer-scale sensors and actuators,
becoming one of the most efficient autonomous molecular systems. These basic
processes that occur at the molecular level lead us toward a compelling engineering
approach: the fusion of biotechnology, nanotechnology, and information science.
Nanotechnology has enabled the production of new materials and molecular -scale
devices. Biotechnological advancements have allowed scientists to physically
manipulate genetic pathways or engineer strains of proteins to possess novel
functionalities. Informatics has served as the catalyst for organizing and
understanding vast knowledge from a system point of view.
The fusion of biotechnology, nanotechnology, and information science will culminate
in system architectures that can rival those that have taken millions of years to come
to fruition. With this comes the hope of achieving a fundamental comprehension of
how to manipulate and control cells on the molecular level. It will also enable us to
question just how much further we can push the envelope of human engineering.
The Institute for Cell Mimetic Space Exploration (CMISE) is one of four NASA
University Research, Engineering and Technology Institutes for developing
technologies on the nanometer scale for the study of biological phenomena. With
these unique nano modalities, the Center for Cell Control (CCC), a National Institutes
of Health Nanomedicine Development Center, will apply engineering feedback
control schemes to direct information-rich biological cells towards therapeutic use.
Nature's Model for Bio, Nano, and Information Fusion: the Living Cell
The cell is the most fundamental biological unit, a magnificent, self-organized
system that performs the complex processes of life. A cell consists of a large
number of functional macromolecules, such as the millions of proteins with
sizes ranging from one to tens of nanometers. Self-organization of these
nanometer-scale machineries confined within a fluidic capsule forms a live
cell at a size scale of only a few micrometers.
Cellular activities are manifestations of the intra- and intermolecular
transports and motions of cellular molecules. These activities result in a
comprehensive set of functionalities: to sense (monitor its biological
surroundings and responses), to decide (evaluate incoming signals and
trigger an optimal response through information analysis), and to actuate
(modify its nanometer-scale surrounding to make it more suitable for
survival). The cell's responses to the internal and external stimulations
through organized molecular activities, governed by a complex information
processing network, render it an ideal model for a bio, nano, and information
fusion system.
References and Supplementary Materials
Online Supplementary Reading Materials
1. Commercialization of Nanotechnology;
http://www.micronomics.com/articles/Nanotechnology_Commercialization.pdf;
November 8, 2017
2. New Initiatives to Accelerate the Commercialization of Nanotechnology;
https://www.nano.gov/May2015Forum; November 8, 2017
3. New Initiatives to Accelerate the Commercialization of Nanotechnology;
https://obamawhitehouse.archives.gov/blog/2015/05/20/new-initiatives-accelerate-commercialization-nanotechnology; November 8, 2017
4. Nanosciences and its Convergence with other Technologies;
http://www.hyle.org/journal/issues/11-1/petit-laurent.pdf; November 8, 2017
5. The convergence of bio, nano, and information technology;
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2953859/; November 8, 2017
6. Science and technology convergence: with emphasis for nanotechnology-inspired
convergence; https://nsf.gov/crssprgm/nano/MCR_160714f_JNR_Perspectives_Convergence_Science_by%20Bainbridge_and_Roco_19p.pdf ;
November 8, 2017
7. Patenting Natural Products after Myriad;
http://jolt.law.harvard.edu/assets/articlePDFs/v30/30HarvJLTech569.pdf;
November 8, 2017
8. Convergence and Multidisciplinarity in Nanotechnology;
https://arrow.dit.ie/cgi/viewcontent.cgi?referer=https://au.search.yahoo.com/&http
sredir=1&article=1106&context=buschmarart; November 8, 2017
Module 007 – The Impact of Nanotechnology in
Business and Economy
This module contains the following topics:
1. Impact of Nanotechnology on Business
2. Effects of Nanotechnology on Economy
Impact of Nanotechnology on Business
The fundamental characteristics of nanotechnology have led analysts to suggest that it may
constitute a basis for long-term productivity and economic growth. It may also help to
address pressing national and global challenges in areas such as health care, energy, water
and climate change – you'll find plenty of examples here in our Nanowerk pages.
While sites like Nanowerk and others focus more on traditional science and technology
issues that highlight the broad-based nature of nanotechnology, others like consultants and
analysts go wild in predicting huge markets (hey guys – are we still on for a $3 trillion-dollar market anytime soon?) and contributions to entrepreneurship and job creation.
Between those two areas, however, it is hard to obtain reliable information in terms of the
implications for nanotechnology companies and nanotech business in general. But it is
exactly this information that governments would need to determine how they should
structure their innovation policies.
– Nanotechnology is an enabling technology (or set of technologies) and the company case
studies show that this feature is a major reason for their entry into the field.
Nanotechnology allows for both the improvement of existing and the development of
completely new products and processes, and sometimes new services as well. Companies
often experiment with multiple applications at the same time, many of which are still in the
research phase.
– Nanotechnology may best be described as a "science-based and demand-driven field".
While all of the case study companies undertake in-house R&D, collaborations with
universities and "star scientists" are also important sources of innovation and knowledge,
especially for small companies. Larger companies in relatively mature nanotechnology
subareas appear to focus more on applications which are driven by market demand and
tend to collaborate with a broader range of organizations to leverage their in-house R&D.
– Nanotechnology mainly affects the R&D and production activities of the case study
companies. Many of the smaller companies focus exclusively on nanotechnology, while the
larger ones typically blend nanotechnology with a range of other technologies. In the larger
companies it is thus difficult to single out the share of nanotechnology in total labor costs,
R&D expenditure, production costs, capital spending and sales.
– The larger companies in the sample have typically been involved in nanotechnology for
many years and seem well placed to assimilate nanotechnology due to their established
critical mass in R&D and production, their ability to acquire and operate expensive
instrumentation and to access and use external knowledge. The relative strength of larger
companies in the early phases of nanotechnology developments runs counter to what the
traditional model of company dynamics and technology lifecycles would predict where
smaller, younger companies are generally considered more innovative.
– The case studies illustrate that nanotechnology is a complex field owing to its dependency
on various scientific disciplines, research/engineering approaches and advanced
instrumentation. Further, many nanotechnology sub-areas are in an early, immature, phase
of development. These features of nanotechnology can often create barriers to entry
especially for smaller companies which have limited human and other resources. They also
contribute to the poor process scalability of nanoscale engineering during the transition
from R&D to pilot and industrial scale production.
– Difficulties arise for recruiting human resources, especially for R&D and production
activities. The need for employees, or so-called gatekeepers, who combine specialist and
general knowledge (knowledge integration) and can manage interdisciplinary teams is also
a challenge.
– Challenges to funding R&D and related activities are often mentioned, especially by
business start-ups. The poor process scalability of R&D, which raises costs and prolongs
new product development times, can make nanotechnology less attractive to investors.
Uncertain regulatory environments and public perceptions of nanotechnology's
environmental, health and safety (EHS) risks can also influence R&D funding.
– The novelty of nanotechnology, the established interests of stakeholders, and difficulties
that companies can have in communicating the value proposition of applications to
potential customers (e.g. other companies), makes their entry and positioning in value
chains harder. The challenge is even greater for smaller companies that experiment with
multiple applications and have to monitor many different industries and business
environments.
– Intellectual property rights (IPR) may become an issue as commercialization progresses
and nanotechnology matures as there is already a very wide range of patent claims, and the
possible formation of patent thickets (interrelated and overlapping patents), which could
contribute to barriers to entry for companies.
– The potential for overreaction to both actual and perceived EHS uncertainties and risks,
combined with regulatory uncertainties, complicates the business environment for
companies. Global harmonization of future EHS regulations is considered important.
A similar project was conducted by dandolopartners with their study of business’
understanding of and attitudes towards nanotechnology.
The report contains findings from a component of that research base – in-depth interviews
with 15 representatives from the business community. Businesses interviewed ranged
from small businesses to multinational companies, industry associations and local
government.
Key Findings
1. Companies are generally aware of nanotechnology and positive about its
potential benefits.
2. Overall, businesses have few concerns about nanotechnology, but are wary of
unknown health and safety side-effects.
3. For most companies, nanotechnology is a ‘watching brief’: they believe its impact
will not be felt in the short term, except perhaps in ICT (Information and
Communications Technology) and electronics. It is seen as offering a particularly
strong competitive advantage for companies operating in highly competitive and
mature markets.
4. Companies see themselves predominantly as users of nanotechnology,
rather than developers of nanotechnology.
5. Companies believe there is a clear role for government to support
nanotechnology development.
The full details of the study are in the report that can be accessed through this link:
https://industry.gov.au/industry/IndustrySectors/nanotechnology/Publications/Docume
nts/Nanotechnologyandthebusinesscommunity2005.pdf
Effects of Nanotechnology on Economy
A recent review article in Environmental Health ("Opportunities and challenges of
nanotechnology in the green economy") examines opportunities and practical challenges
that nanotechnology applications pose in addressing the guiding principles for a green
economy.
The authors provide examples of the potential for nanotechnology applications to address
social and environmental challenges, particularly in energy production and storage (thus
reducing pressure on raw materials), in clean-up technologies, and in fostering
sustainable manufactured products. The areas covered include:
• nanomaterials for energy conversion (photovoltaics, fuel cells, hydrogen storage
and transportation)
• nanomaterials for energy storage
• nanomaterials for water clean-up technologies
• nanomaterials for the construction industry
These solutions may offer opportunities to reduce pressure on raw materials by trading on
renewable energy, to improve power delivery systems to be more reliable, efficient and
safe, and to use unconventional water sources or nano-enabled construction products,
therefore providing better ecosystem and livelihood conditions.
Conflicting with this positive message is the growing body of research that raises questions
about the potentially negative effects of engineered nanoparticles on human health and the
environment. This area includes the actual processes of manufacturing nanomaterials and
the environmental footprint they create, in absolute terms and in comparison with existing
industrial manufacturing processes.
Consequently, the review aims to critically assess the impact that green nanotechnology
may have on the health and safety of workers involved in this innovative sector and
proposes action strategies for the management of emerging occupational risks.
The authors propose action strategies for the assessment, management and
communication of risks, aimed at the precautionary adoption of preventive measures,
including full lifecycle assessment of nanomaterials, education and training of employees,
collective and personal protective equipment, and health surveillance programs to protect
the health and safety of nano-workers.
Concluding, the scientists emphasize that green nanotechnology should not only provide
green solutions, but should also 'become green' in terms of the attention paid to
occupational safety and health. In this context, a full democratic discussion among
experts should be pursued to carefully balance the benefits of green nanotechnology and
the potential costs for the society, particularly in terms of environmental, public and
occupational health. This careful consideration will maximize environmental and societal
benefits, health gains and cost savings and will increase the likelihood of further
investment and sustainable development of this promising technological field.
References and Supplementary Materials
Online Supplementary Reading Materials
1. The Strategic Impact of Nanotechnology on the Future of Business and Economics ;
http://www.globalfuturist.com/dr-james-canton/insights-and-futureforecasts/stratigic-impact-of-nanotechnology-on-business-and-economics.html;
November 7, 2017
2. Nanotechnology business – The impact of nanotechnology on companies;
https://www.nanowerk.com/spotlight/spotid=19620.php; November 7, 2017
3. Nanotechnology and business opportunities: scenarios as awareness instrument;
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.93.5222&rep=rep1&type
=pdf; November 7, 2017
4. Nanotechnology and the business community;
https://industry.gov.au/industry/IndustrySectors/nanotechnology/Publications/Doc
uments/Nanotechnologyandthebusinesscommunity2005.pdf; November 7, 2017
5. Nanotechnology in the ‘green’ economy – opportunities and risks;
https://www.nanowerk.com/spotlight/spotid=38141.php; November 7, 2017
6. Assessing the Economic Impact of Nanotechnology: The Role of Nanomanufacturing;
http://www.internano.org/node/580; November 7, 2017
7. Social and economic aspects of nanotechnology;
http://www.softmachines.org/wordpress/?cat=5; November 7, 2017
8. The Social and Economic Impacts of Nanotechnologies: A Literature Review;
https://industry.gov.au/industry/IndustrySectors/nanotechnology/Publications/Doc
uments/SocialandEconomicImpacts_LiteratureReview.pdf; November 7, 2017
Module 006 – Technology in Various Fields and
the Internet
This module contains the following topics:
1. Technology and Economy
2. Technology in Education, Communication, and Mass Media
3. The Internet Domination
4. Internet Regulation and Legislation
Technology and Economy, Education, Communication and Mass Media
Technology and Economy
Technical progress is defined as new and better ways of doing things and new
techniques for using scarce resources more productively. An improved technology
yields greater output from the same quantity of resources. It involves two activities:
process innovation and product innovation.
There is no sharp distinction between process innovation and product innovation.
Process innovation receives more emphasis because much of the literature concerns
the effects of technical change on productivity, or new ways of satisfying existing
wants, rather than the satisfaction of new wants.
Producing a new technology involves two processes: invention and innovation.
Invention entails the conception of a basic idea. This is the product of laboratory
scientists. Innovation is the application of that idea to something directly useful to
humankind. This is the work of engineers. Innovation also provides cheaper and more
efficient ways to make existing goods.
According to Joseph Schumpeter, technical progress is partly technological and
partly economic in nature. Inventions are the emergence of new scientific or
technological ideas that may be part of a random, exogenous process. An innovation
is an economic process that occurs as a response to perceived profit opportunities,
through an act of foresight of the capitalist entrepreneurs, who create or realize
these opportunities through innovations.
Technology is a complex set of knowledge, ideas and methods and is likely to be the
result of a variety of different activities, both intentional and accidental.
Technological progress is a gradual process consisting of a sequence of small
increments lying along a continuous path.
For example, a generator and electric lights were demonstrated in 1876, yet it was not
until six years later that Thomas Edison opened the first commercial generator to power
electric lights in the Wall Street district of New York. Only in the 1930s, 60 years later,
did the Rural Electrification Act provide the financing to bring electric power to most
rural areas of the United States.
It seems that the new idea spreads slowly initially, then it begins to be applied more
often, gradually attaining widespread acceptance and adoption; and finally it
reaches 100% diffusion as the last potential users are won over.
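This slow-start, accelerating, then saturating pattern is the classic S-curve of diffusion,
commonly modeled with a logistic function. A short Python sketch follows; the growth rate
r and midpoint year t0 are illustrative assumptions, not figures from the text.

# A minimal sketch of S-shaped technology diffusion using the
# standard logistic curve. Parameter values are illustrative only.
import math

def adoption(t: float, r: float = 0.25, t0: float = 25.0) -> float:
    """Fraction of potential users who have adopted by year t."""
    return 1.0 / (1.0 + math.exp(-r * (t - t0)))

for year in (0, 10, 25, 40, 60):
    print(f"year {year:2d}: {adoption(year):5.1%} adoption")
# Output climbs from roughly 0.2% through 50% at the midpoint to
# nearly 100%, mirroring the adoption path described above.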
While the growth path of technology is continuous, it does not generally exhibit a
constant slope or growth rate; technology can grow rapidly, stagnate, or even
decline. The path may take sudden sharp turns.
Technology is partially nonrival in nature. If one person uses an idea or method, that
does not prevent another from using it. Thus the marginal cost of using a particular
form of technology is zero, meaning that competitive market forces will tend to
drive the price of existing technology toward zero.
Creativity and innovation will tend to be very low if nonrival ideas can be freely used
by anyone, because the creators of the new ideas then get no reward from their
creative efforts.
New ideas may be excludable. Patent laws seek to give the creator of an idea the right
to use the product or process exclusively for a specified number of years.
For example, the Coca-Cola Company has kept its formula secret for over 100 years;
its idea is protected by the complexity of a formula that no one has been able to
reproduce exactly.
Some growth economists describe technology as path-dependent. The ability to
create new technologies depends on the level of technology already accumulated. It
means that previous technologies are often difficult to abandon.
Often, technology is not excludable. If old knowledge is not available, then others
cannot create new knowledge. Thus, patent laws set limits on the length of time that
a patent remains in effect.
The formal recognition of intellectual property rights is likely to facilitate the spread
of technology. Patents and copyrights permit the owners of intellectual property to
sell and rent their rights to others.
As long as the price for the use of the idea exceeds the possible loss of monopoly
profit, the owner of the idea should be willing to let others use the idea.
If a certain idea can be productively used elsewhere in the economy, others should
be willing to pay for the right to use the idea.
Technology in Education
Every day, many students are spending countless hours immersed in popular
technologies—such as Facebook or MySpace, World of Warcraft, or Sim City—which
at first glance may seem like a waste of time, and brain cells. But these genres of
technologies—Social Networking, Digital Gaming, and Simulations—deserve a
second, deeper, look at what’s actually going on.
Market research data indicates that many a normal, middle-aged adult uses these
technologies with frequency. The fact is, one can be 17, 35, or 60, and when one
begins to engage with them and observe what’s really going on, one can begin to see
that these technologies are more than just entertainment. These technologies are
already demonstrating how they impact the way we think, learn, and interact—and
they are also demonstrating the tremendous potential they have in these areas as
well. The emergence of social networking technologies and the evolution of digital
games have helped shape the new ways in which people are communicating,
collaborating, operating, and forming social constructs. In fact, recent research is
showing us that these technologies are shaping the way we think, work, and live.
This is especially true of our youngest generations – those arriving at classroom
doors, soon to be leaving them and entering the workforce and society-at-large.
Our newest generation – currently in K-12 – is demonstrating for us the impact of
having developed under the digital wave. These youths have been completely
normalized by digital technologies—it is a fully integrated aspect of their lives
(Green & Hannon, 2007). Many students in this group are using new media and
technologies to create new things in new ways, learn new things in new ways, and
communicate in new ways with new people— behaviors that have become
hardwired in their ways of thinking and operating in the world. Green and Hannon
give an excellent example of this, “Children are establishing a relationship to
knowledge gathering which is alien to their parents and teachers” (2007, p. 38)
Nearly all major institutions, such as business, industry, medicine, science and
government, have harnessed aspects of these technologies for decades. Games
and simulations have been a key component of training doctors and military
personnel, but even businesses like PricewaterhouseCoopers used a game about a
mining company in outer space to teach its employees about derivatives. Although
that may seem a bit “off the wall,” the fact is major corporations, the Department of
Defense, and the medical community would not use these tools if they were not
highly effective.
Although these examples are mainly centered on training purposes, there are
deeper educational benefits to digital simulations and games. Yet educational
institutions have been reluctant to embrace these technologies. Likewise, where
schools have often shied away from giving students an online identity in digital
networking platforms to increase opportunities for learning, professional
organizations are leveraging networking technologies to increase collaboration,
knowledge-sharing, and production amongst their employees. Traditionally,
education has been impeded by the security and other potential dangers of
employing social networking technologies. These concerns should not be ignored;
however, neither should these tools due to these concerns. Advances in these
technologies continue to afford us new ways to manage the potential dangers.
Simulations, digital gaming, and social networking technologies have all definitely
suffered the same public relations problems that all new technologies do. However,
there are countless examples of these technologies demonstrating their educational
value to other industries, confirming the powerful learning opportunities and
advantages they afford. It is our position that these technologies are safe, valuable
tools schools must take seriously.
Of course, changing instructional approaches is no easy task, particularly when
technology is involved. Adopting and integrating technology-based instructional
strategies has a long history of challenges, but with it has come a great
understanding of how to achieve success with them. In the contents to follow, we
will discuss:
• the background and affordances of Simulations, Digital Games, and Social
Networking;
• the cognitive implications of these technologies;
• specific challenges with using these tools in the classroom, as well as
strategies for overcoming these challenges in order to achieve successful
learning experiences; and
• the future of these technologies and their impact on learning and teaching.
Read more about technology in education through the link to this paper:
http://education.mit.edu/wp-content/uploads/2015/01/GamesSimsSocNets_EdArcade.pdf
Technology in Communication
Communication has been one of the deepest needs of the human race throughout
recorded history. It is essential to forming social unions, to educating the young, and
to expressing a myriad of emotions and needs. Good communication is central to a
civilized society.
The various communication disciplines in engineering have the purpose of providing
technological aids to human communication. One could view the smoke signals and
drum rolls of primitive societies as being technological aids to communication, but
communication technology as we view it today became important with telegraphy,
then telephony, then video, then computer communication, and today the amazing
mixture of all of these in inexpensive, small portable devices.
Initially these technologies were developed as separate networks and were viewed
as having little in common. As these networks grew, however, the fact that all parts of
a given network had to work together, coupled with the fact that different
components were developed at different times using different design methodologies,
caused an increased focus on the underlying principles and architectural
understanding required for continued system evolution.
This need for basic principles was probably best understood at American Telephone
and Telegraph (AT&T) where Bell Laboratories was created as the research and
development arm of AT&T. The Math center at Bell Labs became the predominant
center for communication research in the world, and held that position until quite
recently. The central core of the principles of communication technology was
developed at that center.
Internet Regulation and Legislation
Internet Law, or Cyberlaw as it is sometimes called, refers to the legal issues related to
the use of the Internet. It is less a distinct field of law than a conglomeration of intellectual
property law, contract law, privacy laws, and many other fields, and how they pertain to
the use of the Internet.
Unique Nature of Cyberlaw
If there can be laws that could govern the Internet, then such laws will require a
unique structure to grapple with the international and ethereal nature of the web.
Many argue the Internet is not actually “regulable” at all, while others argue that not
only can it be regulated but substantial bodies of law already exist. Since the
Internet is not geographically bound, national laws cannot apply globally. A few
international agreements exist, but some have argued that the Internet should be
allowed to self-regulate as its own "nation."
Internet Regulation
Aside from blatant censorship of the Internet in nations like China, Saudi Arabia, or
Iran, there are four primary modes of regulation of the internet: Laws, Architecture,
Norms, and Markets.
1. Laws are the most obvious form of regulation. As various states, countries,
and international groups attempt to grapple with issues raised by the use of
the Internet, they normally effect their policies through the implementation
of laws. Such regulations govern areas like gambling, child pornography, and
fraud. Of course, the shortcoming of laws is their limited geographical
scope. After all, should internet sites hosted in foreign countries but available
globally have to comply with varying, and sometimes conflicting, laws in
every corner of the globe?
2. Architecture refers to how information literally can and cannot be
transmitted across the Internet. This can refer to everything from internet
filtering software, to firewalls, to encryption programs, and even the very
basic structure of internet transmission protocols, like TCP/IP. In many ways,
this is the most fundamental form of Internet regulation, and all other areas
of Cyberlaw must relate to or rely upon it in some fashion since it is, quite
literally, how the Internet is made.
3. Norms refer to the ways in which people interact with one another. Just as
social norms govern what is and is not appropriate in regular society, norms
also affect behavior across the Internet. In other words, while laws may fail
to regulate certain activities allowed by the architecture of the internet,
social norms may allow the users to control such conduct. For example, many
online forums allow users to moderate comments made by other users.
Comments found to be offensive or off topic can be flagged and removed.
This is a form of norm regulation.
4. Similar to norm regulation is market regulation. Market regulation
controls patterns of conduct on the internet through the traditional economic
principles of supply and demand. If something is unpopular, it will lack a
demand and eventually fail. On the other hand, if there is too much supply,
then competitors will eventually have to find ways to differentiate
themselves or become obscured by the competition. This helps to prevent
predatory conduct, drive innovation, and forces websites to self-regulate in
order to retain customers and remain viable.
Net Neutrality
Another major area of interest in Internet Law is net neutrality. Net neutrality refers
to regulations of the infrastructure of the Internet, itself. Every piece of information
transmitted across the internet is broken into what are called “packets” of data, then
passed through routers and transmission infrastructure owned by a variety of
private and public entities, like telecommunications companies, universities, and
government agencies. This has become a major area of concern in recent years,
because changes to laws affecting this infrastructure in one jurisdiction could have a
ripple effect, changing how information is sent and received in other jurisdictions
whether those areas would otherwise be subject to the jurisdiction of the country
implementing the new law or not.
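To make the packet idea concrete, the toy Python sketch below splits a message into
sequence-numbered chunks and reassembles them regardless of arrival order. Real
protocols such as TCP/IP add headers, checksums, routing and retransmission; everything
here is deliberately simplified for illustration.

# Toy illustration of packets: split a message into numbered chunks,
# then restore it even if the chunks arrive out of order.
def packetize(message: str, size: int = 8) -> list:
    """Return (sequence_number, chunk) pairs of at most `size` characters."""
    return [(i // size, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets: list) -> str:
    """Sort by sequence number and rejoin the chunks."""
    return "".join(chunk for _, chunk in sorted(packets))

packets = packetize("Packets can take different routes across the network.")
print(reassemble(list(reversed(packets))))  # arrival order no longer matters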
Free Speech on the Internet
The Internet has allowed those living in many repressive countries, where free
speech is not a right, to rely upon the cloak of anonymity granted by the Internet to
have their voices heard. The rise of the Internet has been credited, in part, as the
cause of many of the political movements around the world seeking greater access
and equality, such as the “Arab Spring” incidents.
Of course, this leads to an inevitable backlash in the form of internet censorship.
China is one of the staunchest in its efforts to filter unwanted parts of the internet
from its citizens, but many other countries, like Singapore, Iran, Saudi Arabia, and
Tunisia, have also engaged in such censorship.
In the Philippines, Republic Act No. 10175, or the Act Defining Cybercrime, Providing for
the Prevention, Investigation, Suppression and the Imposition of Penalties Therefor and for
Other Purposes, was signed into law in 2012.
The Cybercrime Offenses that are punishable under this act are the following:
a. Offenses against the confidentiality, integrity and availability of computer data and
systems:
1. Illegal Access. – The access to the whole or any part of a computer system
without right.
2. Illegal Interception. – The interception made by technical means without right of
any non-public transmission of computer data to, from, or within a computer
system including electromagnetic emissions from a computer system carrying
such computer data.
3. Data Interference. — The intentional or reckless alteration, damaging, deletion
or deterioration of computer data, electronic document, or electronic data
message, without right, including the introduction or transmission of viruses.
4. System Interference. — The intentional alteration or reckless hindering or
interference with the functioning of a computer or computer network by
inputting, transmitting, damaging, deleting, deteriorating, altering or
suppressing computer data or program, electronic document, or electronic data
message, without right or authority, including the introduction or transmission
of viruses.
5. Misuse of Devices.
i. The use, production, sale, procurement, importation, distribution, or
otherwise making available, without right, of:
(aa) A device, including a computer program, designed or adapted
primarily for the purpose of committing any of the offenses under
this Act; or
(bb) A computer password, access code, or similar data by which the
whole or any part of a computer system is capable of being
accessed with intent that it be used for the purpose of committing
any of the offenses under this Act.
ii. The possession of an item referred to in paragraphs 5(i)(aa) or (bb)
above with intent to use said devices for the purpose of committing
any of the offenses under this section.
Please read the full document on the Cybercrime Act through this link:
http://www.officialgazette.gov.ph/2012/09/12/republic-act-no-10175/
References and Supplementary Materials
Online Supplementary Reading Materials
1. Technological Progress and Economic Growth; http://www.syecon.org/share/growth/growth-ch4.pdf; November 7, 2017
2. The Instructional Power of digital games, social networking, simulations and How
Teachers Can Leverage Them; http://education.mit.edu/wp-content/uploads/2015/01/GamesSimsSocNets_EdArcade.pdf; November 7, 2017
3. Introduction to digital communication; https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-450-principles-of-digital-communications-i-fall-2006/lecture-notes/book_1.pdf; November 7, 2017
4. Computer as Paintbrush: Technology, Play, and the Creative Society;
https://web.media.mit.edu/~mres/papers/playlearn-handout.pdf; November 7,
2017
5. Internet Regulation and the Role of International Law;
http://www.mpil.de/files/pdf3/06_antoniov1.pdf; November 7, 2017
6. Internet Law – Guide to Cyberspace Law; https://www.hg.org/internet-law.html;
November 7, 2017
7. Internet Policy; https://www.dccae.gov.ie/en-ie/communications/topics/InternetPolicy/Pages/default.aspx; November 7, 2017
Module 005 – Scientific Research
This module contains the following topics:
1. Funding and Conflicts in Scientific Research
2. Research Data Recording
3. Societal Responsibilities of Scientists and Science
Funding and Conflicts in Scientific Research
From the October 2007 Issue of Discover Magazine, an article entitled, “Science’s Worst
Enemy: Corporate Funding” had this to say regarding funding and conflicts in research:
“In recent years there have been a number of highly visible attacks on American science,
everything from the fundamentalist assault on evolution to the Bush
administration’s strong-arming of government scientists. But for many people who pay
close attention to research and development (R&D), the biggest threat to science has been
quietly occurring under the radar, even though it may be changing the very foundation of
American innovation. The threat is money—specifically, the decline of government support
for science and the growing dominance of private spending over American research.
The trend is undeniable. In 1965, the federal government financed more than 60 percent of
all R&D in the United States. By 2006, the balance had flipped, with 65 percent of R&D in
this country being funded by private interests. According to the American Association for
the Advancement of Science, several of the nation’s science-driven agencies—the
Environmental Protection Agency (EPA), the Department of Agriculture, the Department of
the Interior, and NASA—have been losing funding, leading to more “outsourcing” of what
were once governmental science functions. The EPA, for example, recently began
conducting the first nationwide study on the air quality effects of large-scale animal
production. Livestock producers, not taxpayers, are slated to pay for the study. “The
government is clearly increasing its reliance on industry and forming ‘joint ventures’ to
accomplish research that it is unable to afford on its own anymore,” says Merrill Goozner, a
program director at the Center for Science in the Public Interest, a consumer advocacy
group.
Research universities, too, are rapidly privatizing. Both public and private institutions now
receive a shrinking portion of their overall funding from government sources. They are
looking instead to private industry and other commercial activities to enhance their
funding. Last summer, an investigation by the San Jose Mercury News found that one-third
of Stanford University’s medical school administrators and department heads now have
reported financial conflicts of interest related to their own research. These included stock
options, consulting fees, and patents.
Is all this truly harmful to science? Some experts argue that corporate support is actually
beneficial because it provides enhanced funding for R&D, speeds the transfer of new
knowledge to industry, and boosts economic growth. “It isn’t enough to create new
knowledge,” says Richard Zare, a professor of chemistry at Stanford University. “You need
to transfer that knowledge for the betterment of society. That’s why I don’t want to set up
this conflict of interest problem to such a heightened level of hysteria whereby you can’t
get universities cooperating with industry.”
Even many industry leaders worry that the current mix of private and public funding is out
of balance, however. In 2005, a panel of National Academies (the National Academy of
Sciences, the National Academy of Engineering, and the Institute of Medicine) that included
both industry and academic members (including Zare) concluded that corporate R&D
“cannot and should not replace federal R&D.” Norman Augustine, the panel’s chairman and
a former CEO at Lockheed Martin, noted that market pressures have compelled industry to
put nearly all its investment into applied research, not the riskier basic science that drives
innovation 10 to 15 years out.
Others fear that if the balance tips too far, the “public interest” side of the science system—
known for its commitment to independence and objectivity—will atrophy. Earlier this year,
former FDA commissioner Jane Henney remarked that “it’s getting much more difficult to
get that pure person with no conflicts at all. . . . The question becomes both one of
disclosure and how much of a conflict you can have and still be seen as an objective and
knowledgeable reviewer of information.” More than half the scientists at the U.S. Fish and
Wildlife Service who responded to a survey conducted by the Union of Concerned Scientists
in 2005 agreed that “commercial interests have inappropriately induced the reversal or
withdrawal of scientific conclusions or decisions through political intervention.”
Merrill Goozner argues that the danger runs deeper. “In many precincts of the scientific
enterprise, the needs of industry have become paramount,” he says, turning science into “a
contested terrain” where facts are increasingly contingent on who is funding the research.
“The whole scientific revolution, which was a product of the Enlightenment, is threatened
when you commercialize science,” he warns.
So is private funding a boon or a bane for American science? The answer, like good science
itself, requires looking carefully at how the phenomenon is playing out in the real world.
Steven Nissen is perhaps the most prominent physician speaking out about the
pharmaceutical industry’s growing influence over medical research. An esteemed
cardiologist at the Cleveland Clinic, Nissen has written more than 300 articles and served
as the immediate past president of the American College of Cardiology. Working in a
bustling academia-affiliated medical center has given Nissen a unique perspective on the
benefits and risks of privatization.
In the past, academic medical investigators strove to maintain “arm’s-length relationships
with their corporate sponsors,” says Marcia Angell, a former editor in chief at The New
England Journal of Medicine. That changed with the rise of biotechnology and the passage of
landmark congressional legislation known as the Bayh-Dole Act. Passed in 1980, the act
granted universities and their professors automatic rights to own and commercialize
federally funded research. The goal was to unlock financial incentives that would speed the
pace of American scientific innovation. Overnight, many of the cultural taboos associated
with overt commercial profiteering on campus began to evaporate.
Nissen believes that interactions between academia and industry are crucial to the
development of new treatments. He also accepts sponsored research grants from industry,
both to test drugs and develop new treatments, although he tries to limit his personal
financial conflicts of interest by requiring that any other consulting fees and honoraria be
given directly to charity. Still, he is clearly troubled by the threat that privatization poses to
academic autonomy—and to research objectivity. “We can only make good decisions in
science when all of the information is available for physicians, scientists, and patients to
review,” he says. But drug companies are increasingly keeping physicians and their
patients in the dark.
Last year, Nissen grew suspicious about possible health risks associated with
GlaxoSmithKline’s top-selling diabetes drug, Avandia. “We requested access to the original
patient-level data,” he says, but “we were not afforded access.” Nissen wasn’t surprised; for
years he has perceived a growing tendency by the drug industry to suppress negative
research data.
Searching the Internet, Nissen stumbled upon a remarkable cache of data belonging to
Glaxo. His search unearthed 42 Avandia clinical trials—only 15 of which had ever been
published. Nissen didn’t know it at the time, but the reason Glaxo’s data were just sitting
there on the Web was the outcome of a lawsuit filed by former New York attorney general
(and current governor) Eliot Spitzer in 2004. The lawsuit alleged that Glaxo had concealed
negative trial data associated with its popular antidepressant drug, Paxil. When the data
were properly analyzed, they showed that children given Paxil were actually two times
more likely to experience suicidal thinking and behavior than children given a placebo, or
sugar pill. When Glaxo settled the suit, it denied having suppressed data and consented to
posting results of all its clinical trials online—including its data on Avandia.
Nissen knew there were limitations to the public information he had. He lacked any original
patient-level information, and a meta-analysis of prior drug studies is always less powerful
than a large prospective, randomized clinical trial. This May, however, Nissen felt
compelled to alert doctors and patients to what he had found.
Publishing in The New England Journal of Medicine, Nissen reported that Avandia raised the
risk of heart attacks in patients by 43 percent. The news made front-page headlines. Two
days later, the FDA, which had already been assessing the health risks of Avandia, imposed
its toughest warning label, the “black box,” on the drug, as well as on Actos, another drug
used to treat diabetes.
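To make concrete what a meta-analysis of trial-level event counts involves, here is a minimal sketch in Python that pools 2x2 counts from several trials into a single odds ratio using the Peto fixed-effect method, one standard approach when events are rare. This is an illustration only: the function name 'peto_pooled_or' and the trial counts below are placeholders, not Nissen's code or Glaxo's data, and the published analysis is not claimed to have used exactly this method.

    import math

    def peto_pooled_or(trials):
        # Pool 2x2 trial counts into one odds ratio (Peto fixed-effect).
        # Each trial is (events_treated, n_treated, events_control, n_control).
        sum_o_minus_e = 0.0
        sum_v = 0.0
        for et, nt, ec, nc in trials:
            n = nt + nc
            e_total = et + ec
            expected = nt * e_total / n  # expected treated events under no effect
            variance = (e_total * (n - e_total) * nt * nc) / (n**2 * (n - 1))
            sum_o_minus_e += et - expected
            sum_v += variance
        log_or = sum_o_minus_e / sum_v
        se = 1 / math.sqrt(sum_v)
        low = math.exp(log_or - 1.96 * se)
        high = math.exp(log_or + 1.96 * se)
        return math.exp(log_or), (low, high)

    # Placeholder counts for three hypothetical trials (not Glaxo's data):
    trials = [(5, 1000, 3, 1000), (8, 2000, 6, 2000), (4, 500, 2, 500)]
    or_, (low, high) = peto_pooled_or(trials)
    print(f"Pooled OR = {or_:.2f} (95% CI {low:.2f}-{high:.2f})")

The key limitation Nissen acknowledged applies here too: such a pooling works only from published summary counts, which is weaker than access to the original patient-level data.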
At a subsequent congressional hearing chaired by Representative Henry Waxman, it came
to light that the FDA had known about Avandia's risks for some time. Rosemary Johann-Liang, a former FDA drug safety supervisor, had recommended a black box warning label
for Avandia due to its harmful effects on the heart one year prior to Nissen’s publication.
Glaxo’s own meta-analysis, presented to the FDA in 2006, showed a 31 percent increased
risk of heart attacks. Yet according to Johann-Liang, “my recommending a heart failure box
warning was not well received by my superiors, and I was told that I would not be
overseeing that project.” She was also told to obtain her supervisors’ approval before
making any future black box recommendations. After the hearing, the FDA completed its
own meta-analysis of the original patient data and found virtually the same heart risks
Nissen had reported.
Nevertheless, Nissen found himself under attack, often by people with explicit financial ties
to the drug industry. His challengers have included Valentin Fuster, who wrote a critique of
Nissen’s work in Nature Clinical Practice Cardiovascular Medicine. Fuster receives Glaxo
funding and serves as the chairman of Glaxo’s Research and Education Foundation. Peter
Pitts wrote a stinging attack on Nissen in The Washington Times; he is a senior vice
president at the PR firm Manning Selvage & Lee, which represents Big Pharma, including
Glaxo. Douglas Arbesfeld, a senior communications consultant at the FDA, disparaged
Nissen in a biting e-mail to the media. He formerly worked as a spokesman for Johnson &
Johnson.
Press reports over the last 15 years detail how whistle-blowers inside academia and within
the FDA who have attempted to expose drug-research and safety issues have been
pressured. Some were threatened with legal action, others punished by their superiors and
discredited. “Whenever we’ve raised safety questions about drugs,” Nissen says, “there’s
always been a reaction like this. Exactly the same thing happened in 2001 when we
published a manuscript that suggested that Vioxx might be causing excess heart attacks.”
Nissen was coauthor of one of the first studies on the dangers of Vioxx. Three years later,
Merck pulled the drug from the market. By that time, one FDA analyst estimates, the drug
had contributed to up to 139,000 heart attacks. (A Merck representative states that the
paper from which the estimate of 139,000 was derived had “serious limitations” and did
not necessarily reflect the views of the FDA.)
Experiences like these have bolstered Nissen’s position that the independent research
system needs to be protected and preserved. “I think having independent physicians
leading the study and analyzing the data is the best way to protect against biases in the
reporting of results.” But increasingly, he says, the pharmaceutical industry is farming out
its clinical trials to for-profit entities, known as contract research organizations.
Independent academic investigators are getting shut out.
The numbers bear Nissen out. Big Pharma now finances approximately 70 percent of the
nation’s clinical drug research. In the past, most of this sponsored-research money went to
academic medical centers; today an estimated 75 percent flows to for-profit contract
research firms.
Even when academic physicians are involved, often they don’t enjoy anything close to true
research independence, Nissen says: “Academic physicians are still involved in the
leadership of the study, but not fundamentally in the design of the study, or in the key
aspects of the execution of the study.” Often, he notes, the industry sponsor will prevent the
academic investigator from performing any independent analysis of the complete raw data
related to his or her research. “The physician gets a printout of the main results,” Nissen
says, “but the actual analysis itself is done by statisticians within the companies.”
Read further through this link: http://discovermagazine.com/2007/oct/sciences-worst-enemy-private-funding
Furthermore, an article about government-funded research, the first in a four-part series, describes the current situation of research funding today:
“Throughout the ages, science has moved forward with boosts from many well-heeled
patrons, from monarchs to millionaires. Galileo’s heretical revelation that the Earth
revolves around the sun would have been unlikely if not for his education at the University
of Pisa, which was founded by Pope Clement VI, remembered even today as a devoted
patron of the arts and learning. Four centuries after Clement, German universities adopted
the notion that it was the academy’s responsibility to advance the understanding of science,
a conviction that we take for granted today. We also think that the government should pay
for university research—and it does pay for the vast majority of it. But since government
funding flatlined several years ago, scientists at BU and universities across the country are
worried, very worried, not just about their research, but about the future of science in
America.
“The situation is serious,” says Gerald Denis, a BU School of Medicine (MED) associate
professor of pharmacology and medicine in the Cancer Research Center and a fellow of the
Obesity Society. “The last few years of funding uncertainties have been deadly, and several
investigators I know have lost their jobs because grants were terminated. Cancer cohorts
have been lost, long-term studies decimated. Who will be around to make the next set of
American medical discoveries and advances? This is no way to maintain international
scientific leadership.”
Richard Myers, a MED professor of neurology and the author of more than 250 papers, says
his funding “came to a screeching halt” in 2008. On those rare occasions when he is funded,
he says, the money is likely to be reduced year after year until he ends up with just over
half of what he requested. “I know what good science is,” says Myers. “And that
compromises the science.”
Gloria Waters, vice president and associate provost for research, says finding funding
sources other than the federal government has become “a top priority” of the University. In
spring 2014, Waters’ office launched a series of workshops designed to help researchers
with such things as Humanities Proposal Writing and Working with Federal Agencies.
Every one, she says, was "extremely well attended," so well attended that her office
recently ramped up the program to include eight events per semester.
At BU, whose researchers study an enormous range of subjects, from the birth of frogs to
the birth of planets, about 80 percent of the roughly $350 million for sponsored research
received in FY 2014 (down from a 2010 peak of $407 million) came directly from the
federal government, and another 10 percent originated in government grants and came to
BU through other institutions, such as Harvard or MIT. About 45 percent of that money
went to researchers at MED, where, according to Karen Antman, MED dean and Medical
Campus provost, funding anxiety is at an all-time high. Antman says grants to the Medical
Campus dropped $30 million in 2013 because of sequestration, although the money
bounced back in 2014 when sequestration was put on hold. “These types of fluxes in
research budgets produce a lot of stress for faculty," she says.
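For readers who want the percentages above as dollar figures, a quick back-of-the-envelope calculation using the article's approximate numbers looks like the sketch below; note that reading "45 percent of that money" as a share of the overall total is an interpretive assumption.

    # Approximate FY 2014 sponsored-research figures quoted above.
    total = 350_000_000                # roughly $350 million overall
    federal_direct = 0.80 * total      # ~80% directly from the federal government
    federal_via_peers = 0.10 * total   # ~10% federal money via Harvard, MIT, etc.
    med_share = 0.45 * total           # ~45% to MED researchers (assumed share of total)

    print(f"Direct federal: ${federal_direct / 1e6:.0f}M")                     # $280M
    print(f"Federal via other institutions: ${federal_via_peers / 1e6:.0f}M")  # $35M
    print(f"To the medical campus: ${med_share / 1e6:.0f}M")                   # $158M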
Some observers of the funding dilemma take a more sanguine approach. One Washington
insider, an expert on US research funding and a BU alum, who requested anonymity
because of his position, says that “research and development funding generally does pretty
well in the government’s budget process,” because the government branches agree it’s
important to stay competitive in science and technology. But looming over every budget
decision, this expert says, is a broader debate about what the size of the government should
be and how the government should spend its limited research budget.
In other words, some legislators wonder why the government should pay for so much
university research. Waters offers some good reasons. She points out that the other likely
source of research funding—industry—prefers to direct its money to projects that affect
the bottom line. “Industry is focused on applied research that will result in the
development of products with immediate commercial application,” she says. “But
fundamental or basic research is needed in order to create the knowledge base that leads
to more applied research. For example, in the area of medicine, specific treatments for
many diseases cannot be developed until we know much more about the basic cellular and
molecular changes involved in the development of the disease. Social science research has
also played an extremely important role in addressing national security challenges. In a
similar vein, scholarship in the humanities is critical to creating a broadly educated
workforce and our ability to engage with other areas of the world.”
Continue reading the article through this link:
http://www.bu.edu/research/articles/funding-for-scientific-research/
Conflicts of Interest
A number of sciences and professions have recently become aware of and concerned
about the extent to which corporate funding has influenced or will influence their
activities and directions. For example, the 54th Annual Meeting of the American
Institute for Biological Sciences was entirely devoted to bioethics in a changing world
and the responsible conduct of science, and included a plenary session titled "Public Citizenship and the Duties of Scientists: Avoiding the Best Science Money Can Buy"
(Shrader-Frechette, 2003). Various medical journals have had difficulty finding
reviewers who are independent of pharmaceutical funding and have published new
guidelines for reviewers.
Philip Zimbardo, then president of the American Psychological Association (APA),
was appalled by the extravagant exhibits sponsored by pharmaceutical companies at
the 2002 convention of the American Psychiatric Association (as were newspaper
reporters; see Seligman, 2003; Vedantam, 2002). His concern that prescription
privileges for psychologists would be accompanied by increasing pharmaceutical
industry interest in funding APA activities led to discussions with the Board of
Directors and to the appointment of the APA Task Force on External Funding. The
purpose of the task force was to review the experiences of other organizations,
sciences, and professions receiving corporate funding; to consider relevant scientific
literature bearing on this issue; and to suggest policies and procedures to protect the
integrity of the association without unnecessarily restricting APA activities.
Problems may arise, of course, as a consequence of outside funding from any source
when the values of the donor and those of the recipient are either in conflict or
incompatible. It is sobering to note, however, that a broad range of industries,
including tobacco (Bero, 2003), lead (Markowitz & Rosner, 2003), food (Simon,
2006), real estate development (Ottaway & Stephens, 2003a, 2003b, 2003c), and
pharmaceuticals (Angell, 2004; Mundy, 2001; Rennie, 2003), have used similar and
often hidden strategies to influence a range of sciences and professions. Front
organizations—industry-funded grassroots, consumer advocacy (Herxheimer, 2003;
Mundy, 2003; Stern, 2003), research, and educational organizations whose primary
goal is to promote marketing, influence regulations, or advance other industry
interests—are among the strategies intentionally designed to obscure the actual
sources and amounts of funding for activities favoring corporations (Beder, 2002;
Center for Science in the Public Interest [CSPI], 2003a). In fact, much of the
knowledge available to investigators about such industry-funded activities has come
through documents only made available in the discovery process of litigation
(Castleman, 2003). This is true of the pharmaceutical industry as well as the lead and
tobacco industries.
The task force reviewed the consequences of external funding of a range of activities
across several sciences and professions but chose to focus on pharmaceutical funding
as a case example for three reasons. First, the effects of pharmaceutical funding on
the science and profession of medicine have been very well-documented and provide
a telling example of the distortions and unintended consequences that can occur
when academic centers, scientists, and practitioners become overly dependent on
for-profit industries. Second, pharmaceutical companies have expressed interest in
funding activities of the APA (and, in fact, have already done so to a limited extent),
and that interest is expected to increase as more psychologists obtain prescription
privileges. Finally, the pharmaceutical industry is of interest because it has been
enormously wealthy and politically influential and therefore has the potential to
exert a significant impact on the field of psychology.
Many readers may find it difficult to understand how the distortions that arose
within the field of medicine could occur in such a well-established and powerful
profession. That may be because they do not fully comprehend the size and scope of
the pharmaceutical industry, the significant role that it has come to play in the cost of
medical care, or how it has benefited from a very favorable social and political
climate in the United States. The result has been an enormously powerful industry
with virtually unprecedented financial resources to pursue its own agenda. The
pharmaceutical industry is so profitable and so influential that it is unlikely that APA
or any similar organization is going to change it or succeed in preventing its
influence on the health care system or on psychology as the number of interactions
with drug manufacturers increases. What psychologists can do is inform themselves
of the nature of this business and make certain that they have adopted appropriate
policies and procedures to help avoid the more egregious mistakes of others.
Continue reading the document through this link:
https://www.apa.org/pubs/journals/releases/amp-6291005.pdf
Transparency and objectivity are essential in scientific research and the peer
review process.
When an investigator, author, editor, or reviewer has a financial/personal interest or
belief that could affect his/her objectivity, or inappropriately influence his/her
actions, a potential conflict of interest exists. Such relationships are also known as
dual commitments, competing interests, or competing loyalties.
The most obvious conflicts of interest are financial relationships such as:
- Direct: employment, stock ownership, grants, patents.
- Indirect: honoraria, consultancies to sponsoring organizations, mutual fund ownership, paid expert testimony.
Undeclared financial conflicts may seriously undermine the credibility of the journal,
the authors, and the science itself. An example might be an investigator who owns
stock in a pharmaceutical company that is commissioning the research.
Conflicts can also exist as a result of personal relationships, academic competition, and intellectual passion. An example might be a researcher who has:
- A relative who works at the company whose product the researcher is evaluating.
- A self-serving stake in the research results (e.g. potential promotion/career advancement based on outcomes).
- Personal beliefs that are in direct conflict with the topic he/she is researching.
Not all relationships represent a true conflict of interest – conflicts can be potential or actual. Some considerations that should be taken into account include: whether the person's association with the organization interferes with their ability to carry out the research or prepare the paper without bias; and whether the relationship, when later revealed, would make a reasonable reader feel deceived or misled.
A relationship that could constitute a conflict – even if the person doesn't believe it affects their judgment – should be fully disclosed to the institution's ethics group and to the editor of the journal to which the paper is submitted. All publishers require disclosure in the form of a cover letter and/or a footnote in the manuscript.
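As an illustration of how the direct/indirect taxonomy above could be captured in a structured disclosure, here is a minimal Python sketch; the field names, category strings, and footnote wording are hypothetical, not any publisher's actual submission format.

    from dataclasses import dataclass
    from typing import List

    # Hypothetical categories mirroring the lists above.
    DIRECT = {"employment", "stock ownership", "grant", "patent"}
    INDIRECT = {"honorarium", "consultancy", "mutual fund", "expert testimony"}

    @dataclass
    class Disclosure:
        author: str
        interest_type: str   # one of the DIRECT or INDIRECT categories
        organization: str
        details: str = ""

        def is_direct(self) -> bool:
            return self.interest_type in DIRECT

    def footnote(disclosures: List[Disclosure]) -> str:
        # Render all disclosures as a single manuscript footnote.
        if not disclosures:
            return "The authors declare no competing interests."
        parts = [
            f"{d.author}: {d.interest_type} ({d.organization}). {d.details}".strip()
            for d in disclosures
        ]
        return "Competing interests: " + " ".join(parts)

    # Example with made-up names:
    records = [
        Disclosure("A. Researcher", "stock ownership", "ExamplePharma Inc."),
        Disclosure("A. Researcher", "honorarium", "ExamplePharma Inc.", "Speaking fee."),
    ]
    print(footnote(records))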
A journal may use disclosures as a basis for editorial decisions and may publish them
if they are believed to be important to readers in judging the manuscript. Likewise,
the journal may decide not to publish on the basis of the declared conflict. According
to the U.S. Office of Research Integrity, having a conflict of interest is not in itself
unethical, and there are some that are unavoidable. Full transparency is always the
best course of action, and, if in doubt, disclose.
Guide to Conflict of Interest and How to Prevent It
Research Data Recording
Surrey’s web library has this to say for those handling qualitative research data.
1. Researchers can either take notes during their interviews or observations (transcribing), or make a recording.
2. Using a tape recorder – the benefits of tape recording include:
   1. The researcher can concentrate, listen and respond better.
   2. The discussion flows better when there are no distractions.
   3. In note taking there is an increased risk of the researcher being more subjective.
   4. The entire interview/observation is recorded, which gives a better, more holistic picture of what is going on.
   5. The participants may feel less observed if the tape recorder is used in a discreet way.
   6. During analysis, the researcher has the opportunity to go back over the material.
3. Transcribing:
   1. Transcribing the interview involves producing the full 'script' of the interview; the aim is a complete written version of the interview.
   2. Transcribing an interview is very time consuming, with an estimated time ratio of 5:1 (i.e. 5 hours of transcribing for a one-hour interview; see the sketch after this list).
4. Tape analysis can be used, which is a combination of the two and involves the researcher taking notes from the recording.
5. Bias must be considered when taking notes or using tape analysis.
6. Good quality transcribing relies on skills beyond just taking notes, and there is often space for subjectivity.
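As a quick planning aid, the 5:1 rule of thumb above can be turned into a small calculation. The sketch below is illustrative only; the ratio is left adjustable, since real transcription speeds vary with audio quality and the transcriber's skill.

    def transcription_hours(recording_minutes: float, ratio: float = 5.0) -> float:
        # Estimate transcription workload from recording length, using the
        # rule-of-thumb ratio cited above (5 hours per hour of recording).
        return recording_minutes / 60 * ratio

    # Ten one-hour interviews at the 5:1 ratio -> 50 hours of transcription.
    print(transcription_hours(10 * 60))  # 50.0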
Societal Responsibilities of Scientists and Science
At the American Association for the Advancement of Science (AAAS, publisher
of Science Careers) Annual Meeting in Boston this afternoon, Mark S. Frankel, the director
of the Scientific Responsibility, Human Rights, and Law Program at AAAS, made a case for
scientists to think more deeply about their social responsibilities.
Right now, much of the emphasis in science is on the professional responsibility of
scientists to stick to "standards agreed upon by the scientific community" regarding how
research should be conducted, Frankel said. He called these responsibilities "internal." But
scientists also have "external," social responsibilities "toward the larger community,"
Frankel argued—and "it is no longer acceptable to focus on internal responsibilities."
Science depends on public money, affects policy decisions, and offers risks and benefits to
society. "The communities in which you live and the communities much farther out … are
ultimately affected by the work that you do."
Frankel would like to see three core ideas integrated into graduate education. The first is
that "science is a social institution, with a mission and 'baggage' like all other social
institutions created by human beings," he said. By that, he means that graduate students
should be given the opportunity to explore the values and expectations inherent to their
specific fields and to consider whether these are consistent or in conflict with broader
social values. Graduate students should also seek to grasp the social aspects and
implications of scientific issues and be given the opportunity to "gain a good understanding
of what it means to be a socially responsible scientist in this day and age."
Frankel's second core message was that young scientists should appreciate the global
dimension of science. They should be "looking beyond themselves," he said, and should
"use their skills to help with global problems." Last, they should realize that their education
and research are being subsidized by society, and take into account society's expectations
of how they should be using this knowledge in the future. "We must educate graduates to
be vitally concerned with not only how to apply their knowledge and skills, but also to
consider the value of what they do for others," Frankel said.
Scientists should also be prepared to confront situations where their internal
responsibilities clash with their external responsibilities. One key professional
responsibility for scientists, for example, is to publish their results so they can be reviewed
and help science move forward. But in some cases, publication of sensitive information has
the potential to cause harm to society. Frankel took the example of the avian flu research
that in 2011 sparked a fierce debate about whether it should be published, given that it
identified mutations that could make the H5N1 virus much more transmittable to humans.
Scientists have the "social responsibility to make sure that this information is not used by
those who can do harm," such as bioterrorists or countries with ill-equipped safety
laboratories, Frankel says.
Sometimes, different social responsibilities can clash, as also happened with the avian flu
study. Scientists had "the social responsibility to give [the information] to those who need
it to prevent an epidemic," Frankel said. Scientists' decision to impose a moratorium was "a
very profound thing to do," with "probably profound effects on their careers and funding."
The decision was an "exemplar" of how to deal with the issues. "We need to be thinking
about ways to train [students] about social responsibilities along [with] those internal
responsibilities." (Pain, 2013)
The Researcher in Society
The public appreciates the scientific and technological advances contributing to
improving their well-being. The scientific community should not “overreact” to the
uncertainty and even resistance with which society sometimes responds to
scientific or technological developments. Instead, it should try to understand the
basis and the meaning of such reactions, by creating an open, non-paternalistic
dialogue with the public.
Some segments of the public do not appreciate with clarity that there is no absolute
certainty in scientific theories and models (i.e. results that are immune to being
changed by subsequent theories). Likewise, they do not understand that “zero risk”
is unattainable (as much as risk is, and should be, reducible to socially acceptable
levels). Scientists, on their part, all too frequently seem to be disconcerted by ethical
debates on research, and often attribute them merely to the public’s lack of
information. The effect of the combination of these two attitudes on controversial
science-related subjects could erode the “intangible asset” of the public’s confidence
in the scientific community.
Researchers have to be aware of concerns and attitudes in the social environment
that are relevant to some aspects of their work. They should take advantage of any
available opportunities to inform society of how researchers incorporate the public’s concerns, preferences, and requirements into their work and decision-making.
Another important aspect of researchers’ social commitment relates to the public
origin of the funds used for their work. It should be clear to the scientific community
that using public resources entails certain indissoluble, inherent principles of
reciprocity, such as explaining the efficient use of resources in terms that can be
understood by the society that provides them. This task may be conducted by research
organizations through activities such as: open-house days, electronic information
resources, disseminating reports of activities undertaken and outlining researchers’
principles of conduct. This institutional support would in no way substitute for the
responsibility of individual researchers.
The researcher as a teacher and spokesperson
It is important and urgent to make a lasting and effective effort to increase
society’s knowledge and interest in the culture’s scientific foundations and
science’s contribution to their development. This may also help the younger
generation decide on taking up scientific careers. Initiatives should tackle many
aspects such as:
1. Providing an intelligible and attractive description of the creative
function of scientific knowledge and the impact of scientific and
technological advances on growth and well-being.
2. Stimulating scientific interest and scientific knowledge at all
educational levels, according to the specific characteristics of each
level.
3. Communicating information about the methods and elements that
typify scientific research, such as: curiosity and a desire to understand
the world, the role of doubt, attention to empirical evidence,
uncertainty, risk, perseverance, and critical analysis of the arguments
of others, and, more importantly, one’s own arguments.
A clear and explicit commitment to valuing and encouraging researchers’
work in this area has to be made by the scientific community and the
scientific institutions with competence in the area of science policy. They
should provide specific, professional and financial incentives to researchers
carrying out this task.
The researcher as advisor in public matters
The number of channels for managing and applying scientific knowledge
should be increased. Channels should be formalized and made transparent
(or institutionalized). They should not only be available in crisis situations,
but also for the daily management of the public’s interests.
Any research organization requires generous measures of the following:
- social space for personal initiative and creativity;
- time for ideas to grow to maturity;
- openness to debate and criticism;
- hospitality toward novelty; and
- respect for specialized expertise.
[These] may sound too soft and old-fashioned to stand up against the cruel modern
realities of administrative accountability and economic stringency. On the contrary, I
believe that they are fundamental requirements for the continued advancement of
scientific knowledge—and, of course, for its eventual social benefits.
—JOHN ZIMAN, Prometheus Bound: Science in a Dynamic Steady State, Cambridge
University Press, New York, 1994, p. 276.
References and Supplementary Materials
Online Supplementary Reading Materials
1. Science’s Worst Enemy: Corporate Funding;
http://discovermagazine.com/2007/oct/sciences-worst-enemy-private-funding;
November 7, 2017
2. Who Picks Up the Tab for Science?; http://www.bu.edu/research/articles/funding-for-scientific-research/; November 7, 2017
3. Corporate Funding and Conflicts of Interest;
https://www.apa.org/pubs/journals/releases/amp-6291005.pdf; November 7, 2017
4. Conflict of Interest;
https://www.elsevier.com/__data/assets/pdf_file/0010/92476/ETHICS_COI02.pdf;
November 7, 2017
5. Handling qualitative research data;
http://libweb.surrey.ac.uk/library/skills/Introduction%20to%20Research%20and%
20Managing%20Information%20Leicester/page_73.htm; November 7, 2017
6. Managing your research data; http://www.research.uwa.edu.au/staff/human-research/managing-data; November 7, 2017
7. The Social Responsibilities of Scientists;
http://www.sciencemag.org/careers/2013/02/social-responsibilities-scientists;
November 7, 2017
8. Science for society: the social responsibility of scientists;
https://www.upf.edu/pcstacademy/_docs/cosce_en_02.pdf; November 7, 2017
9. The responsibility of scientists to society;
http://www.ucl.ac.uk/~zcapf71/The%20responsibility%20of%20scientists%20to%
20society.pdf; November 7, 2017
10. The Scientist in Society; https://www.nap.edu/read/4917/chapter/13; November 7,
2017
11. Science and Responsibility;
http://www.ppu.org.uk/learn/infodocs/st_science_res.html; November 7, 2017
Module 004 – Biopolicy
This module contains the following topic:
1. Biopolicy
Biopolicy
The text below is from the presentation of Dr. Agni Vlavianos-Arvanitis, President and
Founder of the Biopolitics International Organization, entitled: Biopolicy – A Vision for the
Millennium:
“Poverty, hunger, disease, environmental degradation, a declining resource base, the loss of
species and habitats, climate change, inadequate water supplies, desertification – all these
are global problems. They do not respect national boundaries and they are all related.
Addressing them will require an unprecedented level of international cooperation. If we
are to solve the problems of our world, nations must redirect their efforts away from
conflict toward environmental restoration and the eradication of poverty, hunger and
disease. This is the goal and vision of biopolicy.
Over the past 50 years, humans have affected global ecosystems more rapidly and extensively than in any other comparable period in human history. Humans are an integral part of the world’s ecosystems, constantly changing them and often damaging the ability of the ecosystems to provide the services necessary for human well-being. The deterioration
of the global environment is threatening the very continuation of life on our planet, adding
urgency to the need for coherent long-term international strategy and cooperation. The
increased mobility of goods, services, labor, technology and capital throughout the world,
facilitated by technological advancements in communications and transportation – a process that has been called globalization – profoundly demonstrates the urgency of rigorous inquiry into
the opportunities and challenges ahead. Increasingly, with information and communication
technologies empowering individuals everywhere, humanity’s future rests with new
models of thought, action, communication and participation. A new millennium vision in
policy, which we call biopolicy, is needed to guarantee the continuity of bios on our planet
and lead society to a harmonious future.
In 2000, all 189 member states of the United Nations adopted the Millennium Declaration,
an international acknowledgement of the massive problems facing humanity which sets
goals for achieving specific targets by certain dates. The Millennium Development Goals
include the reduction by one half of the proportion of people in the world whose income is
less than one dollar per day, and the proportion of people who suffer from hunger. Other
goals call for the achievement of universal primary education, the promotion of gender
equality, the reduction of child mortality, improvement of maternal health, halting the
spread of HIV/AIDS, malaria and other major diseases, ensuring environmental
sustainability, and developing a global partnership for development. The Millennium
Development Goals are an admirable effort to solve the world’s great problems. Achieving
them will require a great commitment by the developed nations and a fundamental
realignment of their priorities.
Biopolicy encompasses all aspects of human endeavor, and is based on a framework of
environmental ethics that is intended to promote a reassessment of current assumptions
and lead to a new global appreciation for the protection of life on our planet. Biopolicy can
become a unifying vision for attaining the Millennium Development Goals and lead to the
future harmonious co-existence of all forms of life. It provides the necessary incentives for
every endeavor to be oriented toward the better understanding and preservation of the
environment and all forms of life. In the spirit of biopolicy, every individual on the planet is
encouraged to actively engage in the search for new paradigms and to join environmentally
committed legislators, scholars, educators and business leaders in influencing
governmental protection of the environment around the world.
Today’s society may be illustrated as an inverted and therefore highly unstable pyramid in
which societal values are heavily influenced by developments in the realm of technology. It
is vital that we correct this imbalance and move to a stable society, which is characterized
by respect for bios and the environment. B.I.O.’s educational and awareness-raising
programs are directed at restoring the stability of our human and natural environments.
To alleviate regional conflicts and reconcile economic growth with environmental
harmony, a new vision is needed in every aspect of human affairs – industry, energy,
transport, agriculture and regional development. In order to be successful, however, these
policies have to be based on a framework of environmental ethics. Biopolicy provides these
ethical guidelines and urges a reassessment of current assumptions with a view to a global
appreciation of bios. Society needs to mobilise every one of its elements and strive for a
better future. Working to sustain what already exists is not enough. With new challenges
constantly arising and with an increased awareness of the urgent need to take action
against destructive trends, the time is ripe to find more comprehensive, long-term
solutions to protect our planet and guarantee a balanced society for the future. A new
vision, beyond sustainable development, can help place the situation in perspective, and
provide the necessary incentives to move ahead and explore possibilities leading to more
just and safe global management.
World Referendum
How can we engage everyone in the race to save the environment? Advances in
communication technology provide the unprecedented opportunity for all the
people of the world to become actively involved in the great issues of our time. With
the internet, it is now possible for every citizen from any corner of the globe to cast
a vote for saving the global environment. B.I.O. has proposed such a worldwide
referendum on the urgency of saving bios and the environment. By giving every
individual the opportunity to simultaneously make their voice heard, new pathways
for participatory democracy would be established. With a massive vote in favor of
the environment, public opinion on saving the environment could no longer be
ignored.
Bio-education
The purpose and responsibility of bio-education is to uplift the spirit of humanity in
order to reverse the crisis in values that has resulted in serious environmental
deterioration. The advent of globalization has brought major changes in economic,
social and educational priorities and is creating new challenges for humanity. These
developments have, in effect, made the world a single marketplace. To meet the
challenges of education for the new millennium, a radical shift is needed away from
the intra-disciplinary entrenchment that has prevailed in the past into more creative
patterns of thought for the development of the highest potential of each individual
and for the benefit of future generations. By providing interdisciplinary models with
concern for bios and the environment at the core of every specialty, bio-education
seeks to apply environmental protection to every human endeavor. This vision may
be illustrated graphically.
To further this vision, B.I.O. launched the International University for the Bio-Environment
(I.U.B.E.) in 1990. The I.U.B.E. urges scholars, decision-makers, diplomats, business and
civic leaders to actively contribute to the development of a life-supporting society. Bearing
in mind that universities should be, by definition, universal, the I.U.B.E. acts as a catalyst to
accelerate environmental awareness and impart an environmental message to opinion
formers, students and training professionals around the world. Rather than focusing on the
award of degrees, the I.U.B.E. functions as an open and distance learning initiative – using
modern teaching tools such as e-learning – whereby leading educators and decision-makers infuse existing educational institutions with bios-enhancing values. B.I.O.’s
landmark textbook, BioSyllabus for European Environmental Education, has become part
of the curriculum of numerous university courses in an expanding list of countries. The
book provides basic concepts on a range of environmentally related topics, such as bio-architecture, bio-ethics, bio-economics, bio-health, bio-history and bio-tourism. The book provides themed references to the highly regarded and wide-ranging resource of other published B.I.O. material, and is freely available to both educators and educated, in print and electronically – on the internet and on CD-ROM.
Bio-education to enrich sustainability – B.I.O.’s extensive e-learning programme
B.I.O. places a wealth of educational material and resources online with its broad
range of e-learning courses promoting pioneering dimensions in bio-education. The hope is to infuse new thinking in environmental education and to enrich the concepts of sustainable development. Currently, participants from sixty-six countries are enrolled in B.I.O.’s e-learning courses. The following courses are available:
- Bio-Architecture: Environmental models in architecture, energy-efficient buildings, environmentally responsible urban planning.
- Bio-Diplomacy: International cooperation in environmental protection, the environment as a unifying factor for peace.
- Bio-Economics: Environmental management, natural resource economics, international policy, EU environmental policy, corporate policy.
- Bio-Energy: Renewable energy sources, clean energy, models for energy savings, wind, solar, biomass, energy-efficient buildings.
- Bio-Ethics: Environmental protection as an ethical responsibility, codes of environmental ethics for every profession, the environment in bioethics.
- Bio-Health: Environmental quality and public health, pollution threats to health, risks and benefits of biotechnology, quality of life.
- Bio-History: Environmental factors in the development of human civilization, culture, historical sources, ancient texts.
- Bio-Legislation: International and European Union environmental policy and legislation, international treaties, environmental action.
- Bio-Assessment of Technology: Tools and methods for pollution abatement, waste management technologies, recycling.
- Waste Management: Tools and methods of waste management and technologies, including recycling, composting, landfilling, and wastewater treatment.
- Bio-Tourism: Environmentally friendly tourism industry, suggestions for cultural tourism, environmental hotel management, water conservation, recycling.
- Common Agricultural Policy: A simplified text for non-experts who wish to become acquainted with the EU’s Common Agricultural Policy (CAP).
- Food and Agriculture: Agriculture and the environment, pollution loads, GMOs, water and soils, chemicals and biotechnology, environmental policy.
- People with a Disability in Modern Society: Improving equity and quality of life for the disabled, accessibility, information, assistive technology, sports, Paralympic Games.
Bio-economics
It is clear that there is an intimate relationship between the environment and development.
In the past, industries were the greatest polluters. Economic actors are therefore key
players in the drive to tie business to environmental protection. Preserving the wealth and
beauty of the natural world, securing the health of the earth’s population, providing fair
rules of trade, and guaranteeing equal educational opportunities for every country in the
world can be a source of genuine profit, both monetary and social. The quality of life issue
needs to assume top priority, along with biopolicy and education. Moreover, the concept of
“profit” has to be redefined to encompass elements which constitute a genuine profit for
society: culture, internal wealth, preservation of natural resources, better health and the
protection of biodiversity, as a measurable part of a nation's prosperity. The participation
of economic leaders is vital to the attainment of the Millennium Development Goals.
The world is experiencing a range of hurdles with regard to seeking a compromise between
the legitimate needs of development and fragile environmental balances. Poor countries
overuse their resource base and, thereby, their natural environment. Water development
projects often damage the downstream ecology. The sale of raw materials in oversaturated
markets leads to falling prices, which in turn reduces net proceeds. Because of such
conditions, appeals to protect the environment are ignored or often met with derision. The
conflict between the industrial countries' ongoing economic growth and the developing
countries' undisputed need for growth, on the one hand, and the negative environmental
effects of intensive energy and raw material utilization on the other, cannot be solved
within the present framework.
Environmentally sound guidelines may be discussed and agreed upon at the negotiating table, but in real life, these directives too often do not reach national decision-making. An
approach combining the consensus and consent of the people, as well as that of
governments and international institutions, is essential in order to prevent economies from
expanding without due concern for the environmental repercussions of uncontrolled
growth. Corporations and entrepreneurs can work together to tackle these challenges and
tread lightly on the planet in their business endeavors. At the same time, a grassroots
mobilization and public participation, on both the local and international levels, can
enhance the establishment of bios-supporting economic strategies and initiatives
worldwide.
Eradicating poverty and fighting hunger
Global agriculture today faces a major challenge: feeding more people using less land,
without further degradation of natural resources and the environment. The Millennium
Development Goals call for cutting by one half the number of people who suffer from hunger by 2015. The industrialized model of agriculture cannot meet this challenge, due to its excessive reliance on chemical inputs and the pattern of environmental degradation and
loss of biodiversity to which it contributes. To meet the challenge of feeding the world’s
hungry, society must focus upon reforming political institutions, creating appropriate
technologies, promoting cultural capital and enabling institutional frameworks that favor
policy for environmental protection. Key to these goals is the increased use of participatory
research methods, proper agrarian policies and local capacity building.
Consumers, however, must ultimately be the driving force for environmentally viable
economic development. Poverty and food security are social and economic issues, but are
also at the root of many environmental problems in developing countries. As world
population expands in these regions, the ability to provide basic necessities is threatened.
In the 21st century, agricultural policy will have to complement development policies and
programs with the aim of increasing food production and personal incomes without further
degrading local environments.
Food security – providing all the people with sufficient food at all times to meet their daily
dietary needs for a healthy and productive life – is an essential precondition for economic
and social development in every country. It depends on the availability of and access to
food, and on proper food use. Achieving food security is more than just an issue of food
production, nutrition, and food aid. Hunger is a severe manifestation of poverty, and
alleviating it depends in the long run on sustainable and broad-based economic growth and
income generation. In most countries, these depend on a productive, competitive, and
environmentally sound agricultural sector. To achieve these conditions, underdeveloped
countries must invest in rural areas to strengthen agriculture, the food system, and
infrastructure, and to restore and conserve critical natural resources for agricultural
production. This requires both public and private investment, and the political will to
implement the necessary changes.
Bio-legislation
The central concept of bio-legislation is to link the protection of bios rights to the defense
of the rights of future generations. The interdependence between human rights and human
obligations is vital in this context. Rights correspond to obligations, and, in addition to the
existence of human rights, there exists a series of human obligations concerning our
common responsibility to preserve the environment and improve quality of life on a global
level. The defense of human rights should not be regarded as an issue unrelated to the
protection of other forms of life on our planet. Health hazards arising from environmental
degradation and pollution, desertification, depletion of natural resources, water scarcity
and famine are a threat to the human species. To secure our rights and to prevent disaster,
we urgently need to accept the responsibility of reversing negative trends and protecting
our natural heritage.
There has been a growing recognition that environmental justice cannot be achieved
without effective international legislation dedicated to addressing environmental issues.
After well-documented environmental disasters, such legislation is not a mere aspiration
but indeed a necessity. The integration of the environment into all aspects of global policy
and the issue of environmental liability are therefore priorities.
Bio-diplomacy and defense for life
Today, the world faces an unprecedented crisis of environmental degradation. The
continuation of life on our planet is threatened by global climate change, by hunger and
disease, by the destruction of the forests and biodiversity, and other forms of
environmental degradation. Yet the nations of the world are too pre-occupied with
international conflicts and preparations for war to mount an adequate response to the
environmental crisis. Future generations should not be burdened with the results of
today's negligence. The convergence of the aspirations of sovereign states and civil society
into a spirit of cooperation in long-term environmental policy and action can overcome the
current climate of competition and unending conflict and lead to universal harmony and
peace among the peoples of the world. This is the vision of bio-diplomacy.
Bio-diplomacy – international cooperation in environmental protection – is a concept that
was pioneered by B.I.O. at a time when civic leaders, international organizations and the
world community as a whole had not yet fully realized the urgency of adopting a common
environmental policy. Bio-diplomacy focuses on the interdependence of all forms of life,
and calls upon diplomats and other people of influence to engage in a collective endeavor in
defense of the environment. Joint efforts to protect the environment can boost
international relations and act as a bridge between global communities at the national and
local levels. At the same time, bio-diplomacy actively supports efforts to maintain biological
and cultural diversity and seeks to improve human relations and to attain the goal of world
peace by replacing current diplomatic attitudes with a complete international and
intercultural perspective.
Defense for life must become a priority in every facet of our lives. The conversion of war
regimes to programs for the preservation of the environment would guarantee a better
future. Military aircraft, instead of dropping bombs, could be used to survey the state of the
environment and to drop seeds for trees, restoring devastated areas and benefiting the
entire planet. Naval destroyers could be used to clean the oceans and shorelines of
pollution. Hospital ships could be deployed off the coasts of Africa and South Asia, treating
the sick and hungry. Such steps would be the best response to poverty and deprivation. The
environment, as a common point of reference, can bring all peoples of the world together,
in harmony and coexistence.”
The following theoretical introduction is from the collection of research titled “Biopolicy: The Life Sciences and Public Policy,” edited by Somit and Peterson.
“Biopolicy, in simplest terms, is concerned with the relevance of biology and the life
sciences for public policy. This can take a number of forms. One is the relevance of evidence
in the life sciences that can help to inform policy decisions. For example, from an
evolutionary perspective, laws against prostitution are probably doomed to fail, given the
impelling urge of males to engage in sexual (reproductive) behavior (McGuire & Gruter,
2003). Another implication is that biology can affect the behavior of policy makers and, in
that manner, affect policy decisions. Finally, biotechnology can be a focus of policy making.
The development of medical information technology is a classic example. (Funke, 2009).”
References and Supplementary Materials
Online Supplementary Reading Materials
1. Biopolicy: The Life Sciences and Public Policy;
https://books.google.com.ph/books?id=K4dDlOiAu1kC&pg=PA205&lpg=PA205&dq=
biopolicy+pdf&source=bl&ots=hcMpB7bkU&sig=HDUW3HzBZnBRT8DXEUkm1AQuMh0&hl=en&sa=X&ved=0ahUKEwi5uWPq4bXAhXMXbwKHaxeAzYQ6AEIPzAF#v=onepage&q=biopolicy&f=false;
November 7, 2017
2. Biopolicy – A Vision for the Millennium;
http://www.globalecointegrity.net/docs/conferences/samos/presentations/Arvaniti
s.pdf; November 7, 2017
Module 003 – Science and Technology
This module will contain the following topics:
1. Science for Technology and Technology for Science
2. Sociopolitical Influence on Science
3. Technoscience and Intellectual Property Tussles
Science for Technology and Technology for Science
Our societies are dominated and even 'driven' by ideas and products from science and
technology (S&T). It is very likely that the influence of S&T on our lives will continue to
increase in the years to come. Scientific and technological knowledge, skills and artefacts
'invade' all realms of life in our modern society: the workplace and the public sphere are increasingly dependent on new as well as more established technologies, and so are the private sphere and our leisure time. Knowledge and skills in S&T are crucial for most of
our actions and decisions, as workers, as voters, as consumers etc. Meaningful and
independent participation in modern democracies assumes an ability to judge evidence
and arguments in the many socio-scientific issues that are on the political agenda.
In short, modern societies need people with S&T qualifications at the top level as well as a general public with a broad understanding of S&T contents and methods, and of S&T as a social force shaping the future. S&T are major cultural products of human history. All citizens, independent of occupational 'needs', need to be acquainted with this part of human culture. S&T are important for economic well-being, but also from the perspective of a broadly based liberal education.
One might expect that the increasing significance of S&T would be accompanied by a parallel growth in interest in these subjects, as well as an increasing understanding of basic scientific ideas and ways of thinking. This, however, does not seem to be the case.
The evidence for such claims is based in part on 'hard facts' (educational statistics etc.), in part on large comparative studies, and in part on research and analysis of trends in our societies. The situation is briefly described and analyzed in the following.
Who needs Science and Technology and Why?
The problematic situation for S&T can be seen from different perspectives and
different interests. These range from industry's concern about national, economical
competitiveness to a concern about an empowerment at the grassroots level for the
protection and conservation of nature. Different conceptions of 'the crisis' may
possibly lead to different solutions. Here is an indication of possible arguments for
learning S&T.
1. Industry needs people with high qualifications in S&T. Modern industry is high-tech and often called 'knowledge industry'. This industry needs highly qualified scientists and engineers to survive in a competitive global economy. This aspect is of importance for the economy of the nation. (But young people do not base their educational choices on what is good for the nation!)
2. Universities and research institutions similarly need researchers (and teachers) to maintain research at a high international level and to provide good learning opportunities for coming generations of experts, researchers and teachers.
The above-mentioned two groups constitute a highly skilled elite. But the actual
number of such people may not necessarily be very high. It would also be a mistake
to have mainly these groups in mind when reforming S&T in schools. A policy based
on this perspective could even further decrease the proportion of young people who
find S&T interesting, and who would choose to continue with S&T. The next
perspective is one of high importance for a much larger gro up, the teaching
profession:
3. Schools need qualified teachers in S&T.
The decline in recruitment has already hit the teaching profession. Well-qualified and enthusiastic teachers constitute the key to any improvement of S&T in schools -- and to the further development of the knowledge, interests and attitudes of ordinary citizens after they have left school. S&T teachers also play a key role in recruiting people to the S&T sector. The long-term effects of a lack of good S&T teachers could be very damaging, although the effects are not as immediately observable as a lack of qualified people in industry and research.
S&T teachers need a broad basis for their activities. A solid foundation in the academic discipline is important, but not enough. They need broader perspectives and skills in order to cope with challenges of the sort outlined earlier in this document. In short: S&T teachers need not only a foundation in S&T, they also need perspectives on S&T in a historical and social context. This may require reforms in teacher training.
The next points, although different, are of importance for more or less all citizens.
4. A broader labor market needs S&T competencies
People in general need qualifications in S&T to compete in the modern labor market. The need is great and growing fast, as knowledge and skills based on science and technology become prerequisites in new areas and new parts of the labor market. Not only doctors, pharmacists, engineers and technicians need S&T: health workers handle complicated and dangerous equipment, secretaries and office staff need good computer literacy, etc. New as well as more traditional technologies often dominate the workplace, and those with skills in these areas may have a competitive advantage in their further career. Many countries have also identified a need for people with S&T skills to replace those retiring in the near future.
There is also a general need to be flexible and able to learn. A foundation in S&T as well as mathematics is of great importance for developing such learning skills. Besides, most changes are likely to be related to technological innovations, and people with basic S&T skills may be better equipped to cope with changes and innovations.
5. S&T for citizenship and democratic participation:
As stated in the introduction, our modern society is dominated by S&T, and many aspects of life have a dimension related to S&T. All citizens are confronted with such issues as consumers and as voters. As consumers we have to make decisions about food and health, the quality and characteristics of products, claims made in advertisements etc. As voters we have to take a stand and be able to judge arguments on all sorts of issues. Many of these political issues also have an S&T dimension; in such cases, knowledge of the S&T involved has to be combined with values and political ideals. Issues relating to the environment are obviously of this nature, but issues relating to energy, traffic, health policy etc. also have S&T dimensions. It is indeed hard to think of any contemporary issue that does not have some aspects relating to S&T.
Social and political issues should not be seen as 'technical' and left in the hands of the 'expert'. A broad public understanding of science and technology may in fact be a democratic safeguard against 'scientism' and a domination by experts.
The above 'democratic argument' does not only assume that people have some grasp of the contents of S&T. It also requires some public understanding of the nature of S&T and the role they play in society. Among other things, people need to know that scientific knowledge is based on argumentation and evidence, and that statistical considerations about risks play an important role. Not everybody can become an 'expert', but everybody should have the tools to judge which 'experts' and what kinds of arguments to trust.
Science and Technology in schools – recent trends and responses
The challenges for S&T education outlined in this document have been met in
different ways. Many countries have introduced more or less radical reforms, and
there has been support for curriculum development and experiments. Reforms are
related to the content and framing of the curriculum as well as to pedagogies:
teaching methods and organization of the learning processes.
A general trend is that there seems to be less influence from the (traditional)
academic organization of curricula and contents. An underlying concern is that S&T
should contribute to more general aims of schooling in a situation where 'everybody'
attends school for 12-13 years. The general tendency is a widening of the perspective
and a gradual redefinition of what counts as valid school science. Social and ethical
aspects of S&T are often becoming part of the curriculum. The following is a listing of
some trends. Many are related, but still mentioned separately. Not all these trends
are found in all countries, but together they represent a series of identifiable
tendencies:
A. Towards "Science for all"
More weight on aspects of science that can be seen to contribute to the overall
goals of schooling. Key concern: liberal education ('allmenn dannelse',
'allmänn Bildning' Bildung, Formation..…) Hence; there is less weight on
traditional academic contents and science as mainly as preparation for
tertiary studies in science. Specialization postponed to the last few years of
school.
B. Towards more subject integration.
In the early years of schooling, S&T is usually more or less integrated with other school subjects. Only later are the sciences presented as separate disciplines. The level where this specialization starts varies between countries. It is a general trend that separate science subjects are taught only at a late stage (e.g. in Norway, only the last two years of upper secondary school have single science subjects).
C. Widening perspectives
More weight on cultural, historical and philosophical aspects of science and technology. S&T are presented as human activities. These aspects may also appeal to pupils who are in search of 'meaning', not only factual information and the accepted correct explanations.
D. NOS: The Nature of Science
The 'nature of science' has become an important concern in the curriculum. This often means a rejection of the stereotypical (and false) image of science as a simple search for objective and final truths based on unproblematic observations. The weight on recent understandings of the nature of science also implies a stress on the social, cultural and human aspects of science. Science is presented as knowledge that builds on evidence as well as arguments in a creative search for meaning and explanation. This aspect also strengthens the human and social relevance of science, and may attract pupils who value such aspects.
E. Contexts become important
More weight on putting science and technology in contexts that are meaningful for the learner. This often implies examples from everyday life and current socio-scientific issues. These themes or topics are by their nature interdisciplinary, and require teacher cooperation. Such issues often require methods like project work (for which teachers have to be adequately educated).
F. Concern for the environment
Towards more weight on environmental questions as part of the S&T curriculum. (The name of the S&T subject in the new Norwegian curriculum is "Science and environmental study".) Environmental issues are often of the socio-scientific nature mentioned above, and their treatment often requires project work in interdisciplinary settings.
G. Weight on Technology
Technology has recently been introduced in many countries as a subject in its own right, also in the general part of the education system. In other countries, it has received a broader place within the science curriculum, not only as interesting concrete examples to illustrate scientific theories and principles. (The name of the new S&T subject in Denmark is "Nature and technology".) The curricular definition of 'technology' is, however, often confusing and incoherent. In some countries (e.g. the UK), technology is placed in a context of 'design and technology'. In other countries the term implies modern information technology and ICT. In some places the stress is on the technical (and underlying scientific) aspects of technology; in others the weight is put on human relations to technology, on society and technology, etc.
H. STS: Science, Technology and Society
STS has become an acronym for a whole 'movement' within S&T education. The key concern is not only the science and technology content, but also the relationship between S&T and society. The trends described in the preceding points (relevant contexts, stress on the environment and the role of technology) can also be seen as belonging to an increasing STS perspective.
I. Inclusion of ethics
When S&T issues are treated in a wider context, it becomes evident that many
of the topics have ethical dimensions. This is of course the case when dealing
with socio-scientific issues. But ethics is also involved in discussions relating to
'pure' science, like what sorts of research one ought to prioritize (or even
allow), and the moral dilemmas in e.g. using animals in research. Again, this
ethical dimension may contribute to giving S&T a more human face. It is also
likely to empower future voters on important political issues on which they
are invited to take a stand.
J. "Less is more"
This has become a slogan for curriculum development. More weight is put on
'great stories' of S&T and on presentation of key ideas and their development,
often in an historical and social context. These key ideas replace the (impossible) attempt to give an encyclopaedic coverage of the whole of science. One hopes to avoid the curse of the overcrowded curriculum that
leaves so little time for reflection and search for meaning. By choosing
'typical' and important stories, one hopes to convey an understanding of the
nature of S&T. One also hopes to nourish curiosity and respect for S&T – and
to inspire some students to pursue S&T. 'Narratives' have become a key word
for this development.
K. Information technologies as subject matter and as tools
Information and communication technologies (ICT) are products that by their very definition 'belong' to the S&T sector. (The 'hardware' is science-based technology; the 'software' builds on basic mathematics etc.) Hence, the underlying physical and technical ideas are to an increasing extent treated as important subject matter in their own right in S&T curricula.
Besides, ICT provides new tools that are very suitable for teaching and learning in S&T. The whole range of 'ordinary' software is used, including databases, spreadsheets, and statistical and graphical programs. In addition, modelling, visualization and simulation of processes are important. ICT is also used for taking time series of measurements for a wide variety of parameters ('data logging'); a minimal sketch of this kind of use follows this list.
S&T subjects are likely to be key elements in strategies to develop ICT into a better educational tool. It is also likely that S&T teachers are better educationally equipped for this task than most other teachers -- although they too need ways to be updated and retrained.
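As a concrete illustration of the 'data logging' use mentioned above, the following is a minimal Python sketch, not part of any particular curriculum: it assumes a hypothetical two-column CSV file (time, value) named temperature_log.csv, and simply loads and summarizes the logged series.

import csv
import statistics

def load_series(path):
    """Read (time, value) pairs from a two-column CSV file."""
    times, values = [], []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            times.append(float(row[0]))   # elapsed time, e.g. in seconds
            values.append(float(row[1]))  # measured value, e.g. temperature
    return times, values

# 'temperature_log.csv' is a hypothetical log file produced by a classroom sensor.
times, temps = load_series("temperature_log.csv")
print(f"{len(temps)} samples over {times[-1] - times[0]:.1f} s")
print(f"mean = {statistics.mean(temps):.2f}, stdev = {statistics.stdev(temps):.2f}")

Pupils could produce the same summary in a spreadsheet; the point of the sketch is only that logged time series are ordinary data that standard tools can read and summarize.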
Cultural Influence in Science
THE JOY OF SCIENCE.
For most scientists, a powerful psychological motivation is curiosity about "how things
work" and a taste for intellectual stimulation. The joy of scientific discovery is captured in
the following excerpts from letters between two scientists involved in the development of
quantum mechanics: Max Planck (who opened the quantum era in 1900) and Erwin
Schrodinger (who formulated a successful quantum theory in 1926).
[Planck, in a letter to Schrodinger, says] "I am reading your paper in the way a curious child
eagerly listens to the solution of a riddle with which he has struggled for a long time, and I
rejoice over the beauties that my eye discovers." [Schrodinger replies by agreeing that]
"everything resolves itself with unbelievable simplicity and unbelievable beauty,
everything turns out exactly as one would wish, in a perfectly straightforward manner, all
by itself and without forcing."
OTHER PSYCHOLOGICAL MOTIVES and PRACTICAL CONCERNS
Most scientists try to achieve personal satisfaction and professional success by forming
intellectual alliances with colleagues and by seeking respect and rewards, status and power
in the form of publications, grant money, employment, promotions, and honors.
When a theory (or a request for research funding) is evaluated, most scientists will be
influenced by the common-sense question, "How will the result of this evaluation affect my
own personal and professional life?"
Maybe a scientist has publicly taken sides on an issue and there is ego involvement with a
competitive desire to "win the debate"; or time and money have been invested in a theory or
research project, and there will be higher payoffs, both practical and psychological, if there
is a favorable evaluation by the scientific community. In these situations, when there is a
substantial investment of personal resources, many scientists will try to use logic and
"authority" to influence the process and result of evaluation.
IDEOLOGICAL PRINCIPLES are based on subjective values and on political goals for "the
way things should be" in society. These principles span a wide range of concerns, including
socioeconomic structures, race relations, gender issues, social philosophies and customs,
religions, morality, equality, freedom, and justice.
A dramatic example of political influence is the control of Russian biology, from the 1930s
into the 1960s, by the "ideologically correct" theories and research programs of Lysenko,
supported by the power of the Soviet government.
OPINIONS OF "AUTHORITIES" can also influence evaluation. The quotation marks are a
reminder that a perception of authority is in the eye of the beholder. Perceived authority
can be due to an acknowledgment of expertise, a response to a dominant personality,
and/or involvement in a power relationship. Authority that is based at least partly on
power occurs in scientists' relationships with employers, tenure committees, cliques of
colleagues, professional organizations, journal editors and referees, publishers, grant
reviewers, and politicians who vote on funding for science.
SOCIAL-INSTITUTIONAL CONTEXTS. These five factors (psychology, practicality,
metaphysics, ideology, authority) interact with each other, and they develop and operate in
a complex social context at many levels — in the lives of individuals, in the scientific
community, and in society as a whole. In an attempt to describe this complexity, the
analysis-and-synthesis framework of ISM includes: the characteristics of individuals and
their interactions with each other and with a variety of groups (familial, recreational,
professional, political); profession-related politics (occurring primarily within the scientific
community) and societal politics (involving broader issues in society); and the institutional
structures of science and society.
The term "cultural-personal" implies that both cultural and personal levels are
important. These levels are intimately connected by mutual interactions because
individuals (with their motivations, concerns, worldviews, and principles) work and think
in the context of a culture, and this culture (including its institutional structure, operations,
and politics, and its shared concepts and habits of thinking) is constructed by and
composed of individual persons.
Cultural-personal factors are influenced by the social and institutional context that
constitutes the reward system of a scientific community. In fact, in many ways this context
can be considered a causal mechanism that is partially responsible for producing the
factors. For example, a desire for respect is intrinsic in humans, existing independently of a
particular social structure, but the situations that stimulate this desire (and the responses
that are motivated by these situations) do depend on the social structure. An important
aspect of a social-institutional structure is its effects on the ways in which authority is
created and manifested, especially when power relationships are involved.
What are the results of mutual interactions between science and society? How does
science affect culture, and how does culture affect science?
SCIENCE AFFECTS CULTURE.
The most obvious effect of science has been its medical and technological applications, with
the accompanying effects on health care, lifestyles, and social structures. But science also
influences culture, in many modern societies, by playing a major role in shaping cultural
worldviews, concepts, and thinking patterns. Sometimes this occurs by the gradual,
unorchestrated diffusion of ideas from science into the culture. At other times, however,
there is a conscious effort, by scientists or nonscientists, to use "the authority of science"
for rhetorical purposes, to claim that scientific theories and evidence support a particular
belief system or political program.
CULTURE AFFECTS SCIENCE.
ISM, which is mainly concerned with the operation of science, asks "How does culture affect
science?" Some influence occurs as a result of manipulating the "science affects culture"
influence described above. If society wants to obtain certain types of science-based
medical or technological applications, this will influence the types of scientific research that
society supports with its resources. And if scientists (or their financial supporters) have
already accepted some cultural concepts, such as metaphysical and/or ideological theories,
they will tend to prefer (and support) scientific theories that agree with these cultural-personal theories. In the ISM diagram this influence appears as a conceptual
factor, external relationships with cultural-personal theories. For example, the Soviet
government supported the science of Lysenko because his theories and research supported
the principles of Marxism. They also hoped that this science would increase their own
political power, so their support of Lysenko contained a strong element of self-interest.
PERSONAL CONSISTENCY.
Some cultural-personal influence occurs due to a desire for personal consistency in life. According to the theory of cognitive dissonance (Festinger, 1957), if there is a conflict between ideas, between actions, or between thoughts and actions, this inconsistency produces an unpleasant dissonance, and a person will be motivated to take action aimed at reducing the dissonance. In the overall context of a scientist's life, which includes science and much more, a scientist will seek consistency between the science and non-science aspects of life.
Because groups are formed by people, the principles of personal consistency can be
extrapolated (with appropriate modifications, and with caution) beyond individuals to
other levels of social structure, to groups that are small or large, including societies and
governments. For example, during the period when the research program of Lysenko
dominated Russian biology, the Soviets wanted consistency between their ideological
beliefs and scientific beliefs. A consistency between ideology and science will reduce
psychological dissonance, and it is also logically preferable. If a Marxist theory and a
scientific theory are both true, these theories should agree with each other. If the theories
of Marx are believed to be true, there tends to be a decrease in logical status for all theories
that are inconsistent with Marx, and an increase in status for theories consistent with
Marx. This logical principle, applied to psychology, forms the foundation for theories of
cognitive dissonance, which therefore also predict an increase in the status of Lysenko's
science in the context of Soviet politics.
Usually scientists (and others) want theories to be not just plausible, but also useful. With
Lysenko's biology, the Soviets hoped that attaining consistency between science policy and
the principles of communism would produce increased problem-solving utility. Part of this
hope was that Lysenko's theories, applied to agricultural policy, would increase the Russian
food supply; but nature did not cooperate with the false theories, so this policy resulted in
decreased productivity. Another assumption was that the Soviet political policies would gain popular support if there was a belief that these policies were based on (and were consistent with) reliable scientific principles. And if science "plays a major role in shaping cultural...thinking patterns," the government wanted to ensure that a shaping-of-ideas by science would support its ideological principles and political policies. The government officials also wanted to maintain and increase their own power, so self-interest was another motivating factor.
FEEDBACK.
In the ISM diagram, three large arrows point toward "evaluation of theory" from the three
evaluation factors, and three small arrows point back the other way. These small arrows
show the feedback that occurs when a conclusion about theory status already has been
reached based on some factors and, to minimize cognitive dissonance, there is a tendency
to interpret other factors in a way that will support this conclusion. Therefore, each
evaluation criterion is affected by feedback from the current status of the theory and from
the other two criteria.
THOUGHT STYLES.
In the case of Lysenko there was an obvious, consciously planned interference with the
operation of science. But cultural influence is usually not so obvious. A more subtle
influence is exerted by the assumed ideas and values of a culture (especially the culture of a
scientific community) because these assumptions, along with explicitly formulated ideas
and values, form a foundation for the way scientists think when they generate and evaluate
theories, and plan their research programs. The influence of these foundational ideas and
values, on the process and content of science, is summarized at the top of the ISM diagram:
"Scientific activities...are affected by culturally influenced thought styles
OVER-GENERALIZING.
When scholars think about cultural-personal factors and their influence in science, they too often over-generalize. It is easy to get carried away into silly ideas unless we remember that all of these cultural-personal factors vary in different areas of science, in communities within each area, and for different individuals, so the types and amounts of the resulting influences (on the process of science and the content of science) vary widely.
CONTROVERSY.
Among scholars who study science there is a wide range of views about the extent to which
cultural factors influence the process and content of science. An extreme emphasis on
cultural influence is neither accurate nor educationally beneficial; even though
there is a significant cultural influence on the process of science, usually (but not always)
the content of science is not strongly affected by cultural factors.
Technoscience
Technoscience refers to the strong interactions in contemporary scientific research and development (R&D) between what was traditionally separated, especially by philosophers, into science (theoretical) and technology (practical). The emphasis that the term techno(-)science places on technology, as well as the intensity of the connection between science and technology, varies. Moreover, the majority of scientists and philosophers of science continue to externalize technology as applications and consequences of scientific progress. Nevertheless, they recognize the success and efficiency of technology as promoting the realism, objectivity, and universality of science.
The prehistory of the concept of technoscience goes back at least to the beginning of
modern science. Francis Bacon (1561–1626) explicitly associated knowledge and power;
science provided knowledge of the effective causes of phenomena and thus the capacity for
efficient intervention within them. The concept became clearer during the first half of the
twentieth century. Gaston Bachelard (1884–1962) in Le nouvel esprit scientifique (1934;
The new scientific spirit) places the new scientific spirit under the preponderant influence
of the mathematical and technical operations, and utilizes the expression science
technique to designate contemporary science. However, the term techno(-)science itself was
not coined until the 1970s.
The History of Techno(-)science
The first important occurrence of the term appears in the title of an article titled
"Ethique et techno-science" by Gilbert Hottois, first published in 1978 (included in
Hottois 1996). This first usage expresses a critical reaction against the theoretical
and discursive conception of contemporary science, and against philosophy blind to
the importance of technology. It associates technoscience with the ethical question,
“What are we to make of human beings?” posed from an evolutionist perspective
open to technical intervention.
Throughout the 1980s, two French philosophers, Jean-François Lyotard and Bruno
Latour, contributed to the diffusion of the term in France and North America. For
Lyotard technoscience realizes the modern project of rendering the human being, as
argued from the work of René Descartes (1596–1650), a master and possessor of
nature. This project has become technocratic and should be denounced because of
its political association with capitalism. As a promoter of the postmodern, Lyotard
thus facilitates diffusion of the term within postmodern discussions.
In Science in Action (1987), Latour utilizes the plural technosciences in order to
underline his empirical and sociological approach. The technosciences refer to those
sciences created by human beings in real-world socioeconomic-political contexts, by
conflicts and alliances among humans and also among humans and non -humans
(institutions, machines, and animals among others). Latour insists on networks and
hybrid mixtures. He denounces the myth of a pure science, distinct from technologies
susceptible to good and bad usages. In reality it is less technology that Latour
internalizes in the idea of science than society (and therefore politics), of which
technologies are part in the same ways as other artifacts. He rejects any
philosophical idea, whether ancient or modern, of a science that is supra- or extra-social and apolitical. The worldwide successes of the technosciences are a matter of
political organization and will, and do not derive from some universal recognition of
a rational and objectively true knowledge that progressively imposes itself. Latour
has contributed to the success of the term technoscience in social-constructivist
discussion since the 1990s.
The work of Donna Haraway illustrates well the diffusion of technoscience crossed
with the postmodern and social-constructivist discussions in North America.
Technoscience becomes the word-symbol of the contemporary tangle of processes
and interactions. The basic ingredients are the sciences, technologies, and societies.
These allow the inclusion of everything: from purely symbolic practices to the
physical processes of nature in worldwide networks, productions, and exchanges.
In France, in continental Europe, and in the countries of Latin America, the use of
the term technoscience has often remained closer to its original meaning that
involves more ontological (as with German philosopher Martin Heidegger (1889 –
1976)), epistemological, and ethical questioning than social and political criticism.
Indeed, in a perspective that complements the one provided here, in La revolución
tecnocientífica (2003; The technoscience revolution), Spanish philosopher Javier
Echeverría provides an extensive analysis of technoscience as both concept and
phenomenon. A political usage is not, however, rare, especially in France, where there is a tendency to attribute to technoscience a host of contemporary ills such as technicism and technocracy, multinational capitalism, economic neo-liberalism, pollution, the depletion of natural resources, climate change, globalization, planetary injustice, the disappearance of human values, and more, all related to U.S.
imperialism. The common archetype of technoscience is Big Science, originally
exemplified by the Manhattan Project, which closely associated science, technology,
and the politics of power. In this interpretation, technoscience is presented from the point of view of domination, mastery, and control, and not from that of exploration, research, and creativity. It is technocratic and totalitarian, not technopoietic and emancipating.
The Questions of Technoscience
What distinguishes contemporary science as technoscience is that, unlike the
philosophical enterprise of science identified as a fundamentally linguistic and
theoretical activity, it is physically manipulative, interventionist, and creative.
Determining the function of a gene, whether in order to create a medicine or to participate in the sequencing of the human genome, leads to technoscientific knowledge-power-doing. In a technoscientific civilization, distinctions between
theory and practice, fundamental and applied, become blurred. Philosophers are
invited to define human death or birth, taking into account the consequences of
these definitions in the practical-ethical plans, that is to say, in regard to what will or
will not be permitted (for example, the harvesting of organs or embryonic
experimentation).
Another example is familiar to bioethicists. Since the 1980s there has existed a line of transgenic mice (OncoMice) used as a model for research on the genesis of certain cancers. Here is an object at once natural and artificial, theoretical and practical, abstract and concrete, living and yet patented like an invention.
existence and use in research further involves many different cognitive and practical
scientific questions and interests: therapeutic, economic, ethical, and juridical. It is
even a political issue, because transgenic mice are at the center of a conflict between
the European Union and the United States over the patentability of living organisms.
The most radical questions raised by the technosciences concern their application to the human being as at once natural (a living organism formed by the evolutionary process) and manipulated (a contingent creation of human culture). Such questions
acquire their greatest importance when one takes into account the past and future
(unknowable) immensity of biological, geological, and cosmological temporality, in
asking, for example: What will become of the human being in a million years? From
this perspective the investigation of human beings appears open not only to
symbolic invention (definitions, images, interpretations, values), but also to techno-physical invention (experimentation, mutations, prosthetics, cyborgs). A related
examination places the technosciences themselves within the scope of an evolution
that is more and more affected by conscious human intervention. Both approaches
raise questions and responsibilities that are not foreign to ethics and politics but
that invite us at the same time to consider with a critical eye all specific ethics and
politics because the issues exceed all conceivable societal projects.
References and Supplementary Materials
Online Supplementary Reading Materials
1. Science and Technology in Education – Current Challenges and Possible Solutions; http://www.iuma.ulpgc.es/users/nunez/sjobergreportsciencetech.pdf; November 7, 2017
2. Technoscience; http://www.encyclopedia.com/science/encyclopedias-almanacs-transcripts-and-maps/technoscience; November 7, 2017
3. Cultural Influence in Science: Causes and Effects; http://www.asa3.org/ASA/education/science/cp2.htm; November 7, 2017
4. Science and Society; https://undsci.berkeley.edu/article/scienceandsociety_01; November 7, 2017
5. Social Impact/Activism; https://www.acs.org/content/acs/en/careers/college-to-career/chemistry-careers/social-impact.html; November 7, 2017
6. Impact of Science and Technology on Society and Economy; http://www.worldacademy.org/trieste-forum/march-2013; November 7, 2017
Formation of Scientific Knowledge
This module starts with the stages toward the acquisition of scientific
knowledge and ends with responsibilities of scientists to the society. The
module covers the experimentation on animals and humans and its
ethicality, the application of science to technology and of technology to
science, the sociopolitical influence on science and the intellectual property
dispute, biopolicy, conflicts in scientific study, and research data recording.
Read through the text more than once in order to grasp the key details of the lesson.
Facets of Science
Science may be defined as a “body of organized knowledge” that has
been accumulated through research and that serves as a tool for solving
problems, a learning theme, a cultural resource, or a social enterprise which
needs physical facilities (Ziman, 1985). In this regard, scientific
investigations are geared towards obtaining new information for short-term,
long-term, immediate or future use in various fields, including economics,
agriculture, industries, and education, or for publication in scholarly journals,
encyclopedias, reference books and textbooks, and so forth, across several
areas of study, for example, biology, psychology, geology, chemistry or
physics.
As presented below, principles or concepts are used to explain or describe the features or aspects of science. These are discovery science, academic science, industrial science, science as a social enterprise, and science as a cultural resource. Thus, science is multifaceted, for it is an engagement with investigated information and phenomena in the context of society, education, economics, industries, politics, and culture.
1. Discovery science
The formation of scientific knowledge starts from the works of scientists that lead to the discovery of novel (new) information explaining or describing a phenomenon. Through systematic methods, the data relating to the discovery are rigorously examined for validity prior to publication as historical knowledge for addressing a corresponding economic, social, or political problem, issue, or necessity.
2. Academic science
From the territory of science, scientific knowledge is passed on to the
world of technology.
3. Industrial science
Technology that emerges from scientific knowledge serves as an
instrument to solve practical problems in areas of sociology, military,
commerce, or industry.
4. Science as a social enterprise/institution
Scientists have a social responsibility in their quest for novel knowledge, while society interacts with science and becomes increasingly concerned about its impact on both society and culture. In this regard, the community of scientists communicates in order to arrive at a consensus of opinion as to the validity or truthfulness of the published outcomes of scientific investigations, while the community of learners examines such publications through a variety of media, such as books and scholarly journals.
5. Science as a cultural resource
Scientific knowledge influences cultural beliefs and values.
Dimensions of Science
1. Cognitive/Philosophical Dimension
Scientific knowledge is spread, for instance, through scholarly
publication, which brings about the historical dimension, as the pieces of
scientific knowledge are stored and organized in an archive to serve as a
bridge to future discoveries.
2. Sociological/Communal Dimension
Scientific knowledge is addressed to a specific segment of society, for
example, the scientific researchers.
3. Psychological Dimension
The scientific information has a psychological relevance to its author or discoverer, who has intellectual authority over the information and who deserves the recognition for bringing about novel knowledge, which is related to the cognitive status of the research outcome that the information presents.
Research Toward Scientific Information
The stages of obtaining scientific knowledge are:
1. Describing the natural or physical world or event through expert
observation
2. Making generalizations about an observed phenomenon
3. Examining patterns of facts derived from observation
4. Using research instruments to measure and interpret data collected from
investigation
5. Conducting a purposeful, contrived (designed), empirical (real-world or
experience-based), and relatively original experiment
6. Formulating scientific laws or rational (logical) generalizations based on
the outcome of the experimentation
7. Presenting an explanation for the formulated scientific laws or rational
generalizations, which can be (a) a cause-and-effect relationship, (b) a model
for the investigated phenomenon, or (c) a theory
8. Subjecting the rational generalizations or scientific laws to investigation
and review by other members of the scientific community for evaluation
9. Interpolating over the evaluated information for consideration as a scientific frontier (fresh discovery) or as a support or addition to already established or widely accepted knowledge
10. Acknowledging the verity (trueness) of the scientific knowledge
Glossary
Science – a body of organized knowledge that has been accumulated through research and that serves as a tool for solving problems, a learning theme, a cultural resource, and a social institution, which needs physical facilities (Ziman, 1985).
Phenomenon (pl. phenomena) – a situation or event that can be perceived by the senses (http://www.macmillandictionary.com/us/dictionary/american/phenomenon).
Model – a descriptive statement of how something works (http://www.macmillandictionary.com/us/dictionary/american/model_1#model_1__18).
Theory – an explanation of why or how something occurs; a set of principles
on which a particular subject or occurrence is based.
(http://www.macmillandictionary.com/us/dictionary/american/theory).
Cognitive – recognition and comprehension of things.
(http://www.macmillandictionary.com/us/dictionary/american/cognitive).
References
Lecture Reference:
Ziman, J. (1985). An Introduction to Science Studies: The Philosophical and
Social Aspects of Science and Technology. NY: Cambridge University Press.
PDF.
Reading Activity Reference:
Kramer, D. (2015). Reducing carbon: a bacterial approach. Bio 2.0. Scitable. Nature Education. Retrieved from http://www.nature.com/scitable/blog/bio2.0/reducing_carbon_a_bacterial_approach
Reading Assignment Reference:
Norrgard, K. (2008). Human Subjects and Diagnostic Genetic Testing. Nature Education. Retrieved from http://www.nature.com/scitable/topicpage/human-subjects-and-diagnostic-genetic-testing-720
Biomedical Experimentation with Animals
Sociopolitical Foundation
Biomedical experimentation using animals as subjects has made breakthroughs in understanding the functions of body organs and in formulating medicinal drugs for treating various disorders. However, since the 1800s, the involvement of animals in the study of the anatomy and physiology of both animals and humans and in the development of therapeutic drugs has been the subject of criticism from animal rights activists, who were then called antivivisectionists. Peter Singer (Princeton faculty member) and Tom Regan (North Carolina State University emeritus professor) are still considered the two most influential animal rights philosophers. Of much influence on the ethical and legal foundations of biomedical research using human subjects were the 10 principles listed in the Nuremberg Code of the late 1940s. The third principle of the Code validated the use of animals for biomedical experimentation, whereby the anticipated outcomes of the biomedical research should justify the experimentation with animals.
Legislation/Regulation
Regardless of the consensus concerning the use, or the criticism of the use, of animals in biomedical research, the US has undergone a series of legislative and regulatory actions on animal research, as shown in Table 1.
Table 1. Brief history of US legislation/regulation of animal use in research
1960 Federal legislation requiring individual animal researchers to be licensed was proposed, owing to the initiatives of the Animal Welfare Institute.
1963 The Guide for the Care and Use of Laboratory Animals (shortened to the
Guide) was published by the US National Institutes of Health (NIH). The
Guide was revised several times from 1965 to 1996.
1966 The Laboratory Animal Welfare Act was enacted, owing to the public clamor over an article in Life magazine. The legislation underwent a series of amendments from 1970 to 1985, and is presently termed the Animal Welfare Act (AWA).
1985 NIH was required, through the Health Research Extension Act of 1985,
to establish guidelines concerning the use of animals in both biomedical
and behavioral research.
1986 The NIH Office of Protection from Research Risks published the Public Health Service (PHS) policy on the Humane Care and Use of Laboratory Animals, whereby PHS laboratories (as well as any other institution requesting funding from PHS) must abide by the PHS policy and the Guide.
2010 The US National Academy of Sciences published the 8th edition of the
Guide. Such publication signaled the wide acceptance of the Guide by the
US and international animal research institutions.
The Guide for the Care and Use of Laboratory Animals (shortened to the Guide) serves as a significant document for both the scientific community and animal care personnel for the following reasons:
(1) The Guide provides guidelines concerning the way in which animal research should be done, including recommendations for overseeing the welfare of animals, such as veterinary care and the management of housing and environmental facilities.
(2) The Guide mandates numerous institutional policies that animal
researchers should follow as to the screening and training of the
professional animal care personnel and as to the protection of the staff
who come into contact with the animal subjects.
(3) The Guide addresses the appropriateness of the physical environment
where the experimental animals stay, including ventilation and
temperature conditions, as well as the actual place where animals are
experimented upon.
The Guide requires each research institution to have an Institutional Animal Care and Use Committee (IACUC) with a minimum of three members, who are responsible for the welfare of the animals used in research and who should evaluate the living conditions of the animals and the research protocols for approval. The members must include a doctor of veterinary medicine (DVM), who should oversee all aspects of animal care, one practicing scientist, and at least one non-affiliated member.
The AWA obliges each research institution to have an IACUC with a minimum of three members. The members must include one DVM and at least one non-affiliated member.
The PHS policy mandates an IACUC with a minimum of five members. The members must include one DVM, one practicing scientist, one non-scientist, and at least one non-affiliated member. A minimal sketch contrasting these three sets of requirements follows.
Ethical Guidelines
As proposed by William Russell and Rex Burch in 1959, animal research institutions should conform to the three principles (3 R's) concerning the humane use of animals for biomedical experimentation. These principles are:
(1) Replacement – refers to the use of lower species of animals as much as possible, as lower species are viewed as less susceptible to pain and distress compared to higher species of animals, such as chimpanzees.
(2) Reduction – refers to the reduction of the number of animals to be used
for experimentation as much as possible.
(3) Refinement – refers to the minimization of frequency or degree of pain
and distress that animal subjects experience in experiments.
Animal Rights Movement
One of the staunchest defenders of animal welfare is People for the Ethical Treatment of Animals (PETA). Although numerous animal rights activists appear to be sincere in their advocacy, some of them have resorted to violence to discourage the scientific community from using animals as experimental subjects. In this regard, the US enacted the Animal Enterprise Terrorism Act in 2006 to protect researchers from acts of violence perpetrated by groups of anti-animal research militants.
Biomedical Experimentation with Humans
Sociopolitical Foundation
The 10 Nuremberg Principles (Nuremberg Code) served as the ethical and legal foundation for later guidelines concerning the use of human subjects in biomedical research, of which the most notable is the Declaration of Helsinki. The statements in the Code upheld the protection of human subjects, the analysis of risk as weighed against the benefit of the experiments, the performance of experiments only by scientists, the right of human subjects to withdraw from the experiment anytime they wish, and the initiative of the researchers to halt the experimentation upon anticipated injury, disability or death of the human subject in the course of the experiment. The Code stemmed from the trials in Nuremberg concerning crimes committed in World War II, which prosecuted those involved in experimentation on humans without the willingness or permission of the human subjects.
International Regulation
The Declaration of Helsinki was formalized in 1964 by the World
Medical Association (WMA) in Helsinki, Finland. Containing guidelines
concerning the humane use of humans in biomedical research, the document has become the international standard for biomedical experimentation with humans. Since then, it underwent a series of amendments until 2013.
American Initiative
In relation to the Declaration of Helsinki, the US PHS issued a memo two years after the Helsinki conference, specifying the first requisites for institutional review boards (IRBs). The memo required that research studies to be funded by PHS be subjected to independent review examining the rights and welfare of study participants, the accuracy of processing the informed consent, and the possible benefits and risks of the biomedical research to be conducted.
In 1979, the National Commission for the Protection of Human
Subjects of Biomedical and Behavioral Research presented the Belmont
Report, which contains three basic ethical considerations in using humans as
subjects for research. Generally accepted by IRBs, the three principles are:
(1) Respect for Persons – requires that research subjects be capable of making their own decisions.
(2) Beneficence – requires that the risk to human subjects be minimized and
that the benefits of conducting the research be maximized.
(3) Justice – requires that the burden on human subjects be equally
distributed and not merely concentrated on an individual or a single
group of individuals.
Glossary
Anatomy – parts of the animal or human body; plant structure
(http://www.macmillandictionary.com/us/dictionary/american/anatomy).
Physiology – the study of the functioning or operation of bodily parts of living things (http://www.macmillandictionary.com/us/dictionary/american/physiology).
Biomedical – pertaining to biomedicine.
Biomedicine – the application of the principles of biology/biochemistry to the field of medicine (http://www.macmillandictionary.com/us/dictionary/american/biomedicine).
References
Lecture Reference:
Macrina, F. (2014). Scientific Integrity: Text and Cases in Responsible Conduct
of Research (4th ed.). Washington, DC: ASM Press.
Reading Activity Reference:
Norrgard, K. (2008). Human Subjects and Diagnostic Genetic Testing. Nature Education. Retrieved from http://www.nature.com/scitable/topicpage/human-subjects-and-diagnostic-genetic-testing-720