
Philosophy of Science - Notes for curriculum texts

Curriculum texts
Cross references from curriculum
Lecture 1: Paradigms
  Ritchie 2020: View on science
    Ritchie paradigm
  Watts 2011: Why social science is hard!
    Watts paradigm
Lecture 2: Correlation does not equal causation, experiments
  Isager 2023: Correlation and causation
    The scientific method
    Paradigm Isager
  BRM ch. 3 - Experiments
  Bergenholtz 2024a: Setting up randomized controlled trials (RCTs)
Lecture 3: Quasi experiments and research designs
  Bergenholtz 2024b - Quasi experiments
  BRM Ch. 3 - Research design
    Cross sectional design
    Longitudinal design
    Case study
    Comparative design
    Levels of analysis
    Table: Research strategy and research design
  AMJ 2011 - Research designs
Lecture 5: Spring exam case (No Text)
Lecture 6: Essence of theory
  Bergenholtz 2024c - What is theory?
    Causal explanations v. Understanding explanations
  NYT 2016 - Not just a theory
Lecture 7: Popper, Scientific change, tacit knowledge
  ABH 2018: Critical rationalism of Popper
    Paradigm
    Why is Popper not post-positivistic
  Kuhn 1962: Tacit knowledge and intuition
  Popper 1962
    Summary: Popper 1962
    Paradigms: Popper 1962
  POS chap. 5: Scientific change and revolutions
    Incommensurability
    Theory ladenness (of data)
Lecture 8: Differences of science
  Nelson 2016: Differences of science and why they matter
    Physics as a model science
    Differences between physics and other sciences
Lecture 9: Paradigms
  Guba 1990: Alternative paradigm
    Paradigm meaning
    Guba paradigm preference
    Ontology, Epistemology, Methodology
    Positivism
    Post-Positivism
    Constructivism
    Guba 1990 ch2 snippet summary
Lecture 10: Thinking fast and slow
  Kahneman Ch 1 & 7, Thinking fast and slow
Lecture 11: Science is not broken and Meta analysis
  Aschwanden 2015: Science isn't broken
  Harrer et al. 2021: Meta analysis
    Meta-analysis pitfalls
Lecture 12: Research misconduct
  DBORM 2020: Danish board on research misconduct
    Research misconduct and questionable practice
    Task of DBORM
    Board jurisdiction and scope
    Board organization
    Procedures for investigation
    The decision
  Data Colada 2023
  Piper 2023: Can you trust a Harvard dishonesty researcher?
  Carter 2000: Plagiarism in academic writing
Lecture 13: Complexity, credibility crisis, 21st century science
  Sullivan 2011: Embracing complexity
  Watts 2007: Twenty-first century science
  Feldman-Barrett 2021: Psychology in a crisis
Cross references from curriculum:
Kahneman 2011 → System 1 dominance, “quick-thinking”
Watts (2011) argues that we often after-rationalize concerning our answers and the information we are presented with. In turn, this means that we as humans may have a tendency to reply "I already knew that", "That would also be my guess", or similar to things that we find familiar, despite this not being the case.
This relates to Kahneman (2011), as he argues that humans are subject to system 1 thinking, where this process of thought leads us to such biases. Hence, there may be a fallacy of "only seeing the world that actually happened".
Watts 2007 → Difficulty of Social Science, due to Complexity
Watts (2007) explains the background for the Watts (2011) article. The article discusses why social
science is difficult, due to its complexity. These dwell upon:
• social networks are not static structures
• they are not unitary, but multiplex, which serves different functions
• must be understood within the larger framework of collective social dynamics
Hence, there are feedback systems in place and interdependency, which increasingly makes the social science field difficult.
Nelson 2016 → Complexity and Change over time of Social Science
Both Watts (2011) and Nelson (2016) discuss the complexity of social science and why it cannot be
compared to physics and other natural sciences.
Nelson (2016) argues that the subject matter makes the difference, as the subjects are very different. In social science there is a high degree of heterogeneity, whereas there may be more homogeneity in the natural sciences; this can increase the complexity, due to higher ambiguity in social phenomena. For instance, what motivates you may be hard to answer and may differ from person to person. Furthermore, there are numerous, highly variable, and inseparable influences on the variables that we try to study within the social sciences, which may increase the complexity too. Lastly, the subjects change over time and do not stay constant - for instance, what motivates someone now may not work later in life.
Barrett-Feldmann 2021 → Complexity due to non-obviousness of Social Science
Watts (2011) argues that social science is not as obvious as one might think, and that it actually consists of a complex environment. This is highly related to Barrett-Feldmann (2021), as she argues for a move away from a "mechanistic mindset" to a "complexity mindset", as the latter takes into account the complex environment of social science and the interdependency of the variables constituting it.
Watts 2011 → Complexity of Social Science
Both Watts (2011) and Nelson (2016) discuss the complexity of social science and why it cannot be
compared to physics and other natural sciences.
Nelson (2016) argues that the subject matter makes the difference, as the subjects are very different. In social science there is a high degree of heterogeneity, whereas there may be more homogeneity in the natural sciences; this can increase the complexity, due to higher ambiguity in social phenomena. For instance, what motivates you may be hard to answer and may differ from person to person. Furthermore, there are numerous, highly variable, and inseparable influences on the variables that we try to study within the social sciences, which may increase the complexity too. Lastly, the subjects change over time and do not stay constant - for instance, what motivates someone now may not work later in life. These considerations are evident in the following quote: “The issue here partly is that
individual members of a heterogeneous class can respond to the same influences in very different
ways.” (Nelson, p. 7, 2016).
Watts 2007 → Interdependency between subjects and variables causing complexity
Nelson (2016) argues that, among other things, social science differs from the natural sciences, as subjects in social science like humans, organizations and similar are influenced by numerous, highly variable, and often inseparable conditions. This fits well with what Watts (2007) argues for, namely social science being very complex due to its high interdependency between subjects and variables. This is
for instance seen by the quote “It is hard to understand, for example, why even a single
organization behaves the way it does without considering (a) the individuals who work in it; (b) the
other organizations with which it competes, cooperates and compares itself to; (c) the institutional
and regulatory structure within which it operates; and (d) the interactions between all these
components.” (Watts 2007). The interactions and interdependency affect more than merely the subject in focus.
Guba 1990 → Nelson would according to Guba be a (social) constructivist
Using Guba (1990), Nelson could be characterized as a (social) constructivist, as many of his views are in line with researchers making subjective decisions. Therefore, the epistemology is characterized by subjectivity, where there is interaction between what is studied and the researchers themselves, as argued by Guba (1990). Further, the ontology is more relativist, as the findings depend on the methods used, where one may not be more correct than another.
Aschwanden 2015 → Social science is hard and displays complexity
Aschwanden (2015) argues that social science is hard and displays its complexity - which Nelson (2016) agrees with - by showing how choices of different variables in an election or in football can affect the outcome of a study. Hence, Aschwanden (2015) actually displays the heterogeneity and the various decisions that Nelson (2016) argues are being made. Choices, assumptions and boundaries must be set to investigate whether the variables have influence, referring to the election and football examples.
Barrett-Feldmann 2021 → Going from mechanistic mindset to a complexity mindset
Nelson (2016) discusses the complexity of social science and why it cannot be compared to physics
and other natural sciences. This in turn also relates to Barrett-Feldmann (2021), who points out how we increasingly need to adopt a complexity mindset instead of a mechanistic mindset. This is thus a move away from merely considering a few strong causal factors and towards considering many weaker but interacting factors. Thereby, Nelson (2016) and Barrett-Feldmann (2021) agree on the complexity, among other things due to interdependency.
Guba 1990 → According to Guba, Kahneman would be a constructionist
Using terminology from Guba (1990), one could argue that Kahneman (2011) is constructionist, based on how he describes system 1 and system 2 as not existing in physical places in the brain or as "real" processes, but rather as metaphors describing the two ways in which the brain functions. Therefore, the systems are constructed concepts.
Aschwanden 2015 → Researchers make assumptions and choices to reach results
Kahneman (2011) argues that people are subject to biases when using system 1. In turn, this relates to Aschwanden (2015) arguing that researchers make choices and assumptions, leading them to use a different method or employ a different view than others due to these biases. Therefore, researchers may not always reach the same results or agree on the methods used.
”the truth doesn’t always win, at least not initially, because we process new evidence through the
lens of what we already believe. Confirmation bias can blind us to the facts; we are quick to make
up our minds and slow to change them in the face of new evidence.” (Aschwanden, 2015, p. 15)
Kuhn 1962 → Paradigms affecting the rational behavior
Kahneman (2011) argues that we do not always behave rationally, which can relate to how Kuhn (1962) discusses scientific paradigms. Kuhn claims that we do not get an understanding of "reality" as it really is, but rather that we employ our own interpretation through our own lens, methods, and assumptions. This relates to how Kahneman argues for biases as a result of system 1 thinking, which thereby drives our perceptions. This is incorporated by Kuhn on the individual level, for instance using confirmation bias as Kahneman discusses, yet it applies to the broader society too, since there may be bias in groups and in scientific communities via groupthink.
Watts 2011 → After-rationalizing results and information
Watts (2011) argues that we often after-rationalize concerning our answers and the information we are presented with. In turn, this means that we as humans may have a tendency to reply "I already knew that", "That would also be my guess", or similar to things that we find familiar, despite this not being the case.
This relates to Kahneman (2011), as he argues that humans are subject to system 1 thinking, where this process of thought leads us to such biases. Hence, there may be a fallacy of "only seeing the world that actually happened".
Kahneman 2011 → Paradigms affecting the rational behavior
Kahneman (2011) argues that we do not always behave rationally, which can relate to how Kuhn (1962) discusses scientific paradigms. Kuhn claims that we do not get an understanding of "reality" as it really is, but rather that we employ our own interpretation through our own lens, methods, and assumptions. This relates to how Kahneman (2011) argues for biases as a result of system 1 thinking, which thereby drives our perceptions. This is incorporated by Kuhn (1962) on the individual level, for instance using confirmation bias as Kahneman discusses, yet it applies to the broader society too, since there may be bias in groups and in scientific communities via groupthink.
Aschwanden 2015 → Temporary truth is not related to the constitution of new paradigms
Aschwanden (2015) argues that there is not one objective "truth"; instead, truth is temporary. She further argues that the choices and assumptions researchers make affect the "truth" we obtain. Additionally, the results are considered by Aschwanden as temporary and as the closest we will get to the "real" truth, which stands in contrast to Kuhn (1962) due to this acceptance.
Kuhn (1962) argues that new theories may not be immediately accepted, since new information and knowledge is treated from previously accepted views. Thereby, one can become blind to new, useful and important theories. It therefore becomes difficult to change previous practices and shift the paradigm to include the new information. Here, Kuhn (1962) comments on differing paradigms held by scientific communities, which have different standards and methods; this could relate to how Aschwanden discusses that bias, choices and assumptions differ and thereby affect the results.
One can thereby relate the two as discussed above by the following quote:
”the truth doesn’t always win, at least not initially, because we process new evidence through the
lens of what we already believe. Confirmation bias can blind us to the facts; we are quick to make
up our minds and slow to change them in the face of new evidence.” (Aschwanden, 2015, p. 15)
This relates to Kuhn's (1962) argument that new information constituting a new paradigm may not initially be supported. In turn, Aschwanden (2015) believes this may be related to the biases we adopt. Here, one could also relate this to Kahneman (2011).
Barrett-Feldmann 2021 → Complexity mindset competing with paradigms
Barrett-Feldmann (2021) relates to Kuhn (1962) in that a mechanistic mindset and a complexity mindset can be considered competing paradigms. This is seen in how the two approaches are very different in terms of world views, and in how they collect different data and use different techniques.
Watts (2007) → The whole is more than the sum
Watts (2007) is, according to Kuhn (1962), challenging some aspects of the current scientific paradigm regarding how scholars typically explore and study social science issues, as Watts advocates looking at social science as complex systems, whereby merely studying one aspect of a phenomenon is not enough to grasp its entirety, due to the interdependency and feedback loops. Solely looking at some isolated aspect may only yield an understanding of that aspect in isolation from the others, and it may not apply to the rest, since in complex systems the whole is more than the sum of its parts (Watts 2007). Hence, this challenges the paradigm view, using Kuhn's notation, that some scholars and scientific communities have adopted.
Watts 2011 → Difficulty of Social Science, due to complexity
Watts (2007) explains the background for the Watts (2011) article. The article discusses why social
science is difficult, due to its complexity. These dwell upon:
• social networks are not static structures
• they are not unitary, but multiplex, which serves different functions
• must be understood within the larger framework of collective social dynamics
Hence, there are feedback systems in place and interdependency, which increasingly makes the social science field difficult.
Nelson 2016 → Difference of Social Science and Natural Science
Nelson (2016) argues that, among other things, social science differs from the natural sciences, as subjects in social science like humans, organizations and similar are influenced by numerous, highly variable, and often inseparable conditions. This fits well with what Watts (2007) argues for, namely social science being very complex. This is for instance seen by the quote “It is hard to understand, for example,
why even a single organization behaves the way it does without considering (a) the individuals who
work in it; (b) the other organizations with which it competes, cooperates and compares itself to; (c)
the institutional and regulatory structure within which it operates; and (d) the interactions between
all these components.” (Watts 2007)
Aschwanden 2015 → Complexity of Social Science causing difficulties
Watts (2007) discusses why social science is difficult, due to its complexity. This dwells upon the facts that social networks are not static structures, that they are not unitary but multiplex, serving different functions, and lastly that they must be understood within the larger framework of collective social dynamics. Hence, there are feedback systems in place, which increasingly makes the social science field difficult. This fits well with Aschwanden (2015), as she argues that social science is very difficult due to its high complexity. This is for instance seen in the election and football examples, where Aschwanden (2015) displays the many different choices and assumptions researchers must make, which is termed "researcher degrees of freedom".
Barrett-Feldmann 2021 → Interdependency of entities in Social Science
Watts (2007) argues that social science is hard because the field involves a large number of entities which influence each other. Thereby there is interdependency, which also constitutes a feedback system in which subjects consider others' behaviours. Barrett-Feldmann (2021) adds to this by discussing the complexity mindset, in which one should consider these interdependencies instead of relying on the mechanistic mindset, which only considers a few causal factors as the main explanation.
Lecture 1: Paradigms
Ritchie 2020: View on science.
Ritchie believes science is the best method we have of figuring out how the universe works
and of bending it to our will.
He describes how the social aspects of science are essential: scientists work together, travel the world, and give lectures at conferences and in societies to share research, refine it, and reach a consensus.
- In doing so, they ensure that their findings are reliable and robust.
The focus is on the collaborative sift-out process that enables a new discovery to get published and gain power in society.
Ritchie believes in science but also states that science, and especially peer review, is currently far from perfect.
This is due to various factors, one of them being that science has an economic aspect due to
the need for funding. This affects the process of science and especially what is being
published.
Ritchie states that scientific experiments should be repeatable.
Mertonian norms: Robert Merton 1942.
Universalism: It does not matter who published a scientific paper, as long as the methods are sound and follow established norms and guidelines for conducting scientific research.
Disinterestedness: Scientists should not do scientific research for the money, fame, or
reputation.
Communality: Scientists should share knowledge amongst each other.
Organized scepticism: Nothing is sacred. A scientific claim should never be accepted at face
value.
The Mertonian norms as a concept clash with Peer review.
Ritchie paradigm:
Ritchie's epistemology aligns with a combination of post-positivism and constructivism. Post-positivism shares many beliefs with positivism, such as the idea that our research can be objective, value-free and independent of context. But unlike positivists, post-positivists believe that our knowledge is always provisional and subject to revision when new evidence emerges.
Constructivism holds that our understanding of reality is always partial and subjective, and that we construct it through our experiences and interactions with the world. Therefore, Ritchie combines post-positivism's critical approach with the constructivist's belief in a subjective reality. (Ritchie 2020 ch 1, p. 21-22)
Watts 2011: Why social science is hard!
ARD take-aways.
Sociology: Social science with a broad range of interests, like everyday events and large-scale institutions. Also, issues like inequality, race, gender, motivation, and beliefs are of interest to sociology.
Social science is hard. Perhaps not harder than any other science (or subject for that matter). But nonetheless, HARD!
Why social science may seem obvious:
According to Watts (2011) this is due to some fallacies, like familiarization: as we humans face the social world, we believe that we know something about a topic which we actually don't, thus having a false perception. This is opposed to the natural sciences, which build on complicated maths that not everyone understands. This is further highlighted in the following quote: "… and that familiarity breeds a certain amount of contempt" (Watts 2011, p. 32). Additionally, humans tend to after-rationalize, which creates a false narrative that "we knew it all along", which Watts displays in the following: "we are almost always in the much easier position of picking and choosing from our wide selection of common-sense statements about the world to come up with something that sounds like what we now know to be true." (Watts 2011 p. 33)
This in turn relates to our automatic thought process, which is discussed by Kahneman
(2011).
Lastly, our common sense has an answer to everything, fuelling the belief that findings in social science are obvious. In particular, common sense tends to reach conclusions which may not necessarily be supported by evidence, but which are instead based on conflicting treatments of experience, culture, wisdom, and subjective viewpoints (Watts, 2011).
Example: Lazarsfeld, "The American Soldier": who is better prepared for life as a soldier, people from the countryside or from the city? Our common sense can argue for either side.
Social science may be hard because it often deals with situations and emotions many of us can relate to, have experienced, and know about. This causes its subjects to be familiar, while people seldom agree about the implications of certain feelings or how an event affected them.
Human behaviour is not (at least what we know of) governed by laws of nature like the ones
found in physics.
Common sense:
Common sense is problematic, not because it does not give us sensible advice, but because
what is sensible turns out to depend on a surprisingly large number of factors, most of which
are not specified by the advice itself.
- To some extent, we are all guilty of considering our own beliefs as common sense.
  o (Link to Kahneman System 1 - fast thinking)
Common-sense explanations of the world can mislead us, and randomness indeed plays a powerful role in the outcome of the world as we know it and in how things behave.
The complex world consisting of a web of interdependency: this is another argument. Watts argues that in the real world many independent actors influence each other. This makes studying social phenomena quite difficult and not at all obvious to study.
“Unlike in physics, therefore, essentially every problem of interest to social scientists
requires them to consider events, agents and forces across multiple scales simultaneously.”
(Watts, 2011 p. 32).
Also, if we try to simplify this world through assumptions and manipulation, we may miss out on vital information and insights. Meanwhile, studies cannot control for everything, so they need to be simplified to some extent.
Another argument by Watts (2011) relates to the self-reflexive aspect, meaning that people and actors in social science are able to change perception, behaviour, and aims. This makes it hard, as conditions are not as stable compared to e.g. physics, which Watts (2011) shows via:
“… all of which interact with each other via networks of information and influence, which in
turn change over time” (Watts, 2011 p. 31).
Lastly, Watts (2011) comments on how the most important things may be the ones that never happened. This is related to the fact that we do not know what could have happened under different interventions. Therefore, we do not know which implications our actions actually had.
Furthermore, Watts (2011) agrees with Nelson (2016) on the difference between physics and the social sciences. This is seen in how Nelson argues that we live in a complex social world, with interrelated entities that are affected by a lot of highly variable elements which change over time. This in turn relates to Watts arguing that individuals are found at a variety of different levels, none of which are alike: “... social systems - like other complex systems in
physics and biology - exhibit “emergent” behaviour, meaning that the behaviour of entities at
one “scale” of reality is not easily traced to the properties of the entities at the scale below”.
(Watts, 2011, p. 31).
Key Points for text:
1. Sociology's Complexity: Duncan Watts, originally a physicist turned sociologist,
emphasizes the complexity of sociology compared to traditional sciences like physics
and mathematics.
2. Perception of Sociology: Sociology is often misunderstood and perceived as less
rigorous or scientific than "real" sciences like physics or biology.
3. Challenges in Social Science: Understanding social phenomena involves studying
emergent behavior and interactions across various scales, making it inherently
difficult.
4. Common Sense Misconceptions: Common sense often leads people to oversimplify
social phenomena, leading to misconceptions about the complexities involved.
5. Randomness in Success: Success in fields like music or literature isn't solely based on quality but can be heavily influenced by random chance and social dynamics (see the toy simulation after the quotes below).
6. Limitations of Common Sense: Common sense explanations lack consistency and
can be contradictory, making them unreliable for understanding complex social
issues.
7. Need for Industrial-Scale Science: Tackling societal problems requires a more
rigorous approach akin to industrial-scale science, which necessitates
multidisciplinary collaboration and substantial resources.
Quotes:
1. "Humanity has sent space probes out of the solar system and put men on the Moon;
we have built atomic weapons and atomic clocks; [...] By contrast, we have great
difficulty designing effective economic-development programmes."
2. "Physics, it is worth noting, has made tremendous progress precisely by studying
physical phenomena that take place at different scales in relative isolation, and by
solving approximations of problems that are more complicated in reality."
3. "Yet because most results in social science accord with something we have either
heard about, or even experienced ourselves, it is hard not to write them off as
something we already knew."
4. "What these results suggest is that in the real world, where social influence is much
stronger than in our artificial experiment, enormous differences in success may indeed
be caused by small, random fluctuations early on in an artist’s career."
5. "Common sense, in other words, is extremely good at making the world seem
sensible, quickly absorbing even the greatest surprises into a coherent-seeming
worldview."
6. "Both our perceptions of social science and our ability to conduct it could be changing
with the explosive growth of the Internet [...] has the potential to revolutionize social
science, just as the invention of the telescope revolutionized physics."
7. "Industrial-scale science, however, is time-consuming and expensive, and support for
such an enterprise is inconceivable as long as Gribbin’s attitude – that social science
is, in essence, an oxymoron – prevails."
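Key point 5 and quote 4 describe how small, random early fluctuations can snowball into large differences in success. The toy simulation below is my own sketch, not Watts' actual Music Lab setup, and all parameters are made up; it only illustrates the cumulative-advantage mechanism: twenty "songs" of identical quality end up with very unequal popularity simply because early random picks make some songs more likely to be picked again.

```python
import numpy as np

rng = np.random.default_rng(7)
n_songs, n_listeners = 20, 5_000

# All songs are identical in "quality"; start each with one download so the
# choice probabilities below are well defined.
downloads = np.ones(n_songs)

for _ in range(n_listeners):
    # Social influence: the probability of picking a song is proportional to
    # its current download count (cumulative advantage / rich-get-richer).
    probs = downloads / downloads.sum()
    choice = rng.choice(n_songs, p=probs)
    downloads[choice] += 1

# Despite identical quality, the final counts are highly unequal, and which
# songs end up as "hits" differs from run to run (change the seed to see this).
print(np.sort(downloads.astype(int)))
```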
Watts Paradigm:
Watts' approach to sociology seems to fall under the paradigm of post-positivism.
While he relies on mathematical models to study social phenomena, he recognizes that social
reality is too complex to be understood through purely objective observation. Instead, he
argues that researchers should be transparent about their assumptions and engage in ongoing
dialogue with other researchers to improve their understanding. (Page 1)
Lecture 2: Correlation does not equal causation, experiments
Isager 2023: Correlation and causation
Correlation is not always equal to causation.
Causation does however imply correlation.
Counterfactual thinking involves analysing alternative explanations and variables before
making conclusions.
Directionality issue:
Does x influence y, or does y influence x? The correlation is the same either way, but the causal relationship is different, which is important to note.
- Which direction does the causal arrow point?
The correlation relationship can be messed up either due to confounding variables and/or selection bias. (Look at the POS cheat sheet for notes on both.)
Correlation means something interesting might be going on, but we need to investigate
further. (Do more research, collect more data, and utilize our expert knowledge) to figure out
exactly what is going on.
Two ways of creating causality models:
1) We imagine how one variable could cause a specific reaction in another variable and
try to figure out which causality model may explain this. Then we gather data and test
the hypothesis.
2) We look at empirical observations and try to imagine which causal models could
produce such results.
We then end up with many (hopefully) causal models, from which we need a method of selecting the right one. In comes: the scientific method.
The scientific method:
The scientific method is used in causal inference to falsify all the plausible but wrong models
until only the one true causal model remains.
It involves imagining all causal models that could plausibly explain the data, and then using
empirical observations to test and refine those models. Reference: Page 8-9. (Isager 2023)
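To make this concrete, here is a minimal simulation (my own illustration, not from Isager's text; all numbers are made up): a hypothetical confounder z drives both x and y, so x and y are strongly correlated even though neither causes the other. It also shows that correlation is symmetric, so it says nothing about direction either.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Confounder z causes both x and y; x has no causal effect on y.
z = rng.normal(size=n)
x = 2.0 * z + rng.normal(size=n)    # x depends only on z (plus noise)
y = -1.5 * z + rng.normal(size=n)   # y depends only on z (plus noise)

# Strong correlation (about -0.75) despite no causal link between x and y.
# corrcoef(x, y) equals corrcoef(y, x): correlation reveals no direction.
print(np.corrcoef(x, y)[0, 1])

# "Intervening" on x - setting it independently of z, as a randomized
# experiment would - breaks the association, revealing that x does not cause y.
x_new = rng.normal(size=n)
print(np.corrcoef(x_new, y)[0, 1])  # approximately 0
```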
Paradigm Isager:
The Paradigm of Peder M. Isager: Post-positivism
Post-positivism is a paradigm that acknowledges the complexity of research, emphasizing
the importance of rigor, transparency, and reproducibility in scientific inquiry. This paradigm
accepts that while absolute truth might be unattainable, scientific methods can approximate
understanding through systematic, empirical investigation and critical analysis.
Reasons Why Peder M. Isager Belongs to Post-positivism:
1. Advocacy for Open Science: Isager is a strong proponent of open science practices,
which aim to improve transparency, reproducibility, and credibility in research. This
aligns with the postpositivist emphasis on rigorous and transparent methodologies.
2. Focus on Replication: He has been involved in efforts to replicate psychological studies,
addressing issues of reproducibility and reliability in scientific findings.
3. Methodological Rigor: His work often involves critical analysis of research
methodologies, promoting the use of robust and systematic approaches to scientific
inquiry.
4. Quantitative Research: Isager frequently employs quantitative methods in his research,
consistent with the postpositivist paradigm's emphasis on empirical testing and data
analysis.
BRM ch. 3 - Experiments:
This text provides a discussion of research designs and criteria for evaluating quantitative and
qualitative research. It touches on topics such as confirmability, internal validity, external
validity, and ecological validity. The distinctive features of qualitative research are also
examined. The file runs for a total of 10 pages.
Internal validity: Soundness of scientific findings (Typically Quantitative research).
External validity: Generalizability of scientific findings (Typically Qualitative research).
Ecological validity: Naturalness/realness of scientific findings (Both Qual and Quant).
Scientists who approve of this methodology:
Lecompte and Goetz (1982), Kirk and Miller (1986), Perakyla (1997), and Gibbert and
Ruigrok (2010) have all approved and used this methodology of evaluating scientific
contributions.
Scientists who oppose / alter the methodology:
While Kirk and Miller (1986) approved of the methodology, they slightly altered some of the
terms.
Meanwhile, Lincoln and Guba (1985) opposed the methodology and suggested that there is a
need for alternative ways of assessing qualitative research other than
internal/external/ecological.
They proposed “Trustworthiness” as a more appropriate way of doing so. Furthermore,
trustworthiness of a scientific contribution was contextualized with 4 “categories” which
should be fulfilled in order for a science project to be trustworthy.
Credibility (Internal validity),
Transferability (External validity),
Dependability (Reliability of results / replicable results),
Confirmability (Objectivity of results and by the scientist who achieved them).
Lastly, Hammersley (1992a) does say that “validity” of scientific contributions is of massive
importance, but also states that “Relevance” is important.
Naturalism is common in Qualitative research methods as this type of research often seeks to
gather data in naturally occurring situations and environments.
- Ecological validity is therefore arguably more Qualitative in application.
Naturalism also applies very well to ethnographic research which is one of the most applied
research methods used by qualitative scientists. Naturalism is however also common for
certain interview-types.
Research design types: Look at POS Cheat sheet for notes.
(Experiments, Cross-sectional, Longitudinal, Case Study, and Comparative design).
Bergenholtz 2024a: Setting up randomized controlled trials (RCTs)
Text objective: The central aim of this text is to help explain and illustrate what a
randomized controlled trial (RCT) is. RCT is a tool to isolate causality, rather than just
showing correlations. It's also about recognizing the many decisions that shape an RCT's
architecture and acknowledging how variations in these decisions might lead to different
outcomes.
RCT: Sample participants are randomly assigned into two groups. The treatment group
(receives the treatment (independent variable effect) / more of it), and the control group (does
not receive independent variable effect / receives less of it).
All else must be equal in order for the study to qualify as an RCT. Otherwise it will be categorized as a quasi-experiment, where some requirements of an RCT are fulfilled, but not all.
Internal validity: When the rules of RCT are followed strictly, internal validity will be
established because the effects of the independent variable will be highly isolated.
- We can still statistically test the uniformity of our two groups to see if any
significant differences exist. This is especially prevalent for smaller sample
sizes.
Internal validity relies on participants not knowing the conditions which other participants are
in. Knowledge of this could influence their behaviour.
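As a rough sketch of the mechanics described above (my own illustration with hypothetical data and effect sizes, not from Bergenholtz's text): randomly assign a sample to treatment and control, run a balance check on a background variable, and compare mean outcomes between the two groups.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200

# Hypothetical background characteristic of the participants.
age = rng.normal(40, 10, size=n)

# Random assignment: half to the treatment group, half to the control group.
treated = rng.permutation(np.repeat([True, False], n // 2))

# Balance check: with proper randomization, the groups should not differ
# systematically on background variables such as age.
_, p_balance = stats.ttest_ind(age[treated], age[~treated])
print(f"Balance check on age: p = {p_balance:.2f}")

# Simulated outcome with an assumed true treatment effect of +2 (made up).
outcome = 0.1 * age + 2.0 * treated + rng.normal(size=n)

# Because only the treatment differs systematically between the groups, the
# difference in mean outcomes estimates the causal effect of the treatment.
effect = outcome[treated].mean() - outcome[~treated].mean()
print(f"Estimated treatment effect: {effect:.2f}")
```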
External validity: things to consider when evaluating:
Sample: Is it generalizable towards other samples / populations / countries / economics?
Setting: The overall environment of the study, tasks presented to participants, and time of day
when study is conducted. How generalizable to other situations is this environment and similar
/ different tasks. Is time-of-day generalizable to other times?
History: The time of the experiment (seasonality), and historical events taking place during the experiment (example: corona or war). Has the study been carried out during multiple seasons or in times of both non-crisis and crisis? If yes, this will enhance the generalizability of the study (of course depending on the study results).
Hawthorne effect: Consider whether generalizability could be affected simply by the fact that participants know they are being observed. This has historically been shown to alter the conduct, readiness, and opinions of people. - We may not obtain fully raw, unfiltered opinions from participants.
These criteria for evaluating external validity of a study should not be used in a yes/no or
established / not established fashion, but rather create a framework for discussing possible
generalizable scenarios, and situations which the study likely cannot be generalized towards.
The study may not be generalizable towards everyone, everywhere, but could be generalizable
to X and Y under Z conditions.
Important to note: external validity builds upon established internal validity. Thus, if internal validity is non-existent, external validity cannot be established.
• If we cannot determine some confidence in the validity of the independent variable
affecting participants in some way, it is basically impossible to generalize this “effect” to
other people / scenarios.
Lecture 3: Quasi experiments and research designs
Bergenholtz 2024b - Quasi experiments
Quasi experiments = research designs that resemble, but cannot fully qualify as, a true experiment.
- They often lack random assignment of participants.
- They have “As good as” random assignment, but not fully.
Quasi experiments cannot, to the same degree as RCTs, explain causal effects of the independent variable. They are however still useful in exploring relationships in complex,
real-world environments. The flexibility and practicality of quasi experiments make them
popular in many fields, especially when ethical or logistical constraints limit the feasibility of
true experiments.
Quasi experiments are often used in studies regarding economics and management, where
real-world complexities can make randomization challenging.
Consider: researchers want to measure the effect of an increase in monetary incentives (pay) for employees on performance levels.
- Managers are highly unlikely to agree to a completely random distribution of which employees receive the higher and the lower monetary incentives, as this goes against their wish of rewarding the best-performing, most hard-working employees. This, the researchers have no control over in this example.
It would also be difficult to keep participants (employees) from discovering the relative pay
of their colleagues. This could affect the internal validity of the scientific research.
Key quotes:
• DEFINITION OF QUASI-EXPERIMENT "A quasi-experiment is a research design
that, while resembling an experiment, does not fulfill all the criteria of a true experiment."
• SIMILARITIES TO CONTROLLED EXPERIMENTS "While they share similarities
with controlled experiments, such as manipulation (i.e. treatment) of a variable, they typically
lack the random assignment of subjects to different conditions – a key feature of a proper
randomized controlled trial."
• RELEVANCE IN ECONOMICS AND MANAGEMENT "Quasi-experiments are
especially interesting in fields like economics and management, where real-world
complexities can make a proper randomization challenging, either due to logistical
constraints or ethics."
• ASSUMPTION OF GROUP SIMILARITY "In an actual experiment, one would have
randomized access to job training. Here we have to assume that the differences are
meaningless, and that the groups are - almost - identical."
• UTILITY IN COMPARING SIMILAR GROUPS "This method is useful because it
compares people who are very similar, except for being in or out of the program, helping to
understand the program's true effect."
• RELIANCE ON ASSUMPTIONS "Quasi-experiments thus rely on assumptions – here
the assumption is that the difference is not relevant."
• ASSUMED RANDOM ALLOCATION "Researchers could argue and assume that the
allocation of time slots was as good as random."
• AVOIDING COMPLEX COLLABORATIONS "Exploiting the natural allocation of
time slots at schools in this way implies that researchers do not have to set up a large,
complicated, resource-intensive collaboration with a range of schools to randomly assign
students to certain test times."
• NO SYSTEMATIC RELATION "The argument thus is that there was no systematic
relation between expectations of ability and when a kid did a test."
• DIFFICULTY IN DISENTANGLING CAUSAL EFFECTS "With multiple possible
causes, it is very difficult to statistically disentangle causal effects. Consequently, despite its
initial appearance as a quasi-experiment, the presence of these confounding variables
prevents a clear understanding of the bonus system's direct impact."
• LIMITED CAUSALITY "While quasi-experiments cannot establish causality as
definitively as randomized controlled trials, they are still very useful in exploring
relationships in complex, real-world environments."
• FLEXIBILITY AND PRACTICALITY "Their flexibility and practicality make them a
popular choice in many fields, especially when ethical or logistical constraints limit the
feasibility of true experiments."
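Tying the quotes above together, here is a minimal sketch of the quasi-experimental logic (my own illustration with hypothetical wage numbers, not from the text): compare outcomes for people in and out of a job-training program while assuming the two groups are as good as identical on everything else that matters.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical yearly wages for employees in and out of a job-training program.
# The groups were NOT randomly assigned; the quasi-experimental argument is
# that they are "as good as" identical on everything relevant to the outcome.
in_program = rng.normal(32_000, 4_000, size=150)
out_program = rng.normal(30_500, 4_000, size=150)

diff = in_program.mean() - out_program.mean()
_, p_value = stats.ttest_ind(in_program, out_program)
print(f"Estimated program effect: {diff:.0f} (p = {p_value:.3f})")

# The estimate only has a causal interpretation if the group-similarity
# assumption holds; an unmeasured difference (e.g. motivation) would confound it.
```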
BRM Ch. 3 - Research design:
A Research design is characterized as the design or structure of the scientific process which
guides the research (BRM 3).
The research question should guide the research design.
“The practical problem confronting researchers as they design studies is that (a) there are no
hard and fast rules to apply; matching research design to research questions is as much art
as science; and (b) external factors sometimes constrain researchers’ ability to carry out
optimal designs” (McGrath, 1981). AMJ Editorial (2011, p. 657).
Cross sectional design:
At the core, experiments are an art of comparison. Experiment results are greatly improved in
internal validity when we compare the effects of our treatment with the effects of its absence.
Cross sectional research designs: Studies more than one case.
We are interested in variation. (People, organizations, nations, states, …)
- Variation can only be measured when looking at more than one case.
In cross sectional studies researchers usually look at more than two cases. This is done in
order to capture (hopefully) all variation across multiple variables.
Data collected more or less simultaneously, at a single point in time.
Often, quantitative, or quantifiable data in order to better observe variation. Quantification is
helpful because it supplies the researcher with a benchmark for valuation.
In cross-sectional research we cannot determine the direction of causality. We can only see correlation between variables and say that they are thus related.
- Unlike experiments, cross-sectional research has no time ordering of variables, thus limiting causal inference.
The three criteria for evaluating quantitative research:
(Reliability, Replicability, and Validity).
Cross sectional research and the three criteria:
Reliability: Will generally relate more to the quality of the study and the survey questions posed by the researcher to the participants than to the research design itself. (Reliability can thereby be high or low according to the quality of the study.)
Replicability: Usually high for cross-sectional research if the researcher describes the procedures for selecting participants, the study set-up, the questions asked, the type of interview (structured/unstructured) or self-completion questionnaire, and the analysis methods employed.
Validity: split into internal, external, and ecological validity.
Internal validity: Typically weak, due to the limited ability to investigate causal relationships.
External validity: Strong when the sample and groups tested are randomly selected and divided. If they are not, the external validity becomes questionable.
Ecological validity: Because most cross-sectional studies are self-reported questionnaires, the ecological validity (naturalness) of study results is typically lower.
Because variables in most business research are impossible to manipulate, researchers often
use cross-sectional design in these situations rather than experiments.
In Qualitative research a method is often employed which is very similar to cross sectional
analysis: Unstructured interviews and semi-structured interviews.
Cross sectional research designs are described as “nomothetic” because they are concerned
with generating statements that apply regardless of time and place.
Longitudinal design:
Quantitative evaluation of longitudinal design:
Much the same as cross-sectional design, because longitudinal design in business and
management research is often presented in a cross-sectional manner, spanning over more than
one point in time.
This is especially due to the higher cost associated with conducting a true longitudinal study
in business settings.
The major benefit of longitudinal design is also what cross-sectional analysis lacks: causal explanation power. Because the same variables are studied over time, causal relations are easier to discover.
Longitudinal design consists of 2 sub-categories:
Panel study: Randomly selected participants who are studied on more than one occasion,
often several.
Cohort study: Selects either an entire cohort or part of one as participants and studies them. This cohort usually consists of people with the same specified characteristics required for the study (born in the same week, having certain experience, or married on a certain weekday).
Cohort study is rarely used in business settings.
Problems with longitudinal design (present in both panel and cohort studies):
Sample attrition: People change jobs, companies close or move, etc.
Subject revocation: Participants often opt out of the study, also when the second or third wave of data needs to be collected. Longitudinal research is thus quite uncertain regarding the final sample size.
Poor planning: Many longitudinal research studies are poorly planned and end up with loads
of data and little storyline.
Panel conditioning effect: The mere fact that participants are involved in the study in several
waves has been shown to affect their behaviour, causing issues for interpretation of data and
generalizability of results.
Case study:
A case study is a detailed and intensive analysis of a single case. It is concerned with the complexity and particular nature of the case in question.
Case study can regard a single: Organization, location, event, person.
Case study is not solely used in qualitative research methods. Case studies often use both
qualitative and quantitative methods.
In some instances, when an investigation is based exclusively upon quantitative research, it
can be difficult to determine whether it is better described as a case study or as a cross
sectional research design.
- The same notion often applies to case studies focusing solely on qualitative
data.
The idiographic approach describes the true meaning of case studies as a research method
where the scientist wants to highlight the unique features of the specific case.
Three types of case studies:
Intrinsic: Undertaken primarily in order to gain insight into the particularities of a situation,
rather than to gain insight into other cases or generic issues.
Instrumental: Focus on using the case as a means of understanding a broader issue or
allowing generalizations to be challenged.
Multiple or collective: case studies undertaken jointly to explore a general phenomenon.
Experimental and cross-sectional study designs are often deductive in their approach.
Research design and data collection are governed by specific research questions that derive
from theory.
However, qualitative implementations of the cross-sectional design are often inductive.
The same is true for case study designs. When predominantly qualitative design: Inductive
approach, when predominantly quantitative design: Deductive approach.
Case study and choice of paradigm:
Positivistic: When the goal is to extract variables from their context to create generalizable propositions and build theory, often by conducting multiple case studies and using multiple data methods to improve the validity of results.
Alternative approach: When the aim is to produce rich, holistic, and particular explanations
that are located in situational context by using multiple methods of data collection to uncover
conflicting meanings and interpretations.
Eisenhardt (1989) & Yin (2014).
Types of case study cases (5 types according to Yin 2014):
Critical case: Clearly specified hypothesis and a case is chosen because it is believed to be
able to support the hypothesis and explain the phenomenon.
Unique case: Gain knowledge about something unusual or puzzling and can enable
researchers to get a point across in a dramatic way.
Revelatory case: When an investigator has the opportunity to observe and analyse a
phenomenon previously inaccessible to scientific investigation. (Unnecessary to restrict it
solely to situations of completely new research.)
Typical case: Explore a case that exemplifies an everyday situation or type of organization.
Longitudinal case: How situations change over time.
Case studies can involve a combination of the above mentioned 5 types of case study.
Lee et al. (2007) defend case studies against the criticism that they have limited generalizability by stating that generalization is not the purpose of case studies.
How to distinguish multi-case studies from cross-sectional studies:
If the emphasis is on the cases and their unique contexts = multiple-case study.
If emphasis is on producing general findings, with little regard for unique contexts of each
case. = Cross-sectional study.
Comparative design:
Comparing two or more cases. Comparative analysis embodies the logic of comparison, in
that it implies that we can understand social phenomena better when they are compared in
relation to two or more meaningfully contrasting cases or situations.
Can be both Quantitative and Qualitative.
Two general types of comparative structure: Cross-cultural and international research.
Cross-cultural research: Cross-cultural research involves the collection and/or analysis of
data from two or more nations. There are various ways to conduct cross-cultural research,
including collecting data in multiple countries; coordinating research among national research
organizations or groups; carrying out secondary analysis on data collected in different
countries, and recruiting teams of researchers in participating countries who conduct their
own investigations while maintaining comparability in research questions, survey questions,
and procedures. However, cross-cultural research presents challenges related to the accuracy
and representativeness of the data and requires caution when using secondary data for cross-cultural analysis.
International research: International research involves the collection and/or analysis of data
from more than one country. It may be part of cross-cultural research, but not all international
research is cross-cultural. International research can focus on issues such as trade, migration,
diplomacy, and global health, among others. Researchers conducting international research
may face challenges such as navigating differences in cultural norms and values, language
barriers, and political and legal systems.
Levels of analysis:
Business researchers must consider the level or primary entity of analysis. This could be:
Individuals: Specific kinds of individuals (managers, analysts, doctors, etc.)
Groups: Certain types of groups, like HR, board of directors, …
Organizations: one firm or a specific industry.
Societies: Nationality, political orientation, social circle, environment, economic context.
It can be problematic to take findings produced at one level of analysis and generalize them
to another level.
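A minimal sketch (Python with made-up numbers; the variable names are purely illustrative) of why generalizing across levels can mislead: a relation that holds at the individual level can reverse once the data are aggregated to the organizational level.

```python
import pandas as pd

# Hypothetical data: employees in three firms, with training hours and performance scores.
df = pd.DataFrame({
    "firm":        ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "training":    [1, 2, 3, 4, 5, 6, 7, 8, 9],
    "performance": [82, 84, 86, 68, 70, 72, 54, 56, 58],
})

# Individual level: within every firm, more training goes with higher performance.
within_firm = df.groupby("firm")[["training", "performance"]].apply(
    lambda g: g["training"].corr(g["performance"])
)
print(within_firm)   # +1.0 in every firm

# Organizational level: comparing firm averages, the relation reverses.
firm_means = df.groupby("firm")[["training", "performance"]].mean()
print(firm_means["training"].corr(firm_means["performance"]))   # negative correlation
```

The numbers are contrived, but the point matches the warning above: conclusions drawn at one level of analysis do not automatically transfer to another level.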
Table: Research strategy and research design:

Experimental design:
Quantitative strategy:
- Approach: Deductive, testing theories and hypotheses.
- Data-type: Numerical, allowing for statistical analysis.
- Research design: Structured, involving control and manipulation.
- Generalizability: Aims for generalizability to a larger population.
- Researcher role: Detached, aiming for objectivity.
Qualitative strategy:
- Approach: Inductive, generating theories and understanding based on data.
- Data-type: Textual or visual, focusing on meaning and interpretation.
- Research design: Flexible, adapting to the context and participants.
- Generalizability: Focuses on depth and richness, often not generalizable.
- Researcher role: Engaged, acknowledging subjectivity.

Cross-sectional design:
Quantitative strategy:
- Approach: Structured and standardized, focusing on measurement and generalization.
- Data-type: Numerical, allowing for statistical analysis.
- Research design: Snapshot approach with structured data collection.
- Generalizability: Aims for generalizability to a larger population.
- Researcher role: Detached, aiming for objectivity.
- Outcome: Identifies prevalence and relationships between variables.
Qualitative strategy:
- Approach: Flexible and exploratory, focusing on depth and context.
- Data-type: Textual or visual, focusing on meaning and interpretation.
- Research design: Contextual approach with flexible data collection methods.
- Generalizability: Focuses on depth and richness, often not generalizable.
- Researcher role: Engaged, acknowledging subjectivity.
- Outcome: Provides in-depth understanding of experiences and meanings.

Longitudinal design:
Quantitative strategy:
- Approach: Structured and consistent, focusing on measurement and generalization over time.
- Data-type: Numerical, allowing for statistical analysis of changes over time.
- Research design: Structured design with consistent data collection methods at specified intervals.
- Generalizability: Aims for generalizability of findings over time to a larger population.
- Researcher role: Detached, aiming for objectivity and consistency over time.
- Outcome: Identifies patterns, trends, and causal relationships over time.
Qualitative strategy:
- Approach: Flexible and adaptive, focusing on depth and contextual changes over time.
- Data-type: Textual or visual, focusing on evolving meanings and interpretations.
- Research design: Adaptive design with flexible data collection methods at multiple points in time.
- Generalizability: Focuses on depth and richness of evolving experiences, often not generalizable.
- Researcher role: Engaged, recognizing and adapting to changes in participants' experiences.
- Outcome: Provides in-depth understanding of how experiences and meanings evolve over time.

Case study design:
Quantitative strategy:
- Approach: Structured and consistent, focusing on measurement and analysis within the case.
- Data-type: Numerical, allowing for statistical analysis of the case.
- Research design: Structured design with consistent data collection methods.
- Generalizability: Aims to contribute to broader generalizations through analysis of the case.
- Researcher role: Detached, aiming for objectivity within the case analysis.
- Outcome: Produces measurable, objective findings specific to the case.
Qualitative strategy:
- Approach: Flexible and adaptive, focusing on depth and contextual understanding within the case.
- Data-type: Textual or visual, focusing on meaning and interpretation within the case.
- Research design: Adaptive design with flexible data collection methods.
- Generalizability: Focuses on depth and richness of understanding, often not generalizable beyond the specific case.
- Researcher role: Engaged, recognizing and interpreting the subjective experiences within the case.
- Outcome: Provides in-depth, contextualized understanding specific to the case.

Comparative design:
Quantitative strategy:
- Approach: Employs a structured and standardized approach to ensure the reliability and comparability of data across groups.
- Data-type: Collects numerical data using structured instruments such as surveys and tests.
- Research design: Structured and standardized; uses consistent instruments and procedures across cases or groups to ensure comparability.
- Generalizability: Results are often generalizable to a larger population due to representative sampling and statistical analysis.
- Researcher role: Researcher remains detached and objective, focusing on measurement and statistical analysis.
- Outcome: Identifies significant differences, correlations, or causal relationships that can be generalized to a broader population.
Qualitative strategy:
- Approach: Uses a flexible and adaptive approach to capture the complexity and context of each case.
- Data-type: Collects rich, detailed data using methods like in-depth interviews, focus groups, and observations.
- Research design: Flexible and adaptive; methods are tailored to each case or group, focusing on depth and context rather than standardization.
- Generalizability: Results are typically not generalizable but provide in-depth insights into specific cases or contexts.
- Researcher role: Researcher is more engaged and subjective, interpreting data and contextual nuances.
- Outcome: Provides a deep, contextualized understanding of differences and similarities, offering rich insights rather than generalizable findings.
AMJ 2011 - Research designs
A small paper regarding the reasons why many scientific papers get rejected at AMJ.
“The practical problem confronting researchers as they design studies is that:
a) there are no hard and fast rules to apply: matching research design to research question is
as much art as science.
b) external factors sometimes constrain researchers’ ability to carry out optimal designs”
- McGrath 1981
Choice of research design is very important, both because it shapes the methodology of the
study and because it cannot be changed during the revision process. A well-chosen design
therefore underpins the validity of the statements and findings a study can provide.
Three broad research design problems that often occur:
1) Mismatch between research question and design.
2) Measurement and operational issues.
3) Inappropriate and incomplete model specification.
Matching research question and design:
Cross-sectional data are commonly rejected, not because they are inherently flawed, but
because many, if not most, research questions concern change in some form.
The problem with cross-sectional data is that they are mismatched with research questions
that implicitly or explicitly deal with causality or change.
Testing for causality or change requires measuring more than once, which is not possible
with cross-sectional data. For this type of question, longitudinal, panel, or experimental data
are needed.
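A small sketch (Python with pandas, hypothetical survey numbers) of the mismatch: a cross-sectional design is effectively a single wave, so within-person change simply cannot be computed from it, whereas a two-wave panel makes a change question answerable.

```python
import pandas as pd

# Hypothetical panel: the same respondents measured in two waves.
panel = pd.DataFrame({
    "respondent": [1, 2, 3, 1, 2, 3],
    "wave":       [1, 1, 1, 2, 2, 2],
    "job_sat":    [6.0, 4.0, 5.0, 7.0, 3.5, 5.5],
})

# A cross-sectional study corresponds to keeping only one wave: a snapshot.
snapshot = panel[panel["wave"] == 1]
print(snapshot["job_sat"].mean())   # says nothing about change or causal order

# With two waves, within-person change can be measured, which a change/causality
# question implicitly requires.
wide = panel.pivot(index="respondent", columns="wave", values="job_sat")
print(wide[2] - wide[1])            # change in job satisfaction per respondent
```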
Inappropriate sample and procedures: If one wants to assess student wellbeing, it may not
be relevant to collect data from staff and professors. Similarly, when investigating the
motivational drives of CEOs, data from managers and full-time employees may be irrelevant.
It depends, of course, on the research question.
Inappropriate application of existing measures: Another way to raise red flags with
reviewers is when researchers use existing measures to assess completely different constructs.
Model specification: It is practically impossible to control for all variation using control
variables. Many papers are nevertheless rejected for their lack of key control variables.
There are three important rules regarding control variables (a rough illustration follows below):
1) A strong expectation that the control variable is correlated with the dependent variable.
2) A strong expectation that the control variable is correlated with the hypothesised
independent variable(s).
3) A logical reason why the control variable is not more central in the model.
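A rough illustration (Python with NumPy, simulated data; names like "incentive" and "firm_size" are invented for the example) of why rules 1 and 2 matter: when an omitted variable is correlated with both the dependent and the hypothesised independent variable, leaving it out biases the estimated effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated world: firm_size drives both incentive use and performance.
firm_size = rng.normal(size=n)
incentive = 0.8 * firm_size + rng.normal(size=n)
performance = 1.0 * incentive + 2.0 * firm_size + rng.normal(size=n)

def ols(y, *predictors):
    """Ordinary least squares with an intercept, via NumPy's least-squares solver."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coefs

print(ols(performance, incentive))             # incentive effect overstated (around 2 instead of 1)
print(ols(performance, incentive, firm_size))  # with the control, the estimate is close to the true 1.0
```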
Lecture 5: Spring exam case (No Text)
Lecture 6: Essence of theory
Bergenholtz 2024c - What is theory?
The purpose of a theory is to explain something, so if you provide a theory, you have to also
provide an explanation.
Meaning of “Theory”:
Everyday life:
Having a hunch, qualified guess.
Theories in everyday life are often not based on rigorous scientific investigations, but
rather based on (more or less systematic) personal experiences, common knowledge or
stereotypes we adhere to (Gill & Johnson 2010).
Theories in Science:
theories are like maps that help us navigate the world by simplifying the landscape they
represent and telling us what we can expect to find in a given place (Zimmer 2016, Toulmin
1953).
Without theories, it is difficult to make sense of data, since one needs an overarching
framework and helpful concepts to organize the data and indicate which relations one can
expect to find in them.
A key point is that knowing why a relation exists provides a substantial advantage: if we
don't know why a relation exists, we will struggle to know whether it will continue to hold
when the context changes (Hofman et al. 2017).
What is a theoretical explanation:
“A [theoretical] explanation is an attempt to answer the question of why a particular
phenomenon occurs or a situation obtains, that is, an attempt to provide understanding of the
phenomenon or the situation by presenting a systematic line of reasoning that connects it with
other accepted items of knowledge” (Henk W. de Regt, Understanding Scientific
Understanding, 2020 [2017], Oxford University Press, pp. 24-25).
Strategic reduction and abstracting away:
Let us refer to the metaphor of a theory as a map of the world mentioned earlier. If a map of
the world is as big as the world, it would be useless, so obviously any geographical map also
‘abstracts away’. A good geographical map of the world highlights the elements one is
interested in (Toulmin 1953), be it roads, elevations, cities, temperatures etc. The same world
can thus be mapped in many ways. Similarly, a good map of the world of business
administration, highlights the elements we are interested in.
A theory should enable us to strategically boil down a complex situation to the core of what
we wish to investigate, thus filtering away some of the noise.
Causal explanations v. Understanding explanations
Brief explanations:
Causality camp: we should be concerned with identifying the underlying mechanisms or
processes that produce a particular phenomenon and be able to predict the relationships
between different factors or variables (Pearl and Mackenzie 2018).
- Often using analytical tools (diagrams or statistical models) to identify causal
relationships and make predictions of how different factors will affect outcomes.
Understanding camp: focused on being able to understand and make sense of a given
situation, in order to offer a rich and nuanced perspective. These kinds of explanations often
focus on contextual or qualitative factors and may use qualitative methods such as
interviews or case studies to provide a rich and nuanced understanding of the situation.
Causality camp: Heavy focus on Quantitative studies.
Understanding camp: Heavy focus on Qualitative studies.
In-depth explanations:
Causal explanations:
“If we cannot explain a phenomenon in terms of underlying cause and effect, then we have
not really explained anything” (Puranam 2018: p. 20).
Causal explanations aim to identify the underlying mechanisms or processes that produce a
particular phenomenon and predict the relationships between different factors or variables
using analytical tools such as diagrams or statistical models. The goal is to establish causal
links between different concepts and make theoretical predictions.
Quantitative causal explanations:
Quantitative causal explanations involve specifying the direction and strength of an effect
between variables in a precise, numerical way. In addition to indicating whether incentives will
influence performance (in a binary ‘yes’ or ‘no’ sense), a theory should also provide an estimate
of how large the effect is expected to be. However, due to the complexity of social situations,
very precise predictions are generally not possible.
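As a sketch of what such a quantitative estimate looks like (Python, simulated scores; the numbers are invented), the output is not only a direction ("incentives help") but a magnitude with uncertainty attached.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical experiment: performance scores without and with an incentive.
control = rng.normal(loc=50, scale=10, size=200)
treated = rng.normal(loc=54, scale=10, size=200)

effect = treated.mean() - control.mean()                        # direction and size of the effect
se = np.sqrt(treated.var(ddof=1) / 200 + control.var(ddof=1) / 200)
low, high = effect - 1.96 * se, effect + 1.96 * se              # rough 95% interval

print(f"estimated effect: {effect:.1f} points (95% CI roughly {low:.1f} to {high:.1f})")
```

The fairly wide interval reflects the point made above: in complex social settings, very precise point predictions are rarely attainable.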
Qualitative causal explanations:
Qualitative causal explanations focus on understanding the subjective meanings and
experiences of individuals in order to understand their behaviour and actions. Rather than
fixating on causes and effects, the aim is to identify patterns and contexts that underlie
behaviour. This approach can involve ethnographic observation, in-depth interviews, or other
qualitative methods. By understanding why people behave the way they do, researchers can
identify underlying causes and generate explanations that capture the complexity and diversity
of social situations.
Understanding explanations:
discusses the concept of Verstehen, which is a perspective that emphasizes the interpretation
and understanding of social and business phenomena from the human actors' point of view.
Verstehen explanations focus on how individuals perceive and make sense of their
environment, as well as on their motives, intentions, beliefs, and reasons.
While the Verstehen approach emphasizes understanding over causality, it does not reject the
concept of causality altogether. The perspective suggests that causality may not be the most
important or relevant aspect of social phenomena to consider when trying to make sense of
the world. Verstehen explanations can make causal claims, but the primary aim is still to
make sense of individuals.
The section notes that an explanation of social phenomena should draw upon and explain
why people behave as they do, addressing motives, intentions, beliefs, reasons, etc. The goal
is to understand how different people in different situations behave, rather than focusing on
how external factors shape their behaviour in a causal sense. By understanding the subjective
meanings and experiences of individuals, researchers can generate explanations that capture
the complexity and diversity of social situations.
Causality and Understanding perspective and paradigm:
The causality perspective is traditionally associated with positivism or post-positivism.
The understanding perspective is more closely linked with constructivism.
This is however not set in stone. Causal studies can deploy a constructivist view, and
understanding studies can be positivistic and post-positivistic.
Causality and Understanding perspective and methodology:
The causality perspective considers experiments to be the gold standard.
The understanding perspective uses methods such as observations, interviews, document
analysis, and so forth.
Different levels of theory: structural and individual
For example, we could try to investigate why some geographical regions have more
entrepreneurs than others. This could be due to the individual characteristics of the people
living in the region, like willingness to take risks. Or it could be due to structural factors, like
regional policies. Both explanations could of course be true and complementary, but they
could also be competing.
Another example of this distinction could be the cause of obesity. Some have suggested that
obesity is caused by individual factors such as laziness or a lack of self-control, while others
have argued that it is the result of broader structural factors such as the price and availability
of healthy food. The structural explanations focus on the structure of the environment that
individuals are embedded in, while the individual explanations focus on characteristics,
abilities, intentions, and actions.
Theories v Models:
“I understand theories as bodies of knowledge that are broad in scope and aim to explain
robust phenomena. Models, on the other hand, are instantiations of theories, narrower in
scope and often more concrete, commonly applied to a particular aspect of a given theory,
providing a more local description, or understanding of a phenomenon. Evolution and
gravitation are theories, but their application to mate selection and the motion of the planets is
best done via models. From this perspective, models serve as intermediaries between theories
and the real world." (Fried 2021: p. 336).
Theory testing:
Theory building is an inductive process; theory testing is a deductive process.
Inductive: Starts with observations which you use to construct a theory.
Deductive: Starts with a theory, from which observations are derived.
To demonstrate that research has in fact been conducted under a deductive approach, one
can benefit from pre-registering the theoretical expectation (posting the hypothesis and the
planned analytical approach before data collection).
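A loose way to picture this (plain Python; a sketch, not an actual pre-registration platform such as OSF): the hypothesis and analysis plan are written down and frozen before any data exist, and the later analysis merely executes that plan.

```python
from dataclasses import dataclass

# Sketch of a pre-registration record, created and time-stamped BEFORE data collection.
@dataclass(frozen=True)   # frozen: the plan cannot be quietly edited afterwards
class PreRegistration:
    theory: str
    hypothesis: str
    analysis_plan: str

prereg = PreRegistration(
    theory="Incentive theory: rewards increase effort.",
    hypothesis="H1: the incentive group scores higher than the control group.",
    analysis_plan="Two-sample comparison of mean scores, alpha = 0.05, one-sided.",
)

# Data are collected only after this record exists; the analysis then follows the plan,
# which is what makes the test deductive rather than data-driven.
print(prereg)
```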
Theory v Hypothesis:
A hypothesis is more specific, but also more speculative. It makes a precise prediction about
what should happen, in particular conditions.
Theories address fairly general 'why' questions.
If a theory is supported, is it true?
If support is found for the hypothesis, one has also found support for the theory. Does this
mean the theory is true?
Underdetermination:
The concept of underdetermination suggests that theories cannot be definitively tested or
proven to be true due to the inherent complexity and limitations of empirical testing.
Underdetermination means that we can never fully test a theory because empirical validations
inherently rely on assumptions that may not be examined and could be arbitrary.
Even if a theory has been supported by empirical evidence, the underdetermination of
theories means that alternative operationalizations or even different theories could produce
the same empirical results. Therefore, the fact that a theory is supported by evidence does not
mean that it is conclusively true. Future studies could potentially uncover evidence that
contradicts the theory, which means that theories can only be considered provisional rather
than definitively true.
Theory building:
The section on theory building describes an inductive reasoning process that involves
beginning with a range of observations or data and inductively building theory based on
patterns identified and created in the data. Theory building is an open approach because we
do not set out to find support or not for a theory. Instead, we start with a focus point, such as
a research question or an existing theory, but keep an open mind and try to identify patterns
in the data, which can then be used to develop or modify a theory.
Theory building is not limited to testing specific hypotheses or theories but is characterized
by exploration and discovery, which can lead to new insights into phenomena. However, due
to the complexity and subjectivity of social situations, precise predictions are generally not
possible, and theories are always subject to revision based on new data or observations. The
section emphasizes the importance of remaining open to new insights and discoveries in
theory building rather than restricting our focus to pre-existing hypotheses or assumptions.
Induction, deduction, or a mix?
The section on "Induction, Deduction, or a mix?" discusses different reasoning approaches in
theory research. Deductive reasoning is a top-down approach that starts with a general theory
and works its way down to specific hypotheses by deriving logically necessary consequences
from the theory that can be tested empirically. Inductive reasoning, on the other hand, is a
bottom-up approach that starts with specific observations and data, searches for patterns, and
then seeks to develop general theories based on that data.
The section notes that most research projects do not fit neatly into these categories, with
researchers using both inductive and deductive reasoning, as well as abductive reasoning,
which is a more contextual approach that tries to find the best explanation for the data.
Abductive reasoning starts with the observation of a surprising or puzzling event and then
generates a range of possible explanations for it based on previous theory and context-specific knowledge.
The section advises that focusing on a primary aim in the early stages of a research project
can be useful for guiding the research process and communicating it to others. Deciding
whether you are mostly testing a pre-existing theory or seeking to understand an empirical
context not well-defined by existing theories, could provide a helpful starting point for the
research. However, the distinctions between these reasoning approaches are not too clear-cut,
and exceptions might always exist.
Abduction:
Abduction is the process of generating and selecting among hypotheses, models, and data in
response to surprising findings or specific problems in the data or existing theories. This
reasoning process is neither exclusively inductive nor deductive but instead borrows elements
from both of them. Abduction refers to an approach of "inference to the best explanation," in
which researchers aim to find the best possible explanation for a set of observations or data.
Abductive reasoning involves a continuous interplay between data and theory, in which new
data or findings may trigger a researcher to rethink the original theory, generate new
hypotheses, and seek additional data in an iterative process. Abduction can help reconcile the
mismatch between data and theory and lead to new insights in the development of concepts
or hypotheses.
The section notes that abduction is commonly used when existing explanations of data and
the world are unsatisfying, and new, alternative theories must be considered. Abductive
reasoning is not a separate process but refers to the interconnected process of generating
theories and hypotheses, testing them, and revising them based on new data or observations.
How theories improve:
It is often assumed that science progresses in a rational, accumulative manner, with the best
evidence and data leading to stronger theories that build upon and surpass previous ones.
However, it is worth considering whether this is always the case in practice. In upcoming
lessons, we will delve into questions such as:
• Does scientific progress always happen in a rational and accumulative way?
• Should we consider well-tested theories to be true, almost true, or simply supported by
multiple observers? Or, rather than aiming to find out if theories are true, should we try
to find out if theories are false?
• Do these theories represent and directly mirror external realities in the world, or are they
constructed by researchers?
NYT 2016 - Not just a theory
Carl Zimmer 2016
Theories are neither hunches nor guesses; they are the crown jewels of science.
Kenneth Miller, Brown university:
A theory is a system of explanations that ties together a whole bunch of facts. It not only
explains those facts, but predicts what you ought to find from other observations and
experiments.
Theories represent an attempt to represent some territory. - Dr. Godfrey-Smith.
To judge a map's quality, we can see how well it guides us through its territory. In a similar
way, scientists test out new theories against evidence. Just as many maps have proven to be
unreliable, many theories have been cast aside.
- Dr. Godfrey-Smith.
Lecture 7: Popper, Scientific change, tacit knowledge
ABH 2018: Critical rationalism of Popper.
Popper's Asymmetry Thesis: Popper argued that while a single observation can disprove a
scientific theory, no number of observations can prove it definitively due to the problem of
induction.
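A toy illustration (Python, using the classic swan example) of this asymmetry: confirming observations can pile up without ever proving the universal claim, while a single counterexample is enough to refute it.

```python
# Universal hypothesis derived from a theory: "All swans are white."
def hypothesis_survives(observed_swans):
    return all(colour == "white" for colour in observed_swans)

# A million confirming observations: the hypothesis survives, but is not thereby proven.
print(hypothesis_survives(["white"] * 1_000_000))              # True

# One black swan: the hypothesis is falsified.
print(hypothesis_survives(["white"] * 1_000_000 + ["black"]))  # False
```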
Purpose of Science: Science aims to test theories to disprove them (falsificationism). This
view contrasts with verificationism, which seeks to prove theories true.
Criterion of Demarcation: Popper proposed that a scientific theory is distinguished by its
falsifiability: if a theory can be tested against reality and potentially refuted, it is scientific.
Scientific Method: Popper emphasized the importance of hypothesis testing through
empirical observation. Theories are formulated, hypotheses are derived from them, and these
hypotheses are then tested through experimentation.
Realism: Popper supported a realist position, asserting that science aims to uncover the world
as it exists independently of human perception.
Criticism and Development of Theories: Popper argued that scientific progress occurs
through the continuous testing and falsification of theories. When a theory is falsified, it leads
to the development of new, improved theories.
Application and Challenges: While Popper's ideas offer a framework for understanding
scientific inquiry, the practical application of his philosophy can be challenging. Scientists
often do not outright reject theories when they are falsified but may instead revise or refine
them in light of new evidence.
Overall: Popper's critical rationalism provides valuable insights into the nature of scientific
inquiry and the iterative process of theory development and testing.
Paradigm:
Karl Popper belongs to the critical rationalism paradigm. Critical rationalism is characterized
by the following key aspects:
Emphasis on Falsifiability: The core principle of critical rationalism is that scientific
theories should be falsifiable. Popper argued that the demarcation between science and non-science is whether a theory can be tested and potentially refuted by empirical evidence.
Rejection of Inductive Reasoning: Unlike traditional scientific methods that rely on
induction (generalizing from specific observations), critical rationalism relies on deduction
and the process of conjecture and refutation. Popper believed that no amount of empirical
data can conclusively verify a theory, but a single counterexample can falsify it.
Provisional Nature of Knowledge: According to Popper, all scientific knowledge is
provisional. Theories are accepted as long as they withstand rigorous testing and are replaced
or modified when they are falsified.
Continuous Improvement: Scientific progress is seen as an iterative process where theories
are constantly tested, criticized, and improved upon. This continuous cycle of conjecture and
refutation drives the advancement of knowledge.
Popper's critical rationalism fundamentally changed the philosophy of science by shifting the
focus from the accumulation of positive evidence to the rigorous testing and potential
falsification of scientific theories.
Why is Popper not post-positivistic:
Popper is often associated with critical rationalism rather than post-positivism, although his
ideas do share some similarities with post-positivist thought. Here are the reasons why
Popper is not typically classified within post-positivism:
Demarcation Criterion: Popper's criterion of falsifiability is a strict demarcation line that
distinguishes scientific theories from non-scientific ones. Post-positivists, on the other hand,
often reject strict demarcation lines and emphasize a more holistic approach to science, where
theories are judged on a continuum of reliability rather than a binary of scientific versus non-scientific.
Rejection of Induction: While post-positivism still relies on inductive reasoning to some
extent (acknowledging that observations can support a theory without definitively proving it),
Popper completely rejects induction as a valid method for scientific inquiry. Instead, Popper
focuses on deduction and falsification.
Role of Empirical Evidence: Popper emphasizes the role of empirical evidence solely in
terms of falsification. For post-positivists, empirical evidence is crucial for building and
refining theories, but they also recognize the theory-ladenness of observations and the
fallibility of human perception, often incorporating a broader methodological pluralism.
Philosophical Roots: Post-positivism emerged as a response to the limitations of logical
positivism, incorporating elements of hermeneutics, critical theory, and social constructivism.
Popper's critical rationalism, while also a response to logical positivism, does not engage as
deeply with these broader philosophical traditions and remains more focused on the logical
and methodological aspects of scientific inquiry.
Nature of Scientific Theories: Post-positivists often embrace a more pragmatic view of
scientific theories, seeing them as useful tools rather than absolute truths about reality.
Popper, although he acknowledges the provisional nature of scientific knowledge, still
upholds a realist perspective, asserting that science aims to uncover truths about an objective
reality.
In summary, while Popper’s critical rationalism shares some concerns with post-positivism,
such as the fallibility of scientific knowledge and the critique of verificationism, his strict
adherence to falsification and his rejection of inductive reasoning set him apart from the
broader and more flexible framework of post-positivism.
Kuhn 1962: Tacit knowledge and intuition
Tacit Knowledge vs. Rules: The text discusses the perception of tacit knowledge versus
explicit rules in the context of science. Tacit knowledge, shared among successful groups, is
acquired through training and shared experience rather than being individual intuitions.
There's ongoing experimentation to analyse these properties at an elementary level using
computer programs.
Perception and Interpretation: Perception is emphasized as distinct from interpretation.
Perception involves neural processing and differs from person to person or group to group
based on education, culture, and experience. Interpretation, on the other hand, involves
deliberate analysis and the application of rules or criteria.
Role of Experience in Perception: Prior experience and training significantly shape
perception. The text suggests that individuals within a group learn to see similar things when
confronted with the same stimuli through exposure to examples and training.
Scientific Progress and Relativism: The text argues that scientific progress is not relativistic
but rather driven by the ability to solve puzzles presented by nature. It suggests that later
scientific theories are better at solving puzzles than earlier ones, indicating a directionality in
scientific development. However, it questions the idea that scientific theories approximate
"truth" in representing nature, suggesting that the notion of a match between theory and
reality is elusive.
Interpretation of Scientific Theories: The text challenges the notion of successive theories
growing closer to "truth" or representing reality more accurately. It argues that there's no
theory-independent way to define what is "really there" and that the ontological development
of theories lacks a coherent direction.
Overall, the text delves into the complexities of perception, interpretation, and scientific
progress, challenging traditional notions of truth and representation in scientific theories.
Summary of text:
Thomas Kuhn's 1962 work, "The Structure of Scientific Revolutions," introduced the concept
of paradigm shifts in scientific development. Kuhn argued that science does not progress
through a linear accumulation of knowledge, but rather undergoes periodic revolutions where
an existing paradigm is replaced by a new one. These shifts are driven by anomalies that the
current paradigm cannot explain, leading to a crisis and the adoption of a new framework.
Tacit knowledge and intuition play crucial roles in this process, as scientists rely on their
implicit understanding and intuitive judgments to identify and solve problems within a
paradigm. The transition between paradigms is not purely rational but involves a complex
interplay of social, psychological, and cognitive factors.
Evaluation of Kuhn's Paradigm
Ontology, Epistemology, or Methodology:
Kuhn's work primarily belongs to epistemology, which is the study of knowledge—its
nature, origin, and limits. Kuhn's exploration of how scientific knowledge evolves and
changes through paradigms is fundamentally epistemological, focusing on the processes by
which scientists come to understand and explain the world.
Positivism, Post-Positivism, or Constructivism:
Kuhn is often associated with post-positivism and constructivism.
• Post-Positivism: Kuhn challenges the positivist notion that science steadily
accumulates objective knowledge. Instead, he proposes that scientific knowledge is
provisional and subject to change through paradigm shifts, acknowledging that our
understanding of reality is influenced by human perspectives and social contexts.
• Constructivism: Kuhn's idea that scientific paradigms shape and are shaped by the
community of scientists aligns with constructivist views. Constructivism holds that
knowledge is constructed through social processes and interactions, and Kuhn's
emphasis on the role of the scientific community and the subjective elements of
paradigm shifts supports this perspective.
In summary, Kuhn's "The Structure of Scientific Revolutions" fits within the epistemological
domain and is aligned with post-positivist and constructivist paradigms. His work
underscores the complexity of scientific progress and the significant roles of tacit knowledge
and intuition in shaping scientific understanding.
Popper 1962:
Summary: Popper 1962
Karl Popper's "Conjectures and Refutations" presents several crucial aspects of his
philosophy of science, emphasizing the dynamic and critical nature of scientific knowledge
and inquiry:
Tentative Nature of Knowledge: Popper asserts that scientific knowledge advances through
conjectures and tentative solutions to problems. These conjectures are not definitive truths
but are open to criticism and refutation. Even if they withstand tests, they can never be fully
justified or confirmed.
Role of Criticism: Criticism is essential in scientific inquiry as it helps identify and
understand the difficulties and limitations of current theories. Refutation is not a setback but a
progression towards truth, as it exposes errors and stimulates the search for better theories.
Scientific Rationality: According to Popper, the rationality of science is characterized by its
critical and progressive nature. The capacity to critically evaluate and argue about a theory's
problem-solving ability compared to its rivals is central to scientific rationality.
Testability and Falsifiability: Popper differentiates between testable and non-testable
theories. Only theories that can be empirically tested and potentially refuted qualify as
scientific. Non-testable theories, such as those in psychoanalysis, fall outside the scope of
science.
Observation and Theory: Popper challenges the idea that science starts with observation.
He contends that observations are inherently selective and influenced by existing theories.
Therefore, theories are necessary for interpreting observations and formulating hypotheses.
Logical Argumentation: While observations are important, logical argumentation,
especially deductive reasoning, is crucial for analyzing and critiquing theories. Deductive
logic helps uncover weaknesses in theories and evaluate their implications.
Tentativeness of Truth: Popper emphasizes that all scientific laws and theories are
provisional, even when they appear certain. Theories can be contradicted by new evidence,
highlighting the tentative nature of scientific knowledge.
Objective Existence of Truth: Although attaining absolute truth is challenging, the concept
of truth is essential because it implies the possibility of error or doubt. Recognizing and
correcting errors helps move closer to the truth.
These points reflect Popper's view of science as an evolving and critical process, where
theories are continuously tested, refuted, and refined in the quest for truth.
Paradigms: Popper 1962
Epistemology, Ontology, and Methodology:
Epistemology: Popper's epistemology is rooted in fallibilism, where knowledge is seen as
conjectural and corrigible. He emphasizes the role of critical scrutiny and falsification in the
growth of knowledge.
Ontology: (No), Popper does not extensively focus on ontology but implies a realist
perspective by acknowledging the objective existence of truth and the external world that
theories aim to describe.
Methodology: His methodology centers on falsificationism, where scientific theories must be
testable and refutable. He advocates for rigorous testing and critical analysis as the primary
methods for scientific progress.
Positivism, Post-positivism, and Constructivism:
Positivism: (No), Popper is critical of classical positivism, particularly its verification
principle. He rejects the notion that scientific theories can be conclusively verified through
empirical observation.
Post-positivism: (Yes), Popper aligns more closely with post-positivism. He acknowledges
the limitations of empirical observation and verification, emphasizing falsifiability and the
tentative nature of scientific knowledge. Post-positivism accepts that knowledge is not
absolute but subject to revision based on new evidence and critical analysis.
Constructivism: (No), Popper is generally not associated with constructivism. He maintains
a realist stance, believing in an objective reality that science attempts to describe, rather than
the constructivist view that knowledge is primarily a social construct shaped by human
perceptions and interactions.
In summary, Karl Popper's philosophy of science is best characterized as a form of
post-positivism with a strong emphasis on critical rationalism, falsifiability, and the tentative
nature of scientific knowledge.
POS chap. 5: Scientific change and revolutions
Scientific ideas change fast. In virtually any subject today, current theories vary greatly from
those established 50 years ago, and they are likely even more different from those of 100
years ago.
In the post-war period, logical positivism was the prevailing philosophical movement.
Logical positivism, also known as logical empiricism, was a philosophical movement that
emerged in the early 20th century. It was primarily governed by the following principles:
1. Verification Principle: This principle states that a statement or proposition is
meaningful only if it can be empirically verified or is tautological (true by definition,
such as in logic or mathematics). According to this view, metaphysical and ethical
statements, which cannot be empirically verified, are considered meaningless.
2. Empiricism: Logical positivists emphasized that knowledge should be derived from
sensory experience and empirical evidence. They believed that all meaningful
statements about the world must be either empirically verifiable or logically
necessary.
3. Logical Analysis: The movement advocated for the use of formal logic and
mathematical tools to analyze language and scientific theories. Logical positivists
aimed to clarify philosophical problems by reformulating them in logical terms, thus
making them more precise and clear.
4. Anti-Metaphysics: Logical positivism rejected traditional metaphysics as
meaningless. Since metaphysical statements cannot be empirically verified, they were
dismissed as nonsensical. The focus was on observable and testable phenomena.
5. Unity of Science: The movement promoted the idea that all sciences share a common
language and methodology. This was intended to bridge the gap between the natural
and social sciences, advocating for a unified scientific approach.
6. Reductionism: Logical positivists often endorsed the idea that complex phenomena
can be reduced to simpler components, typically those of physics. This reductionist
approach was aligned with their emphasis on empirical verification and the unity of
science.
7. Syntax and Semantics: They distinguished between the syntax (formal structure) and
semantics (meaning) of language. Understanding the structure of scientific language
was seen as essential for analyzing scientific theories.
Logical positivism was embraced by a group of philosophers of science at the time, including
Moritz Schlick and Carl Hempel; Karl Popper was closely associated with the debate but
critical of the movement.
These philosophers believed that it does not matter how a hypothesis is arrived at; what
matters is the process by which it is tested.
Overview:
Chapter 5 delves into the dynamics of scientific progress, exploring how scientific theories
evolve and sometimes undergo revolutionary shifts. The chapter pays particular attention to
the seminal contributions of Thomas Kuhn, whose 1962 work, "The Structure of Scientific
Revolutions," fundamentally altered our understanding of scientific development.
Key Concepts:
Normal Science:
o Paradigms: Kuhn introduced the concept of paradigms, which are overarching
frameworks within which scientific research operates. These paradigms define what is
to be studied, the kind of questions to be asked, and the rules for interpreting results.
o Puzzle-Solving: During periods of normal science, scientists engage in puzzle-solving
within the confines of the existing paradigm. The aim is not to discover new theories
but to extend and refine the existing framework.
Anomalies:
o Emergence of Anomalies: Over time, certain observations or experimental results may
not fit within the established paradigm. These anomalies initially are often ignored or
explained away but can accumulate and become significant.
o Crisis: When anomalies reach a critical mass, the existing paradigm can no longer
adequately explain the phenomena, leading to a crisis in the scientific community.
Scientific Revolutions:
o Paradigm Shift: A scientific revolution occurs when the old paradigm is replaced by a
new one. This shift is not merely a gradual accumulation of knowledge but a
fundamental change in the underlying assumptions and methodologies of the field.
o Incommensurability: Kuhn argued that competing paradigms are incommensurable;
they involve different standards and interpretations, making it difficult for proponents
of different paradigms to fully understand each other.
o Non-Cumulative Progress: Unlike the traditional view of scientific progress as
cumulative, Kuhn suggested that paradigm shifts are more akin to a series of
intellectual revolutions, where the new paradigm redefines the field and reshapes its
theoretical structure.
Impact of Kuhn’s Work:
o Philosophical Implications: Kuhn's ideas challenged the prevailing notion of science
as a steady, objective accumulation of knowledge. They introduced the concept that
scientific progress is influenced by sociological and psychological factors.
o Debate and Criticism: Kuhn's work sparked extensive debate. Critics argued that
Kuhn overstated the discontinuity between paradigms and the irrationality of paradigm
shifts. However, his ideas have been influential in shaping contemporary
understandings of scientific development.
Post-Kuhnian Developments:
o Further Refinements: Subsequent philosophers and historians of science have
expanded on Kuhn's ideas, exploring the nuances of scientific change and the nature of
scientific communities.
o Integration with Other Theories: Kuhn's insights have been integrated with other
theories of scientific change, including those emphasizing the role of technological
advancements, social structures, and cognitive processes in scientific revolutions.
Kuhn was especially interested in scientific revolutions. Periods of great upheaval when
existing scientific ideas are replaced with radically new ones.
Kuhn´s definition of Paradigm:
A paradigm consists of two main components: firstly, a set of fundamental theoretical
assumptions that all members of a scientific community accept at a given time; secondly, a set
of 'exemplars' or particular scientific problems that have been solved by means of those
theoretical assumptions, and that appear in the textbooks of the discipline in question. But a
paradigm is more than just a theory (though Kuhn sometimes uses the words interchangeably).
Collective implementation:
When scientists share a paradigm, they do not just agree on certain scientific propositions, they
agree also on how future scientific research in their field should proceed, on which problems
are the pertinent ones to tackle, on what the appropriate methods for solving those problems
are, on what an acceptable solution of the problems would look like, and so on. In short, a
paradigm is an entire scientific outlook - a constellation of shared assumptions, beliefs, and
values that unite a scientific community and allow normal science to take place.
In “normal periods”, as Kuhn calls them, scientists act as puzzle-solvers, trying to solve
problems that their paradigm cannot completely or easily solve. This is done in a careful
manner, making as few changes to the paradigm as possible.
Scientific revolution:
When anomalies are few in number, they tend to just get ignored. But as more and more
anomalies accumulate, a burgeoning sense of crisis envelops the scientific community.
Confidence in the existing paradigm breaks down, and the process of normal science
temporarily grinds to a halt. This marks the beginning of a period of 'revolutionary science',
as Kuhn calls it. During such periods, fundamental scientific ideas are up for grabs. A variety
of alternatives to the old paradigm are proposed, and eventually a new paradigm becomes
established. A generation or so is usually required before all members of the scientific
community are won over to the new paradigm.
Scientific revolutions are thus the shift from one paradigm to a new one.
Kuhn paradigmatic beliefs:
Moreover, Kuhn questioned whether the concept of objective truth actually makes sense at all.
The idea that there is a fixed set of facts about the world, independent of any particular
paradigm, was of dubious coherence, he believed. Kuhn suggested a radical alternative: the
facts about the world are paradigm-relative, and thus change when paradigms change. If this
suggestion is right, then it makes no sense to ask whether a given theory corresponds to the
facts 'as they really are', nor therefore to ask whether it is objectively true. Truth itself
becomes relative to a paradigm.
Incommensurability:
concepts cannot be explained independently of the theories in which they are embedded. This
idea, which is sometimes called 'holism', was taken very seriously by Kuhn. He argued that
the term 'mass' actually meant something different for Newton and Einstein, since the
theories in which each embedded the term were so different.
Kuhn later altered his view on incommensurability between paradigms, stating that
comparison is possible, but difficult.
He did however maintain incommensurability of standards between paradigms, stating that
different paradigms can have different ways of measuring and understanding the world.
Theory ladenness (of data):
To grasp this idea, suppose you are a scientist trying to choose between two conflicting
theories. The obvious thing to do is to look for a piece of data that will decide between the
two - which is just what traditional philosophy of science recommended. But this will only be
possible if there exist data that are suitably independent of the theories, in the sense that a
scientist would accept the data whichever of the two theories she believed.
The text discusses the impact of Thomas Kuhn's influential work, "The Structure of Scientific
Revolutions," on the philosophy of science. It highlights Kuhn's criticism of the logical
positivist philosophy of science, which emphasized the objectivity and rationality of scientific
inquiry. Kuhn's concept of paradigm shifts challenged the positivist view by suggesting that
scientific change is not always rational or objective. He argued that scientific revolutions
occur when existing paradigms are replaced by new ones, leading to fundamental changes in
scientific worldviews. Kuhn also introduced the idea of incommensurability, suggesting that
paradigms are often so different that they cannot be directly compared. Furthermore, he
argued that data are theory-laden, meaning they are influenced by the theoretical assumptions
of scientists. Kuhn's work prompted a re-evaluation of traditional views of science,
emphasizing the importance of historical and social factors in scientific development. Despite
controversy, his ideas have had a lasting impact on the philosophy and sociology of science.
Lecture 8: Differences of science
Nelson 2016: Differences of science and why they matter.
Physics is often viewed as having the ideal model of science, with quantitative
characterization of subject matter and mathematical specification of theories. It is argued,
however, that trying to mimic the near-perfect model of physics in other sciences may hinder
their progression.
Scientists have also tried to apply the physics model to other fields of research, like
behavioural science. This may not be possible, however, because most biological sciences
and a number of physical sciences are very unlike physics.
Physics as a model science:
Advantages of physics:
- Quantitative specification of the phenomena being studied.
- Mathematical sharpness and deductive power of theories.
- Precision and causal depth of understandings in physics.
Physics has become one of the most powerful and envied sciences because it is capable of
being measured and understood with great precision in the language spoken by the universe,
mathematics.
Differences between physics and other sciences:
Physics is remarkable in the extent to which it is able to make predictions (Often verifiable)
or provide explanations of phenomena based on mathematical calculations associated with
the “laws” it has identified or proposed and the existence of particular conditions.
In many other sciences than physics, phenomena characterized within the same group often
have large variation in their outcome or reaction from one instance to the next.
- Mayr 1985.
Sciences other than physics rarely have the same type of laws established which dictate to a
precise degree what ought to happen under XYZ conditions and at what time.
Many of the phenomena studied in the social sciences are described qualitatively and
verbally. The description is then often supplemented with numerical accounts as well.
Many of the subjects studied by social- and behavioural scientists are quite heterogeneous.
A good part of the reason why one or a few numbers alone generally do not cover adequately
the subject matter being studied, and the numbers themselves often are somewhat fuzzy,
is that the phenomena studied have several aspects and each of these has blurry edges.
Many concepts, especially in social sciences are blurry or fuzzy, which makes precise
statistical predictions very difficult.
I am not arguing that numbers are not useful in this kind of research, but that they need to be
understood as parts of a description that also is qualitative to a considerable degree. I also am
proposing that paying attention exclusively or largely to the numbers that can be used in
research on this kind of subject not only involves ignoring or playing down other kinds of
knowledge that are at least as relevant, but also in many cases can lead to a very distorted
view of what is going on.
Regarding economic theory: it would be better described as mathematical modelling, since
almost no one believes these models capture actual laws of economics or business; rather,
they are ways of expressing, with some precision, our current understanding of the economic
world.
Besides dealing with fuzzy subject matter, social scientists must also recognize that the
subjects they study are changing and may be completely different today than they were a
year or a decade ago.
Cohen 2010.
Lecture 9: Paradigms
Guba 1990: Alternative paradigm
Paradigm meaning:
A basic set of beliefs that guides action, whether of the everyday garden variety or action
taken in connection with a disciplined inquiry.
Guba paradigm preference:
Guba states himself that he is a constructivist.
Constructivism: Reject objectivity, celebrate subjectivity.
He is also relativist:
refers to the idea that knowledge and truth are not absolute but are instead shaped by social,
cultural, historical, and contextual factors. This perspective challenges the notion that science
is purely objective, or that scientific knowledge can be understood outside of the context in
which it is developed.
Reason for supporting constructivism:
Flexibility: Constructivism allows for a more flexible and responsive approach to studying
complex social phenomena.
Multiple Realities: Recognizes the existence of multiple, socially constructed realities,
providing a more comprehensive understanding of social phenomena.
Co-Construction of Knowledge: Emphasizes the importance of the interaction between
researchers and participants, leading to richer and more nuanced insights.
Contextual Understanding: Allows for an in-depth exploration of context and meaning,
which is essential in understanding human behaviour and social interactions.
Ontology, Epistemology, Methodology:
Three basic questions which can describe / characterize existing and emerging paradigms.
Positivism:
The basic belief system of positivism is rooted in a realist ontology, that is, the belief that
there exists a reality driven by immutable natural laws.
Business of science is to discover the “True” meaning / reality and how it works.
the positivist is constrained to practice an objectivist epistemology. If there is a real world
operating according to natural laws, then the inquirer must behave in ways that put questions
directly to nature and allow nature to answer back directly. The inquirer, so to speak, must
stand behind a thick wall of one-way glass, observing nature as "she does her thing."
But how can that be done, given the possibility of inquirer bias, on the one hand, and nature's
propensity to confound, on the other? The positivist's answer: by the use of a manipulative
methodology that controls for both, and empirical methods that place the point of decision
with nature rather than with the inquirer.
Basic positivistic belief system:
• Ontology: Positivism asserts that there is a single, objective reality that can be
discovered and understood through empirical observation and measurement.
• Epistemology: Knowledge is seen as objective and independent of the researcher.
The goal is to uncover universal truths through rigorous scientific methods.
• Methodology: Emphasizes quantitative methods, controlled experiments, and
statistical analysis to test hypotheses and establish causal relationships.
Post-Positivism:
Post-positivism is best described as a modified version of positivism.
Prediction and control continue to be the aim.
Ontologically, post-positivists move from what today is thought of as "naïve" realism
towards critical realism. Here it is believed that a real world driven by real natural causes
does exist, but it is impossible for humans to perceive it truly with their imperfect sensory
and intellectual mechanisms.
- Cook & Campbell 1979
Basic post-positivistic belief system:
Ontology: Post-positivists acknowledge a reality that exists but is imperfectly and
probabilistically understood.
Epistemology: Knowledge is seen as conjectural and subject to revision. Researchers
interact with what is studied, acknowledging that their subjectivity influences findings.
Methodology: Emphasizes rigorous scientific methods but incorporates critical realism,
recognizing the limitations of human understanding.
Constructivism:
Basic constructivist belief system:
Ontology: Constructivists believe that reality is socially and experientially based.
Multiple, socially constructed realities exist.
Epistemology: Knowledge is seen as subjective, co-created by the interaction between the
researcher and participants.
Methodology: Research methods are more qualitative and interactive, emphasizing the
co-construction of meaning and understanding through dialogue and reflection.
Constructivist reasons for not believing in positivism and post-positivism:
a) For scientific tests to be reliable in evaluating hypotheses or questions, the language
used for theories and observations must be independent of each other. The collected
"facts" should be separate from the theoretical statements. However, philosophers
now agree that facts are always influenced by some theoretical framework. This
means that discovering the true nature of things is impossible because "reality" only
exists within a mental framework used to think about it.
b) No theory can ever be completely tested due to the problem of induction. Seeing one
million white swans doesn't prove that all swans are white. There are always many
theories that can explain a set of facts, so a definite explanation is never possible.
Many interpretations are possible, and there's no fundamental way to choose between
them. "Reality" can only be understood through the lens of some theory, whether we
are aware of it or not.
c) Constructivists agree that research cannot be free of values. Just as "reality" can only
be understood through a theoretical perspective, it can also only be seen through a
lens of values. This means many different interpretations are possible.
d) The relationship between the researcher and what is being studied is interactive. Even
post-positivists admit that true objectivity is impossible; the results of research are
always influenced by this interaction. There is no completely neutral standpoint. If
this interconnectedness exists in physical sciences, it is even more likely in social
sciences. This interaction challenges both positivism and post-positivism by:
1. Making the distinction between what can be known (ontology) and how it
is known (epistemology) obsolete; they become a unified whole.
2. Turning research findings into a product of the research process itself,
rather than a direct report of external reality.
3. Viewing knowledge as a human construction, always evolving and never
absolutely true.
Guba 1990 ch2 snippet summary:
This third question brings up a crucial issue: how do we evaluate the quality of constructions
(interpretations or understandings) in constructivist inquiry? Constructivists argue that
traditional criteria used in positivist and post-positivist paradigms, such as validity and
reliability, are not suitable for evaluating constructions. Instead, they propose alternative
criteria:
1. Credibility: This replaces internal validity. A construction is credible if it accurately
represents the multiple realities held by the people being studied. Techniques like
member checks, where participants review and confirm the findings, are used to
enhance credibility.
2. Transferability: This substitutes for external validity. Instead of generalizing
findings, constructivists provide detailed descriptions so that others can determine if
the findings apply to other contexts. This is often achieved through thick description,
giving enough detail for someone else to make such a judgment.
3. Dependability: This corresponds to reliability. Dependability involves showing that
the findings are consistent and could be repeated. It is often established through an
inquiry audit, where an independent reviewer examines the process and findings.
4. Confirmability: This replaces objectivity. Confirmability ensures that the findings
are shaped by the respondents and not researcher bias. It is achieved through practices
like audit trails, documenting each step of the research process to show how
conclusions were reached.
The discussion highlights that these criteria aim to ensure that the findings of constructivist
inquiry are trustworthy and grounded in the realities of those being studied. Constructivist
researchers recognize that these realities are subjective and constructed, but they still strive
for rigor and systematic inquiry.
Lecture 10: Thinking fast and slow
Kahneman Ch 1 & 7, Thinking fast and slow:
Positivist mindset: People are rational, and when they are not, it is explained by deep, complex emotions like hate, love, and fear.
Kahneman challenges this conviction.
System 1: operates automatically and quickly, with little or no effort and no sense of
voluntary control.
System 2: allocates attention to the effortful mental activities that demand it, including
complex computations. The operations of system 2 are often associated with the subjective
experience of agency, choice, and concentration.
System 1 is good at recognizing patterns and implementing appropriate behaviour quickly and effortlessly.
System 2 is more deliberate and requires more fuel to operate, but it can also handle unfamiliar situations and dilemmas where deliberate attention is required of mind and body.
System 1 is also quick to jump to conclusions, which works well when the conclusions are likely to be correct and the cost of an occasional mistake is acceptable.
System 2 is sometimes busy and often lazy. This can also offer at least a partial explanation of why people are more likely to be influenced by empty persuasive messages, like commercials, when they are tired and depleted.
What you see is all there is (WYSIATI)…
WYSIATI supports the creation of coherence and cognitive ease, helping us accept
statements as true and make sense of partial information. This often leads to reasonable
actions but also explains several biases:
• Overconfidence: Confidence depends on the coherence of the story we create from
limited information, often neglecting critical missing evidence.
• Framing effects: Different presentations of the same information can evoke different
emotions (e.g., "90% survival rate" vs. "10% mortality rate").
• Base-rate neglect: Salient information (like a personality description) can
overshadow statistical facts (e.g., more male farmers than male librarians), leading us
to ignore base rates.
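As a sketch of the base-rate point above: a quick, purely illustrative Bayes calculation (the numbers are my own assumptions, not from Kahneman) of the librarian-vs-farmer example, showing how a large base rate of farmers can outweigh a description that fits librarians much better.

# Purely illustrative numbers: a population containing only male librarians and male farmers
prior_librarian = 0.02          # assumed share of librarians among these men
prior_farmer    = 0.98          # assumed share of farmers

p_desc_given_librarian = 0.40   # assumed: the description fits 40% of librarians
p_desc_given_farmer    = 0.05   # assumed: the description fits only 5% of farmers

# Bayes' theorem: P(librarian | description)
numerator   = p_desc_given_librarian * prior_librarian
denominator = numerator + p_desc_given_farmer * prior_farmer
posterior_librarian = numerator / denominator
print(f"P(librarian | description) = {posterior_librarian:.2f}")   # ~0.14, despite the fitting description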
Lecture 11: Science is not broken and Meta analysis
Aschwanden 2015: Science isn't broken.
Science overall is not in a bad state. Science is, however, very ******* hard.
p-values reveal almost nothing about the strength of the evidence, yet a p-value of 0.05 has become the ticket to get into many journals. - Michael Evans
After all, what scientists really want to know is whether their hypothesis is true, and if
so, how strong the finding is. "A p-value does not give you that - it can never give you
that," - Regina Nuzzo.
Instead, you can think of the p-value as an index of surprise. How surprising would these
results be if you assumed your hypothesis was false? - Regina Nuzzo.
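To make the "index of surprise" idea concrete, here is a minimal sketch (my own illustration, not from Aschwanden or Nuzzo; all numbers are assumed) that estimates a p-value by simulation: how often would experiments with no true effect produce a group difference at least as large as the one observed?

import numpy as np

rng = np.random.default_rng(0)
observed_diff = 0.30            # hypothetical observed difference between two group means
n, sd = 50, 1.0                 # assumed sample size per group and noise level

# Simulate many experiments in which the null hypothesis is true (no real effect)
null_means = rng.normal(0.0, sd, size=(10_000, 2, n)).mean(axis=2)
null_diffs = null_means[:, 0] - null_means[:, 1]

# Two-sided p-value: the share of null experiments at least as extreme as what we saw
p_value = np.mean(np.abs(null_diffs) >= observed_diff)
print(f"p = {p_value:.3f}")     # small p = the data would be surprising if there were no effect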
Many times, a phenomenon can be quantitatively measured and tested using a wide variety of different methods and data. This, however, sometimes leads to people trying many combinations until a certain threshold is reached, like a p-value of 0.05. This is also called "researcher degrees of freedom" or p-hacking: fiddling around with the data until publishable results appear. P-hacking is generally thought of as cheating.
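A minimal sketch (my own illustration, with assumed numbers) of why researcher degrees of freedom are a problem: if you measure ten unrelated outcomes and report only the one with the best p-value, "significant" results appear far more often than the nominal 5%, even when there is no true effect at all.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_experiments, n_outcomes, n = 2_000, 10, 30
false_positives = 0

for _ in range(n_experiments):
    # Two groups with NO true difference, measured on ten unrelated outcome variables
    a = rng.normal(size=(n_outcomes, n))
    b = rng.normal(size=(n_outcomes, n))
    p_values = stats.ttest_ind(a, b, axis=1).pvalue
    if p_values.min() < 0.05:   # "p-hack": report only the best-looking outcome
        false_positives += 1

print(f"False-positive rate: {false_positives / n_experiments:.2f}")  # roughly 0.4, not 0.05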
Scientists can also fall back on HARKing: Hypothesizing After the Results are Known. However, a single analysis is not sufficient to find a definitive answer. Every result is a temporary truth, one that's subject to change when someone else comes along to build, test and analyse anew.
Self-Correction in Science:
• Science is powerful because it's self-correcting; false findings get overturned by new
studies.
• However, the track record of scientific publishing in self-correction is problematic.
Retraction Watch:
• Ivan Oransky and Adam Marcus launched Retraction Watch in 2010 to document
retractions in scientific literature.
• Initially expected to post monthly, the blog now posts multiple times a day due to a
high volume of tips and widespread frustration in the scientific community.
• The blog receives 125,000 unique views per month and has expanded its focus to
broader scientific misconduct and errors.
Reasons for Retractions:
• Retractions occur for various reasons, with plagiarism and image manipulations being
the most common.
• A significant portion of retractions result from misconduct rather than honest
mistakes.
Increase in Retractions:
• Retractions have increased tenfold from 2001 to 2009. There is debate whether this is
due to increased misconduct or better detection.
• Despite the rise, retractions are still a small fraction of total publications.
Issues in Peer Review:
• Peer review often fails to catch errors due to lack of thorough checking, especially in
methods and statistics sections.
• Some authors exploit flaws to conduct fraudulent peer reviews.
• Predatory journals publish papers without proper peer review, exacerbating the
problem.
Internet's Role:
• The internet facilitates rapid, post-publication peer review and discussions, potentially
improving scrutiny and accountability in scientific publishing.
Challenges in Correcting Scientific Beliefs:
• Scientists and the public often hold onto incorrect ideas due to confirmation bias and
the human tendency to cling to established beliefs.
• Even with overwhelming evidence, changing entrenched beliefs is difficult.
Impact of Human Fallibility:
• Human cognitive biases, such as naive realism and confirmation bias, affect how
scientists interpret and react to new evidence.
• The persistence of outdated ideas, even after being disproven, is common in scientific
literature.
Misinterpretation and Overstatement:
• Both media and scientists can overstate findings, contributing to misinformation.
• The process of science involves multiple studies and evolving evidence, which can be
misrepresented as inconsistent or unreliable.
Science as a Rigorous but Imperfect Process:
• The scientific method, though rigorous, is inherently challenging and messy.
• Variations in findings should be viewed as part of the process of tackling complex
problems, not as a failure of science.
• Respect for science should come from understanding its difficulties and the ongoing
nature of inquiry and correction.
Harrer et al. 2021: Meta-analysis.
Science is cumulative. We build on the creations and findings of those before us.
Isaac Newton: If we want to see further, we can do so by standing on the shoulders of giants.
A ton (literally) of new research is published every day. More than a million medical articles
are now published every year.
Back in the 1970s, the brilliant psychologist Paul Meehl already observed that in some
research disciplines, there is a close resemblance between theories and fashion trends. Many
theories, Meehl argued, are not continuously improved or refuted, they simply “fade away”
when people start to lose interest in them (Meehl 1978).
What are “Meta-analyses”?
Analyses of analyses.
In conventional studies the unit analysed is typically people, specimens, countries, objects,
etc. In meta-analysis, the unit of analysis is other analyses.
The aim is to combine, summarize, and interpret all available evidence. However, there exist
at least three methods of doing so:
Traditional/Narrative Reviews:
• Common until the 1980s, written by experts.
• No strict rules for study selection or scope definition.
• Conclusions are subjective, potentially biased by the author’s opinions.
• When balanced, they provide a broad overview of a research field.
Systematic Reviews:
• Use predefined, transparent rules to summarize evidence.
• Research questions and methodologies are explicit and reproducible.
• Aim to cover all available evidence, assessing validity with set standards.
• Present a systematic synthesis of outcomes.
Meta-Analyses:
• An advanced type of systematic review.
• Clearly defined scope, systematic selection of primary studies.
• Combine results from previous studies quantitatively into one numerical estimate (a minimal sketch of this pooling step follows this list).
• Quantify effects, prevalence, or correlations, requiring studies with similar designs
and measurements.
• More exclusive than systematic reviews due to the need for comparable quantitative
data.
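As referenced in the list above, here is a minimal sketch (my own illustration, not from Harrer et al.; the effect sizes are assumed) of the core quantitative step: a fixed-effect meta-analysis that pools study results into one numerical estimate by weighting each effect size with the inverse of its variance.

import numpy as np

# Hypothetical effect sizes (e.g., standardized mean differences) and their standard errors
effects = np.array([0.30, 0.45, 0.12, 0.50, 0.28])
ses     = np.array([0.15, 0.20, 0.10, 0.25, 0.12])

weights = 1.0 / ses**2                        # more precise studies get more weight
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled effect: {pooled:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")

Random-effects models extend this basic idea by also accounting for heterogeneity between studies.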
Fourth method, depending on definition:
Individual Participant Data (IPD) Meta-Analysis:
• Definition and Approach:
o IPD meta-analysis involves collecting and combining the original data from all studies into one large dataset.
o Traditional meta-analyses, by contrast, use aggregated results (e.g., means, standard deviations) found in the published literature.
• Advantages:
o Allows for imputation of missing data and uniform application of statistical methods across all studies.
o Facilitates exploration of participant-level variables (e.g., age, gender) as potential moderators, unlike traditional meta-analyses which rely on study-level variables (e.g., year of publication).
• Current Status:
o IPD meta-analysis is a newer method and less common due to the difficulty of obtaining original data from all relevant studies.
o Most meta-analyses remain traditional because collecting original data has historically been uncommon in many disciplines.
o Challenges in IPD include low data availability, with studies able to obtain IPD from only a portion of eligible studies.
• Relevance:
o Not covered in detail due to its relatively recent adoption and the challenges in data availability, despite its advantages over traditional meta-analysis methods.
Meta-analysis was first created in the nineteenth century. It has since become more and more popular, and today it is a quite respectable study method.
Meta-analysis pitfalls:
High-quality primary studies are often very costly and can take many years before final
results can be analysed.
Meta-analyses can, on the other hand, be produced at fairly low cost and within a reasonable time.
This has made meta-analyses quite popular, and they will often be published even if they are not of the highest quality. This unfortunately creates a natural incentive for researchers to produce many meta-analyses at the expense of scientific considerations.
An abundance of redundant and misleading meta-analyses is produced each year. They may also be heavily influenced by corporate interests, as in the agricultural and pharmaceutical industries.
Reproducibility is one of the four scientific quality measures; the reproducibility of many meta-analyses is, however, very limited.
Another common problem is that different meta-analyses on the same, or overlapping, subjects will come to different conclusions.
Main problem 1: Apples and Oranges
Diverse Studies: Meta-analysis often involves combining studies with varying characteristics
(sample, intervention delivery, study design, measurements), akin to mixing apples and
oranges.
Meaningfulness of Estimates: Although a numerical estimate can always be statistically
derived, its meaningfulness hinges on the studies sharing relevant properties for the research
question.
Examples of Extremes:
• Too Diverse: Combining studies on job satisfaction and medication effects yields
meaningless results.
• Over-Specific: Focusing too narrowly (e.g., only Canadian males in their sixties
treated with a specific dose of medication) limits generalizability.
Balancing Scope: Meta-analyses should aim to answer practically relevant research
questions. They should strike a balance between diversity and specificity based on the
research question.
Appropriate Scope:
• Broad inclusion (varied age groups, regions) is suitable for evaluating general
effectiveness.
• Restricting specific aspects (e.g., training program details) helps maintain relevance
and clarity.
Handling Heterogeneity: Properly conducted meta-analyses can incorporate and make sense
of heterogeneity, providing valuable insights beyond individual studies.
Conclusion: The "Apples and Oranges" issue depends on the meta-analysis's research
question. Variations can be beneficial if they align with the study's aims and are appropriately
managed.
Main problem 2: Garbage in → Garbage out
Quality Dependence: The validity of a meta-analysis relies heavily on the quality of the
studies it includes.
Bias and Errors: If included studies are biased or incorrect, the meta-analysis results will
also be flawed.
Mitigation: Assessing the quality and risk of bias of included studies can help but cannot
fully compensate for poor-quality data.
Outcome of Poor Data: When most included studies are of low quality, the meta-analysis
will likely indicate the need for more high-quality research rather than providing reliable
conclusions.
Guiding Future Research: Even identifying a lack of trustworthy evidence can be valuable,
directing efforts towards conducting better-quality studies.
Main problem 3: The “File drawer” problem
Missing Research: The file drawer problem highlights that not all relevant research is
published, leading to incomplete meta-analyses.
Non-Random Missing Studies: Missing studies are not random; positive and innovative
findings are more likely to be published than negative or inconclusive results.
Trend of Fewer Negative Findings: Over recent decades, fewer negative findings are being
published, especially in social sciences and biomedical fields (Fanelli 2012).
Publication Bias: There is a systematic underrepresentation of studies with negative or
disappointing results in the published literature, known as publication bias.
Managing Bias:
• Search and Selection Methods: Improving how studies are searched and selected can
help mitigate this bias.
• Statistical Methods: Techniques exist to estimate the presence and impact of publication
bias in a meta-analysis.
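A minimal sketch (my own illustration, with assumed study data) of one such statistical technique, Egger's regression test for funnel-plot asymmetry: standardized effects are regressed on study precision, and an intercept far from zero suggests asymmetry that is consistent with publication bias.

import numpy as np

# Hypothetical per-study effect sizes and standard errors
effects = np.array([0.42, 0.38, 0.55, 0.21, 0.60, 0.15, 0.48])
ses     = np.array([0.10, 0.12, 0.20, 0.08, 0.25, 0.07, 0.18])

z = effects / ses                   # standardized effects
precision = 1.0 / ses               # study precision

# Ordinary least squares: z = intercept + slope * precision
X = np.column_stack([np.ones_like(precision), precision])
(intercept, slope), *_ = np.linalg.lstsq(X, z, rcond=None)

# An intercept far from zero points to funnel-plot asymmetry, which is
# consistent with (but does not prove) publication bias.
print(f"Egger intercept: {intercept:.2f}, slope: {slope:.2f}")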
Main problem 4: The “Researcher agenda” problem
Researcher Choices: Meta-analysts make numerous decisions regarding the scope, study
selection, and outcome measures, allowing for significant discretion.
Degrees of Freedom: This freedom can lead to arbitrary decisions or ones influenced by
personal preferences.
Expertise and Bias: While subject-specific expertise helps in framing research questions, it
can also introduce bias. Experts may have strong opinions or vested interests that influence
the meta-analysis outcomes.
Varying Conclusions: Evidence shows that even experienced analysts can draw different
conclusions from the same data set, indicating potential biases (Silberzahn et al. 2018).
Intervention Research: Bias is particularly concerning in intervention research where
analysts might favorably interpret results if they are invested in the intervention being
studied.
Mitigation Strategy: Pre-registration and publishing a detailed analysis plan before data
collection can help reduce bias by outlining the methodology in advance.
Summary: The "Researcher Agenda" problem highlights the influence of researchers'
decisions and biases in meta-analysis. Analysts have significant freedom in study design,
which can lead to arbitrary or biased choices, particularly if they have strong opinions or
vested interests in the research area. Even with the same data, different analysts might reach
different conclusions. This issue is pronounced in intervention research where researchers
might be inclined to interpret outcomes positively. Pre-registering and publishing a detailed
analysis plan can mitigate this problem by ensuring transparency and reducing bias.
Lecture 12: Research misconduct
DBORM 2020: Danish board on research misconduct
A new law on scientific conduct was established in 2017 with the main purpose of creating a clearer division of responsibility between the central national misconduct body (DBORM) and the Danish research institutions. DBORM handles all cases of research misconduct, while remaining instances of questionable research practice are handled by the research institution in question.
Research misconduct and questionable practice:
Research misconduct:
Fabrication, falsification, and plagiarism committed wilfully or with gross negligence in the planning, performing, or reporting of research.
Fabrication: Undisclosed construction of data or substitution with fictitious data.
Falsification: Manipulation of research material, equipment or process as well as changing
or omitting data or results making the research misleading.
Plagiarism: Appropriation of others' ideas, processes, results, texts, or specific terms without rightful crediting.
Questionable research practice:
Breaches of current standards on responsible conduct of research, including those of the
Danish code of conduct, and other applicable institutional, national, and international
practices and guidelines on research integrity.
Task of DBORM:
Research complaints are usually handed over to DBORM by the research institution from which the research in question originates. Thus, complaints must first be filed with that respective institution.
If an institution wants to file a complaint itself, it may submit it directly to DBORM.
The board must file an annual report regarding complaints received. This is done to
strengthen the integrity of Danish research.
Board jurisdiction and scope:
Jurisdiction & Scope: Danish cases concerning publicly funded research and research carried out at a public Danish research institution.
Privately funded research can be investigated with the consent of the private company or a similar party.
Board organization:
Chairman + 8-10 academic members who jointly represent different scientific disciplines. Each academic member has an alternate who can join the board in case of absence.
Academic board members are recognized scientists appointed by the Danish Minister for Higher Education and Science.
The chairman is a high court judge, also appointed by the Minister.
Procedures for investigation:
For the board to begin an investigation of an allegation of research misconduct, the following conditions must be met:
• The allegation must relate to a scientific product, for example a scientific paper, a PhD thesis or similar.
• The case must concern a researcher who has contributed to the scientific product.
• The case must concern "research misconduct". Scientific disagreements or the quality of research fall outside the mandate of the board.
Complaints are sent to the research institutions, which must evaluate whether the above criteria are met. If so, the institution must compose a report on the matter and send it to DBORM.
The board must ensure all information relevant to the case is obtained, including giving the
accused a chance to provide a statement.
The decision:
Decisions of the board are sent to the relevant parties (the accused and his/her research institution). An anonymized version is also made publicly available.
Data Colada 2023:
Uri, Joe, & Leif 2023.
“Two different people independently faked data for two different studies in a paper about
dishonesty.” …
Researchers conducted a study on dishonesty, claiming that having people sign an agreement stating that the information they give is true will lower the chance of cheating if the agreement is placed at the top of the page, relative to having it at the bottom or not at all.
The data obtained by the investigators showed anomalies that, to their knowledge, could not have occurred through ordinary sorting or filtering; cells must therefore have been moved or altered. The investigators note that if the data had been tampered with, they would expect the altered data points to strongly support the hypothesis of the original study, which they did.
Using the calcChain metadata of the Excel file, the investigators were able to trace what the original file may have looked like before it was tampered with.
Piper 2023: Can you trust a Harvard dishonesty researcher?
Discusses dishonesty in science, especially among the same researchers and studies as those discussed in the Data Colada post.
Our peer review process isn't very good at finding outright, purposeful fabrication.
Many scientific malpractices can be limited by preregistering studies, being willing to publish null results, and being careful about testing multiple hypotheses in search of publishable results. None of this, however, stops someone who deliberately cheats.
Fabrication is, however, not hard to detect if the data is made publicly available for review, because it is hard to change or alter data points without leaving a trace.
Carter 2000, Plagiarism in academic writing
Plagiarism involves using someone else's work without giving proper credit.
"The false assumption of authorship: the wrongful act of taking the product of another person's mind and presenting it as one's own."
One can also self-plagiarize, which consists of not citing one's own original work when referring to or building on it subsequently.
Lecture 13: Complexity, credibility crisis, 21st century science
Sullivan 2011: Embracing complexity:
Complex adaptive systems:
- The system consists of a number of heterogeneous agents. Each agent makes decisions about how to behave. These decisions will evolve over time.
- Agents interact with each other.
- Emergence. In a very real way, the whole becomes greater than the sum of its
parts.
Example of complex systems: Ant colony.
Each ant has a role in the overall colony. Each ant interacts with the other ants.
If you examine the colony at the overall level, rather than looking at individual ants, it appears to be an organism: robust, adaptive, and with a life cycle of its own.
Individual ants, however, act on local information and cannot see the big picture from above.
Other examples of complex systems:
Big cities, neurons in your brain, the stock market, cells in immune system, etc.
Basic features of complex systems:
- Heterogeneous agents,
- Interactions,
- Emergent global system.
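A minimal sketch (my own illustration, loosely in the spirit of threshold/cascade models, not from Sullivan) of the three features above: heterogeneous agents (different adoption thresholds), interaction (agents respond to how many others have adopted), and an emergent global outcome that no single agent controls.

import numpy as np

rng = np.random.default_rng(42)
n_agents = 200
thresholds = rng.uniform(0.0, 1.0, n_agents)   # heterogeneous agents: different thresholds
adopted = rng.random(n_agents) < 0.05          # a few early adopters start things off

for step in range(100):
    share = adopted.mean()                     # interaction: agents observe overall adoption
    new_adopted = adopted | (thresholds < share)
    if new_adopted.sum() == adopted.sum():
        break                                  # equilibrium: nobody else will change
    adopted = new_adopted

# Emergence: whether adoption cascades through (almost) the whole population or stalls
# early depends on the distribution of thresholds, not on any single agent.
print(f"Final adoption share: {adopted.mean():.2f}")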
Using the complex adaptive system model instead of the rational-expectations model or the zero-arbitrage model can be beneficial for business people, provided they accept that no linear mathematical models can be established for such systems.
Taking care of complex systems:
Complex systems are often very good at staying alive if left as they are. Small changes that seem independent of the complex system can, however, affect it greatly.
Take, for example, the practice of feeding wild deer in Yellowstone Park, which not only changed the organisms living there but also ended up reshaping the landscape itself.
It can be dangerous to change such systems without great thought and deliberation first.
Watts 2007: Twenty-first century science
If handled appropriately, data about internet-based communication and interactivity could revolutionize our understanding of collective human behaviour.
Discusses the challenges in understanding social phenomena and how recent developments in
network science and the proliferation of internet-based communication have the potential to
revolutionize our understanding of collective human behaviour.
The article postulates that data from long-term observations of the real-time interactions of millions of people online could eventually provide insights into human behaviour, but that social phenomena are harder to understand than problems in the physical and life sciences because they involve large numbers of different entities that interact in many different ways.
Social phenomena involve the interactions of large (but still finite) numbers of heterogeneous
entities, the behaviours of which unfold over time and manifest themselves on multiple
scales.
Drawing a parallel to physics, the social scientist must, so to speak, solve the equivalent of quantum mechanics, general relativity, and the multi-body problem all at the same time.
Feldman-Barrett 2021: Psychology in a crisis
Recently, much debate in psychology has regarded what's come to be known as the replication crisis or credibility crisis. Basically, many scientific findings do not replicate when other researchers study the same phenomenon or thing.
Much of the debate has regarded the credibility and methodology of the original scientists.
Many psychological scientists believe that thoughts, feelings, etc. are the result of a few strong causes. This is known as a "mechanistic mindset".
Experiments often focus on one or two variables which must be isolated to find results that
hopefully replicate.
The mechanistic mindset states that if we cause people to feel angry, they should scowl,
blood pressure should rise, and the probability that they will act aggressively should increase.
All external factors, like gender, age, amount of sleep, or what the participant ate, are treated as noise and their influence is ignored. Any problem with replicability is often assumed to be the fault of the original study.
Another view would be that psychological behaviour and reactions are not determined by a few strong factors, but rather by an intricate web of many weak, interacting factors.
This view is called the complexity mindset.
We cannot manipulate one variable and expect all the other variables to remain the same.
Our brains are complex organs that can be affected by many factors.