Interpreting & Documenting Research & Findings
Published by the Universities of Edinburgh, Glasgow and Strathclyde
W.L. Wilson
Acknowledgements
The material in this booklet has been developed from discussion groups and interviews with the research staff of Glasgow and Strathclyde Universities.
The advice and contributions of Dr Avril Davidson, Mr Keri Davies, Prof George
Gordon, Mrs Janice Reid, Dr Alan Taylor and Mrs Sheila Thompson are
acknowledged.
The advice of the project Steering Group (Prof Michael Anderson, University of Edinburgh; Dr Nuala Booth, University of Aberdeen; Dr Ian Carter, University of Glasgow; Ms Jean Chandler, University of Glasgow; Dr Avril Davidson, University of Glasgow; Prof George Gordon, University of Strathclyde; Prof Caroline MacDonald, University of Paisley; Prof James McGoldrick, University of Dundee; Dr Alan Runcie, University of Strathclyde; Prof Susan Shaw, University of Strathclyde; Dr Alan Taylor, University of Edinburgh; Prof Rick Trainor, University of Glasgow) is also acknowledged.
The project was funded by the Scottish Higher Education Funding Council.
Other titles in Series
Gaining Funding for Research
Gathering and Evaluating Information from Secondary Sources
Preparing the Research Brief
© Universities of Edinburgh, Glasgow and Strathclyde 1999
Cartoons: D. Brown & W. L. Wilson
ISBN 0 85261 688 0
Printed by Universities Design and Print
Introduction
This booklet is one of a series of four aimed at researchers in the early stages of
their career life cycle. The comments within the booklet are based upon information
collected at a series of discussion groups and interviews at Strathclyde and Glasgow
Universities. The questions put to the discussion groups were based broadly upon
the performance criteria and knowledge requirements identified in the report "Draft
Occupational Standards in Research" (Gealy et al, 1997).
The booklet is in two sections. The first section, "Interpreting Research Results and Findings", considers various aspects of the interpretation of results: how to confirm the reliability of results and keep analytical methods fresh, how to avoid bias or over-interpretation of results, and how to identify potential areas of future research from the results.
Section two, "Documenting Research Results and Findings," examines methods of
presenting research findings, the physical aspects of record keeping, and what
should be recorded within research records both to ensure their value to the
researcher and to ensure that they are legally and ethically correct.
The booklet is not intended to be read at a single sitting, but rather to be dipped into as and when the occasion arises.
Both sections of the booklet are subdivided into subsections, each of which consists of:
• An introduction.
• Points of advice, and examples from experienced researchers to highlight these points (colour linked). Information for the second section was collected through a series of interviews and discussion groups formed from lecturers, PhD students, and Contract Research Staff (CRS).
• Bullet points which highlight the main points; these refer to the points and examples preceding them.
The booklet is not intended to be exhaustive or definitive. The issues raised are
those which most exercised the minds of the researchers providing the comments for
its preparation. These comments do offer interesting contrasts of opinion, either
because commentators disagreed about the way to approach a certain issue, or
because researchers from different subjects took different approaches in their
methodology. The nature of the examples provided in the booklet is a reflection of the interests of those taking part in the discussions and interviews, and possesses no greater significance than that.
Contents
INTERPRETING RESEARCH RESULTS AND FINDINGS
How do you confirm the reliability of your results?
How do you avoid getting into a rut with your analytical methods?
How would you define interpretative methods?
How do you recognise and avoid bias in your interpretation of your results?
How do you evaluate your results in the light of the objectives of your original proposal?
When do you think uncertainty may arise over results and their interpretation, and how do you ensure that your conclusions are fully justified by the results?
How do you identify potential areas of further research from the results?
DOCUMENTING RESEARCH RESULTS AND FINDINGS
What techniques do you use to present your findings, and possible areas of future research, to other interested bodies?
How do you record your research and findings? Are there methods of recording that you would avoid?
What details do you put in your research records? What details should never be missed out of records, and why?
How do you confirm that your records meet all relevant legal and ethical requirements?
Interpreting Research Results and Findings
How do you confirm the reliability of your results?
Introduction
The exact nature of what is reliable will vary from field to field. Mathematical proofs,
which are unusual in that there is an absolute right, are usually developed over
years. In other fields, e.g. social planning and architecture, there may be no absolute
right or wrong, and the confirmation, or otherwise, may take 30 years of urban
development. Communication, experimental repetition, alternative approaches and good background knowledge will all be applicable in some fields, but are unlikely to be applicable in all fields.
Points to Consider
The most important initial stage is to be aware that your results may not
be reliable. Blind faith does not make for good investigative research.
Results may be misleading for a wide range of reasons, e.g. an atypical
sample, equipment error, or the simple vagaries of animal behaviour. The
latter point is nicely summed up by the Harvard Law of Animal Behaviour:
"Given precisely controlled conditions, the animal will do as it damn well pleases."
Example: During a study of prostitution habits the researcher found that it was difficult to obtain reliable data on condom use. She could ask till she was blue in the face, and in as many different ways as she could think of: one-to-one interviews, focus groups, whatever. All interviewees reported 100% condom use, except when a condom happened to burst. Yet it was obvious to the researcher that there were women who were working without condoms.
Peer review is a basic step in checks of reliability. Asking colleagues who have a
sound knowledge of the field, but have not been as close to the work as yourself, is
an essential and basic check of reliability. Better to have a colleague pick up a
discrepancy at an early stage rather than a paper or grant referee at a later one.
It is important to ensure that you have an adequate number of repetitions within your experimental data (while guarding against pitfalls such as pseudoreplication). However, repetitions can add new variables to the process. There is inevitably a balance between the demands of the objectives and the demands of precision.
Example: The value of repetition was emphasised by one researcher who remarked
that he would not report on any data which had not been confirmed within his own
laboratory. For experimentation which required statistical analysis the precise
number of replications was dependent upon the expected level of variability within
the measurement. In order to ensure statistical accuracy when it is not possible to
run a number of replicates simultaneously, the researcher reruns the complete
experiment. The precise number of repetitions depends upon the variability between
trials. His recent study examining the rearing of halibut highlights this latter
point. The experiment required four different tanks, each tank providing a different
environment. Normally the researcher would aim to do these in triplicate, providing a
total of 12 tanks. However, because the experiment was within a production style
system, the scale of the project made simultaneous trials impossible, thereby
requiring the entire experiment to be repeated. This unavoidable variability requires
an increased number of repetitions beyond the average. On the other hand, the big
advantage of using a production style system is the avoidance of the extra variables
inherent in scaling from the very small upwards.
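The balance between objectives and precision can be explored before committing resources. Below is a minimal, hypothetical sketch (in Python, which the booklet does not prescribe) of a simulation-based check of how many replicates might be needed to detect a given treatment effect; the effect size and the between-replicate scatter are illustrative assumptions, not values taken from the halibut study.

```python
# Hypothetical sketch: estimate, by simulation, the power of a two-sample
# t-test for a range of replicate numbers. All numbers are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def estimated_power(n_reps, effect=1.0, sd=1.5, alpha=0.05, n_sim=2000):
    """Fraction of simulated experiments in which a two-sample t-test
    detects a true difference of `effect`, given replicate-to-replicate
    scatter `sd`."""
    hits = 0
    for _ in range(n_sim):
        control = rng.normal(0.0, sd, n_reps)
        treated = rng.normal(effect, sd, n_reps)
        if stats.ttest_ind(control, treated).pvalue < alpha:
            hits += 1
    return hits / n_sim

for n in (3, 6, 9, 12):
    print(f"{n} replicates per condition: power = {estimated_power(n):.2f}")
```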
Using several techniques on the same sample provides an alternative form of
experimental repetition. Thus the reliability of tests for genetic mutations in tumours
is regularly checked by using three different techniques on the same tumour sample.
Refer to previously published work and review your results within the context of previous publications to obtain a feel for general trends. There are some trends which may be expected to emerge. You must ensure adequate quality controls to avoid bias, i.e. inadvertently creating the result expected from the 'trend'. Bear in mind when checking reliability in the light of previous trends that many breakthroughs in science were at first regarded as completely implausible. Plausibility is determined by present knowledge.
It is important to be thoroughly familiar with the background and content of the project. This is especially important when moving into new fields, where some less than obvious fact may pass unnoticed.
Example: Whilst out collecting crabs a postgraduate researcher observed that some
crabs reacted to other individuals of the same species by rearing up and attacking.
Lower shore crabs were more likely to be aggressive than upper shore crabs. Several
years later the researcher discovered that there were actually two species of crab on
those shores, but that the two species were virtually identical. Fortunately the
researcher had not published the study, and learnt a valuable lesson cheaply.
One engineer suggests the following summary for his own speciality:
a) Derive from first principles to establish 'plausibility'. This would help to highlight erroneous results.
b) Meticulous calibration.
c) Error analysis. (The system used to measure the parameter will consist of different parts, each with an associated uncertainty. Where the uncertainty of each part can be obtained from calibration, the uncertainty of the whole should be quantifiable; a sketch of this combination follows.)
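As an illustration of point (c), the following is a hedged sketch of one common way of combining independent, calibrated uncertainties (the root-sum-square rule); the component names and values are invented and are not taken from the engineer's own measurement system.

```python
# Hypothetical sketch: combine independent component uncertainties obtained
# from calibration into a single uncertainty for the whole measurement chain.
import math

component_uncertainties = {
    "sensor calibration": 0.02,    # e.g. 2% from a calibration certificate
    "signal conditioning": 0.01,
    "data acquisition": 0.005,
}

# Root-sum-square combination, valid when the contributions are independent.
combined = math.sqrt(sum(u ** 2 for u in component_uncertainties.values()))
print(f"Combined relative uncertainty = {combined:.3f}")   # about 0.023
```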
When working with human subjects it is essential to ensure that the
sample is as representative as possible in order to check for a variety of
different responses. One method of checking the accuracy of responses is
to rephrase the question and then compare the new response with the
answer to the earlier question. It is important to ensure that the analysis
of the data is as inclusive of the varied responses as possible. One
technique by which this can be done is the inductive procedure of deviant
case analysis.
Example: Deviant case analysis proceeds through examination of the universe of
responses provided to a certain topic. If exploring the question of condom use, a
basic hypothesis may be that prostitutes would encourage their clients to use
condoms to ensure their own protection against HIV. However, there might be
women who do not articulate their use in these terms at all, but refer to other
reasons (e.g. they form a means of distinguishing between the sex they have with
their private partners and the sex they provide to clients). The overall explanation of condom use as a barrier would still fit, but the argument would have to be modified to incorporate the broader spectrum of responses. If, for example, it was observed that most women reported other reasons for condom use which did not fit within the barrier explanation, then the original argument must either be modified or discounted entirely.

• Bear in mind that your results may not be as you need them.
• Check all results thoroughly.
• Use alternative techniques to check results.
• Examine your results in the light of other work.
• Know your background information well.
How do you avoid getting into a rut with your analytical
methods?
Introduction
The best way to avoid becoming stuck in a rut is to remind yourself regularly of the
risk of staying there. Most researchers will develop favoured techniques, and it is
always easier to fall back on well-used, comfortable techniques than to seek out new
and novel approaches which require the additional effort of getting up to speed.
Communication and keeping up with the literature (not just in your own field) appear
to be the best ways of remaining fresh.
Points to consider
Keep in touch with the research world around you. It takes some time for new methods and techniques to appear in the literature. As with so many aspects of research, networking is vital. Perhaps the most common spur to the development of new techniques is an existing technique that is very labour intensive. In these circumstances it is worth asking around to see what other investigators are doing.
Look to other fields for inspiration.
Example: One research team studying the prostitute population of the red light area
in a large city decided to adopt the biological technique of ‘capture/mark/recapture’.
The study required identifiers, so the team ‘tagged’ each individually ‘captured’
subject with a unique ID, then used these identifiers to model changes in the
prostitution population over a period of time. This is a particularly elegant example, as the technique originated in a nineteenth-century study in Paris in which the number of priests was used to estimate the total population of the city. The technique was later adopted by ecologists to model animal population dynamics. In this instance, therefore, it has moved from social science research to biological research and back to social science.
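The booklet does not say which estimator the team applied; as a concrete illustration of the capture/mark/recapture idea, here is a minimal sketch of the classic two-sample Lincoln-Petersen estimate, with invented figures.

```python
# Hypothetical sketch of the Lincoln-Petersen capture/mark/recapture estimate.
def lincoln_petersen(marked_first, caught_second, recaptured):
    """Estimate total population size N = (M * C) / R, where M individuals
    were identified ('tagged') in the first sweep, C in the second, and R of
    the second sweep had already been seen in the first."""
    if recaptured == 0:
        raise ValueError("No recaptures: the estimate is undefined.")
    return marked_first * caught_second / recaptured

# Invented figures: 60 women identified in the first period, 75 in the
# second, 25 of whom were already known, giving an estimate of about 180.
print(lincoln_petersen(60, 75, 25))
```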
Discuss what you are about to do with your colleagues who may well
contribute some good ideas. Discussion can help you avoid becoming
enmeshed in minutiae and missing the bigger picture. It may be helpful to
brainstorm with a group of your peers on a regular basis.
Example: One researcher remarked that she had been experimenting for months
with a new technique to identify differentially expressed genes, all the while going
nowhere. Then after several months she discovered that throughout that period a
colleague in the group had been arguing the case for an alternative technique.
Greater efforts at communication would have saved her several wasted months.
For some fields, such as English Literature, opening a line of communication with the
author you are studying may provide a useful insight into their work. For other fields,
contacting authors of publications will allow you to discuss new techniques being
developed, or perhaps highlight publications which you may have inadvertently
missed.
Look for weaknesses in the methodology: for instance, is the present technique of a lower sensitivity than required, and can it be improved? Regular reappraisal of the techniques used, and consideration of their less satisfactory points, should help avoid complacency.
Example: A research group was interested in measuring virus-specific immune
responses. When the researcher joined the group there were a number of techniques
available to measure antibody responses to the virus. However, there were no
reliable techniques available to measure the T-cell response. Unlike antibodies, T-cells recognise virus-infected cells, or tumour cells, and kill them. The researcher’s
first task was to develop such a technique. He succeeded in developing a technique
which was then adopted globally. There remained concern that the technique was
underestimating the true magnitude of the host T-cell response. The team is now in
the process of designing novel assays to measure virus-specific T-cells. Using these
assays it will not only be possible to verify the data that they (and other laboratories)
have obtained, but the improved sensitivity afforded by the new techniques will allow
detection of T-cells in circumstances which would otherwise have been overlooked.
There can be problems in securing funding for completely new approaches. However attractive a new methodology may appear, it is important to ensure that the methodology will not discourage funding bodies if it is included within your grant application. This can be a "catch 22": you want to be adventurous, but cannot move forward because funding bodies or collaborators will be wary of the 'excessive' novelty of your new idea. On the other hand, in the highly competitive world of research funding, you may need that bit of novelty as an added attraction. If in doubt it is well worth contacting your prospective funding bodies in advance. Some funding bodies run schemes to promote "blue skies" research, such as the Research Councils' Realising Our Potential Awards (ROPAs). Though 'original' and 'novel' are not one and the same, a pilot run will help move the novel towards the original and help you convince the more sceptical reviewer.

• Keep up with the literature.
• Networking is essential.
• Look to other fields for inspiration.
• Cast a critical eye over your methodologies, identify the weak points, and seek alternatives which ameliorate them.
How would you define interpretative methods?
Introduction
It became obvious during the discussions and interviews used in the creation of this
booklet that the definition of ‘interpretative’ was not consistent. The question "how
would you define interpretative methods?" was put to participants to try to gain
some idea of the definitions of different fields. In order to avoid interpreting the
interpretations of "interpretative" and inadvertently shifting the definitions towards a
biologist’s view of the world, this section has been kept in the form of the original
quotations.
Definitions of "Interpretative"
"This would partly be related to the way that the experiments have been set up - you
set up experiments with defined objectives, the interpretation of which would initially
be based on that background information. You analyse results by plotting them in
various ways, carrying out statistical analysis and comparing them with your
expected views from the experiment."
"As an architect you interpret your model of parts of cities against a set of criteria
which you hope are generally agreed upon. There is a huge amount of literature on
what a sustainable city should look like, although there is also huge disagreement.
However, there are certain consistent demands upon a city, e.g. public transport, low
degrees of pollution and eliminating congestion. You can set these as the targets for
your models and test your models to see how they influence these criteria, and to
what degree. But as the models are not real, there is no actual physical proof. Thus
there are two sets of interpretations: what criteria should be used to judge the
model and, in the absence of physical proof, how accurately does the model reflect
what would happen in reality. The difficulty in our field is that everybody has his or
her own set of criteria to judge against, and so we never agree."
"I think if something meets with your understanding of the subject. In my own
research I have interpretative methods that would anticipate my critics. It can be
very objective in the sense that you can interpret according to the aim of what you
are trying to do. Thus in terms of interpretative methods researchers need to be
aware of how the results would be interpreted, by the media, peers in research and
the community at large. This is important when the research involves some
controversial subject. A further advantage of attempting to foresee how some
arguments will be interpreted is that a prior response can be prepared."
"Interpretation in English Literature often possesses the implicit danger of
interpreting things along the lines of your own preconceived notions. In my view,
that has happened too frequently, with theories transposed onto (and into) texts,
and the resulting criticism has been not so much a criticism of the text but an
expression of the critic’s own opinions. So interpretation becomes too greatly bound
up with opinion. I think covering yourself to anticipate your critics is necessary to a
certain extent, in the sense that your thesis must be as logical and consistent as
possible, but this should not be at the expense of your being totally inflexible, and
blinding yourself to any shortcomings in your thesis. By all means go into your
project armed with notions which will challenge the received wisdom in your chosen
area, but be aware of dealing with gaps in your own argument too!"
How do you recognise and avoid bias in your interpretation of
your results?
Introduction
One researcher remarked that he wanted a particular solution because he was sure
that it was the correct solution, when in fact it was the wrong solution. It was the
interpretation of a brief for a housing development scheme. He tried to test what
was actually meant by the relationship of the different functional elements within the
scheme. In this instance he thought he knew the answer because he had worked on
similar schemes previously. Thus, when told by a colleague that there was a mistake
in his interpretation he failed to check it. The housing scheme was developed to the
full, and then collapsed because of that mistake. It was, he remarked, a painful
exercise often remembered, never to be repeated!
Few of us can claim to be completely free of such bias. The following section attempts to identify some of the areas where bias commonly arises, and outlines some techniques for its recognition.
Points to Consider
Bias can arise in the construction of the experiment rather than in the
interpretation. It is important to ensure that the experimental design, or
the behaviour of the researcher, does not introduce bias long before the
interpretative stages are reached.
Example: A team of ethologists were attempting to breed a more intelligent strain
of rat, intelligence being measured by maze learning abilities. As the project
proceeded it appeared that a superior strain of intellect had been bred. At least that
was the conclusion until the techniques of the workers involved were more closely
examined. The researchers stroked the more ‘intelligent’ rats before introducing
them to the maze, but did not stroke the supposedly less intelligent rats. Improved
learning was not a function of superior breeding, but rather of more pleasant
handling conditions.
Discuss your results and interpretation of these results with colleagues. It
is especially helpful to seek out colleagues from different backgrounds and
experience. It is important to ensure that the review of your conclusions
will be genuinely critical, there is little value to be gained from seeking
excessively polite or friendly colleagues.
Example: One researcher remarked that a recently retired professor tended to think
in a different way to most of his colleagues. There were a few colleagues who, when
a document was put in front of them, would react in a predictable manner. But this
professor tended to throw up quite different points from the document. He had a
very different background from the rest of the group, as well as having a wider
range of experience. He had worked in industry for a number of years, and had had
a lot of experience in vaccine development and trials, and in marine biology: the areas in which the researcher was interested, but from quite a different perspective.
In interview-based research, the interviewer's perception and experience of life have considerable potential to colour radically his/her interpretation of the events around them.
Example: The researcher who had been conducting a research project studying prostitutes on the streets of Glasgow initially assumed that they would be afraid of the police, when in fact many did not care about the police at all. The researcher's interpretation of how they would react to being apprehended by the police was influenced by her own background. She stood to lose a lot by being prosecuted for anything; the prostitutes, on the other hand, felt that they had little to lose.
Reflexivity is at the core of interviewing. To some extent interviewing an
individual is like looking in a mirror. There is a strong tendency for the
interviewee’s response to be coloured by how they perceive the
interviewer. It is not only how you perceive the experimental subject, but
how they perceive you.
Example: When interviewing drug addicts about needle sharing, the addicts rightly
construe the interviewer as being someone who thinks that needle sharing is not a
great idea. As a result, they will tend to deny sharing needles because they do not
wish to give a poor impression to the researcher. Drug users require, in their search
for drugs, good manipulatory skills, and become very skilled social actors. They will
often only give an interviewer as much information as they estimate he/she already
possesses. As the researcher entered more deeply into the field she gained a
relatively deeper understanding of what was going on. In becoming more aware of
the tendency to present selective information, the researcher established ways of
getting beyond the surface presentation of the facts.
Imagine that you are presenting your conclusions to your worst enemy: where would they pick flaws, and how would you defend yourself against their arguments? Anticipate your critics.
Bear in mind that an absolutely objective truth may be unobtainable; perhaps only another kind of truth will be possible. The focus of your research is complex, the interpretation more so.
Bias can appear through the unconditional acceptance of previous work. It is circumstances such as those described in the example below which make networking and a broad awareness of the background vital. Researchers should never blindly accept the bias of the past; if you think something is wrong, perhaps it is.
Example: During a PhD viva the examiner asked about a certain result which the
research student had quoted from a classic mathematics textbook, remarking, ‘Of
course, you know the proof is wrong’. The researcher recalled having struggled with
this proof. He had never managed to follow the proof’s logic, but as it was cited in a
classic textbook, accepted the result as true, and quoted it. It was not a big issue at the time, but there is an element in mathematics (as in most subjects) whereby a theorem can come to be accepted as correct almost through folklore. Even if you go to the source of the proof you will find reference to some other source, or simply a statement of the result but no proof. More often than not these results are correct, but there have been occasions when incorrect results have made it into the folklore of mathematics. It raises the issue of how far back you should go before you accept that a result is true without having to work through the entire proof yourself.
Seek out the counter-arguments to your own interpretations, and consider whether or not you have given the alternatives a fair hearing.

• Take advice on your interpretation, especially from colleagues with a different view of things to yourself.
• Be aware that your perception of the results is coloured by your life experiences and expectations.
• Equally, you must be aware that just as your interpretation is biased by your experiences, so can the interviewee's responses be biased by their perception of the interviewer.
• Do not accept historical wisdoms blindly.
How do you evaluate your results in the light of the objectives
of your original proposal?
Introduction
The adoption of new methodological techniques was identified as a frequent source of problems. Either the methodology proves unable to deliver all that it promised, or it takes too long to train the staff or yourself in the use of the new techniques. The next most likely cause of failure was simply setting one's sights too high, a common temptation given the increasing competition for grant funding.
Points to Consider
The use of untested methodologies can result in a lower than expected level of
success.
Example: The researchers were examining the genetic basis of the production of
particular types of toxin by bacteria. This was of interest as these were toxins which
were generally considered to be produced by algae, but not by bacteria. The
methods that the group chose to use were not as well developed as they had
anticipated. They found that the sensitivity of the methods was not good enough to
allow the screening of bacterial toxin production, although the method worked well
for algae, which produce much more toxin. To effectively screen bacteria the project
required a screening system which could run 1,000 samples rather than the 50-60
needed for algae, a demand beyond the capabilities of the technique. In retrospect,
overconfidence in an untried methodology resulted in the project’s failure.
It can take longer than anticipated to train or re-train staff (and yourself), and to provide the necessary level of new experience required to change fields. Time must be allowed within your proposal for training; failure to do so can result in projects being less successful than anticipated.
Were the objectives unattainable? This can occur when relying on claims
made for techniques without any definite evidence that these claims are
accurate. This may occur for a range of reasons e.g., extrapolation beyond
the reasonable range of the original results, altered conditions, attempting
to push the system beyond its capacity.
Example: The research team had been working on the microbiology of turbot
larvae, which are susceptible to very large losses when they are at the first feeding
stage. A new project was proposed which was to be run in collaboration with a
European company. The funding was granted on the grounds that the commercial
company claimed to possess an improved technique for preparing water to allow the
rearing of larvae in defined conditions. However, major collaborative trials using the
methodology failed to reach the standards which the company had claimed were
possible in their publications. The reason for the failure was probably that the
research team had inadvertently exceeded the capacity of the system and
extrapolation of the technique to larger systems was impossible.

• How certain are you that the claims made by others are accurate?
• Ensure an adequate allowance of time for retraining and for developing untested techniques.
• Consider that you may not have failed; perhaps the results were not positive, but that in itself need not constitute failure.
• Unexpected results may result in the research progressing along a different route.
When do you think uncertainty may arise over results and their interpretation, and how do you ensure that your conclusions are fully justified by the results?
Introduction
There are a variety of reasons why results and their interpretation may become confused, or results over-interpreted. These range from decisions made regarding modification of the data, to allowing expectations of the results to blind the researcher to their actuality. This section offers various suggestions on how to avoid
such problems arising and, as with so many of the discussions in these booklets,
emphasises that consultation with colleagues is vital.
Points to Consider
Avoid overstating the result: inconclusive results should not result in conclusive
statements.
Present your results and conclusions to colleagues and ask them for comments.
Make certain they understand that you are looking for constructive criticism, and not
just a pat on the head.
A good rule of thumb may be to go for the simplest explanation.
One method of avoiding confusion between interpretation and results is to leave the
work alone for six months. After that period when you have forgotten some of the
background, and are free of any unhelpful habits of thinking you may have
inadvertently fallen into, then you should be better able to spot any confusion
between interpretation and results, or for that matter bias in interpretation.
When repeated trials provide conflicting results, any decision taken regarding the relative reliability or accuracy of the various results is an interpretation of the results, and should be noted as such.
Clearly differentiate between the results per se and your extrapolation/interpretation
of them.
A common error is to confuse a correlation between two variables with an actual cause-and-effect relationship. The magnitude, significance and direction of the correlation is the result: conclusions regarding cause and effect are interpretation.
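The distinction can be made concrete with a small, hypothetical sketch: the correlation coefficient and its significance are the result; any causal reading of them is interpretation. The data below are simulated purely for illustration.

```python
# Hypothetical sketch: the statistic is the result; causality is interpretation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 0.5 * x + rng.normal(size=200)    # associated by construction

r, p = stats.pearsonr(x, y)
print(f"r = {r:.2f}, p = {p:.3g}")    # result: magnitude, direction, significance
# Any statement about cause and effect is a separate, clearly labelled
# interpretative step; it is not delivered by the statistic itself.
```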
It is important not to let your expectations of results predetermine your
view of them. Firmly drawn conclusions should be sustainable by the data
alone, and not reliant on the theories of previous work.
Example: A researcher collated a large set of data on school children’s use of drugs
and alcohol. This was initially analysed using logistic regression. The results
appeared to fit with ideas which she had previously expressed in a literature review.
Both results and discussion supported the researcher’s interpretation of the
literature. She later re-analysed the data using a different series of tests. The
explanation of the data she provided on the basis of these tests, although informed
by the literature review, was not dictated by it. The re-interpretation was much
closer to the data. For example, peer formation is significant in any behaviour but
particularly in relation to drug use. Previously the researcher had been enthusiastic
about theories that suggested that core family background would allow predictions of
negative peer engagement. The literature had provided fairly clear evidence that this
was the case, and although she had not claimed that the data demonstrated this,
she had used the theory as a possible explanation of the data. However, her re-analysis suggested that the original conclusions had been an over-interpretation of
the data.
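For readers unfamiliar with the tool mentioned above, here is a generic, hypothetical sketch of a logistic-regression fit to binary survey outcomes; it is not the researcher's analysis, the variable names are invented, and the data are simulated.

```python
# Hypothetical sketch: logistic regression of a binary outcome on two
# invented predictors, using simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 500
peer_use = rng.binomial(1, 0.4, n)        # peer group reports drug use
family_score = rng.normal(0.0, 1.0, n)    # composite family-background score
log_odds = -1.0 + 1.5 * peer_use + 0.2 * family_score
uses_drugs = rng.binomial(1, 1.0 / (1.0 + np.exp(-log_odds)))

X = sm.add_constant(np.column_stack([peer_use, family_score]))
model = sm.Logit(uses_drugs, X).fit(disp=0)
print(model.summary())    # the coefficients describe associations, not causes
```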
Perhaps the most common reason for conclusions and discussion not being
justified by results occurs when the discussion is extrapolated well beyond
the limitation of the results.
Example: A paper described research involving sampling maturing fish over the
course of a year or more and looked at changes in the amount of fat stored. The
paper recorded differences between maturing and immature fish, especially in the
patterns of depletion and re-building of mesenteric fat stores. The discussion section of the paper considered this in the light of the ecological implications of fasting on maturation rates. However, more than half of the lengthy discussion proposed a model for the hormonal control of maturation. Hormonal control had not been mentioned prior to the discussion, and the research was only tangentially related to it, leaving the reader with the impression that the authors had failed to complete a sufficiently comprehensive review article and had simply added an incomplete review at the end of the paper.

• Avoid any confusion in data recording.
• Correlation is not proof of causality.
• Decisions made regarding the reliability of the results should be labelled as interpretations, not results.
• Do not allow your expectations to predetermine your conclusions.
How do you identify potential areas of further research from
the results?
Introduction
Opportunism is one watchword in the identification of potential areas of new research. Opportunism takes many forms, e.g. capitalising on current trends and fashions, and spotting weaknesses in present methodologies. The second key is to keep an eye on the long-term objective. The ability to avoid becoming lost in the
woods can be improved by a regular interchange of ideas with colleagues.
Points to Consider
Going away to a major conference can be a good way of focusing on the results of the past year. It both allows you to escape the distractions of your usual routine and, in conversation with others, to identify the direction in which your field is heading.
Example: "At the conference last week, one area of interest was in a particular fish
disease and I had gained quite a lot of information on that. When you tie it in with
existing information you can spot the areas which are obvious for development. For
example, this particular organism is a rickettsial infection. Rickettsiae are bacteria
which must replicate within a eukaryotic cell so they are rather unusual. This is a fish
disease which is particularly important in Chile but has been reported in Northern
Europe as well and what we are interested in is identifying particular components of
the outer membrane of the bacteria which stimulate an immune response, so from
the meeting I got a clear idea of which antigens to concentrate on."
There are short term objectives and long term objectives. It is the examination of
the latter which is most likely to indicate the potential for further research.
Be prepared to shift the focus of your research as the political, social and
scientific priorities of the wider community move. It is not unheard of for
projects which have been rejected for funding to become, at a later date,
greatly sought after by the funding bodies.
Example: A researcher’s first project included an examination of the behaviour of
drug injectors. At the time this was just a small part of the project, but with
increasing HIV awareness injecting behaviour gained a much higher profile politically.
It became obvious that this was an area in which research funding would become
available.
The identification of weaknesses in present techniques, e.g. the high cost of production, or the potential to develop a more efficient system, will often provide new avenues of research.
Example: The majority of currently available vaccines are based on either
inactivated virus or bacteria, or comprise a synthetic or "recombinant" protein which
has been produced in bacteria. The production costs for these types of vaccine are
high, and some improperly inactivated vaccines have been responsible for outbreaks
of disease. These problems have encouraged researchers to evaluate the potential of
nucleic acid or DNA vaccination as an alternative. Using this technique the DNA is injected directly into the animal or person; there is no risk of infection, since the whole virus or bacterium is not used, and there is no costly production and purification of recombinant protein. To optimise the immune responses produced following
vaccination a chemical called an "adjuvant" is often included. Recently, the team has
pioneered the use of genetic adjuvants in veterinary medicine. The results have now
opened up a whole new area of research, not only in the application of this
technology to other infectious agents of man and animals, but also in improving our
knowledge of the way the genetic adjuvant is exerting its effect.

• Keep an eye focused on the longer-term objectives of your project.
• Monitor shifts in public and political priorities; the timing of a proposal can be vital.
• Look for weaknesses in the existing and preferred techniques.
• Keep discussing your work, its progress and its potential with your colleagues.
Documenting Research Results and Findings
What techniques do you use to present your findings, and possible areas of future research, to other interested bodies?
Introduction
Other interested bodies are a varied group, both in their understanding of your subject and in their specific interest in it. In order to achieve maximum impact it is important to vary your approach according to their interest and understanding. The following section considers when it is appropriate to take different approaches to presenting research findings, and offers suggestions as to what these alternatives may involve.
Points to Consider
Industrial workshops can be a useful way of putting your message across to potential funders, as workshops often allow a much freer exchange of information than conferences. This occurs, at least in part, because the presentation is to potential funding bodies, whereas at a conference the presentation is to potential competitors, and less recent results are often presented. At a workshop you will have more opportunity to describe your capabilities and past achievements. However, when visiting industrial workshops it may be wise to take advice on intellectual property rights before making your presentation. Consult your Research Support staff, who can advise on the institutional policies in this area.
When seeking research funding it is important to make your objectives absolutely
clear. A good technique is to provide a very succinct list of aims and objectives.
When presenting to potential industrial collaborators, funders or users of your
research, a one page A4 summary (in bullet point form) of your objectives, and the
commitments that you require from the industrialist, can be very helpful.
If you are talking to a group of people who know little of your subject then it becomes especially important to avoid jargon. Use clear, plain English. Get a non-expert to review your presentation or paper.
How you dress may be important; the more casual dress code common in academia will certainly be less acceptable to potential funders from industry.
Try to target your audience's interests and tailor your presentation accordingly. Talking to members of the audience will give some idea of the sort of language they use, what they are likely to be interested in, and what they will understand.
Example: When a researcher presented data on her project to her Co-operative Award in Science and Engineering (CASE) funding partner, a fish farming company, she altered the emphasis of the presentations. The presentations were more or less the same as those she gave at scientific conferences, but with one significant difference. Scientists think in terms of the length of fish, whereas fish farmers think in terms of the weight. Thus for presentations to fish farmers she re-analysed her data to take account of the difference in approach.
For a larger audience one researcher remarked that he would use PowerPoint and a slide projector, but for a smaller, more informal audience, a board and a pen or overheads. If the lights are on you can better gauge whether your audience is interested in and enjoying your presentation. Standing and writing also has the further advantage (or disadvantage) of adding to the informality of the proceedings.
As all who have written a thesis or a major report will know, most people will never read them in their totality. One solution may be to present an executive summary of the research. This increases the likelihood of its being read by focusing all of the ideas into a short and concise section, but of course it leaves out all the proof, evidence, arguments and counter-arguments. Multi-media productions offer considerable potential in this area. Although more complex and expensive to produce, they allow readers to look through your research and pick out what interests them by jumping from one point to another.
When seeking funding consider emphasising the 'benefits' rather than the 'features'. Thus instead of describing a fully integrated software package which is easy to use, highlight the benefits, e.g. minimal training required and financial savings.

• Know your audience.
• Target the interests of your audience and be prepared to vary your approach according to those interests.
• Attend informal workshops set up by relevant industries.
• Always produce executive summaries of large reports.
• When presenting to industrial and other end users, keep it simple and straight to the point.
• Avoid jargon.
How do you record your research and findings? Are there
methods of recording that you would avoid?
Introduction
The emphasis of this section is upon the physical aspects of record keeping.
The second half of the discussion considers the importance of duplicates,
accessibility and longevity of the records, and when records can be discarded.
Points to Consider
Records must possess longevity. Use good quality paper, which should last at least 30 years. Do not use pencils or strangely coloured inks; the ink must not be water soluble or solvent reactive, should not smear, and should be light stable (BTG plc).
Research records should be kept in a form which ensures that their authenticity can
be appropriately defended. Claims of originality and scientific priority are best
supported by records whose provenance and date are beyond reasonable doubt.
This is especially important for the protection of Intellectual Property Rights (IPR)
when negotiating contracts for the exploitation of research results, and seeking to
establish ownership of background IPR. To fulfil such obligations to maintain
accountable and dependable records, best practice suggests that all experimental
data should be meticulously and permanently recorded, in a bound notebook with
numbered pages, with all entries dated, signed and witnessed. Computer printouts
and instrumental data printouts should be incorporated permanently into the
notebook. Where these are pasted in, the witness should sign and date across the
join.
Arrangements should be made to keep duplicates of all irreplaceable data records. Important material stored on computer should be systematically backed up; ideally there should always be at least three copies, one of which is off site. Loss of experimental records, data, grant applications, or drafts of publications in fires, floods, or other disasters can vary in effect from extremely frustrating to catastrophic.
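As a small, hedged illustration (one possible approach, not institutional policy, with placeholder paths), the sketch below makes a dated on-site copy of a data directory and records checksums so that the off-site copy can later be verified against it.

```python
# Hypothetical sketch: dated duplicate of a data directory plus checksums.
import hashlib
import shutil
from datetime import date
from pathlib import Path

source = Path("project_data")                      # working copy (placeholder)
backup = Path(f"backup_{date.today():%Y%m%d}")     # second, on-site copy

shutil.copytree(source, backup)

# Record a checksum for every file so that this copy, and the off-site copy,
# can be verified later.
with open(backup / "checksums.txt", "w") as out:
    for f in sorted(source.rglob("*")):
        if f.is_file():
            digest = hashlib.sha256(f.read_bytes()).hexdigest()
            out.write(f"{digest}  {f.relative_to(source)}\n")
```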
Loose-leaf laboratory records can be very useful if the data contains lots of ancillary documents (e.g. photographic plates, spectrophotometer printouts or sample interview sheets). Such records are always difficult to file, and a loose-leaf folder can serve as a good supplement to, or replacement for, a laboratory notebook, though a loose-leaf folder is less convincing evidence in any IPR disputes. The indexing of ancillary data, such as that described above, is critical. At the very least, each item should be annotated with the date and the location in the notebook of the corresponding experiment. Many items of computer-controlled equipment provide printouts of instrument settings, as well as date and time. Ensure that the clock is correctly set; it may be important in future IPR debates. Make certain that you, or the computer, compensate for leap years and seasonal time changes (Beynon, 1993).
Keep duplicate records; if you are using electronic records, make sure that you have off-site as well as on-site copies. Ideally all computer records should be in triplicate: the hard disc, one floppy disc on-site, and one floppy disc off-site. Remember, though, that floppy discs left unused for long periods can cease to function properly, and data may be lost as a result. It should also be borne in mind that, because dates can easily be altered on electronic records, they are poor evidence in the event of IPR disputes.
"It is ironic that many laboratories seem to give more consideration to the storage of
reprints, which are copies of existing literature, than to notebooks, which are
irreplaceable originals" (Beynon, 1993).
Make sure your records are well labelled. It may seem obvious now, but a year or so
down the line the chances of remembering what the data columns represent are
slim.
Consider building your research records like a tree, allowing connected ideas to follow through a particular branch. Over the years, branches that are no longer being pursued can be discarded. On the other hand, archiving old material, even if you do not believe you will return to it, may allow you to refer back to a solution to a problem which you have had to deal with previously.
Make sure that your record system is accessible: it is of little use if you have to walk through half the building to access it, or if you have one type of computer system at home and another in the office. Similarly, try not to put it on some obscure computing system that is likely to vanish within the next few years.
Take care over where and how you record your list of ‘things to do’. Consider mentioning your objectives to others at coffee time - in six months' time they may remind you.
Example: One researcher admitted that on moving office recently she found a list of
‘things to do’ dated four years earlier - none of which she had done.

• Results and methods should be recorded in a manner which can leave no doubt as to their authenticity.
• All records should be signed, dated, and witnessed.
• Keep duplicates.
• Record labelling must withstand the test of time.
• Records must possess physical longevity.
• That list of ‘things to do’ must be high profile and visible.
What details do you put in your research records? What
details should never be missed out of records, and why?
Introduction
Consistency is an important aspect of record keeping. Records should be kept in a consistent manner regardless of the experiment; failure to do so is likely to result in records becoming incomprehensible a few years down the line. Records, if they are to be of real value, must be kept over a period of years, and a consistent style of record keeping will reduce the risk of the records appearing incomprehensible if there is a need to examine them after a prolonged period of neglect. Exactly what should be incorporated within the records depends primarily upon their end purpose. Ideally, your main record collection should allow you to repeat any course of work, secure your IPR, and help defend you against unjust accusations. For further information on patent law, etc., please refer to THEROS: Technology Ventures - Intellectual Property Guidelines (1998). On the whole it is better to err on the side of caution when deciding what to leave in and what to leave out.
Points to Consider
US and UK patent laws are not identical. Thus in the US, evidence of the date of conception of an invention, and proof of diligence in its reduction to practice, are required for patenting.
"Errors and mistakes should not be erased or obliterated beyond recognition. Neither
should liquid paper be used. Simply crossing out an error so that it is apparent what
the error was should be adequate. Explain all errors and mistakes as they occur and
initial them. Never remove pages from the notebook." (BTG plc)
Record novel concepts and ideas relating to the work, though avoid the expression of opinions (BTG plc).
Many of the researchers in our discussion group kept a day book in which they
recorded everything they had done that day. The book might include the chemicals
used and in what quantity, and anything that had gone wrong. Any results which
come from a printer should be put into that day book (if these are perishable
printouts, copies must be made). Tables of results would go into a separate folder.
When recording data it may seem self-evident which data set is which. Five years down the line, however, it is highly likely that you will have forgotten. It is vital to sort and clearly label computer-held data from day one, especially if the computer records will include earlier and later versions of the same data set. Records should be in a form which can readily be understood by everybody. This is necessary partly because in a debate over patent rights it is vital that the records should be easily understood, and partly for the reasons above - you will feel silly if at some time in the future you have to admit that you cannot understand your own records.
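One hedged illustration of such labelling (the file names and fields below are invented) is to give each data file a dated, versioned name and to keep a small companion file describing what the columns mean and which earlier version it supersedes.

```python
# Hypothetical sketch: a dated, versioned file name plus a small companion
# description so the data set is still intelligible years later.
import json
from datetime import date

data_file = f"growth_trial_{date.today():%Y%m%d}_v02.csv"   # invented name

description = {
    "data_file": data_file,
    "recorded_by": "researcher's initials",
    "supersedes": "growth_trial_v01.csv",          # earlier version, if any
    "columns": {
        "tank_id": "tank identifier (1-12)",
        "length_mm": "fork length in millimetres",
        "weight_g": "wet weight in grams",
    },
}

with open(data_file.replace(".csv", ".json"), "w") as out:
    json.dump(description, out, indent=2)
```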
Log the incidental. There may be contextual events or activities which affect the data: climatic conditions, an on-going political or newspaper campaign, a delay in sampling (Brown et al., 1995).
Although the comments made in the example below were valid, the potential for records to serve more than one purpose should be borne in mind. The commentator's main aims were to ensure that the work was repeatable, and that the details were adequate for protecting intellectual property rights (however, see "Diligence" below); in that light the criticism of research assistants who included irrelevant details was valid. Yet such records could also allow a researcher to point out that, while they accepted that a particular objective was not met, it was nonetheless not their responsibility.
Example: Include anything which is remotely likely to be required or useful. You
come to recognise through experience (which does not take long to acquire) when
you have failed to record points that are going to be needed in as much detail as
possible. However, it is interesting to see the irrelevant details that some people
have recorded. Huge amounts of irrelevant information, for example, I was away on
holiday, or something has not yet arrived. The critical issues are the date, and a
couple of lines on the objective and methods used. In terms of the details, the
experimental method (especially if it deviated in any way from the standard
protocol), and the results, should go straight into the book which is the day-to-day
record - in the book not just on any piece of paper.
However, BTG guidance would tend to support recording even these apparently irrelevant details: "Diligence in the reduction to practice of an invention means that, as far as possible, generally steady, uninterrupted and constant work occurred following the conception of an invention. In an interference action (where IPR is challenged) periods of inactivity could lose the case, especially in a situation where each day is critical. All activities must be logged, even if it is only to note that you were waiting for, say, sample analysis that resulted in delay in the proceedings" (BTG plc).
When recording the results of pilot studies be careful not to be more lax than is
normal in terms of the quality of the information recorded.
Data must be recorded carefully. Resist the temptation to record the data
in rough form and transpose it to your notebook at a later date - this
provides an extra opportunity for the introduction of errors (assuming you
get round to it in the first place). When recording data from instruments,
note the settings on the instrument panel.
Three examples:
(i) A fluorimeter value of ‘10.4 units’ is meaningless and cannot be rechecked without
notes on scale widths, scale expansion factors, wavelengths and all other machine
settings.
(ii) During electrophoresis note the current and voltage; this will allow you to
calculate the resistance of the gel, and spot a buffer of incorrect conductivity.
(iii) In a chromatography run, note flow rate, column back pressure, detector
settings, column type and, if there is more than one column of that type in the
laboratory, the serial number of the column (Beynon, 1993).
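In the same spirit, the following Python sketch (an illustration added here, not taken from Beynon, 1993) shows how instrument settings might be logged alongside each reading, and how the voltage and current noted in example (ii) give the resistance of the gel through Ohm's law, R = V/I. The log file name and field names are invented for the purposes of the example.

# A minimal sketch, assuming a simple day book kept as a text log, of recording
# instrument settings alongside every reading so that a value such as
# "10.4 units" can be rechecked later. Instrument names and fields are illustrative.
def log_reading(logbook, instrument, value, **settings):
    """Append one reading, with all panel settings, to an open log file."""
    settings_text = ", ".join(f"{name}={val}" for name, val in settings.items())
    logbook.write(f"{instrument}: {value} ({settings_text})\n")

def gel_resistance(voltage_v, current_a):
    """Ohm's law, R = V / I, as used in example (ii) to spot a buffer of
    incorrect conductivity from the recorded voltage and current."""
    return voltage_v / current_a

with open("daybook.log", "a") as logbook:
    log_reading(logbook, "fluorimeter", "10.4 units",
                scale_width=10, expansion_factor=2, wavelength_nm=450)
    log_reading(logbook, "electrophoresis", f"{gel_resistance(120, 0.03):.0f} ohm",
                voltage_v=120, current_a=0.03)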
• Do not exclude data which you only think may be significant in the future; err
  on the side of caution.
• Keep record keeping consistent.
• Remember that you must be able to recognise data files not just next year
  but in three or four years' time.
• You can use your records not only to record experimental details, but also to
  cover yourself against future unfair accusations.
How do you confirm that your records meet all relevant legal
and ethical requirements?
Introduction
The ethical problems of working with human subjects are considered not only from
the perspective of the subjects' rights, but also in terms of whether or not they
understand those rights. The final section of the discussion considers where sources
of ethical advice may be located.
Points to Consider
In ethical terms, make certain that your volunteers understand the ethical promises
you have made to them. Although your explanation may seem clear to you, ask the
subjects some questions about what you have promised them; they may not have
understood after all.
Example: One researcher working on prostitution and HIV explained to all her
subjects that a ‘double blind’ system was being used. This meant that the results of
the HIV test could not be identified with the person from whom the samples had
been taken. Despite this, the researcher regularly received requests from her
subjects as to their HIV status, thus it became obvious that many of the volunteers
had not understood the ethical commitments that the research team had made.
Research upon human beings can carry the added complication of political
overtones. Data, results and conclusions should not be modified for
political purposes, but neither can researchers deny that their conclusions
are liable to be used for such purposes. Especially where vulnerable groups
are involved, consideration should be given as to how the project will be
presented.
Example: Prostitutes are a stigmatised and vulnerable group. The group the
research team tested showed a relatively low HIV prevalence, about 3%. However,
had that percentage been 50% then that would have raised a completely different
set of ethical issues.
Animal experimentation is a continuous process: once begun, the animals (even
during non-experimental periods) require constant supervision. This must be taken
into account when planning the project. Appropriate experimental records to meet
the requirements for the annual Home Office returns must be kept. Organisations
such as the Ministry of Agriculture, Fisheries and Food (MAFF) will want to know whether
you have conformed to standards such as ISO 2000 or Good Laboratory Practice
(GLP), ISO being a standard for experimental procedure and the recording of data.
GLP again has particular requirements, which often take the form of standard
checks. Thus results may have to be confirmed by a superior, who may have to
initial a page in a notebook to confirm that they have read and checked the
records. If there are Home Office requirements then there is a very clear line of
responsibility, and there will generally be someone in your Department responsible
for Home Office requirements. Your institution will have staff who are responsible for
such issues. Be sure to seek out their advice at an early stage of project planning.
It is not unheard of for data to be destroyed when they have not provided the
expected results, or in order to avoid a closer scrutiny of conclusions. This is
unethical. Your institution will have a policy on scientific conduct. Make yourself
familiar with this document. Your Research Support staff will be able to advise you
on such issues.
Plagiarising the work of others is unethical.
Example: One researcher recently published her dissertation - a German
dissertation - but when she went to the examination in Germany she discovered that
her work had already been published by a member of staff.
You must obtain written permission from respondents to cite extracts from interviews
in publications (even if they have been anonymised).
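Purely as an illustration of good housekeeping, the Python sketch below shows one way of assigning neutral codes to respondents and keeping a separate key which also records whether written permission to quote has been obtained. The respondent names, codes and file path are hypothetical and are not drawn from the booklet's sources.

# A minimal sketch, illustrative only, of anonymising respondents while keeping
# a separate key so written permission can be traced to the right person.
import csv

def assign_codes(respondents):
    """Give each respondent a neutral code such as R01, R02, ..."""
    return {name: f"R{i:02d}" for i, name in enumerate(respondents, start=1)}

def write_key(codes, path="respondent_key.csv"):
    """Store the name-to-code key separately from the anonymised transcripts."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["respondent", "code", "written_permission_obtained"])
        for name, code in codes.items():
            writer.writerow([name, code, "no"])   # update once consent is received

codes = assign_codes(["Respondent A", "Respondent B"])
write_key(codes)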
If you are concerned with legal aspects of animal research your institution
will have an office designated to answer such enquiries. If they cannot
help, you must contact the Home Office. They are very helpful and will
usually give you an answer immediately.
Example: When one researcher was working on a fish pathogen (rickettsiae) he
telephoned the Scottish Office to clarify the situation regarding the import and export
of these bacteria, and whether or not a licence was required, because the Scottish
Office is also the provider of such licences.
For information on safety requirements there will always be someone
within your Department who will be able to provide you with the
necessary information. Your institution may also have a central Health and
Safety Office.
Example: In microbiology the pathogenic category of the material you are using
should be identified before the project commences. The pathogenicity will have been
categorised as one, two, three and so on. Most institutions will have little difficulty
with categories one and two, but categories three and upwards require special
facilities. These categories tend to be reasonably virulent organisms which require
special conditions for safely recording growth. It is important that you clarify the
position before you begin. This applies to most high-risk material: for ionising
radiation, for example, someone within your Department will be delegated to look after
radiation matters, but your institution will probably also have a Radiation Protection Officer.
If your subjects are to be allowed to make an informed decision on whether they wish
to participate in the experiment then you should respect participant/research subject
autonomy, i.e. do what you said you would do, and nothing more or less.
The following example from the experiences of Stephen Waters (Bell,
1993) provides interesting insights into the difficulties of carrying out
research on one's own colleagues. Although the example is predominantly
one of gathering the basic data, there are nonetheless points to be learnt
regarding what happens to those data after they become secondary,
published data.
Example: Stephen Waters was a teacher who decided, as part of an Open
University course, to investigate the role of his own Head of Department. He went
through a fairly prolonged negotiation period to reassure colleagues as to his
trustworthiness before embarking on his programme of research. Of interest here
are some of the comments he later made regarding the guarantees he had given
prior to undertaking the research. He had promised all participants an opportunity to
verify statements prior to production of the final report. This proved almost
impossible: most participants did not have time to read the entire manuscript, and
therefore lacked the time to identify all of their comments within it. He had
further promised all participants a copy of the final report, which ultimately cost
rather more than anticipated. The ethical agreement he reached with contributors
was made only verbally. This created problems at a later stage, when it transpired
that none of the contributors could precisely recall the conditions agreed upon. In
retrospect he regretted not providing them with a written copy of the agreement.
However, it was in seeking to publish the data that the greatest problems arose. All
contributors had been promised anonymity, a promise which could be met
externally: nobody outwith the school could identify the contributors. Yet, as all
the contributors came from the same school, it proved impossible to provide
anonymity internally.
Careful consideration should be given to any possible conflict of interest, or the
appearance of such. If this problem arises during the course of your research,
experienced advice should be sought.
Independent work can bring researchers into conflict with their institution. This may
occur if the independent work utilises the results of research which the institution
may regard as being part of its intellectual property. Conflicts of interest may arise
when a person involved in a research project has the opportunity to influence
institutional funding decisions impinging upon that project. In any of these types of
situation it is essential to get advice from your own Research Support staff.
• Make certain that human subjects understand the ethical conditions under
  which you are operating.
• You have a duty to the people you are working with; consider how your
  results will affect their lives.
• Keep a record of any agreements made.
• Seek out the person within your department who is responsible for safety
  issues.
• Do not plagiarise.
• Remember that legal and ethical are not one and the same, and that the
  absence of a written code does not excuse an absence of ethical behaviour.
• If there are doubts about the legal aspects of animal experimentation then
  the Home Office is the place to go.
• All researchers will have a range of sources of advice available to them, e.g.
  funding bodies and hospital, institutional or professional body ethics committees.
• Finally, ask yourself whether the standards you practise are those by which
  you would like to be treated.
References
Bell, J. (1993) Doing Your Research Project: A Guide for First-Time Researchers in
Education and Social Science. Open University Press, Buckingham. 176 pp.
Beynon, R.J. (1993) Postgraduate Study in the Biological Sciences: A Researcher's
Companion. Portland Press, London. 151 pp.
Brown, S., McDowell, E. and Race, P. (1995) 500 Tips for Research Students.
Kogan Page Ltd, London. 127 pp.
BTG plc, Keeping a Laboratory Notebook. Gulph Mills, USA. 12 pp.
Gealy, N. and Clarke, D. (1998) Development of an Interim Workplan for the
Researcher's Lead Body. Maloney and Gealy, 24-26 Mossbury Rd, London. 30 pp.
Gealy, N., Westlake, D. and Clarke, D. (1997) Draft Occupational Standards in
Research. Maloney and Gealy, 24-26 Mossbury Rd, London. 59 pp.
Skelton, F. and Walker, L. (1995) Pilot Study to Assess the Benefits of Gathering
Evidence of Research Competencies for PhD Students to Improve Their Subsequent
Employability. Glasgow University. 21 pp.
THEROS: Technology Ventures - Intellectual Property Guidelines (1998).