SCHOOL OF PHYSICS AND
ASTRONOMY
FIRST YEAR LABORATORY
PX 1123
Introductory Practical Physics I
Academic Year 2012 - 2013
NAME:
Lab group:
Welcome to the first (Autumn) semester of the 1st year laboratory, module
PX1123 Introductory Practical Physics I. This module will be followed in
the Spring semester by PX1223 Introductory Practical Physics II. This is the
manual for PX1123 only. You will need to bring this with you to every
laboratory session, as in it you will find all the relevant information you need for
the laboratory classes. It is essential that you read carefully through the
manual as it contains: the instructions that you will need to follow in order to
undertake the individual experiments; logistical information; tips on how to
keep your laboratory diary and how to write up your end-of-term reports;
background notes on fundamental topics with which you need to be familiar;
and health & safety issues that relate to the experiments themselves. You are
expected to have pre-read each relevant section prior to coming to your
weekly laboratory session.
This manual is divided into 3 sections, described in more detail overleaf, and
should be your first port of call for any information about the laboratory
work.
If you cannot find the information that you are looking for, please ask any
member of the teaching team - your Lab Supervisor, the demonstrators or
the module organizer (Dr. C.Tucker, room N1.15).
Lab Supervisor:
Contact email:
Demonstrators:
CONTENTS:

I:    Introduction and logistics of the 1st Year laboratory                     3
          Organisation and administration of the laboratory                     3
          Recording experimental results in your lab notebook                   6
          Writing-up full reports of experiments                                8
          Safety in the Laboratory: Risk Assessment and                        11
          Code of Practice                                                     12

II:   Experiments                                                              13
          Timetable and list of experiments                                    13
          Check list for experiments                                           14
          Laboratory notes for experiments                                 15 - 61

III:  Background notes                                                         62
      III.1  Background notes to experiments                                   62
                 Introduction to electronics experiments
                 How to use a Vernier scale
                 The oscilloscope
                 The multimeter
      III.2  Analysis of experimental data: Errors in Measurement              71
      III.3  Use of Microsoft Excel 2007                                      102
      III.4  Reporting on experimental work                                   104
                 An example of how to write a long report                     109
                 Checklists                                                   117
I: INTRODUCTION AND LOGISTICS OF THE 1ST YEAR LABORATORY
ORGANISATION AND ADMINISTRATION OF THE LABORATORY
INTRODUCTION
There are 11 laboratory sessions in the Autumn Semester and 11 in the Spring Semester.
They are designed with several objectives.
1. To provide familiarity and build confidence with a range of apparatus.
2. To provide training in how to perform experiments and teach you the techniques
of scientific measurement.
3. To give you practice in recording your observations and communicating your
findings to others.
4. To demonstrate theoretical ideas in physics, which you will encounter in your
lecture courses.
The majority of the work you will do in the laboratory will be experimental, and will be
performed individually. However, there will be 1 or 2 sessions designed to give you
practice in experimental technique and the handling of errors, and a small number of group
experiments.
ATTENDANCE
Class Times. Labs run from 13:30 to 17:30 on Monday, Tuesday and Thursday
afternoons. Students will be assigned one laboratory afternoon.
Attendance is compulsory; absence requires a self-certificate or medical certificate.
Registration. Attendance will be recorded. Students are expected to sign out of the
laboratory if leaving before the end of the session.
GEOGRAPHY AND MANNING OF THE LABORATORY
The main laboratory suite consists of room N1.34. In addition, there are two dark rooms
which are used for optics experiments and for experiments using gases or radioactive
material. The far end of the laboratory is set aside for tea-time refreshments.
The laboratory is maintained by a technician Mr. Nic Tripp, from whom you can buy your
laboratory diary.
ORGANISATION AND SUPERVISION OF PRACTICAL WORK
The lecturer in charge of the teaching of your laboratory is the Lab Supervisor. In
addition there will be 3 demonstrators who, between them, are familiar with all of the
experiments you undertake. These people are there to help you, and answer any
questions associated with your experiment. In addition they will assess, mark and provide
feedback on your work. Use them!
All observations made during an experiment should be entered in your laboratory diary
(available from Mr. Nic Tripp at the price of £2.00). Each week you will be allocated an
experiment and you will normally be expected to complete this, performing appropriate
calculations, drawing graphs etc., by 17:30 of that day. You will then be given until
16:00 the following day to complete any analysis and draw conclusions on your work,
ready for handing in. The hand-in deadline of 16:00 on the day following your
laboratory session is hard and fast! If you have extenuating circumstances as to why
you cannot attend a laboratory session or cannot make the hand-in deadline, you are to
inform the Lab Supervisor prior to this and make alternative arrangements. Further
details on the handing in of laboratory diaries will be given at the beginning of the session
and are laid out below.
At the end of a lab session you are to have your lab diary signed out by a demonstrator.
This allows us to assess how much work you have achieved during the lab session, how
much finishing-off work is still required, and whether you are making proper use of your
lab diary.
It is essential that you put aside about ½ hour before you come to the practical
class in order to read through some of the experimental notes associated with the practical
that you will be undertaking. It is anticipated that you should read any introductory
section up to the experimental part itself. This will enable you to gain familiarity with the
physics behind the experiment – you should not worry so much about any new lectured
material, but refresh your understanding from A-level and school studies. Make sure you are
clear about what is expected of you so that you can plan your experiment, which will save you
time on the day. Also you must think about the safety considerations that are required for
your experimental work and write a risk assessment, which will be signed off prior to
commencing any practical work.
ASSESSMENT OF PRACTICAL WORK
The responsibility for handing your work in at the correct time is yours, and failure to do
so will usually mean that a mark of zero will be recorded. However any completed work
will be marked for your benefit and to provide you with feedback. Exceptions to this rule
will normally be made only for illness for which you have notified the School. If you do
think you have another valid reason for missing the hand-in time, or for not attending the
laboratory class in the first place, you should discuss this with the Lab Supervisor running
the laboratory prior to your absence or as soon as possible thereafter.
In addition to your weekly lab-diary assessment, in each of the two semesters you will be
required to write up one experiment in the form of a formal report. This will be allocated
by your Lab Supervisor towards the end of each semester. Formal reports should NOT be
written in your lab diary but word-processed on sheets of paper that are either bound or
stapled. Marked reports will be returned to you, with feedback, and you should keep these as
they should provide a basis for the reports you will have to write in subsequent years.
Each experiment and each report will be marked out of 20 in accordance with the scheme:
16+ = very good; 12+ = good performance which could be improved; 10+ = competent
performance but with some key omissions; 8+ = bare pass; 7- = fail. Your final module
mark (see Undergraduate Handbook) will be made up as follows:
Formal report                33.3%
Experimental lab diaries     66.7%
While the experimental notes of all experiments and reports will be assessed and
individual marks logged, your total marks will normally be obtained by expressing the
total marks you obtain during the session as a percentage of the total which you could
have obtained during the session. Exceptions for missed work will normally be made in
the cases of absence due to illness for which a medical certificate has been supplied;
absence for an unavoidable reason of which you notified a member of staff; difficulty
with an experiment for reasons which were not your responsibility and which you
discussed with the demonstrator.
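To make this weighting concrete, here is a purely illustrative calculation with invented marks (the actual numbers of experiments and of marks available will differ): if a student scored a total of 118 out of a possible 160 across the marked lab diaries and 14 out of 20 for the formal report, the module mark would be approximately

\[
0.667 \times \frac{118}{160} \times 100\% \;+\; 0.333 \times \frac{14}{20} \times 100\% \;\approx\; 49.2\% + 23.3\% \;\approx\; 72.5\% .
\]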
REFRESHMENT ARRANGEMENTS
Tea, coffee, squash and chocolate will be available in the laboratory about halfway
through the afternoon and provide a mid-point break.
Tea and coffee: Payment for these must be made at the beginning of the semester and will
cover the whole semester. Prices will be announced at the first laboratory class.
Snacks/chocolate: Payment is made individually at the time of purchase, but prices are cheap.
RECORDING EXPERIMENTS IN YOUR LAB. BOOK / DIARY
AIM: to RECORD the results of your work
The aim of keeping a good laboratory diary is to record your work in a manner clear
enough that you or a colleague could understand and attempt to repeat the experiment. It
is a record of your observations, measurements and understanding of the experiment. It is
not a neat essay containing the background theory or paragraphs copied from other
sources, but a real-time account of your experimental method and findings.
When assessing your laboratory write-up, the demonstrator is interested in your
measurements, observations, results and conclusions. You should aim to present to
him/her a set of measurements and results taken and recorded in such a way that they can
understand easily what each number means, what results you have derived, and what
conclusions you have drawn. You should also make notes of any difficulties experienced
and sources of uncertainty or error. Ideally the record should be such that you could
yourself reconstruct the course of the experiment later - perhaps 20 years later - without
difficulty. The measurements presented to the demonstrator should be those taken during
the performance of the experiment; they should not be rewritten before presentation.
A full written report of the background physics, purpose and extent of the experiment is
not required with the experimental results; that task is performed once a semester when
you are asked to produce a full report for a single experiment only.
A successful and quality record of experimental work is within the reach of all students,
providing:
1) all the measurements needed, or which you think might be needed, are
made at the time the experiment is performed;
• Before you begin the collection of data, decide what you are going to do and how
you are going to do it. To achieve this you need to have thought about the
experiment before you begin it, to try out the apparatus and perhaps to have made
some trial measurements.
2) the measurements are recorded clearly and completely;
• A sketch of the apparatus, or of parts of the apparatus, labelled to correspond with
the measurements, often helps, and serves as a very useful reminder of the
experimental arrangement. You will find that the equipment you use has unique
identification numbers; make a note of these in your lab diary as these will allow
the teaching team to keep track of acceptable results and any systematic errors.
• Make brief, succinct notes of what you have done, rather than long and detailed
prose. Mention any specific problems and how you have overcome them. Mention
good experimental practice.
• Record measurements systematically and concisely and, whenever possible,
tabulate them.

Always record first the actual measurements made and only then derive the values
of other quantities from them, e.g. if you are measuring the distance between two
points, record first the position of the two points against a scale and then subtract
the readings and also record the result. This minimises mistakes and allows you to
check results at a later date.
• Record units and remember that a statement of precision is an essential part of
every measurement. A typical complete observation is x = 8.69 ± 0.01 mm.

Do not clutter the layout of measurements with arithmetic calculations - do these
on a separate page or part of the page.

If during the experiment you make a mistake, neatly cross out the incorrect values
and repeat them. NEVER rip out a page of a lab diary.

Whenever possible, plot graphs as the measurements are made – outlier/rogue
data points can be identified readily, enabling repeat measurements to be made as
required. Any trends in the data can also be identified – eg. peaks, discontinuities
etc – in time for the experimenter to take more frequent/closely sampled readings
to confirm the observed behaviour.

Label the axes of graphs. Choose scales for the axes which make plotting easy
and, if possible, which allow the experimental precisions to be recorded sensibly.
Axes do not have to start at the origin; “zoom in” sensibly to best display the
results.
3) the results and conclusions are presented clearly. These in their turn will
be achieved by attention to the following points.

Present the results with a statement of precision and units. Always check that the
results that you have are sensible – are they “in the ball park” that you might
expect?

Quote the generally accepted value of the quantity you have measured, for
example from Kaye and Laby's tables, and try to account for any difference that
you see.

Comment briefly on the experiment and results, and discuss how you might
extend and improve the experiment. This is important, as it demonstrates that
you have both thought about and understood well what you have been doing.
WRITING UP FULL REPORTS OF EXPERIMENTS
AIM: to PRESENT the results of your work
The person marking your full report is interested in your description of the experiment.
They are not concerned with the actual measurements or quality of the results but are
concerned with the way these are presented in the report. You should aim to present a
clear, concise, report of the experiment you have performed, at a level able to be
understood by a fellow 1st Year student, who does not have expert knowledge of your
experiment. An example of a full report and further advice are given in section III. Very
importantly, your report must be original and not a copy of any part of the notes
provided with the experiment. It should be a report of what you did; not of what you
would like to have done or of what you think you should have done. That said, credit will
be given for discussions on how one might extend and improve an experiment, and what
might be done if the experiment were to be repeated.
It is normal practice in writing scientific papers to omit all details of calculations, and you
should also do this. Providing your report includes a statement of the basic theory which
you used, together with a record of your experimental observations (summarized if
appropriate) and the parameters which you obtain as a result of your calculations, it will
be possible for anyone who so wishes to check the calculations you perform.
The principles of report writing are simple: give the report a sensible structure; write in
proper, concise English; use the past tense passive voice, for example "... the
potentiometer was balanced ...". The following structure is suggested. It is not mandatory,
but you are strongly recommended to adopt it.
1) Follow the title with an abstract. Head this section “Abstract".

An abstract is a very brief (~50-100 words) synopsis of the experiment
performed. An example is "The speed of sound in a gas has been measured using the
standing wave cavity method for one gas (air) for a range of temperatures near room
temperature and for gases of different molecular weights (air, argon, carbon dioxide)
at room temperature. The speed in air near room temperature was found to be
proportional to T½, where T is the gas temperature in Kelvin, and the ratio Cp/Cv for
air, argon and carbon dioxide at room temperature was found to be 1.402 ± 0.003,
1.668 ± 0.003 and 1.300 ± 0.003 respectively".
2) Follow the abstract, on a separate page, with an introduction to the
experiment. Head this section “Introduction”.

Here, you should state the purpose of the experiment, and outline the
principles upon which it was based. This section is often the most difficult to write.
On many occasions it is convenient to draft all the rest of the report and write this last.
Remember that the reader will, in general, not be as familiar with the subject matter as
the author. Start with a brief general survey of the particular area of physics under
investigation before plunging into details of the work performed.

Important formulae and equations to be used later in the report can often, with
advantage, be mentioned in the introduction as, by showing what quantities are to be
measured, their presence helps in the understanding of the experiment. Formulae or
equations should only be quoted at this stage. Derivations of formulae or equations
should be given either by references to sources, for example text books, or in full in
appendices. References should be given in the way described below.
3) Follow this with a description of the experimental procedure. Head this
“Experimental Procedure”.
 Write the experimental procedure as concisely as possible: give only the
essentials, but do mention any difficulties you experienced and how they were
overcome. Division of the description of the experimental procedure into sections,
each one dealing with the measurement of one quantity, is often convenient. If the
introduction to the experiment has been well designed this division will occur
naturally. Relegate any matters which can be treated separately, such as proofs of
formulae, to numbered appendices. Give references in the way described below.
 All diagrams, graphs or figures should be labelled as figures. Give each a
consecutive number (as Figure 1 etc.), a brief title and, where possible, a brief caption.
Give each group or table of measurements a number (as Table 1 etc.) and a brief title,
and use the numbers for reference from the text e.g. “the data in Figure 1 exhibits a
straight….”
4) Follow this section with the results of the experiment, discussion of them and
comments. Head this “Results and discussion”.
• The result of the experiment can be stated quite briefly as "The value of X
obtained was N ± σ(N) UNITS". For example "The viscosity of water at 20°C was
found to be (1.002 ± 0.001) × 10-3 N m-2 s".
 Discussion of the result, or of measurements, method etc., can be cross-referenced
by quoting the figure, table or report section numbers.
5) Follow this section with your conclusions. Head this “Conclusions”.
 The conclusions should restate, concisely, what you have achieved including the
results and associated uncertainties. Point the way forward for how you believe the
experiment could be improved.
6) Follow this section with references. Head this “References” or “Bibliography”.
 The last section of the main body of the report is the bibliography, or list of
references. It is essential to provide references. There are two main styles used (along
with many subtle variations) to detail references. In the Harvard method, the name
of the first author along with the year of publication is inserted in the text, with full
details given, in alphabetical order, at the end of the document. The second style,
favoured here, is known as the Vancouver approach and is slightly different. At the point
in your report at which you wish to make the reference, insert a number in square
brackets, e.g. [1]. Numbers should start with [1] and be in the order in which they
appear in the report. References should be given in the reference or bibliography
section, and should be listed in the order in which they appear in the report.
Where referencing a book, give the author list, title, publisher, place published, year
and if relevant, page number eg. [1] H.D. Young, R.A. Freedman, University Physics,
Pearson, San Francisco, 2004.
In the case of a journal paper, give the author list, title of article, journal title, vol no.,
page no.s, year. e.g. [2] M.S. Bigelow, N.N. Lepeshkin & R.W. Boyd, “Ultra-slow
and superluminal light propagation in solids at room temperature”, Journal of Physics:
Condensed Matter, 16, pp.1321-1340, 2004.
In the case of a webpage (note: use webpages carefully as information is sometimes
incorrect), give title, institution responsible, web address, and very importantly the
date on which the website was accessed eg. [3] “How Hearing Works”,
HowStuffWorks inc., http://science.howstuffworks.com/hearing.htm, accessed 13th
July 2008
7) Follow this section with any appendices. Head this “Appendices”.
• Use the appendices to treat matters of detail which are not essential to the main
part of the report, but that help to clarify or expand on points made. Give each
appendix a different number to help cross-referencing from other parts of the report
and note that, to be useful, appendices must be mentioned in the main body of the
report.
Health Warning: In subsequent years it may be necessary to develop this standard
report layout to deal with complex experiments or series of experiments.
SAFETY IN THE LABORATORY
The 1974 Health and Safety at Work Act places, on all workers, the legal obligation to
guard themselves and others against hazards arising from their work. This act applies to
students and teachers in university laboratories.
Maintaining a safe working environment in the laboratory is paramount. The following
points supplement those contained in "School of Physics Safety Regulations for
Undergraduates", a copy of which was given to you when you registered in the School.
1. It is your responsibility to ensure that at all times you work in such a way as to
ensure your own safety and that of other persons in the laboratory.
2. The treatment of serious injuries must take precedence over all other action,
including the containment or cleaning up of radioactive contamination.
3. None of the experiments in the laboratory is dangerous provided that normal
practices are followed. However, particular care should be exercised in those
experiments involving cryogenic fluids, lasers, gases and radioactive materials.
Relevant safety information will be found in the scripts for these experiments.
4. If you are uncertain about any safety matter for any of the experiments, you
MUST consult a demonstrator.
5. All accidents must be reported to a laboratory supervisor or technician, who will
take the necessary action.
6. After an accident a report form, which can be obtained from the technician, must
be completed and given to the laboratory supervisor.
7. Please alert your Laboratory Supervisor to any medical condition (e.g. having
a pacemaker) which may affect your ability to perform certain experiments.
UNDERGRADUATE EXPERIMENT RISK ASSESSMENT
The experiments you will perform in the first year Physics Laboratory are relatively free
of danger to health and safety. Nevertheless, an important element of your training in
laboratory work will be to introduce you to the need to assess carefully any risks
associated with a given experimental situation. As an aid towards this end, a sheet entitled
Code of Practice for Teaching Laboratories follows. At the commencement of each
experiment, you are asked to use the material on this sheet to arrive at a risk
assessment of the experiment you are about to perform. A statement (which may, in
some cases, be brief) of any risk(s) you perceive in the work should be recorded as an
additional item in your laboratory diary account of the experiment.
SCHOOL OF PHYSICS & ASTRONOMY: CODE OF PRACTICE FOR
TEACHING LABORATORIES
Electricity
Supplies to circuits using voltages greater than 25 V ac or 60 V dc
should be "hardwired" via plugs and sockets. Supplies of 25 V ac, 60 V
dc or less should be connected using 4 mm plugs and insulated leads,
the only exceptions being "breadboards". It is forbidden to open 13 A
plugs.
Chemicals
Before handling chemicals, the relevant Chemical Risk Assessment
forms must be obtained and read carefully.
Radioactive Sources
Gloves must be worn and tweezers used when handling.
Lasers
Never look directly into a laser beam. Experiments should be
arranged to minimise reflected beams.
X-Rays
The X-ray generators in the teaching laboratories are inherently safe,
but the safety procedures given must be strictly followed.
Waste Disposal
"Sharps", ie, hypodermic needles, broken glass and sharp metal
pieces should be put in the yellow containers provided. Photographic
chemicals may be washed down the drain with plenty of water. Other
chemicals should be given to the Technician or Demonstrator for
disposal.
Liquid Nitrogen
Great care should be taken when using as contact with skin can cause
"cold burns". Goggles and gloves must be worn when pouring.
Natural Gas
Only approved apparatus can be connected to the gas supplies and
these should be turned off when not in use.
Compressed Air
This can be dangerous if mis-handled and should be used with care.
Any flexible tubing connected must be secured to stop it moving
when the supply is turned on.
Gas Cylinders
Must be properly secured by clamping to a bench or placed in
cylinder stands. The correct regulators must be fitted.
Machines
When using machines, eg, lathe and drill, eye protection must be
worn and guards in place. Long hair and loose clothing especially ties
should be secured so that they cannot be caught in rotating parts.
Machines can only be used under supervision.
Hand Tools
Care should be taken when using tools and hands kept away from the
cutting edges.
Hot Plates
Can cause burns. The temperature should be checked before
handling.
Ultrasonic Baths
Avoid direct bodily contact with the bath when in operation.
Vacuum Equipment
If glassware is evacuated, implosion guarding must be used in
order to contain the glass in the event of an accident.
II: EXPERIMENTS
TIME TABLE AND LIST OF EXPERIMENTS
Week        Experiment   Title                                                       Page

Autumn Semester (PX1123)

1           1            Introductory Exercises: straight line graphs, including      15
                         log graphs, errors and how to combine them

2-3         2            Group Experiment: Young’s Modulus                            17
            3            Group Experiment: Coefficients of Friction                   19

4 – 10      4            Statistics of Experimental Data (Gaussian Distribution)      23
(see list)  5            Optics with Thin Lenses                                      28
            6            Using an Oscilloscope and RC Circuit Construction            36
            7            Air Resistance                                               46
            8            Radioactivity                                                47
            9            Mechanics and Angular Momentum                               53
            10           Moon Craters                                                 59

11          11           Group challenge!                                             61
CHECKLIST
 Read through the notes on the experiment that you will be doing BEFORE coming to
the practical class. You will be expected to have read all the introductory notes and
refreshed your knowledge of any relevant subject matter taught at school.
 Read carefully through any additional sections that might be useful in Section III – eg.
use of electronic equipment, statistics – and also the diary checklist given at the end of
this manual.
 Think about the safety considerations that there might be associated with the practical,
having read through the lab notes. This can then be discussed with your demonstrator
prior to writing your risk assessment.
 On turning up to the lab, listen carefully to any briefing that is given by your
demonstrator: he/she will give you tips on how to do the experiment as well as
detailing any safety considerations relevant to your experiment.
 Write up the safety considerations.
 Check that the size of any quantities that you have been asked to derive/calculate are
sensible - ie. are they the right order of magnitude?
 Read through your account of your experiment before handing it in, checking that you
have included errors/error calculations, that you are quoting numbers to the correct
number of significant figures and that you have included units.
 Staple any loose paper (eg. graphs, computer print-outs, questionnaires etc.) into your
lab book.
Exercise 1: Interpreting data
1.
A series of experimental results is given below. In each case the mean value of the
experimentally determined variable is given, together with the error.
(a) R = 0.732 Ω              E(R) = 0.003 Ω
(b) C = 9.993 µF             E(C) = 0.018 µF
(c) T½ = 2.354 min           E(T½) = 11 sec
(d) R = 2.436 MΩ             E(R) = 23 Ω
(e) Wc = 11.562935 kHz       E(Wc) = 3.1 Hz
(f) d = 62165.551 m          E(d) = 26 cm
(g) f = 20 cm                E(f) = 0.03 cm
For each quantity, using SI units, write down:
• the best final statement of the result of each experimental determination;
• the percentage error in each mean value.

2. In the following questions the values of Z1, Z2 . . . are the given functions of the
independently measured quantities A, B and C. Calculate the values of, and errors
in, Z1, Z2 etc. from the given values of, and errors in, A, B and C.
(a) Z1 = C/A                 A = 100       E(A) = 0.1
(b) Z2 = A - B               B = 0.1       E(B) = 0.005
(c) Z3 = 2AB²/C              C = 50        E(C) = 2
(d) Z4 = B loge C
3. The variation of resistance, R, of a length of copper wire with temperature, T, is given
by:
R = R0(1 + αT),
where R0 and α are constants.
Experimental data from a particular investigation (similar to Experiment 4) are given
in Table 1.3.
T (K)     300     320     340     360     380     400
R (Ω)     2415    2490    2585    2625    2710    2755

T (K)     420     440     460     480     500     520
R (Ω)     2820    2910    3050    3030    3115    3155

Table 1.3: Data for question 3
a) Which are the dependent and independent variables?
b) Plot a graph to show the variation of R with T.
c) Determine R0 and estimate the likely error.
d) Determine α and estimate the likely error.
4. The activity, A, of a radioactive source is given by
A = A0 exp(-λt),
where A0 is the activity when time t = 0 and λ is the disintegration constant. Data
obtained by a 1st year student undertaking Experiment 6 are given in Table 1.4.
t (mins)                 0.5     2.5     4.5     6.5     8.5     10.5
A (counts in 10 sec)     5768    3391    1963    1231    718     415

Table 1.4: data for question 4
a) Plot a graph on linear paper showing the variation of A with t.
b) Plot a suitable graph on linear graph paper to determine λ and A0.
c) Plot a suitable graph on semi-log paper to determine λ and A0.
5. In one 1st Year experiment, measurements are made of the velocity of sound in a gas,
c. This can be related to γ, the ratio of the principal specific heats of the gas, by

γ = c²m / (kT),

where m is the mass of one molecule of gas, k is the Boltzmann constant and T is the
absolute temperature. Determine a value for γ from the following data, which was
obtained from an experiment with nitrogen:

c = (344 ± 20) m s-1;    T = (292 ± 1) K
Experiment 2: Measuring Young’s Modulus
Note: This experiment is carried out in pairs.
Outline
Most students will be familiar with the concept of Young's Modulus from A level studies.
It is an extremely important characteristic of a material and is the numerical evaluation of
Hooke's Law, namely the ratio of stress to strain (the measure of resistance to elastic
deformation). You will design a basic experiment to verify Hooke’s law and determine
Young’s Modulus for a bar of wood.
Experimental skills
 Making and recording basic measurements of lengths, distances (and their
uncertainties/errors).
• Making use of repeated measurements to reduce the error.
 Careful experimental observation and recording of results.
Wider Applications
• Young's modulus, E, is a material property that describes its stiffness and is therefore
one of the most important properties in engineering design.
 Young's modulus is not always the same in all orientations of a material. Most metals
and ceramics are isotropic, and their mechanical properties are the same in all
orientations. However anisotropy can be seen in some treated metals, many composite
materials, wood and reinforced concrete. Engineers can use this directional
phenomenon to their advantage in creating structures.
 Young's modulus is the most common elastic modulus used, but there are other elastic
moduli measured too, such as the bulk modulus and the shear modulus.
1. Introduction
The depression, d, produced at the end of a horizontal weightless rule by the application of
a vertical force F, as represented in Figure 1.1, is given by:

d = FL³ / (3EIa),                                        [1]

where L is the projecting length, E is Young's modulus for the material of the rule and Ia
is the geometrical moment of inertia of the cross-section.
For the rectangularly-sectioned rule provided, which has width a and thickness b,

Ia = ab³ / 12.                                           [2]
Figure 1.1 : Representation of the deflection of horizontal rule by force, F
2. Experiment
• Clamp the metre rule to the bench so that part of its length projects horizontally
beyond the bench edge.
• Make suitable measurements to explore the validity of equation [1] and to measure E
for wood.
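One possible way to organise the analysis (an illustrative sketch only, not a prescribed method): for a fixed load F, equation [1] predicts that a plot of the measured depression d against L³ is a straight line through the origin, and its gradient gives E once Ia is known from equation [2]:

\[
d = \frac{F}{3EI_a}\,L^{3}
\quad\Rightarrow\quad
E = \frac{F}{3 I_a \times \text{gradient}} = \frac{4F}{a b^{3} \times \text{gradient}} .
\]

An equally valid alternative is to keep L fixed, vary F, and plot d against F.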
Reminder: Concluding remarks
Note: This reminder and the advice below are given since this is an early experiment - do
not expect to see such prompts in the future.

Summarise the main numerical findings (as always with errors), important
observations and what is understood and not understood at this time.
Experiment 3: Coefficients of Friction
Note: This experiment is carried out in pairs.
Outline
Most students are probably familiar with the mathematics of friction as applied to static
and moving bodies on the flat and on slopes. In this experiment the behaviour of a real (if
a little contrived) system of a short length of dowel travelling down a slope of variable
angle is investigated. Experience indicates that the system can behave unusually,
requiring the experimentalist to take data reproducibly and carefully note down their
observations.
Experimental skills
 Making and recording basic measurements: angles and times (and their errors).
 Making use of trial/survey experiments.
 Careful experimental observation and systematic approach to data taking.
Wider Applications
 Funny thing friction, sometimes you want it, sometimes you don’t; the rotation of
wheels on a car should be as frictionless as possible, but friction between tyres and the
road is absolutely essential.
 The difference between coefficient of friction in the limiting and kinetic cases leads to
“stick-slip” effects, where systems, once they start moving, move quickly, e.g. in
hydraulic cylinders and earthquakes.
1. Introduction
The motion of a body down a slope is a classic mechanics problem. In elementary texts
two types of systems are considered; zero and non-zero friction. The friction between two
surfaces is characterised by a dimensionless constant called the coefficient of friction, μ
and can often be related to the frictional force FF by

FF = μFN,                                                [1]
where FN is the normal or reaction force between the body and the surface. Two types are
considered: limiting (or static) friction (μL) that prevents a static body from beginning to
move; and kinetic friction (μK) that acts on moving bodies. Usually μK is thought to be
slightly lower than μL, but near enough that they are considered equal in calculations.
This is illustrated in Figure 1.2, for a body initially at rest on a surface and subject to a
driving force that increases with time. The frictional force increases and matches the
driving force until the limiting condition is met, then the body starts to move and the
kinetic friction, which is slightly less than the limiting friction, operates always in the
opposite direction to that of the motion.
[Figure: plot of friction force against time. While there is no motion the friction force rises to the limiting value μLFN; once motion starts it drops to the kinetic value μKFN.]
Figure 1.2. The frictional force acting on a body as the driving force is increased from zero.
1.1 Body on a slope
A body on a slope is an interesting system as there is no need to introduce external forces
in order to observe the effects of friction. In the following discussion, the angle of the
slope to the horizontal is given by θ, the mass by m and the acceleration due to gravity by
g.
[Figure: a body (in your experiment, the wooden dowel) on a slope of angle θ. Its weight mg is resolved into a component FS = mg·sinθ along the slope and mg·cosθ perpendicular to the slope; the normal reaction is FN = mg·cosθ and the frictional force FF acts up the slope.]
Figure 1.3. Forces acting on a body on a slope. The weight of the body can be resolved
perpendicular and parallel to the slope. The perpendicular component is exactly balanced by a
reaction force, FN.
As the angle of the slope increases the force on the body due to gravity acting down the
slope, Fs increases as
FS = mg sinθ.                                            [2]
At the same time the reaction force decreases as
FN = mg cosθ.                                            [3]
This is important because, from equation 1, the reaction force determines the frictional
forces.
The critical angle, θC
With no external forces acting the frictional force always acts up the slope and a critical
angle, θc can be defined at which the forces down and up the slope are identical and
beyond which the body starts to move down the slope. At the critical angle
mg sinθC = mgμL cosθC
or
tanθC = μL.                                              [4]
Therefore a simple measurement of the angle at which the body starts to move reveals μL.
Angles greater than the critical angle
Since in this regime the body is moving, it is the coefficient of kinetic friction that applies.
Now there is an imbalance between the forces and the body accelerates; the
acceleration, a, down the slope is given by:

a = g sinθ - gμK cosθ = g(sinθ - μK cosθ).               [5]
Since this acceleration is constant (in ideal conditions) the familiar equations of motion
can be used. For example, the time, t, that a body starting from rest takes to move down a
slope of length s is given by

s = ½at².                                                [6]
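As a pointer for the analysis (not part of the original script, so check the algebra for yourself): combining equations [5] and [6] shows how a measured descent time t for a slope of length s at angle θ can be converted into a value of μK:

\[
a = \frac{2s}{t^{2}} = g\left(\sin\theta - \mu_{K}\cos\theta\right)
\quad\Rightarrow\quad
\mu_{K} = \tan\theta - \frac{2s}{g\,t^{2}\cos\theta} .
\]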
2. Experiment
2.1 Apparatus
The simple apparatus used here consists of a channel, a stand to support it, a length of
dowel and a stop watch. The arrangement of the support and channel should be as
follows:
 The support should be placed on the upper bench and the bottom of the channel on the
lower bench.
 The channel should be supported so that it is “L” shaped, with a slight angle so that
the dowel remains close to the upright. (A “V” shaped arrangement should not be
used as it has been found that the dowel becomes easily wedged).
 Running the forks on the support through the holes in the channel ~30 cm from the top
of the channel seems a secure, stable and convenient method.
Note: The maximum angle of the slope permitted in this experiment is 30°.
2.2 Part 1. Survey/trial experiments (including timing errors)
Survey (or trial) experiments are a vital part of performing any new procedure; they are
used to get a feel for the behaviour of the system, to determine the most appropriate
methodology, to understand the important measuring ranges etc. In many first year
experiments, these trials are hidden from the students, in order to make best use of the
available time and apparatus. Nonetheless they will have been carried out by
demonstrators and supervisors in order to generate the lab scripts.
Therefore, this part of the experiment is being used as an opportunity to take students
through the surveying process.
• So, spend ~10 minutes “playing” with the equipment and making a note of your
observations and some measurements if appropriate.
 Pick suitable conditions to perform a study of the reproducibility of “your” timing.
Note that this is not as easy as it sounds since an aim is to be able to later distinguish
between your timing error and real variations within the experiment.
2.3 Part 2. Determine the coefficient of limiting friction, μL.
 Use the experience you have gained to design and perform an experiment to determine
μL.
Your diary entry will need to describe your methodology, how the error was
determined, and what you think it corresponds to.
2.4 Part 3. Determine the coefficient of kinetic friction, μK.
 Use the experience you have gained to design and perform experiments to determine
μK, exploring angles between θC and 30°.
 There are no obvious straight line graphs here, instead it is suggested that a graph of
μK against angle is plotted.
Reminder: Concluding remarks
Note: This reminder and the advice below are given since this is an early experiment - do
not expect to see such prompts in the future.

Summarise the main numerical findings (as always with errors), important
observations and what is understood and not understood at this time.
Experiment 4: The statistics of experimental data; the Gaussian
distribution.
Outline
The statistical nature of measured data is examined using an experiment in which ball
bearings are randomly deflected as they roll down an incline. Random behaviour is
expected to result in a “Gaussian” distribution, the most common mathematical
distribution in experimental physics. The experiment dwells on the progression from
small to large data sets, the emergence of the well known shape of the distribution and the
implications for data analysis and error estimation (i.e. the relationship to “accuracy and
precision” and “random and systematic errors”).
Experimental skills
 Statistical analysis of data in general.
 Analysis using the Gaussian distribution in particular.
Wider Applications
This experiment illustrates the unseen statistics behind all practical physics:
 When dealing with a small number (say ~ 12) data points, as you often do in these
laboratory experiments, it should always be remembered that the measurements
represent “samples” of an underlying data “distribution”.
 The majority of physics experiments result in underlying data distributions that are
Gaussian.
 Other important distributions include Poisson, Lorentzian and Binomial. The
distribution is governed by the underlying physics and/or statistics.
1. Introduction
Virtually all experiments are influenced by statistical considerations and have underlying
distributions of various types. However in most cases either not enough data is collected
or the data is not analysed in such a way as to reveal this fact. Consequently it is entirely
possible to perform crude but quite reasonable data analysis with little understanding of
its context. Clearly the training of physicists should progress them beyond such a
superficial level. This experiment plays a very important role in that training by taking you
through the techniques used when dealing with small, medium and large sets of data.
The experimental set up chosen uses random processes to produce a distribution that
consequently should be Gaussian and is appropriate here since most experiments produce
such distributions. What is rare is the opportunity for students to observe the emergence
of a distribution and consider the effect on data and error analysis.
Ultimately though, always remember that the concern of an experiment is to express a
measurement as “(value +/- error) units”. Statistics is simply the tool by which the
“value” and the “error” are determined. Reminder:
 Systematic errors - the result of a defect either in the apparatus or experimental
procedure leading to a (usually) constant error throughout a set of readings.
• Random errors - the result of a lack of consistency in either the apparatus or the
experimental procedure, leading to a distribution of results.
• Accuracy - determined by how close the measured value is to the true value, in other
words how correct the measurement is. A value can only be accurate if the systematic
error is small.

Precision - determined by how “exactly” a measurement can be made regardless of its
accuracy. Precision relates directly to the random error - a value can only be precise if
the random error is small (high precision means low random error, low precision
means high random error).
1.1. Simple statistical concepts
In all the experiments a series of values x1, x2 .... xn is obtained. Often the experimental
values differ, mainly due to the fact that some variable in the experiment has been
changed (usually the aim would then be to plot the data on a straight line graph). In this
discussion and the experiments that follow, the measurements recorded will be of
nominally the same value. The actual measurements will represent a sample of all the
possible measurements and these differences are due to variations in the system being
measured, the equipment used for measuring, or the operator.
From such measurements (taking xi as the ith value of x and n as the total number of
measurements) a number of statistical values can be found that are of relevance to the
understanding of the experiment:
Arithmetic mean:                  μ = (1/n) Σi xi                              [1]
The arithmetic mean has a special significance as this represents the best estimate of the
“true value” of the measurement. The error in an experiment can then be understood to
reflect the possible discrepancy between the arithmetic mean and the true value.
Superficially and practically for small n an estimate of (twice) the error might involve:
Data range:          xmax - xmin
Probable error:      the range in which 50% of the values fall
With larger n (a larger sample) formal statistical terms such as “standard deviation”
become appropriate. The standard deviation, σ(x) of an experiment is a value that reflects
the inherent dispersion or spread of the data (an experiment with high precision will have
a low standard deviation) and so is, like the “true value” an unattainable idealised
parameter. Practically, the available sample can be used to obtain a “sample standard
deviation”, σn(x) (the equivalent of finding the arithmetic mean of the measurements) and
this can be modified to give the “best estimate of the standard deviation”, sn(x):
sample standard deviation:                  σn(x) = [ (1/n) Σi (xi - μ)² ]½    [2]

best estimate of the standard deviation:    sn(x) = [ n/(n - 1) ]½ σn(x)       [3]
Whilst standard deviations are related to errors and may be reasonable to use in some
circumstances they are not appropriate when there are a large number of measurements
and the distribution is well defined (see below for more on distributions). Here the
accepted error is the (best estimate of the) standard error:
Best estimate of standard error:            σ(x̄n) = sn(x)/n½ = σn(x)/(n - 1)½  [4]
Note: All of the above values can be found without reference to the particular distribution
of the data.
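To make the definitions above concrete, here is a short illustrative Python sketch (not part of the lab script; the readings are invented) that evaluates equations [1]-[4] for a small sample:

```python
import math

# Invented example data: twelve repeat readings of the same quantity
readings = [8.69, 8.71, 8.68, 8.70, 8.72, 8.69, 8.67, 8.70, 8.71, 8.69, 8.68, 8.70]
n = len(readings)

mean = sum(readings) / n                                            # equation [1]
sigma_n = math.sqrt(sum((x - mean) ** 2 for x in readings) / n)     # equation [2]
s_n = math.sqrt(n / (n - 1)) * sigma_n                              # equation [3]
std_error = s_n / math.sqrt(n)                                      # equation [4]

print(f"mean = {mean:.4f}")
print(f"sample standard deviation, sigma_n = {sigma_n:.4f}")
print(f"best estimate of standard deviation, s_n = {s_n:.4f}")
print(f"standard error = {std_error:.4f}")
```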
1.2. Distributions
If measurements occur in discrete values (as they will in the following experiments) the
distribution can be drawn by plotting the number of times (frequency) a value is recorded
versus the value itself. (If the measurements are continuous then the values can be split
up into data ranges (eg x to x + dx) and then the frequency counted.)
However, the frequency of occurrence clearly depends on the number of attempts which
are made. A more fundamental property is the probability which experimentally is given
by
probability, P = (number of occurrences)/(total number of events, n).          [5]
It should be clear from this that the sum of the probabilities over all values equals one. The
mathematical functions that describe distributions are always probability functions.
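As a small illustration of equation [5] (the bin counts below are invented), binned results can be converted to probabilities like this:

```python
# Invented example: counts recorded in a few bins of the pin-board experiment
counts = {-2: 4, -1: 10, 0: 14, 1: 11, 2: 5}

n_total = sum(counts.values())                               # total number of events, n
probabilities = {b: c / n_total for b, c in counts.items()}  # equation [5]

print(probabilities)
print("sum of probabilities =", sum(probabilities.values()))  # should be 1.0
```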
1.3 The Gaussian (or Normal) distribution
All experimental results are affected by random errors. In practice it turns out that in
many cases the distribution function which best describes these random errors is the
Gaussian distribution given by:

P(x) = [1/(σ√(2π))] exp[ -(x - μ)²/(2σ²) ],                            [6]

where μ is the mean value of x and σ is the standard deviation. An example of a Gaussian
distribution is shown in figure 1; it is symmetrical about the mean, has a characteristic bell
shape, and ~68% of the measured values are expected to lie within ±1σ of the mean (this
range is slightly larger than that covered by the “probable error”).
[Figure: bell-shaped Gaussian curve of P(x) against x, peaking at P(0) ≈ 0.4, with the full width at half maximum (FWHM) indicated.]
Figure 1: Gaussian probability function generated using x̄n = 0 and σ(x) = 1, resulting in the
x-axis being in units of standard deviation. The FWHM is wider than 2σ(x).
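As a quick numerical check of equation [6] (an illustrative sketch, not part of the lab script): with μ = 0 and σ = 1 the peak value should be 1/√(2π) ≈ 0.40, and about 68% of the probability should lie within ±1σ, as in Figure 1:

```python
import math

def gaussian(x, mu=0.0, sigma=1.0):
    """Gaussian probability density, equation [6]."""
    return math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))

print(gaussian(0.0))                                   # ~0.399, the peak height in Figure 1

# Crude numerical integration between -1 and +1 standard deviations
step = 0.001
area = sum(gaussian(i * step) * step for i in range(-1000, 1000))
print(area)                                            # ~0.68, i.e. ~68% within +/- 1 sigma
```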
2. Experimental
2.1 Apparatus
 The apparatus used here consists of a pin board, down which steel balls are rolled
individually (so that they do not interfere with each other). There is a row of 23 “bins”
at the base numbered from -11 to 0 to +11 (the discrete values representing the results
of this experiment).
 The pins are intended to induce a random motion of the balls so that the balls have a
distribution about their “true value” that is Gaussian.
 The design is such that the true value (ideal result) of the experiment is zero.
However, various biases can be imagined that might affect this and lead to a
systematic error (overall bias) that will be constant provided the equipment is not
disturbed.
 Approximately 50 balls are supplied and these constitute a “batch”.
2.2 Procedure
Although split into two parts it should be considered as a single continuous experiment in
which the number of trials, n, increases. In order to be able to monitor the “result”, and
the emerging Gaussian distribution, it is necessary to keep track of the results in the order
in which they are obtained. It would be impractical to note the result in order for every
ball (trial) however it is really only necessary to pay close attention to the first few trials.
 The first part of the experiment pays close attention to the “first batch” of ~50 trials.
 In the second part a further 4 batches are recorded and allow the accumulation of a
large data set. The total number of trials is then ~250.
2.2.1 Small-medium number statistics (n = 1 to ~50)
Note: In order to mimic the low n experiments that students usually perform the first
batch must be undertaken in stages; this ensures that unprejudiced decisions about errors
are made at each stage. Note: it will be very easy for diaries to become unintelligible
whilst working through this section - use headings, notes and comments to avoid this.
(i) First roll one ball down the slope and note its position.
• Clearly this “measurement” is our current best estimate of the “true value”.
 What is the “result” of the experiment at this stage (i.e. value +/- error)? Is it in fact
possible to estimate an error (note - it must be non zero) at this stage? If it is not
possible then what are the implications for deciding on the size of the error bars that
are often drawn on graphs based on single measurements?
(ii) Roll another two balls down the slope (total = 3) and note their positions
 The best estimate of the “true value” is now the average of three measurements
(relevance: e.g. timing experiments are often performed three times).
 Realistically the estimated error here is obtained from the data range.
 Write down the result of the experiment at this stage (value +/- error).
Remember each trial should be performed identically - you should be aware of and write
down the details of the procedure at this point. It would be entirely reasonable to change
(improve) the methodology. This would entail repeating the first three trials (for
consistency later) and the diary entry should be clear.
(iii) Roll a further nine balls down the slope (total = 12) and note their positions

The best estimate of the “true value” is now the average/mean of a total of twelve
measurements (relevance: experiments in which straight line graphs are generated
often have approximately this number of data points).

The estimated error. With 12 measurements simply using the data range to obtain an
error value ought to be too pessimistic and statistical techniques can start to be used
(even though there are not enough data values for the shape of the distribution to have
emerged). Calculate and compare values for (0.5 x) range, the probable error,
standard deviations and standard error described above.
(Note: the above calculations can be performed using the statistical functions of a
calculator. This will save time later, but at this point students must confirm that the
correct method is being used by showing hand working and comparing with calculator +
statistical functions).
(iv) Roll the remainder of the batch down the slope and note their positions in order.
 For totals of 24 and ~50 trials calculate and compare values for (0.5 x) range, the
probable error, standard deviations and standard error.
 Use the values for n = 50 to draw a histogram and compare with shape of the Gaussian
distribution shown in figure 1. How well defined is the Gaussian distribution?
2.2.2 Large number statistics (n up to ~250)
In order to be able to monitor the further development of the experimental “result” and the
data distribution a further 4 batches of balls will be used. It would be impractical to note
the result in order for every ball (trial); instead, send the balls down in batches (of ~50),
recording the distribution for each batch.
 Draw a suitable table in which to record the measurements.
 Perform and record the measurements.
Data distribution
 Draw a second table in which to record the calculated cumulative distributions for the
total of 1 (from section 2.2.1), 3 and 5 batches of measurements.
 For each case calculate the mean and sample/best estimate of the standard deviation
and standard error.
 Use the values for n ~ 250 and equation 6 to calculate the corresponding Gaussian
distribution and plot this on top of the measured distribution. Comment on the
agreement between them.
2.3 Analysis of the “result” of the experiment as a function of n
This section considers all of the results obtained.
 Consider (giving an explanation/justification) what is the most appropriate error value
to use for n = 3, 12, 24, 50, 150 and 250. One decision here is: at what n does it
become appropriate to use the standard error?
 Summarise the above in a table with columns for “value”, “most appropriate error
value” and “error type” (e.g. range, standard error etc).
 Plot a graph(s) of mean value, μ against n (for n = 3, 12, 24, 50, 150, and 250) using
the chosen error for the error bar (a simple plotting sketch is given after this list).
 Finally, for the concluding remarks and drawing on the previous graph, summarise
what has been learnt about the systematic and random errors and accuracy and
precision of the experiment as n was increased. Is there any evidence for a bias
(systematic error) in the experimental set up? (Note: Just in case you’ve missed it so
far - the mean value alone provides no evidence for a bias (systematic error); it must be
considered together with an appropriate error).
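A minimal sketch of how the graph of mean value against n might be produced in Python (illustrative only: the numbers are invented and the matplotlib library is assumed to be available; a hand-drawn graph is equally acceptable):

```python
import matplotlib.pyplot as plt

# Invented example values: cumulative mean bin position and chosen error at each n
n_values = [3, 12, 24, 50, 150, 250]
means = [0.7, 0.4, 0.5, 0.3, 0.25, 0.2]
errors = [1.5, 0.9, 0.6, 0.4, 0.25, 0.2]

plt.errorbar(n_values, means, yerr=errors, fmt='o', capsize=3)
plt.axhline(0.0, linestyle='--')          # the ideal "true value" of the pin board
plt.xlabel("number of trials, n")
plt.ylabel("mean bin position")
plt.show()
```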
Experiment 5: Geometric optics, imaging with thin convex lenses
Safety
 The light source used is a relatively low power 40 W incandescent bulb. However, in
using lenses the light may be focused to produce high power densities with potential
to damage the eye. Therefore never look through lenses towards the light source.
 The light bulb is contained and shielded within a black housing which will become hot
after extended use. Therefore take care not to touch the housing.
 The lenses are made from glass and may break if dropped. If this occurs do not
attempt to clean up, instead call the demonstrator, supervisors or lab technician.
1. Simple Overview
This is a simple experiment designed to familiarise you with basic optical equipment and
a common sense approach to setting up optical systems. You will learn about some basic
properties of thin bi-convex spherical glass lenses, the key property of which is the focal
length of the lens. If parallel wave-fronts of light are incident on a thin lens, to a first
approximation the light is focussed by the lens to a point. This is known as the focal
length (f). Conversely, by symmetry, if a point source of light is placed at the focal point,
the lens converges the beam to be parallel. This is known as collimation/collimating.
This is sketched in figure 1. Parallel wavefronts can be approximated by light at a great
distance (for example light from the sun, or even a very distant light source). Point
sources can be simulated by small pin pricks in screens with lights behind them.
Figure 1. Simple ray trace view of the focal point of a lens.
The experiment makes use of an optical track that allows for the precise positioning and
fixing of optical components. This is essential for many optical experiments and
instruments, where the alignment of optical components can be critical. Experiments in
optics are different from most other types. This is due to the fact that an optical beam is
required to pass through or interact with a number of optical components that
consequently need to be carefully aligned. This is a skill that benefits from patience and
practice. This experiment provides a (relatively forgiving) introduction. As with any
optics experiment, avoid touching the optical surfaces as much as possible.
A simple tip to remember is to constantly look at the alignment of the lenses along the
track. They should be broadly in a straight line and the same height. If they are not
(heavily staggered, or up and down like a roller coaster) then your light path is equally
doglegged through the lenses, and in the extreme case you may even be picking up light
from some other (stray) source. This is not good, and probably means your first lens is
pointing the light significantly off the axis of the track.
Simple optics form the basis of cameras, microscopes, telescopes and the eye. The
techniques used are ubiquitous in scientific experiments, particularly in spectroscopy and
imaging (e.g. microscopes, telescopes etc).
Apparatus
1.5 m optical bench with Vernier scale, 40 W shielded incandescent light source, various
optical holders, lenses, filters, plates and screens.
2. Experiments
Reminder: Take care when handling optical components: The lenses are made from glass
and may break if dropped. If this occurs do not attempt to clean up, instead call the
demonstrator, supervisors or lab technician. In addition hold lenses at their edges and
above the benches when mounting into their holders.
Experiment 2.1 Collimated beams (and determination of focal length)
This section considers collimated light, i.e. light whose rays are all parallel to the principal
axis. Such light incident on a converging lens all passes through the principal focus on the
opposite side of the lens (see section 3.3 of the background information). Likewise, rays
emanating from a principal focus emerge from the lens parallel to the principal axis
(i.e. collimated). These rays are
central to understanding optical systems through ray diagrams. Collimated beams,
formed by placing objects at the focus of a lens, are often exploited in optical instruments
such as spectrometers.
“Auto-collimation”
The properties of collimated beams described above form the basis of a rapid method for
finding the focal length of a lens (this experiment) and for producing a collimated beam of
light (the next experiment).
 Keeping the same distance from the lamp, replace the slide with a pinhole (which will
act as a point source of light) with its black side facing the lamp.
 Mount a plane (flat) mirror at approximately 50 cm with lens 1 between the pinhole
and the mirror.
The principle of the approach here is illustrated in Figure 2. The mirror reflects light back
into the lens and towards the pinhole. A sharply-focused image is produced immediately
alongside the pinhole only when the beam between the lens and the mirror is parallel and
the object distance is equal to the focal length.
Figure 2 Focal length determination by “auto-collimation”
 Adjust the position of the lens in order to obtain a sharply focused image of the
pinhole next to the actual pinhole.
 Find the focal length of lens 1.
Experiment 2.2 Measurements with a collimated beam
Here a collimated beam is used to allow a quick determination of focal length using the
pinhole aperture.
 With the pinhole as the object use the method of experiment 2.1, with lens 1, to
collimate the light. Position adjustments may need to be made in order to observe the
reflected image. Once found, the position of the pinhole and lens 1 along the bench
need not be changed again during the experiment.
 Remove the mirror and instead after lens 1 place a second lens holder and then a
screen. With no lens in the second holder it is likely that a number of images of the
pinhole will appear on the screen - this is a consequence of the light source consisting of
an extended, non-uniform filament, combined with the larger hole now being used.
However, the light may still be considered to be collimated (the
separation of the images should not change as the screen is moved although the size of
each image will).
 Place lens 2 in the holder and move the screen in order to determine its focal length, f.
To convince yourself that the light is collimated and the separation between the two
lenses does not matter, repeat this for the second lens at positions of 60 cm and 90 cm
on the optical bench (f should not change).
 Repeat for lens 3.
Experiment 2.3 Radius of curvature of a lens (+ determination of refractive index)
There are a wide variety of experiments that can be performed to examine the properties
of lenses. The following (slightly quirky) example is included since it is a convenient
way of determining the radius of curvature of convex lenses and knowledge of this value
allows the refractive index of the material used to be determined.
The principle of the measurement is shown in figure 3. A source S of light (a pinhole
again) transmits light onto a lens. Although most light is transmitted, some is
reflected (for an air/glass boundary ~5% can be reflected), enough to form a visible
“return” image alongside the source (see background information).
The condition for forming a return image (shown in figure 3) is a separation, u, between
source and lens such that following refraction at the first (left hand side) air/glass
boundary the light rays are incident normally (i.e. perpendicular) on the second (glass/air)
boundary. Then at the same time (i) the main, transmitted part of the beam forms a virtual
image at C and (ii) the reflected beam retraces its path back to and forms an image at the
source.
Here, although use is made of the reflection, calculations are based on the formation of a
virtual image (i.e. relating to light refracted through both interfaces). Since a virtual
image is formed at C, the sign convention dictates that v is negative; however, C is at the
centre of curvature of the r.h.s. boundary and the magnitude of v is therefore the radius of
curvature (for a thin lens).
Figure 3: Condition for forming a reflected image at the source
(light rays are normally incident on second boundary and retrace
their path back to the source). Under these conditions (and for a
thin lens) the virtual image is at the centre of curvature of the rhs
boundary.
Perform the following for all three lenses:
 Place the pinhole (acting as source S) a suitable distance from the lamp.
 With the mirror removed position the lens to obtain a “return” image of the pinhole
close to the pinhole.
 Measure u and calculate the virtual image distance v using equation 4 found in the
background information at the end of the text (remember that v is negative).
 Find the radius of curvature of the other surface of the lens in a similar way.
 Use the fact that v is equal in magnitude to the radius of curvature of the appropriate
surface of the lens to calculate the refractive index of the lens material (a worked sketch
of this calculation is given below).
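A minimal sketch of this analysis is given below (Python). The numbers are purely illustrative; the focal length f is assumed to have been found already in experiment 2.1/2.2, and the script simply applies equation 4 (with v negative) followed by the lens maker's equation (equation 5 in the background information).
# Sketch of the experiment 2.3 analysis (illustrative numbers only).
# Assumes the focal length f of the lens is already known from experiment 2.1/2.2.

def virtual_image_distance(u, f):
    # Thin lens equation (eq. 4): 1/u + 1/v = 1/f, so v = 1/(1/f - 1/u).
    return 1.0 / (1.0 / f - 1.0 / u)

f = 0.15                 # focal length /m (assumed)
u1, u2 = 0.074, 0.076    # measured source-lens separations for the two faces /m

R1 = abs(virtual_image_distance(u1, f))   # radius of curvature of one face /m
R2 = abs(virtual_image_distance(u2, f))   # radius of curvature of the other face /m

# Lens maker's equation (eq. 5) with r1 = +R1, r2 = -R2 for a bi-convex lens:
# 1/f = (n - 1)(1/R1 + 1/R2), so n = 1 + 1/(f (1/R1 + 1/R2)).
n = 1.0 + 1.0 / (f * (1.0 / R1 + 1.0 / R2))
print(f"R1 = {R1:.3f} m, R2 = {R2:.3f} m, n = {n:.2f}")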
Experiment 2.4 Image formation (and determination of focal length)
This experiment examines the conditions for producing and the nature of an image of an
object (a cross hair on a screen) through a single bi-convex, thin, spherical glass lens.
 First measure the dimensions of the cross-hair on the clear slide (the horizontal
dimension will be used to calculate the magnification of images produced).
 Accurately position the lamp at 0 cm and the clear slide with cross hair at 20 cm (this
is close enough for a reasonable throughput of light whilst avoiding images of the
filament in the bulb).
 Next position the screen at 110 cm (separation to slide = 110 - 20 = 90 cm) and lens 1
in its holder between the slide and the screen.
 Move the position of lens 1 and find the two positions at which an image of the cross
hair is clearly focused on the screen. Note the nature of the image compared to the
object.
 Adjust the vertical position of the lens and the lateral position of the slide and lens so
that the image is roughly in the centre of the screen for both positions (to roughly
align the system).
 For screen positions starting at 110 cm and decreased in 5 cm steps find the two
focusing positions for the lens and the vertical height of the image (with errors), noting
your values in a suitable table. Finish the sequence by using smaller steps to find the
minimum slide/screen separation for which a well focused image is possible.
 Plot a graph of 1/u versus 1/v and use the intercepts to determine the focal length of
the lens, f. What is the value of the gradient and is it as you would expect? (A short
analysis sketch is given after this list.)
 Compare the v/u and y/x values obtained, and comment on the conditions at the
minimum slide/screen separation (for example compare u, v and f and consider the
magnification).
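The graphical analysis can be checked with a short script such as the sketch below (Python; the u and v values are invented for illustration). It fits a straight line to 1/v against 1/u, whose gradient should be close to −1 and whose intercept gives 1/f.
# Sketch: extracting f from the intercept of a 1/v vs 1/u plot.
# u = slide-to-lens distance, v = lens-to-screen distance, in metres (illustrative data).
import numpy as np

u = np.array([0.300, 0.322, 0.346, 0.474, 0.528, 0.600])
v = np.array([0.600, 0.528, 0.474, 0.346, 0.322, 0.300])

slope, intercept = np.polyfit(1.0 / u, 1.0 / v, 1)
# Thin lens equation rearranged: 1/v = -1/u + 1/f, so the gradient should be
# close to -1 and the intercept equals 1/f.
print(f"gradient = {slope:.2f} (expect about -1)")
print(f"focal length from intercept = {1.0 / intercept:.3f} m")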
3. Background Information
3.1 Geometric optics
Geometric optics (or ray optics) considers the propagation of light in terms of a single line
or narrow beam of light, through different media. It is a very useful way to consider
optical systems especially when imaging is involved.
Geometric optics is based on the consideration that light rays:
 propagate in a rectilinear (straight-line) path in homogeneous (uniform) medium
 change direction and/or may split in two (through refraction and reflection) at the
interface or boundary with a dissimilar medium (only two media are considered here:
glass and air).
Although powerful in understanding the geometric aspects of optical systems, such as
imaging and aberrations (faults in images) it does not account for effects such as
diffraction and interference.
3.2 The interface between two media: refractive index and Snell’s law
The two media of concern here are air and glass and the parameter that characterizes their
optical property as far as geometric optics (and lenses) is concerned is their refractive
index, n.
Refractive index, n relates to the speed of light in media and is defined
n = (speed of light in a vacuum) / (speed of light in a medium)        [1]
By definition the refractive index of a perfect vacuum is unity (i.e. exactly one). The
refractive index bears a close relationship to relative permittivity, εr and can be
understood to result from the interaction between matter and light’s electric and magnetic
fields.
Light incident upon a boundary between media with different refractive indexes will be
reflected and transmitted. In addition, the transmitted light may be “refracted”, i.e. it
changes direction as described by Snell’s law.
For light travelling from air to glass (see figure 4) Snell’s law can be expressed as
sin θi / sin θt = nglass / nair ≈ nglass        [2]
Where the angles are as defined in figure 4 and nair and nglass are the refractive indices of
air and glass respectively.
Figure 4. Behaviour of a light ray travelling from air (low n
media) to glass (higher n media). The light ray is partially
reflected and transmitted. The transmitted ray changes direction,
(is refracted) at the interface according to Snell’s law (θi, θr and θt
are the angles of incidence, reflection and refraction of the light ray
respectively). Note that a ray with an angle of incidence of 0°
does not deviate at the boundary.
Material            n
Polycarbonate       ~1.58
Air                 ~1.0003
Glass               1.48 - 1.85
Table 1. Some refractive index values
3.3 Lenses
A lens is an optical component that in transmitting light rays uses refraction (i.e. the
application of Snell’s law) to cause them to either converge or diverge. Lenses are
usually constructed out of glass or transparent plastics.
The lenses used here will be “thin”, glass bi-convex (converging) spherical lenses as
shown in figure 5, with their main characterizing features:
 The axis of symmetry of a lens is known as its “principal axis”. Lenses usually also
have a very good “axial symmetry”: the behaviour of the lens varies with distance
from the axis - but is independent of the direction from the axis.
 A “bi-convex” lens is one that bulges outwards both sides from its centre.
 The bulge is characterised by the radius of curvature of the left and right hand side
surfaces, r1 and r2 respectively.
 A “thin” lens is one whose thickness along its principal axis (d in figure 5) is much
smaller than its focal length, f, i.e. d << f. It is an approximation that permits simpler
equations to be used.
 A “spherical” lens indicates that the front and back faces can be considered to be part
of a sphere which has an associated radius (also known as its “radius of curvature”).
 Light rays parallel to principal axis and incident on the lens will, after transmission, all
pass through the “principal focus” of the lens on the opposite side (light can travel in
either direction so the reverse is also true and there are two “principal foci”). Figure 1
explicitly shows this.
 The distance from the optical centre, Oc of the lens to the principal foci is known as
the focal length, f of the lens.
 Planes perpendicular to the principal axis and passing through the principal foci are
called “focal planes”.
Figure 5. Main features of a bi-convex lens: radii of curvature r1 and r2 of the two
surfaces, thickness d, optical centre Oc, principal axis, principal foci F and focal length f.
3.4 Image formation, ray diagrams and sign conventions
Reading this page you are using a convex (converging) lens in your eye to form a “real
image” on your retina - it is real in the same sense as the image on a cinema screen is real.
In forming the image the light from a point on the page travels through all parts of the
lens. A consequence of this is that image formation can be understood by considering any
convenient rays of light as shown in figure 6.
Figure 6. Formation of a real “image” of an “object” as
understood through ray tracing (x and y are the heights of the
object and image respectively and u and v are the distances of the
object and image from the optical centre respectively).
Three convenient rays of light (labelled 1, 2 and 3 in figure 6) are:
Ray 1. A ray parallel to the principal axis which after refraction passes through the
principal focus.
Ray 2. A ray passing largely undeviated through the optical centre.
Ray 3. A ray that passes through the principal focus on the object side of the lens and
therefore emerges from the lens parallel to the principal axis.
Any two rays of light are sufficient and most textbooks use rays 1 and 2.
In addition to “real images” in optics there is also the concept of “virtual images”. In this
case rays leaving a point on an object appear, after the optical system, to diverge from a
point through which they never actually pass. This concept is most commonly associated
with diverging lenses, and is used in experiment 2.3, but its simplest example is a flat
mirror, where the image of an object is perceived as far behind the mirror as the object is
in front (i.e. at twice the object-to-mirror distance from the object).
In order to form equations that relate, for example, the focal length of a lens to the
distances of the object and the (real and/or virtual) image from the lens for all possible
situations (for example to include diverging as well as converging lenses) it is necessary
to adopt a “sign convention”. The convention specifies the algebraic signs that must be
given to the various lengths in the system. Different textbooks may employ different
conventions and therefore have slightly different equations (which is mildly annoying).
General “University physics” textbooks are often not very explicit about the conventions
they employ; the convention adopted here is therefore that used in “Optics” by Hecht
(publisher Addison Wesley).
In this convention optical beams enter the system from the left and travel to the right (as
in figure 3). Using the symbols of figures 5 and 6, the signs used are explained in
table 2 below.
Quantity                      Sign: +                 Sign: −
u                             real object             virtual object
v                             real image              virtual image
f                             converging lens         diverging lens
x                             erect object            inverted object
y                             erect image             inverted image
Magnification (m = y/x)       erect image             inverted image
r                             boundary left of Oc     boundary right of Oc
Table 2. Meanings associated with the signs of thin lens
parameters
Using this convention and by considering “similar triangles” in figure 6 it can be shown
that
m = y/x = −v/u        (the linear magnification)        [3]
and that
1/u + 1/v = 1/f        [4]
Equation 4 is known as the “thin lens equation” or the “Gaussian lens equation”.
Another useful equation, which relates the focal length, f to the radii of curvature, r1 and
r2, of the surfaces of the (thin) lens and the refractive index, n, of the material from which
it is made is the lens maker’s equation:
1/f = (n − 1)(1/r1 − 1/r2)        [5]
Note that for the bi-convex lens shown in figure 5, under this convention the first
radius is positive and the second is negative.
Experiment 6: Using an Oscilloscope and Basic RC Circuit
Construction.
General Overview
An oscilloscope is a piece of equipment that allows you to visualise a measured voltage in
time on a 2D plot. Whilst you may in future come across the traditional name ‘cathode
ray oscilloscope’ (CRO), you will be using a modern digital oscilloscope. Older
oscilloscopes used a cathode source of electrons accelerated in vacuum to hit a phosphor
screen, with the deflection of the beam controlled by a voltage (either the input (y-axis)
or internally generated for the time base (x-axis)); hence the abbreviation ‘CRO’. Even
when referring to modern versions that do not rely on such ‘cathode rays’ it is still
common to hear them referred to as CROs. The basic function is identical.
The oscilloscope is a common and important piece of electronic test equipment. It allows
the observation of constantly varying signal voltages, usually as a two-dimensional graph
of one or more electrical potential differences using the vertical axis, plotted as a function
of time on the horizontal axis. This is a very familiar concept as demonstrated in such
instruments as electrocardiograms (heart monitors), where the heartbeat is monitored with
a sweeping dot across a screen in time, and the magnitude of the heart beat illustrated by
the rise and fall of the dot. Beep!
The purpose of the first part of this experiment is for you to gain familiarity with such a
useful piece of equipment, and learn some of its limitations. In the second part of the
experiment, you will use the oscilloscope to determine the circuit characteristics of some
simple circuits containing a resistor, a diode, and then a combination of resistors and
capacitors (an RC circuit).
Aims and experimental skills
 To introduce an oscilloscope and its characteristics and limitations.
 To determine the I-V characteristics of a resistor and a simple diode using the
oscilloscope as a voltmeter.
 To understand the voltage, current, resistance and impedance relationships in series
RC circuits.
 To investigate the phase angle between circuit voltage and current in series RC
circuits and to measure phase angle using an oscilloscope.
 To become familiar with Lissajous figures and to use them to calibrate a
variable-frequency oscillator.
1. Introduction to an Oscilloscope
In its simplest form an oscilloscope is a voltmeter (high impedance, ie when connecting
an oscilloscope to a circuit, the circuit ‘sees’ a very high resistance/reactance), where the
trace can be swept across a second axis in time, known as the time base. As a result the
oscilloscope can be used to monitor fixed voltages (exactly like a voltmeter), or can be
used to monitor time varying voltages as a result of the spatial sweep of the measured
voltage. In this way AC signals can be visualised.
1.1 Application of a p.d. on the Y- axis
Turn on the GW Instek oscilloscope (shown in figure 1)
Figure 1. Front panel of the GW Instek oscilloscope. The important features
initially are shown with circles (Power, signal input channel 1, Volts/div (y-axis),
and time/div (x-axis)).
(Very) luckily for you, even the most basic modern digital oscilloscopes come with
convenient automatic setup. Press the Autoset button to initialise your scope (top right
hand side). You should have a single yellow line (channel 1) displayed on the screen
along the y-axis zero value. (The bottom left of the screen should show that channel 1 is
highlighted – try turning it off and on again with the yellow CH1 button and observe the
highlighting of the ‘1’ at the bottom left again.)
Connect a Leclanche cell to the input BNC terminal of channel one shown in figure 1 (the
yellow channel). (BNC is sometimes said to stand for ‘British Naval Connector’, a
bayonet-clamped connection supposedly essential to keep the coaxial cable in place in
turbulent seas; the name actually derives from the Bayonet fitting (B) and the initials of
the two inventors, Neill and Concelman (NC).) The Leclanche cell is used for calibration
and its e.m.f. is a standard 1.50 V. Select a suitable sensitivity range on the volts cm-1
switch (circled in figure 1) for that channel
and note the deflections produced by the cells. Investigate the accuracy of the calibration
of the oscilloscope.
Now connect the cell you are given and determine its e.m.f. Estimate the precision of the
result. Do you think that you are justified in calling the measured potential difference an
e.m.f.? [Hint: think about the voltage measured across the terminals of a battery of e.m.f.
E and internal resistance r when connected into a circuit of resistance R.]
Change the coupling from DC to AC while a cell is connected to the input terminals and
note the result. (Find the coupling setting in the Channel Menu; it toggles between
Ground, DC and AC coupling.)
How useful is the calibration procedure just outlined? What happens if we need to use
another volts cm-1 range?
Now that you have measured a voltage, let’s look at the limitation of the oscilloscope as a
voltmeter. A voltmeter in essence can be considered to consist of a current meter in series
with a high resistance resistor. Ideally introducing a voltmeter into a circuit will not affect
the voltages in that circuit1. To illustrate the conditions under which this ideal is not met,
we examine R5 and R6 of the clear-topped resistance box.
Construct the following simple circuit using leads.
Use the oscilloscope to measure the voltages across R5 and R6 individually and in
combination. Show that these values are not as expected.
The discrepancy between measured and expected values can be accounted for by the finite
resistance of the oscilloscope.
Using the voltage measured with the oscilloscope across either R5 or R6 and the battery
voltage calculate the resistance of the oscilloscope. (First draw a circuit showing the
resistors and the oscilloscope resistance in parallel with one of them as another
resistance). Consider the resistors in this circuit (you will need the combination of series
and parallel resistors to give you the total resistance in the circuit). (R6 = 1.5 MΩ,
R5 = 10 MΩ.) A sketch of this loading calculation is given below.
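The sketch below (Python) illustrates the loading effect. The oscilloscope input resistance used here is an assumed, illustrative value, not the value you should obtain, and the function name is only for the sketch.
# Sketch: why the scope reads low across a large resistor (illustrative values).
# The scope's finite input resistance R_scope appears in parallel with the resistor
# it is placed across, forming a potential divider with the other resistor.

def measured_fraction(r_measured, r_other, r_scope):
    r_parallel = r_measured * r_scope / (r_measured + r_scope)
    return r_parallel / (r_parallel + r_other)   # fraction of the supply voltage seen

R5, R6 = 10e6, 1.5e6    # ohms (values given above)
R_scope = 1e6           # assumed input resistance, for illustration only

print("ideal fraction across R5   :", R5 / (R5 + R6))
print("measured fraction across R5:", measured_fraction(R5, R6, R_scope))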
What would you say is the criterion for the reliable use of the oscilloscope as a voltmeter?
Finally for this introduction replace the cells by an oscillator (set to a frequency of about 1
kHz) taking the output from the “50 Ω output”. Explain the form of the trace. On the
oscilloscope DC setting, investigate the effect of applying the sum of an AC and a DC
signal by pressing the DC Offset switch on the oscillator and varying the Offset level.
1 Principles of Physics, 9th ed., Wiley, p. 720.
You may need to adjust the ‘trigger level’. This is done by adjusting the trigger level
(RHS wheel, see figure 1 – trigger level knob), until the yellow arrow marker on the RHS
of the screen is within the extent of the amplitude of the waveform. (The trigger level is
essentially the signal level at which the sweep of the time base is triggered.) Repeat on the
AC coupling setting. Summarise the effect of the DC and AC coupling settings on the
oscilloscope.
1.2. The Time Base
The oscilloscope is generally used to display a stationary trace representing some portion
of the waveform of a time-varying voltage. Usually, voltage is plotted on the Y- axis and
time plotted on the X- axis of the screen (known as the "time base").
Remove all connections to the oscilloscope input. Set the trigger level control to its
central position (at which point the marker on the y-axis to the side of the display will be
at zero). Set the TIME/CM to 0.2 ms cm-1. Describe the resulting trace on the screen.
2 Introduction to Circuit Construction
Electronic components can be classified as linear or non-linear. Resistors, capacitors and
inductors are linear devices, because the current flow (I) through them is linearly related
to the applied potential difference (V). Diodes and transistors have more exotic I-V
characteristics. These are non-linear.
Initially the oscilloscope will be used as a high-impedance VOLTMETER and the
multimeter will be used as a MILLIAMMETER.
In part 3.2 of the experiment, you will determine the I-V characteristics of a diode. For a
brief discussion of what a basic diode is, see the background info at the end of this
experiment. In comparison with the diode, the properties of a resistor may appear
uninteresting. However, it is undoubtedly one of the most important circuit elements. Its
principal uses are for limiting the flow of current and as a current-to-voltage converter.
However, combined with a capacitor it can form various types of useful electronic circuits,
whose response to an AC signal you will explore in part 3.3.
3. Experiments in Circuit Construction.
3.1 I-V Characteristics of a Resistor (Ohm's Law)
Familiarise yourself with the prototype board. Plug the board into the mains.
Build the circuit (Figure 3.1) on the prototype board using a resistor with the colour code
yellow, purple, red (and gold) for R. Use your multimeter set on the "1 mA" scale for
measuring the current and your scope set on "dc" coupling for measuring voltages. When
making connections between the scope and the prototype board, ENSURE THAT
EARTH CONNECTIONS ARE COMMON. Vary the input voltage and measure the
current (I) flowing through the resistor for various values of V. Plot these values directly
on to a graph of I vs V.
Figure 3.1: I vs V for a resistor. Note that the ground connection for the -5V to
+5 V supply is made internally. You need only connect one wire from the
variable supply to the circuit (i.e to the milliammeter).
What is the gradient of your graph? Measure this and from it deduce R. Compare this
value with the colour-coded markings on the resistor. Use your multimeter set on the
"ohm range" to verify your deductions. Which is the "best" value? Which one would you
trust, and why?
Briefly discuss any sources of error. How could the experiment be improved?
3.2 I-V Characteristics of a diode
Repeat the above experiment using the circuit of Figure 3.2 to determine the I-V
characteristics of a diode, a non-linear circuit element. The input voltage (Vin) is again
provided by the -5 V to +5 V variable dc supply. Note that the 1 kΩ resistor is necessary to
limit the flow of current through the diode, which might otherwise overheat and be
destroyed.
Figure 3.2: Circuit to measure I vs V characteristic for a diode
As in the previous experiment, vary Vin and make measurements of the current flowing
through the diode (I) as a function of the potential drop across the diode (V). For part of
the characteristic you will have to increase the sensitivity of your scope. Check the
position of zero volts on the screen after changing ranges. Plot a graph of I vs V but be
selective – it may not make sense to plot the whole of the measured range i.e. if
something is not changing at all it may be sufficient to describe this in words. For small
values of V, you may find that you have to increase the sensitivity of the milliammeter. Be
sure to take plenty of readings in regions where the graph is non-linear (i.e. steep) (this is
why you must plot the graph in the lab) and you will probably have to plot the non-linear
region to a greater sensitivity.
From your graph, describe the action of the diode. Note that it "switches on" at about 0.6
V. Determine the approximate values of the diode's "resistance" in these forward
(conducting) and reverse biased regions. Can you comment on the resistance in the
forward direction? What is limiting the current flow in the circuit?
3.3 RC circuits.
Capacitors and resistors often occur in circuits together. These circuits are known as RC
circuits. In RC circuits the capacitive reactance and resistance combine to produce circuit
impedance. The reactance and resistance cause the current and voltage to be out of phase
with each other. The study of current and voltage in RC circuits is the subject of this part
of the experiment. You are advised to read the reference2. The main concepts, relevant to
this experiment, are summarized here.
An ac (alternating current) source supplies a sinusoidally varying potential difference or
current. For example in the UK the mains electricity system uses a frequency of 50 Hz.
To represent such varying voltages and currents we use vector (or phasor) diagrams. The
instantaneous value of a quantity is represented by the projection onto a horizontal axis of
a vector with a length equal to the amplitude of the quantity. The vector is assumed to
rotate anticlockwise with constant angular velocity corresponding to the angular
frequency of the quantity involved.
In an ac circuit with only resistors, the current and voltage are in phase. This means that
they vary in the same way with time, so that both reach their maximum and minimum
values at the same time. The current and voltage phasors are therefore parallel and rotate
together. The current and voltage amplitudes are related by Ohms law (V=IR).
When an ac current is applied to capacitors, the instantaneous current is proportional to
the rate of change of voltage. The capacitor voltage and current are out of phase by a
quarter of a cycle (or 90 degrees or π/2 radians – rate of change being the differential, and
the differential of sin(x) is cos(x), which is just sin(x) shifted by π/2). The peaks of voltage occur a
quarter-cycle after the current peaks and we say that the voltage lags the current by 90
degrees. The current and voltage phasors are therefore at right angles but still rotate
together. The voltage and current amplitudes are related by V = I XC where XC is the
capacitive reactance of the capacitor and is defined by XC = 1/(ωC). Here, C is the
capacitance and ω the angular frequency; XC has units of Ohms.
Now, consider the circuit in Figure 3.3(a) consisting of a resistor, a capacitor and an ac
source connected in series. The total voltage at any instant is equal to the sum of the
instantaneous voltages across the two components. However, because of the presence of
the reactive component (the capacitor) the total voltage amplitude is the vector sum of the
voltage amplitudes across each of the components. We can see this more clearly in a
vector (phasor) diagram (Figure 3.3(b)).
2 Principles of Physics, 9th ed., Wiley, p. 720.
Figure 3.3: (a) A series R-C circuit (b) Phasor diagram
The voltage vector for the capacitor VC is usually, by convention, shown vertically
downward. The components are connected in series so that the current is the same at
every point in the circuit. We therefore have one current vector I shown horizontally. (The
current leads the capacitor voltage by 90 degrees.) The voltage vector for the resistor VR
is also shown as a horizontal vector coincident with I. (The resistor voltage is in phase
with the current.)
From the diagram we see that the magnitude of the total voltage or source voltage V is
the vector sum of VC and VR. From Pythagoras' theorem
V = √(VR² + VC²) = I √(R² + XC²)
We define the impedance of the circuit Z as
Z = √(R² + XC²)
so that
V = I Z.
Impedance plays the same role as resistance in a dc circuit but note that Z is a function of
R, C and ω.
The angle φ is the phase angle of the source voltage with respect to the current. We see
that
tan φ = VC/VR = IXC/(IR) = XC/R = 1/(ωCR)
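As a numerical illustration of these relationships, the short sketch below (Python, with purely illustrative component values) evaluates XC, Z and the phase angle for a series RC circuit.
# Sketch: impedance and phase angle of a series RC circuit (illustrative values).
import math

f = 1000.0    # source frequency /Hz (assumed)
R = 1000.0    # resistance /ohm
C = 1e-6      # capacitance /F

omega = 2 * math.pi * f
X_C = 1.0 / (omega * C)            # capacitive reactance /ohm
Z = math.sqrt(R**2 + X_C**2)       # impedance /ohm
phi = math.atan(X_C / R)           # phase of source voltage with respect to the current

print(f"X_C = {X_C:.0f} ohm, Z = {Z:.0f} ohm, phi = {math.degrees(phi):.1f} deg")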
3.3.1 Experiment: determination of phase difference in an RC circuit.
1. You are now going to put your understanding of R-C circuits into practice. Using the
prototype board, assemble the circuit in Figure 3.4. Use the capacitor provided (nominally
1 µF) and a resistance box for the resistor. Use the signal generator plus the isolator to
provide the ac source (see Introduction to Electronics Experiments in your lab. book).
2. The phase difference φ between the voltage across the whole circuit and that across the
resistor is given by tan φ = 1/(2πfCR). Derive this expression yourself. Therefore, cot φ
may be plotted against R to give a straight line, from the slope of which C may be
found if f is known. Using the oscilloscope, measure φ using the ellipse method (outlined
in the background information at the end of this experiment description) for different
values of R and plot the graph. Determine C and the associated experimental error.
Figure 3.4: R-C circuit for the determination of phase difference
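A sketch of the subsequent analysis is given below (Python; the frequency, resistance values and phase angles are invented for illustration). It fits cot φ against R and extracts C from the slope, which should equal 2πfC.
# Sketch: finding C from the slope of a cot(phi) vs R plot (illustrative data).
# From tan(phi) = 1/(2*pi*f*C*R), cot(phi) = 2*pi*f*C*R, so the slope = 2*pi*f*C.
import math
import numpy as np

f = 1000.0                                           # oscillator frequency /Hz (assumed)
R = np.array([100.0, 200.0, 400.0, 800.0, 1600.0])   # resistance box settings /ohm
phi_deg = np.array([57.8, 38.5, 21.7, 11.3, 5.7])    # measured phase angles /degree

cot_phi = 1.0 / np.tan(np.radians(phi_deg))
slope, intercept = np.polyfit(R, cot_phi, 1)
C = slope / (2 * math.pi * f)
print(f"C = {C * 1e6:.2f} microfarad")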
3.3.2 Frequency Comparison and Lissajous figures.
If signals whose frequencies are expressible as a ratio of two small integers are applied to
an input channel (y-axis) and to the signal that drives the time base (x-axis), characteristic
traces known as Lissajous figures are obtained. The elliptical traces you have already
generated to measure phase difference are in fact Lissajous figures. In this case, the
frequencies were the same for both signals so the ratio was unity. More complicated
traces are obtained for higher ratios. Lissajous figures can be used to determine the
frequency of one signal in terms of another which is known.
Apply the ac output from the prototype board (or use the multi-tap transformer) to one
channel of the oscilloscope. Then apply the output of suitable amplitude from the
variable-frequency oscillator to the other channel, choosing initially a frequency of 50 Hz.
Disable the internal time-axis by selecting XY mode. Adjust the frequency of the
oscillator to obtain a stationary elliptical trace and note the frequency, according to the
oscillator, at which this occurs. Increase the frequency to about 100 Hz to obtain a figure-of-eight and again record the frequency according to the oscillator. Repeat in steps of 50
Hz to 500 Hz. Plot a graph of expected frequency against recorded frequency. From your
graph, comment on the accuracy of the oscillator scale. How could you use your graph to
calibrate the oscillator?
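If you wish to see what the traces should look like before coming to the laboratory, the short sketch below (Python) generates the x and y samples of two Lissajous figures; the frequencies and phase are illustrative and the plotting step is left out.
# Sketch: generating Lissajous figures for two sine signals (frequency ratio fy:fx).
import numpy as np

def lissajous(fx, fy, phase=0.0, n=2000):
    t = np.linspace(0.0, 0.1, n)             # 0.1 s of "trace"
    x = np.sin(2 * np.pi * fx * t)           # signal driving the x-axis
    y = np.sin(2 * np.pi * fy * t + phase)   # signal driving the y-axis
    return x, y

# Equal frequencies give an ellipse; a 2:1 ratio gives a figure-of-eight.
x1, y1 = lissajous(50.0, 50.0, phase=np.pi / 4)
x2, y2 = lissajous(50.0, 100.0)
# Plotting y against x (e.g. with matplotlib) reproduces the traces seen in XY mode.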
4. Background Information (appendices)
4.1: Diodes
A diode is formed by the junction of a p-type and an n-type semiconductor. Electron
movement in the region of the junction forms a DEPLETION layer, in which there are no
charge carriers, ie, an insulating layer. When FORWARD BIASED, the layer narrows,
and at about 0.6 V (for a silicon p-n diode) the layer vanishes, and the diode then offers
very little resistance to the flow of current (Figure 4.1). When the diode is REVERSE
BIASED, the depletion layer becomes wider and little current flows. The diode works as a
RECTIFIER, allowing current to flow in one direction only, as demonstrated in the
experiment (hopefully). The diode can be considered as a ‘one way valve’ for electrical
current.
or
Figure 4.1: Forward- and Reverse-biased diode. Note convention for supply polarity
4.2: Measurement of phase angles with the oscilloscope.
If potential differences are applied to the two channels of the oscilloscope, and the first
channel drives the y-axis, whilst the second channel drives the x-axis, we have for the
movement of the spot on the screen
x = A sin (ωt) ;
y = B sin (ωt − φ)
where φ is the phase angle. In general this represents an ellipse, as shown in Figure 4.2.
Putting y = 0 we have B sin (ωt − φ) = 0, so that ωt = φ and x = A sin φ. From the
diagram we see that for y = 0, x = ON' = ON = A sin φ. The maximum value of x is A =
OA = OA', so that ON = OA sin φ. Hence,
sin φ = NN' / AA' .
AA' is the difference between the two extreme x values of the ellipse, and NN' is the
length given by the intersection of the ellipse with the x axis. Note: These are distances
e.g. A to A’ and NOT A x A’. Both of these quantities can thus be obtained from the
oscilloscope trace. Measurement may be made easier by using a piece of graph paper as a
rule!
Figure 4.2: Elliptical trace for the measurement of phase angle
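A trivial sketch of this calculation is given below (Python); the NN' and AA' values are illustrative screen readings only.
# Sketch: phase angle from the ellipse method, using the two screen readings.
import math

NN = 2.1   # intersection length NN' read from the screen /cm (illustrative)
AA = 6.0   # full horizontal extent AA' of the ellipse /cm (illustrative)

phi = math.asin(NN / AA)
print(f"phase angle = {math.degrees(phi):.1f} degrees")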
Experiment 7: Air resistance
Note: You must keep a real time lab diary in the usual way and aim to finish all analysis
within the 4 hours. Your lab book will be taken in at the end of the 4 hour session.
Equipment: 3 muffin cases, 1 m rule, stopwatch.
Safety: Students must not raise themselves (unreasonably) off the floor to gain extra
height and must perform the experiment in the first year laboratory.
Outline
With only a reminder of the important physics, you are asked to determine as much as you
can about a very simple system: muffin cases falling vertically through the air. Some
students may have come across this experiment before, however it is demanding in terms
of both experimental skill and analysis - do not underestimate it.
Experimental skills
 Making and recording basic measurements: heights and times (and their errors).
 Making use of trial/survey experiments.
 Careful experimental observation.
Wider Applications
 Planes, trains and automobiles are all designed to reduce air resistance in order to go
faster and/or travel more efficiently.
 The wider scientific field is that of fluid dynamics (the movement of fluids), a highly
complex field that includes the prediction of weather patterns and the processes of star
formation.
1. Introduction
The force due to air resistance (drag) acting on a body travelling through air is
proportional to ρAv2 where ρ is the air density; A is the cross sectional area of the body
and v is the velocity through the air.
The constant of proportionality is called (or at least is very closely related to) the “drag
coefficient”.
A special case is a body falling under the influence of gravity so that the downwards force
acting upon it is constant (mg). Starting from rest and given sufficient time the
downwards force and the drag reach equilibrium when the body is falling at its so called
“terminal velocity”.
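For orientation only, the sketch below (Python) shows how a terminal velocity follows from balancing mg against the drag; the drag constant k and the case dimensions used are assumptions for illustration, not measured values.
# Sketch: terminal velocity when drag balances weight, i.e. m*g = k*rho*A*v**2.
# k and the case size are purely illustrative assumptions.
import math

k = 0.7                   # assumed drag constant (dimensionless, illustrative only)
rho = 1.2                 # air density /kg m^-3 (value given in the notes below)
m = 0.042 / 75            # mass of one muffin case /kg (from the notes below)
A = math.pi * 0.035**2    # assumed cross-sectional area for a ~7 cm case /m^2
g = 9.81

v_t = math.sqrt(m * g / (k * rho * A))
print(f"terminal velocity ~ {v_t:.1f} m/s")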
2. Experimental
By a combination of experiment(s) and analysis discover as much as you can about the air
resistance of the system in the four hour laboratory session.
Notes:
 By dropping multiple cases together the mass can be increased without changing the
cross sectional area.
 Take the density of air (ρ) to have a value of exactly 1.2 kg.m-3.
 75 muffin cases have a mass of 42 g (with an error of +/- 1 g).
 Compared to normal teaching lab diaries, your notes will need to contain more
procedural information (since no instructions are available to refer to).
 Demonstrators are available to bounce ideas off – not for telling you how to go about
your investigation.
Experiment 8: Radioactivity, counting statistics and half lives.
Important Safety Information
For this experiment you must receive training and your risk assessment must be checked
by your demonstrator before you proceed with practical work.
Two radioactive sources are provided. These are both sealed to minimise the risk of
leakage. When using radioactive materials, exposure should be minimised by:
1. limiting the amount of time exposed to the source;
2. maintaining a reasonable distance from the source;
3. washing your hands immediately after performing the experiment and certainly before
consuming food and drink;
In addition the Pa generator must always be used over the drip tray provided.
General Introduction
You will perform some basic experiments in the measurement of radioactivity using
standard pieces of equipment for detection of radioactive sources. The (effectively)
constant radioactivity of a uranium oxide source is used to determine the correct operating
voltage for a Geiger Muller (GM) tube. The GM tube is then used to perform two
experiments: (i) measurement of background radiation and its analysis in terms of Poisson
statistics, (ii) measurement of the (short) half-life of protactinium 234 (Pa234), an element
in the decay series of uranium 238.
Aims and experimental skills
 Safe handling of mildly radioactive material.
 Setting up and use of Geiger Muller detectors.
 Analysis of “counting experiment” data using Poisson statistics.
 Determination of half-life values.
1. Experiment
This experiment consists of three parts. In part 1 the operating characteristics of your
Geiger-Muller (GM) detector are investigated; in part 2 background radiation is measured
and analysed; in the final part, the half life of Protactinium234 is measured.
1.1 Setting up the detector
Note: This section is concerned with setting the detector up for later measurements. Refer
to Background section 2.5)
 First turn the counter on with the anode voltage set to 400 V to let it warm up for ~5
minutes.
 Use the warming up period to understand how to operate the counter: Set it to
“counting” and “start”. The unit should then display the cumulative counts. These
counts can be zeroed using the “reset” button.
 Towards the end of the warm up period measure the background counts
accumulated over a 10 s period - there should be something like 5 to 10 counts if the
detector is working properly.
Now set the GM detector voltage to a minimum and place the UO2 ( "lollipop" ) close to
the detector window. Slowly increase the voltage until counting starts. This is the starting
potential. Record this voltage and count for one minute to give the count rate in counts per
minute. Increase the voltage and count for one minute. Repeat this procedure until the
maximum voltage available is applied. (This voltage will be less than that producing onset
of continuous discharge.) Plot the characteristics. Decide on the optimum voltage at
which to operate the GM detector. (See Background section 2.5.)
1.2 Background radiation (+Poisson statistics)
Due to the different sensitivities to different particles the measurement of background
radiation by a Geiger Muller tube is not straightforward. However, comparative studies
are possible and here the background detection rate is convenient for investigating the
statistics of counting.
Measuring background radiation
Refer to Background section 2.2. Poisson statistics involve counting events in defined
time periods. Here the experiment involves noting the total count every 5 s for a period of
360 s - do not reset the counter every 5 s. This is quite intense so draw up a suitable table
in advance that can be filled in during data collection.
 Perform the data collection (following which note any relevant observations).
Analysis using Poisson statistics
The measured value required here (x in equation 2 in Background 2.2) is counts/time
interval and will be an integer. The data collection methodology indicates that the
smallest time interval that can be used is 5 s, however it is instructive to perform the
analysis for both 5 s and 10 s intervals. (There is potential for confusion here so diary
entries should be clear).
Data distributions
 Tabulate the counts for each 5 s (and 10 s) time interval (x) and their frequency (f(x)).
 Plot histograms (f(x) versus x) for both intervals, i.e. use separate plots.
 Determine the mean counts/time interval and the number of data points for the two
intervals. Use these to determine “expected” Poisson distributions using equation 2*,
plot the points on the same graphs as the experimental data. Is the total number of
timing intervals the same for both distributions? Why does this matter?
 How do your results compare with the theoretical Poisson distribution?
 What is the signal: noise ratio in both cases?
* Important note: equation 2 represents a normalised distribution.
Remember to take account of your measured mean background rate in the subsequent
measurements.
1.3 The half-life of Pa234
This Generator is supplied in a sealed translucent container which is virtually chemically
inert, and under normal circumstances is leak proof. For storage, the generator is packed
in an outer container.
Whilst in use the generator should be placed upside down, and after the experiment, the
generator must be returned to its protective beaker. When not in use the generator must be
stored with the plastic cap uppermost.
Check your risk assessment and especially remember to use the disposable gloves and
perform the experiment over plastic drip tray.
Figure 1. Arrangement of source and detector
 Remove the flask from the box. Shake the flask while holding it above the drip tray
for a short period of time (10 seconds will be enough) until the contents have
completely mixed.
 Replace the source upside down as shown in figure 1 and record the number of counts
per unit time. The easiest way to do this is to record the total number of counts (every
30 seconds) and work out the count rate afterwards. Continue until the count rate is
roughly constant, i.e. for approximately 20 minutes.
 Plot a graph of count rate versus time. Remember to take background counts into
consideration. Comment on the graph obtained.
 Finally, process your results to find the half-life of Protactinium-234. The half-life
can be found from the graph by measuring the time taken for the count rate (of
Pa234) to fall by a half. If the count rate decreases exponentially to zero this task is
easy, if not then you will have to decide which is the most sensible approach and
explain what you decided and why. What is the most accurate graphical method to
use to find T1/2 and why? How do you think the signal to noise ratio changes
throughout the experiment? (A short analysis sketch is given after this list.)
 Repeat the experiment if there is time to do so.
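A minimal sketch of the processing step is given below (Python). The cumulative counts are invented for illustration; the script converts them to background-corrected rates and, assuming a purely exponential decay, extracts the half-life from a straight-line fit to the logarithm of the rate. Whether such a fit is the most sensible approach for your data is exactly the decision you are asked to make above.
# Sketch: background-corrected half-life from total-count readings taken every 30 s.
# counts[i] is the cumulative count at t = 30*i seconds (illustrative numbers only).
import math
import numpy as np

counts = np.array([0, 615, 1076, 1422, 1683, 1881, 2032, 2148, 2238, 2309])
background_rate = 0.5   # counts per second, measured in part 1.2 (illustrative)

t_mid = 30.0 * (np.arange(len(counts) - 1) + 0.5)    # mid-time of each interval /s
rate = np.diff(counts) / 30.0 - background_rate      # net count rate /s^-1

# For exponential decay, ln(rate) vs t is a straight line of gradient -lambda.
slope, intercept = np.polyfit(t_mid, np.log(rate), 1)
half_life = math.log(2) / (-slope)
print(f"half-life ~ {half_life:.0f} s")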
2. Background information
Radioactive decay is the process by which unstable atomic nuclei lose energy. In this
process particles of radiation are emitted, the three main types being alpha (He nuclei),
beta (electrons) and gamma (photons). Since the energy involved in nuclear processes is
high, the radiation is generally ionising. This property is exploited in the design of
detectors of radiation but is also responsible for the danger associated with radioactive
materials.
The discovery of radioactive materials, by Henri Becquerel in 1896, led to great
advances in nuclear and other branches of physics. In one strand, it was realized that
nuclei could not only break up (fission) but also join together (fusion) and that the fusion
process was responsible for the power output of the Sun and the stars. This solved one of
the great mysteries of science at the time - that power output based on gravitational forces
implied a much shorter age for the Sun than that implied by the evidence of geology and
evolution.
2.1 The mathematics of radioactive decay
It was realized early on that the radioactive decay of nuclei is a “stochastic” or random
process, i.e. it is not possible to predict exactly when a nucleus will decay; instead, only
a probability of it decaying can be found. Following from this, the rate of disintegration of
a given nuclide is directly proportional to the number of nuclei N of that nuclide present at
that time:
dN/dt = −λN        [1]
where λ is the decay constant. However, rather than deal with 'probability of decay per
second', it is more usual to describe the rate of decay of a radioactive material by its
characteristic half-life. This is defined as the average time T1/2 it would take for half the
number of nuclei in the material to decay, or alternatively and as will be used as part of
this experiment, for the decay rate to fall to one half of its original value.
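(For reference, integrating equation 1 gives N = N0 e^(−λt); the count rate therefore also falls exponentially, and setting N = N0/2 gives T1/2 = ln 2/λ ≈ 0.693/λ.)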
2.2 The statistics of radioactive decay (Poisson statistics)
Poisson distribution
The measurement of radioactivity is a counting experiment; a detector counts the number
of discrete events occurring in a fixed time interval. Very often with this type of
experiment the data takes the form of a “Poisson distribution”. This is the second type of
statistical data distribution examined in the first year laboratory, the other (Gaussian
distribution) is investigated in Experiment 4.
The Poisson distribution is the limiting case of a “binomial distribution” when the number
of possible events is very large and the probability of any one event is very small. The
normalised distribution is given by
P(x) = μ^x e^(−μ) / x!        [2]
where P(x) is the probability of obtaining a value x, when the mean value is μ. The
standard deviation for a Poisson distribution relates to the mean value and is given by
σ(x) = √μ. This distribution is unlike the normal or Gaussian distribution in that it becomes
highly asymmetrical as the mean value approaches zero.
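Since equation 2 is normalised, the expected frequency in each bin is obtained by multiplying P(x) by the number of timing intervals. The sketch below (Python, with an illustrative mean and number of intervals) tabulates these expected frequencies for comparison with a measured histogram.
# Sketch: expected Poisson frequencies for comparison with a measured histogram.
import math

def poisson_p(x, mu):
    # Equation 2: probability of recording exactly x counts when the mean is mu.
    return mu**x * math.exp(-mu) / math.factorial(x)

mu = 3.2            # mean counts per 5 s interval (illustrative value)
n_intervals = 72    # number of 5 s intervals in a 360 s run

for x in range(10):
    expected = n_intervals * poisson_p(x, mu)   # scale by the number of intervals
    print(x, round(expected, 1))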
Counting experiments: the “signal to noise” ratio
In all counting experiments*, the “quality” of the data is expected to “improve” with
increasing counting time and counts. This can be understood as follows: the mean
number of counts in the experiment, μ, is the “signal” whilst statistical variations in this
signal are represented by the standard deviation σ(x) and can be thought of as “noise”.
In Poisson statistics σ(x) = √μ, therefore the signal/noise = μ/√μ = √μ, i.e. the ratio
increases with the square root of the number of counts. This is an often quoted and very
important finding for understanding and designing experiments.
Put another way, if in a particular counting period an average of N counts are obtained,
the associated standard deviation is √N (ignoring any errors introduced by timing
uncertainties, etc). Clearly, the larger N the more precise the final result. For a given
source and geometrical arrangement, however, N can be increased only by counting for
longer periods of time.
* Counting experiments are wide ranging. For physicists, counting photons to acquire a
spectrum (such as that emitted by a star) is a relatively common task that comes in this
category but even the number of letters sent by Einstein in set intervals has been analysed
in this way.
2.3 Background radiation
Part of this experiment involves measuring background radiation. This background level
has many sources including long lived terrestrial radioactive species, cosmic rays and
remnants from nuclear experiments. For most people the most significant source is due to
radon gas formed as part of the decay series of uranium.
2.4 Philip Harris Protactinium Generator
Protactinium234 has a half-life of approximately 70 seconds, and is suitable for the
observation of radioactive decay. This isotope is one of the products from the U238 decay
series, part of which is shown below:
238U92 ––(α, 4.5x10^9 years)––› 234Th90 ––(low-energy β, 24 days)––› 234Pa91 ––(high-energy β, 72 secs)––› 234U92
To achieve isolation of Pa234, a less dense, water immiscible, organic liquid is added to a
solution of a Uranium238 salt in concentrated Hydrochloric acid. Protactinium234 is soluble
in this organic layer. When the liquids are shaken together, the Pa234 is
extracted by the organic solvent. When the mixture is allowed to settle, a physical
separation into two layers occurs, where the Pa234 is now in the upper layer. The Pa234
decay is monitored, in this experiment by a Geiger-Muller Tube which is placed close to
the top of the containment flask.
Several factors combine to make sure that the source can exhibit a Pa234 half-life:
 Thorium234 is confined to the lower aqueous layer; beta radiation from this, and alpha
radiation from the Thorium230 can scarcely penetrate the flask.
 U234 and U238 also both concentrate in the aqueous layer: They are alpha emitters.
 Pa234 is a beta emitter, with a high enough energy spectrum to penetrate both the liquid
in which the source is sited, and the walls of the flask.
 Radiation from freshly born Pa234 nuclides cannot penetrate through from the bottom
layer.
2.5 The Geiger-Muller Detector
A Geiger-Muller (GM) detector in its simplest form consists of a thin wire (the anode)
mounted along the longitudinal axis of a cylindrical metal tube (the cathode). The tube is
filled with a gas at low pressure and a potential difference is applied between the anode
and cathode. Radiation entering the detector ionises the gas, producing, for each photon or
particle entering, a burst of ions. These ions are accelerated to the electrodes by the
potential difference and constitute an electrical current pulse. Successive pulses are
recorded in a counter unit.
Beta-particles are readily detected by a GM detector. Most alpha-particles cannot pass
through the detector window. Gamma-rays are so penetrating that only a small, but
constant, fraction of those entering the tube actually interact with the gas and are detected.
Figure 2: Schematic diagram of Geiger-Muller characteristic
For a fixed radiation rate the number of pulses detected depends mainly on the potential
difference between the electrodes as shown in figure 2. As the potential difference is
increased from a low value the pulse rate increases until the potential difference reaches a
range over which the pulse rate changes very little. This is called the (Geiger) plateau. At
higher voltages a continuous discharge occurs. The usual recommended operating
potential difference for a detector is approximately half way along the plateau; however,
any setting not too close to the extremes of the plateau will suffice.
Wider Applications
 The mathematics of radioactive decay is common to many areas of physics, such as
the charging and discharging of capacitors
 Counting experiments and their statistics are widespread in all sciences.
Experiment 9: Rotational motion and Moment of Inertia (MoI) with a
torsion pendulum
Safety: This experiment makes use of a relatively long thin steel rod. Care should be
taken to ensure that it is positioned below eye level and does not point towards the (eye
of) the user.
Equipment List: see section 2.1 (Apparatus).
Outline
The physics: A torsion pendulum is used to illustrate some of the concepts associated with
rotational motion (motion around an axis). In particular the importance of the shape that
is rotating is considered via its “moment of inertia” (or “rotational inertia”).
Measurements are used to reveal the (unknown) internal structure of a hollow spherical
body (a hockey ball).
Experimental techniques: This experiment provides a good example of the process of
establishing a scientific technique. A test phase (in which known samples are measured)
characterises the system (i.e. calibrates it and determines its accuracy and precision)
before it is used in anger on real (unknown) samples.
Experimental skills
 Making and recording basic measurements; experimental observation; analysis of
straight line graphs.
 Establishing a scientific instrument by characterisation with known samples, before
employing it to measure an unknown sample.
 Detailed data analysis (of the hollow sphere arrangement).
Wider applications
 At the large scale consider the moon rotating around the Earth, the Earth around the
Sun, the Sun around the galaxy. The spring semester module (PX1225 Planets and
Exoplanets) shows MoI measurements to reveal the internal structure of planets.
 At the small scale consider electrons orbiting a nucleus.
 At the human scale consider almost every machine: the motor car; the electric motor;
water-pumps; windmills….
1. Background notes.
School physics and mathematics courses discuss “translational” motion in which a body
moves in one or two (or three) dimensions. By introducing “rotational” motion, in which
a body turns about an axis (Resnick and Walker Chapters 10 and 11), any motion can be
described. For example a ball rolling down a hill is a combination of both types.
However, this experiment confines itself to illustrating the case of rotational motion and
in particular focuses on the concept of the moment of inertia (MoI), the rotational
equivalent of inertial mass.
1.1 The Torsion pendulum
This is a variation on the mass on a spring experiment in which a vertical displacement of
the mass from its equilibrium position results in simple harmonic motion (SHM) provided
that the restoring force is proportional to displacement. Here the displacement is an
angular displacement (rotation), θ of the mass and the restoring torque (rather than force)
is due to torsion (twisting) in the spring.
The condition for SHM here is that the restoring torque is proportional to the angular
displacement
𝜏 = −𝜅𝜃        [1]
where κ (kappa) is a constant known as the torsion constant.
By comparison with SHM for a mass on a spring, the oscillation angular frequency of the
system is expected to be
𝜔 = √(𝜅/𝐼)   (radians per second)        [2]
where I is the moment of inertia of the system.
1.2 Moment of Inertia, I (aka Rotational Inertia)
The moment of inertia (MoI) of a body indicates how mass is distributed about its axis of
rotation. It is a constant for a particular rigid body and axis of rotation (consequently the
axis must be specified for the value to be meaningful).
A point mass m a distance r from the rotation axis has a MoI of mr2. The MoI of a body
can be found by considering it as a collection of i particles of mass mi at different
distances ri from the axis of rotation. The MoI of the ith particle is given by 𝑚𝑖 𝑟𝑖2 and the
total MoI, I, by the sum over all particles:
𝐼 = Σ𝑚𝑖𝑟𝑖²        [3]
This equation is extremely important generally (and to this experiment) for two main
reasons:
 It indicates that masses further from the axis have a greater effect on MoI.
 It is the basis (via adding known MoI) of determining both an unknown MoI and the
torsion constant of the spring.
1.3 MoI of different shapes
Resnick and Walker (Principles of Physics, chapter 10) discuss how the MoI of
continuous bodies (of uniform density) can be found by replacing the summation with an
integral
𝐼 = ∫ 𝑟² 𝑑𝑚        [4]
where dm is a mass element and r its distance from the rotational axis. Selected results of
relevance here are presented in table 1.
Table 1. Moments of Inertia for shapes of importance here (r represents distances or radii
as appropriate, L represents length)

Shape            Axis                                      MoI, I
Point mass       Through point                             mr²
Solid cylinder   Through central axis                      (1/2)mr²
Sphere           Through centre                            Hollow, thin walled: (2/3)mr²;  Solid: (2/5)mr²
Thin rod         Through centre, perpendicular to length   (1/12)mL²
These MoI are all of the form n(mr²), with n differing between bodies. With this in mind the
value n will subsequently be referred to as the “pre-factor”.
1.3.1 Spheres
The different pre-factors in table 1 for thin-walled hollow and solid spheres are of particular
interest to this experiment. They indicate that as the wall thickness increases the pre-factor
will decrease from 2/3 to 2/5, or alternatively that by measuring the pre-factor the
wall thickness can be determined. Finding the pre-factor requires the MoI, mass and outer
radius of the sphere.
The mathematics will be illustrated by starting with the MoI of a thin walled sphere and
developing an integral for the general case and the specific case of a solid sphere.
The mass of a (thin walled) sphere of density ρ, radius r and thickness dr is given by its
density multiplied by its surface area and thickness, i.e. ρ4πr²dr. Therefore, using the
thin-walled result (2/3)mr² from table 1, an alternative form of its MoI is
I = (8/3)ρπr⁴ dr   [5]
From this the MoI of a thick walled sphere can be found using a straightforward
integration
I = ∫_{r1}^{r2} (8/3)ρπr⁴ dr   [6]
where r1 and r2 are the inner and outer radii respectively.
For the case of a uniform solid sphere of radius r, we have r1 = 0, r2 = r and m = ρ(4πr³/3),
so that
I = (8/15)ρπr⁵ = (2/5)mr²,
as expected.
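If you want a quick check of this algebra by computer, the following is a minimal sketch (not required for the lab) using the sympy symbolic-maths library, assuming it is installed:

    import sympy as sp

    r, r1, r2, rho = sp.symbols('r r1 r2 rho', positive=True)
    I = sp.integrate(sp.Rational(8, 3) * rho * sp.pi * r**4, (r, r1, r2))   # equation 6
    m = sp.Rational(4, 3) * rho * sp.pi * (r2**3 - r1**3)                   # mass of the hollow sphere
    prefactor = sp.simplify((I / (m * r2**2)).subs(r1, 0))                  # solid-sphere limit
    print(prefactor)                                                        # 2/5, as expected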
1.4 Characterising a multiple component system
If, as here, the torsion constant of the spring and the rotational inertia of a component are
unknown then experiments must be devised to find them. The approach is to add known
rotational inertia (from equations 3 and 4 and table 1) to the system and find the effect on
the frequency of the torsion pendulum.
If the unknown (starting) rotational inertia is I0 and the total added by i known bodies is
IA = Σ Ii, then the total rotational inertia is
𝐼 = 𝐼0 + 𝐼𝐴
[7]
where I0 is fixed and unknown but the (multiple) contributions to IA are known. With this
in mind equation 2 can be re-written
1/ω² = I/κ = IA/κ + I0/κ   [8]
So that a graph of 1⁄𝜔2 versus 𝐼𝐴 will be a straight line of gradient 1⁄𝜅 and intercept
𝐼0 ⁄𝜅, allowing both 𝐼0 and 𝜅 to be found.
Note: with two unknowns a minimum of two measurements are required but in practice
more will be taken to reduce errors and improve precision.
2. Experiment
2.1 Apparatus
High stability (triangular based) retort stand. Moment of inertia kit: main body; thin rod,
2x add-on masses, training hockey ball with screw thread. Oscillations are timed with a
stop watch.
Table 2. Properties of components (errors represent the range of values measured)

Body                                   Mass /g
Main body (with 2 screws)              73.0±0.1
Thin rod                               16.55±0.15
Short cylindrical mass (with screw)    16.5±0.1
Hockey ball* (diameter 7.23±0.02 cm)   157±5
Spring                                 4.70±0.02
* Hockey balls are not all the same. Those studied here are made of spin-cast PVC, giving
a thick-walled sphere with a hollow centre.
2.2 Thin rod, point masses and characterisation of the system
This experiment adds a thin rod (symmetrically/balanced) to the main body and then a
matched par of masses to the rod at different distances from the rotation axis. Resulting
changes in angular frequency illustrate the operation of a torsion pendulum (equation 2),
the role of MoI and allow the torsion constant of the spring, 𝜅 and the rotational inertia of
the main body, Io to be found.
Note:
 An assumption will be made that, as the spring is loaded and extended, its torsional
constant remains constant (measurements have been made that support this).
In the experiment the periods of oscillation are the main measurement. It is suggested that
10 oscillations (periods) are measured 3 times. To start the oscillations rotate the main
body by ~45°, taking care to minimise any subsequent up/down motion. Do this for:
 The main body (with 2 screws attached).
 The main body with the thin rod attached centrally.
 The above with the two small masses attached symmetrically (so that they are balanced)
at 8 distances from the axis.
Hint: you will need to use Table 1 to calculate the MoI of the fixed rod and the masses at
each of their positions.
 Referring to equation 8, draw a suitable graph and use it to help determine values for
I0 and κ and their associated errors. Hint: you will also need to calculate ω from your
measured periods (ω = 2π/T); see the sketch below.
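A minimal computational sketch of this analysis (not part of the lab script): the masses are taken from the properties table, but the rod length, mass positions and periods below are placeholders that you should replace with your own measurements.

    import numpy as np

    # Added MoI for each configuration: thin rod about its centre plus two point masses
    # (equation 3 and table 1).  The rod length is an assumed value - measure yours.
    m_small = 16.5e-3                                                 # kg, short cylindrical mass
    L_rod = 0.30                                                      # m, assumed rod length
    d = np.array([0.02, 0.04, 0.06, 0.08, 0.10, 0.12, 0.14, 0.16])    # m, example mass positions
    I_rod = (1/12) * 16.55e-3 * L_rod**2
    I_A = I_rod + 2 * m_small * d**2                                  # kg m^2

    # Mean measured periods for the same configurations (placeholders)
    T = np.array([1.10, 1.25, 1.45, 1.70, 1.97, 2.26, 2.57, 2.89])    # s
    omega = 2 * np.pi / T

    # Equation 8: 1/omega^2 = I_A/kappa + I_0/kappa, i.e. a straight line in I_A
    slope, intercept = np.polyfit(I_A, 1 / omega**2, 1)
    kappa = 1 / slope                                                 # torsion constant (N m per radian)
    I_0 = intercept * kappa                                           # MoI of the main body (kg m^2)
    print(kappa, I_0)

The errors in the gradient and intercept (and hence in κ and I0) should still be found using the methods described in section III.2.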
2.3 Hollow sphere (hockey ball)
Armed with the characteristics of the torsion pendulum (i.e. values for I0 and κ) the next
step will be to use our (now established) scientific instrument to measure and learn
something about an unknown object. The measurement here is quick and easy but data
analysis will take some time.
 Screw the hockey ball to the main body – there is no need to use more than ~half the
thread on the screw. Hint: the measurement is easier if you keep the thin rod (without
masses) attached.
 Carefully measure the period of oscillation.
 Calculate the moment of inertia of the hockey ball (using equation 8), then its
“pre-factor” (n = I/mr²). You may need to refer back to section 1.3 at this point.
2.3.1 Further data analysis
Extracting meaning from data (like taking measurements) is a skill, and one that
undergraduates initially struggle to engage with:
 It’s easier to simply present measurements and superficial analyses.
 It is a step further from experiments that simply illustrate a piece of coursework.
 Thought, effort and time are required, and often it isn’t obvious what the course of an
analysis might be or where it might lead.
This measurement, where a little data leads to a relatively large amount of analysis, is a
good one to use to illustrate the analysis process and the way scientists question data.
As with most problem solving the biggest hurdle is overcome by starting/getting going, so
it’s always best to start with something simple and easy:
Superficially consider the pre-factor (and its errors) for the hockey ball:
 Is it in the expected range (2/5-2/3) - and therefore reasonable?
If not then there is a problem that needs to be corrected: always start by checking
for mathematical errors (everyone makes them).
If there is still a problem it may be indicating that there are systematic errors –
finding this may be a larger task.
 If it is in the expected range where is it? Does it imply a very thin or thick walled
sphere?
(This is not to pre-judge the result, just a way of thinking scientifically).
Quantitative analysis: the aim is to produce a value for the wall thickness using
equation 6.
There are many ways of doing this – but being unfamiliar with the measurement and
analysis it is best to pick one that is intuitive, instructive, easily checked and preferably
general.
 Generate a graph of expected pre-factor (n = I/mr², i.e. I/(m r2²)) versus ratio of radii
(inner/outer); a computational sketch is given after this list.
This can be achieved by setting r2 to 1 and varying r1 between 0 and 1 – as r1 then
represents the ratio. Equation 6 is then
I = ∫_{r1}^{1} (8/3)ρπr⁴ dr = (8/15)ρπ(1 − r1⁵)      (0 ≤ r1 ≤ 1)   [9]
and the mass of the hollow sphere is
m = (4ρπ/3)(1 − r1³)      (0 ≤ r1 ≤ 1)   [10]
 Tabulate values for I and m as a function of r1 and use this to generate your graph.
 Check the graph. Does the pre-factor vary over the correct range (i.e. is the maths correct)?
 Comment upon the regime where the experiment will be most (and least) sensitive
to changes in wall thickness.
 Compare the experimental pre-factor (and its error) with the graph to find the ratio of
radii and then the inner radius (and their errors).
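A minimal sketch (not part of the lab script) of how the pre-factor curve and a crude inversion might be generated; the measured pre-factor below is a placeholder for your own value:

    import numpy as np

    # Equations 9 and 10 with rho = 1 and r2 = 1, so r1 is the ratio of inner to outer radius
    r1 = np.linspace(0.0, 0.999, 1000)
    I = (8/15) * np.pi * (1 - r1**5)                 # equation 9
    m = (4 * np.pi / 3) * (1 - r1**3)                # equation 10
    n = I / m                                        # expected pre-factor, runs from 2/5 to 2/3

    n_measured = 0.55                                # placeholder: your value from section 2.3
    ratio = r1[np.argmin(np.abs(n - n_measured))]    # radius ratio whose pre-factor is closest
    print(ratio)                                     # wall thickness is then r2*(1 - ratio)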
Don’t do the following if there isn’t time.
A further obvious stage (if you think about it) would be to calculate a value for the density
of the material of the hockey ball. As it is already known that it is made of PVC, the value
can be compared with the accepted range of density values – this might add to or subtract
from confidence in the measurements. If the material was unknown, the value might have
suggested possible materials.
Experiment 10: Simulating The Craters of the Moon.
Safety: Please don’t flick sand around – avoid eyes in particular. Wear moon-boots!
Equipment: 1 sand pit, water, 1 m rule, ball bearings.
Outline
You are to test a theory relating the diameter of a lunar crater and the energy of the
impacting asteroid which caused it. However we are going to simulate the Moon’s
surface in the lab (ok, not just make sand-castles!). By making simple measurements of
the resulting craters you can infer something about the energy (and therefore mass) that
caused them.
Experimental skills
 Making and recording basic measurements and their errors.
 Careful experimental observation.
 Thinking about the differences between simulated and real experiments.
1. Introduction
Energy of impact
There is a relation between the diameter of the crater (and height of the crater wall) and
the kinetic energy of impact of the asteroid which caused each crater. This is often quoted
as the empirical equation:
D = 2.5 (E/(ρgM))^0.25   [1]
where D is the crater diameter,
E is the kinetic energy of impact,
ρ is the mean density of Moon rock which you can take to be ~2000kg/m3
and gM is the acceleration due to gravity at the lunar surface.
This relation is taken from Horedt and Neukum (1984). This reference is included at the
end, for your interest.
2. Experimental Method:
So this experiment is simple. The aim is to make a basic test of equation [1] yourself.
Using the ball bearings and sand, make a series of measurements by dropping a ball
bearing into the sand and measuring the diameter of the crater in the sand that is
produced. (Hint: you need to best simulate the moon’s surface and gravity, so add about
half a litre of water to the sand before you start, and mix thoroughly to obtain a good
consistency for producing craters).
There are a number of ball bearings of different mass (the three you are given have
masses of 2.1, 8.4 and 88.8 grams), and you can drop them from a variety of heights as
measured by a meter rule. You can then take measurements of crater diameters and
heights as accurately as possible. Take care to think about good experimental method!
Plot the crater wall height versus diameter for each of the craters you have studied. Do
you see any correlation? What would this tell you about asteroid impact?
Now you need to calculate the impact energy and make a suitable plot of crater diameter
as a function of energy in order to test equation [1] (hint: you may need to take logs and
measure the slope of the graph).
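As a rough illustration (not part of the script, and with entirely made-up crater diameters), the impact energy can be taken as E = mgh for each drop (g here is Earth's, since the drops happen in the lab) and the slope of log D against log E compared with the 0.25 predicted by equation [1]:

    import numpy as np

    g = 9.81                                              # m/s^2
    m = np.array([2.1e-3, 8.4e-3, 88.8e-3] * 3)           # kg, the three ball bearings
    h = np.repeat([0.25, 0.50, 1.00], 3)                  # m, example drop heights
    E = m * g * h                                         # J, kinetic energy at impact

    D = np.array([0.021, 0.028, 0.047, 0.025, 0.033, 0.056,
                  0.029, 0.039, 0.066])                   # m, made-up crater diameters

    slope, intercept = np.polyfit(np.log(E), np.log(D), 1)
    print(slope)                                          # compare with the predicted 0.25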
If your results do not directly confirm the relationship, mention possible sources of error
(there are of course many).
EXTENSION, if time allows.
This is a very crude small scale laboratory simulation of large events that took place under
different conditions. Think about and list the major discrepancies and what effect they
might have on equation [1].
Tough question this. Can you think of a way of testing these findings on real Moon
craters from the Earth? (Hint: you would need images not taken at full Moon.)
Further Reading List for those who are interested or writing this up as a long
report:
Horedt G. P., Neukum G., 1984, 'Earth, Moon & Planets', vol. 31, pages 265-269.
Kaufman W. J., Freedman R. A., 'Universe', 5th edition, Chapter 9.
Zeilik M., Gregory S. A., 'Introductory Astronomy & Astrophysics', 4th edition, Chapter 4.
Spencer J. R., Mitton J., 'The Great Comet Crash'.
Experiment 11: Some end of semester fun physics
Have you ever heard of Rube Goldberg, or Heath Robinson? Try typing them into
Google to get a feel for what this experiment will be about.
Many years ago, there was also a challenge on the
TV called “the great egg race” which initially
tasked teams of people to transport an egg without
breaking it from A to B. This idea was later
extended by “Scrapheap Challenge” which set
similar challenges on a grander scale in, you
guessed it, a scrap heap.
For the ideal Rube Goldberg type machine see:
http://www.youtube.com/watch?v=qybUFnY7Y8w
Whilst we are not intending anything quite on this scale we want you to be imaginative
and transport a ping pong ball (an egg would just be too risky) from one end of a
workbench to the other, in as many interesting phases as possible, with an understanding
of the basic laws of mechanics and motion that you have been working on all semester.
You should try to include elements of linear and angular momentum, friction, (even flight
if you think you can control it). However, the sting in the tail is that at the end of the table
the ping pong ball must drop into a bucket on the floor, and when you have done this you
must be able to show a good understanding (calculation) of the typical energy stored in
the system. Estimate how much energy was needed to set the system going, how much
potential energy was stored, how much energy was dissipated, and how much kinetic
energy was left at the end when the ball plopped into the bucket.
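As a purely illustrative example of the sort of energy bookkeeping expected (every number below is a guess, to be replaced by your own estimates for your own contraption):

    m_ball = 2.7e-3                           # kg, a typical ping pong ball
    g = 9.81                                  # m/s^2
    h_bench = 0.90                            # m, bench top above the bucket (guess)

    E_potential = m_ball * g * h_bench        # J, available if the ball starts at bench height
    v_in_bucket = 1.0                         # m/s, guessed speed as the ball enters the bucket
    E_kinetic = 0.5 * m_ball * v_in_bucket**2 # J, kinetic energy left at the end
    E_dissipated = E_potential - E_kinetic    # J, lost to friction, collisions, sound...
    print(E_potential, E_kinetic, E_dissipated)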
Marks will be awarded for the creativity of the contraption and also for the creative
understanding of the physics.
Within reason you are free to use whatever you can lay your hands on (beg and borrow).
Check with demonstrators or the lab technician before using anything “unusual”. Trial
and error is allowed, but bear in mind you have the necessary mathematical tools to have
a first stab at calculating what you might need.
You could just build a simple ramp at one end, at just the right angle to overcome friction
losses, so that the ping pong ball just rolls into the bucket (too fast and it will miss).
However, would this work every time (errors), and where’s the fun in that?
Lab diaries should be kept as usual, although this really will be a diary as you try things
out and dismiss them as either wrong or not feasible. Failure is expected, and there is
certainly no model solution.
III: BACKGROUND NOTES
III.1: Experimental Notes:
INTRODUCTION TO ELECTRONICS EXPERIMENTS
In these experiments you will be required to build a variety of analogue electrical circuits
and to make measurements of potential differences, current flows etc. The following notes
give advice on building circuits and how to use test equipment, such as oscilloscopes,
multimeters and signal generators. The final section gives advice on eliminating faults in
electrical circuits.
1. Building Circuits
BREADBOARDS are used to make circuits in some experiments. This is a purpose-built
board which allows you to make all the necessary connections between components by
means of plugs and sockets and eliminates the need for soldering. Figure 1 shows a
diagram of a breadboard of the type you will use.
Figure 1: The breadboard you will use in Yr 1 experiments, with details of connections.
62
At the top of the breadboard are a set of connections which can be connected by 4mm
connectors or by bare wire if the tab highlighted is pushed in. There is a choice of having
a variable DC voltage or a constant voltage given by the yellow/green/blue and red/black
respectively. The green plug is the ground socket, and the range of voltages offered by
the variable power supply is between 11.5V.
The grid of blue sockets has its own methodical set-up too. Sets of 5 horizontal sockets
are connected within themselves, but are independent of the sets above and below.
Furthermore, sockets within a vertical column are connected; as there are four of these
vertical sets, it can be useful to set one to 0 V, one to positive voltages and one to negative
voltages. As a result, you must think about the points at which you connect a wire, as it
needs to be in the appropriate row or column in order to complete the circuit.
You are advised to construct circuits so that they resemble as near as possible the circuit
diagrams in the script. You will find this of great benefit when trying to locate faults. Note
that two interconnecting wires are indicated by a dot placed at their intersection in a
circuit diagram. Wires which simply cross each other are not connected.
2. The Oscilloscope
The basic functions of the oscilloscope are shown in Figure 2. Most of the functions are
self explanatory.
Figure 2. Front Panel of the GwInstek Digital Storage Oscilloscope
Basic functionality is controlled by:
Function Keys: Accesses the function alongside the button shown on the LCD display
Variable Knob: Increases or decreases a value and moves to the next or previous
parameter
CH1/CH2/Math: Configures the vertical scale and coupling for each channel input (CH1
and CH2), and also Math operations such as ‘add’, ‘subtract’ or ‘Fast Fourier
Transform (FFT)’ on the input waveforms
Volts/Div: Sets the y axis scale
Time/Div Knob: Sets the timebase (x-axis scale)
Autoset Key: Automatically configures the horizontal, vertical, and trigger settings
according to the input signal.
Trigger Level Knob: Sets the trigger level. This controls the scope's ability to reproduce
a steady trace on the screen.
Additional Notes on Timebase trigger
For the analysis of time varying voltages the trace on the oscilloscope screen must be
stationary. If the timebase were "free-running", that is, not synchronised to some multiple
of the repeat-time or period of the input waveform then the trace on the screen would not
be stable.
To synchronise the timebase to the repeat time or period of the input waveform a "trigger"
is used. The trigger circuit in the oscilloscope effectively 'fires' or emits a pulse when the
input voltage passes a set threshold level. This pulse is then used to initiate the timebase
cycle. The input to the trigger circuitry is normally taken from the y axis input amplifier.
Sometimes it is found necessary to apply an alternative, externally-derived voltage direct
to the trigger circuit via the external trigger input.
The trigger is sensitive to both slope and polarity of the input waveform and can be set to
fire on a particular slope and on positive or negative polarity. Hence, if a periodic
waveform such as a sinusoid is applied to the input terminals, the trigger can be set to fire
once every cycle at a fixed point in the cycle (Figure 3). The timebase cycle shown would
lead to a stationary trace representing one cycle of the input waveform.
The trigger level is shown on the display on the RHS of the axis (small arrow marker).
This is the trigger threshold voltage shown in figure 3.
Figure 3: Understanding the timebase
Notes on the AC and DC components of the oscilloscope waveform.
Figure 4: (a) a general time-varying voltage; (b) its D.C. component; (c) its A.C. component.
A general time-varying voltage such as that shown in Figure 4(a) may be divided into two
components:
(i) a D.C. component, equal in magnitude to the mean value (i.e. the average over all
time) of the waveform (Figure 4(b)), and
(ii) an A.C. component which remains when the D.C. component has been removed
from the waveform (Figure 4(c)).
The oscilloscope amplifiers may be D.C. or A.C. coupled. Try this on the waveform you
are observing. When the coupling is set to D.C. the trace represents both the D.C. and
A.C. components as shown in Figure 4(a). Setting the coupling to A.C. removes the D.C.
component just leaving the A.C. component as in Figure 4(c).
3. The Multimeter
The multimeter you will encounter in your first year experiments (and many subsequent
ones) is a hand-held digital device, shown in figure 5. It is capable of measuring direct and
alternating voltages and currents and resistance, and of testing diodes. You must select the
mode of operation on a central switch, apply your terminals correctly and select the
appropriate measuring range.
Figure 5: The Multimeter, with its display, range button, rotary switch and terminals.
4. The Signal Generator
The output from the oscillator is available from the bottom right BNC socket. The signal
amplitude can be varied by means of the attenuator (0 dB or -20 dB) and the variable
output level. Three different waveforms are available: sine, triangular and square. The
OFFSET knob works only when the DC OFFSET button is depressed.
5. Resistance Colour Codes
Resistors are colour-coded to indicate their resistance, tolerance and power-handling
capacity. The background colour indicates the maximum power of the device. You will
use only 0.5 W resistors (dark red background). The four coloured bands can be read as
described below to determine the resistance and tolerance.
The final gold or silver band gives the tolerance as follows:
gold ± 5%
silver ± 10%
Digit   Colour   Multiplier   No. of zeros
-       silver   0.01         -2
-       gold     0.1          -1
0       black    1            0
1       brown    10           1
2       red      100          2
3       orange   1 k          3
4       yellow   10 k         4
5       green    100 k        5
6       blue     1 M          6
7       violet   10 M         7
8       grey     -            -
9       white    -            -

Table 1.1: Resistor colour-codes
Example: red-yellow-orange-gold is a 24 kΩ, 5% resistor.
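The decoding can also be written as a short routine; this is just an illustration of the table above, not something you need for the lab:

    # Digits, multipliers and tolerances from Table 1.1
    DIGITS = {"black": 0, "brown": 1, "red": 2, "orange": 3, "yellow": 4,
              "green": 5, "blue": 6, "violet": 7, "grey": 8, "white": 9}
    MULTIPLIERS = {"silver": 0.01, "gold": 0.1, "black": 1, "brown": 10, "red": 100,
                   "orange": 1e3, "yellow": 1e4, "green": 1e5, "blue": 1e6, "violet": 1e7}
    TOLERANCES = {"gold": 0.05, "silver": 0.10}

    def decode_resistor(band1, band2, band3, band4):
        """Return (resistance in ohms, fractional tolerance) for a 4-band resistor."""
        value = (10 * DIGITS[band1] + DIGITS[band2]) * MULTIPLIERS[band3]
        return value, TOLERANCES[band4]

    print(decode_resistor("red", "yellow", "orange", "gold"))   # (24000.0, 0.05)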
6. Finding Faults in Electronic Circuits
During the course of the laboratory work you will probably encounter practical
difficulties. You should always try to solve these problems yourself, but if you are unable
then you should call on the assistance of the demonstrator.
Occasionally, a circuit will fail to operate because of a faulty component, but more often
than not problems arise from the incorrect use of test equipment, the omission of power
supplies from circuits, or the use of broken test leads. Faults are not usually apparent to
the naked eye, but they may be detected quite easily by following a systematic checking
procedure such as that outlined below. If after following these procedures your circuit
still doesn't work, then DO NOT HESITATE TO ASK THE DEMONSTRATOR FOR
HELP.
(i) Ensure that you understand how to use each piece of test equipment. If in doubt,
consult the demonstrator.
(ii) Examine the circuit for any obvious faults. Is the circuit identical to the circuit
diagram in the script? Are the components the correct values? Are there any loose
wires or connectors which could short out part of the circuit?
(iii) The fault may lie in the circuit itself, in the signal generator which supplies the input
signal, or in the measuring equipment. Switch on the power supply to the circuit and
apply the input signal. Use both channels of a double-beam scope to measure
simultaneously the input and output signals of the circuit. Check at this stage to see
whether the scope leads are faulty. Ensuring that you do not earth any signals (see
next section), connect the scope to the input and output of the test circuit. If there is
no input signal, disconnect the signal generator and test it on its own. If the
generator functions only when disconnected from the circuit, it implies that the fault
lies in the circuit and that it is possibly some type of short circuit, most likely
associated with incorrect earthing. If there is an input signal but no output signal,
the fault lies in the circuit.
(iv) A common fault which occurs when using more than one piece of mains-powered
equipment is the incorrect connection of earth lines. ALL EARTHS MUST BE
CONNECTED TO A COMMON POINT, otherwise the signal may be shorted out.
(v) If you have established that the fault lies in the circuitry, use your scope to examine
the passage of the signal through the circuit. Components which you regard as
faulty should be isolated or removed from the circuit for further testing.
(vi) If you trace a fault to a piece of mains-powered equipment, DO NOT ATTEMPT
TO REPAIR THE FAULT YOURSELF. Report the fault to the demonstrator or
technician and ask for replacement equipment.
HOW TO USE A VERNIER SCALE
Vernier scales are used on many measuring instruments including the travelling
microscope that we will use in the laboratory. We will begin by looking at the general
principle of a vernier scale and then look at the particular scale we will use.
Figure 5 shows a vernier scale reading zero. Note that the 10 divisions of the vernier have
the same length as 9 divisions of the main scale. If the smallest division on the main scale
is 1mm then the smallest scale on the vernier must be 0.9mm. This vernier would then
have a precision of 0.1mm and results should be quoted to ±0.1mm.
Figure 5: Vernier Scale
Let us see how it works. Examine figure 6. The position of the zero on the vernier scale
gives us the reading. Here it is just beyond 2mm so the first part of the reading is 2mm.
The second part (to the nearest 0.1mm) is read off at the first point at which the lines on
the main scale and the vernier coincide. Here it is the 4th mark on the vernier (don’t count
the zero mark). The reading is therefore 2.4 mm.
Figure 6: using the vernier
To see why examine figure 7, which is an alternative version of figure 6.
Figure 7: why a vernier works
In essence we have been finding the distance X, which is simply given by:
X = D1 – D2 = 4×1mm - 4×0.9mm = 4 ×0.1mm = 0.4mm
So that is the general principle. Let us see how the travelling microscope scale works.
In this case the smallest division on the main scale is 1 mm and the 50 divisions of the
vernier span 49 mm, so each vernier division is 49/50 mm and the precision is 1/50 mm = 0.02 mm.
As an example the reading in figure 8 is 113.68 mm.
Figure 8: example reading = 113.68mm.
Note: unlike the examples in figures 5-7 the vernier is above the main scale.
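The arithmetic of combining the two scales can be summarised in a few lines (an illustration only; the coinciding-division numbers below are invented to match the examples in the figures):

    def vernier_reading(main_scale_mm, coinciding_division, divisions=50, main_division_mm=1.0):
        """Main-scale reading plus the contribution from the vernier line that coincides.

        'divisions' is the number of vernier divisions (10 in figures 5-7, 50 on the
        travelling microscope), spanning (divisions - 1) main-scale divisions."""
        least_count = main_division_mm / divisions      # 0.1 mm or 0.02 mm
        return main_scale_mm + coinciding_division * least_count

    print(vernier_reading(2, 4, divisions=10))   # figure 6 example: 2.4 mm
    print(vernier_reading(113, 34))              # e.g. 113 mm + 34 x 0.02 mm = 113.68 mm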
III.2 ANALYSIS OF EXPERIMENTAL DATA: ERRORS IN
MEASUREMENT
Contents
1. Introduction
1.1 Important concepts of measurements and their associated “errors”
1.2. The importance of estimating errors (with examples)
2. The nature of errors (a discussion in terms of single measurements)
2.1. Classes of errors
2.2 Illegitimate errors
2.2.1 Mistakes in calculations
2.2.2 Mistakes in measurement
2.3 Systematic errors
2.4 Random errors
2.5 The interplay between systematic and random errors
2.6 A note on experimental skill and personal judgement
3. Presentation of measured values
3.1 Accuracy and precision
3.2 Significant figures
3.2.1 How many significant figures should be used for a value?
3.3 The acceptable ways of presenting measured values
3.3.1 Required format for undergraduates
3.3.2 Alternative forms that may be met
4. Calculating with measured parameters and combining errors
4.1 Error propagation: the general case
4.2 Commonly occurring special cases
4.3 Notes on performing error calculations
5. Multiple measurements (of a single parameter)
5.1 Introduction
5.2 Importance of repeat or multiple measurements (of a single value)
5.3 Introduction to statistics (distributions, populations and samples)
5.3.1 Distributions
5.3.2 Line-shapes
5.3.3 Terminology: “Populations”, “samples” and real experiments
5.3.4 Experimental information found from a distribution
5.3.5 Extraction of information as a function of sample size
5.4 The statistics of distributions
5.4.1 Mean
5.4.2 Variance (mean square deviation) and standard deviation
5.4.3 Standard error
5.5 Summary - what to use as the random error as a function of n
6. Multiple measurements: straight line graphs
6.1 Introduction
6.2 Presenting experimental data on graphs
6.3 Finding the Slope and Intercept (and their errors)
6.3.1 The two approaches
6.3.2 Finding gradient, intercept and their errors by hand
6.3.3 Finding gradient, intercept and their errors by computation
6.4 Error bars (and outliers)
6.4.1 When to use error bars
6.4.2 Outliers
6.4.3 Dealing with a small numbers of data points
6.5 Forcing lines to be straight
7. Some experimental considerations
7.1 Terminology
7.2 Comparing results with accepted values
7.3 y = mx relationships
8. Some important distributions
8.1 Binomial statistics
8.2. The normal (or Gaussian ) distribution
8.3 Poisson distribution
8.4 Lorentzian distribution
Additional reading
These notes are intended to be just a brief guide to errors in measurement. For further
details the following books are recommended:
G.L. Squires, "Practical Physics", 3rd ed., Cambridge University Press (1985)
N.C. Barford, "Experimental Measurements: Precision, Error and Truth", 2nd ed., J. Wiley
(1985)
P.R. Bevington, "Data Reduction and Error Analysis for the Physical Sciences", McGraw-Hill (1969)
“Squires” is a very good, very accessible book that is available in the library. It has a
strong emphasis on the relationship to experiment, was referred to extensively when rewriting these notes and is highly recommended.
1. Introduction
This document is intended as a reference guide for undergraduates in all years of physics
degrees in Cardiff University. Most of the concepts covered in this document are covered
in 1st year courses and may be considered an essential basis for any experimentalist.
There are many more sophisticated and specialist approaches that may be met during an
undergraduate degree course that are beyond the scope of this document.
As the title of this indicates this document is concerned with a particular aspect of the
analysis of experimental data. A good start is therefore to consider what is meant by
analysis:
“Analysis” generally is the detailed examination of “something” (in this case data). It is
performed by a process of breaking up “something” that is initially complex into smaller
parts to gain a better understanding of it.
(Data) analysis is therefore a type of problem that needs to be solved. With any type of
problem often the most difficult part is finding a way to start addressing it. One place to
start is by considering “errors”. But before that, some terminology.
1.1 Important concepts of measurements and their associated “errors”
The “true value” (of the physical quantity being measured) is as its name suggests.
Determining the best estimate of the “true value” of something is usually an important
aim of physics experiments.
The above statement causes a problem. It is not usually* possible to be certain of “true
values”, experiments can only ever provide “measured values” and discrepancies are
expected.
The word “error” in scientific terminology is usually taken to mean "deviation from the
true value" or "uncertainty in the true value"; it is not the same as "mistake".
Consequently it is the “measured values” or the “best estimate of the true value” that
must be expressed along with their associated errors. Undergraduates in this School are
asked to do this using the form**:
(measured value +/- error) units
[1]
The measured value and its error clearly define an interval (from value - error to value +
error). The situation isn’t entirely straightforward so for now all that will be claimed is
that the experiment suggests that the “true value” lies within this interval.
This document is mainly concerned with methods of deciding upon reasonable/realistic
estimates for the error. It will reveal the underlying importance of statistics and explain a
method of combining errors whilst avoiding becoming a course in mathematics.
Although there will be some discussion of how errors arise in different experimental
circumstances and their importance in extracting meaning from experiments these are not
of primary concern. However, whilst ignoring specifics, it should be recognised that to
improve understanding (our ultimate aim) it is often necessary to obtain “better”
measurements with smaller errors achieved through use of better instruments and/or
experimental technique.
* It would be wrong to say that there aren’t cases where exact true values can be found,
for example:
 How many electrons are allowed to exist in a particular atomic orbital?
 How many legs does a bird have?
** There is more on this and some alternative forms in usage later.
1.2. The importance of estimating errors
In order to get any meaning from measurements it is essential that the value obtained is
quoted with a reasonable estimate of its error. Put the other way around, measurements
without errors are meaningless.
Since the determination of errors is a time consuming process and the bane of students’
experimental lives this requires some justification.
Example: Suppose a student measures the resistance of a coil of wire and writes down:
"The resistance of the coil of wire was 200·025  at 10oC and 200·034  at 20 oC, so the
resistance increases with temperature".
Without more information, the student's statement is not justified. We must know the
errors in the measurements to say if the difference between the two figures is significant
or not. If the error is ± 0·001 , i.e. each value might be up to 0·001  higher or lower
than the stated value, then the difference between the two resistances is significant. But if
the error is ± 0·01  the two values agree within errors and the difference is not
significant.
Example: Two students perform an identical experiment to determine the acceleration
due to gravity, g (on the Earth’s surface this has a value of 9.80+/-0.02 m/s2 - note that the
error in g here arises from the variation in its value over the Earth’s surface).
The first student returns g = 11+/-2 m/s2 and the second student g = (10.2 +/- 0.3) m/s2.
What can be said about these results?
 Without considering errors, all that can be said is that the results from the second
student “appear” better than from the first.
 With errors, only the first student's result agrees with the known value.
 But then again, the smaller error quoted by the second student implies that this data
set is “better” in some sense (possibly resulting from more careful or skilful
experimentation) and hints that there may be an underlying problem with the
equipment or with the way the experiment was carried out.
Clearly there are problems with both data sets and it is not possible to get to the bottom of
this just by looking at the numbers. However, errors are necessary in order to start to get
an understanding of what is happening.
The next step in this case would be to go back to the original data to see if there were
problems with the analysis carried out. If the analysis was reasonable in both cases it may
well be that the second student has unearthed an issue with the experiment.
It would be highly unlikely in this case that some new physics has been unearthed but
with a different experiment this is one way that science works.
2. The nature of errors (a discussion in terms of single measurements)
Initially restricting discussion to single measurements of a physical parameter allows a
sensible progression through the subject. However, almost all of what is included here
applies equally to the more complicated cases with multiple measurements.
2.1 Classes of Error
The term "error" represents a finite uncertainty in a measurement due to intrinsic
experimental limitations. These limitations can arise from a number of causes, here they
will be considered as being of two distinct classes. These are:
 Systematic errors - these are the result of a defect either in the apparatus or
experimental procedure leading to a (usually) constant error throughout a set of
readings.
This type of error can be difficult to track down. One test is to perform measurements
of a well-known value; if there is a discrepancy there may well be a significant systematic
error present.
 Random errors - these are the result of a lack of consistency in either the apparatus
or experimental procedure leading to a distribution of results (if/when they are
repeated) that is equally positive and negative.
This is the type of error usually responsible for the spread of results when
measurements are repeated.
Good results are only obtained by eliminating illegitimate errors and minimising both
systematic and random errors.
In addition to the above, another type of error needs to be mentioned. It is different
because its errors are not intrinsic to the experiment, and so it is often ignored when errors
are discussed.
 Illegitimate errors (or mistakes) - these are the result of mistakes in computation or
measurement. This class of error is worthy of consideration because mistakes happen
and have to be dealt with ethically and with scientific integrity. Such errors are
usually (but not always) easily identified as obviously incorrect data points or values
far from expected.
The rest of section 2 discusses these classes of errors in turn and in more detail.
2.2 Illegitimate errors (mistakes)
Reminder: this class is usually ignored since definitions of scientific errors exclude it.
One way of viewing this is that science works on the implicit assumption that every effort
has been made to eradicate all mistakes from experimental results before they are
presented. Scientists being human, mistakes will get through (some are really difficult to
identify) but published work is open to being checked by others.
At this point it is a good idea to distinguish between mistakes in calculations and
measurement.
2.2.1 Mistakes in calculations
These are simple to deal with (when identified) as there is no judgement involved, either a
mistake has been made or it hasn’t. Students are generally poor at going back to their
original data and checking calculations even when faced with values that are out by orders
of magnitude. You will make mistakes with calculations and you will need to go back
over your numbers to figure out where. Hint, if you are out by factors of ~10, 100, 1000
etc the place to start is any conversion between units (e.g. millimetres to metres).
Example: Subtle calculation errors can arise through the number of significant figures
used in performing a calculation. In some contexts you might be fully aware - in “back of
the envelope” calculations rounding approximations such as g = 10 m/s2 or e = 10-19 C
might be made in order to facilitate quick combination of values and this is fine when
order of magnitude results are adequate. However, when accurate values are required,
premature rounding can introduce illegitimate errors.
2.2.2 Mistakes in measurement
These are far more contentious as there is a danger of consciously or sub-consciously
manipulating results possibly to fit certain pre-conceived expectations. This is scientific
fraud. But, it is also true that mistakes can be made - with a subsequent need to ignore
otherwise misleading results.
So how is this handled with scientific integrity? The general principle is not to let
yourself get into the situation where you might be tempted to fiddle results.
Example. After data collection it may become apparent that an individual data point lies
far removed from all the others.
Partly based on how far out this point lies a decision may then be made to ignore this data
point in further analysis. However, in the analysis it should be made clear that such a
decision has been made and why (if it isn’t clear), the point should be labelled as an
“outlier”. This process allows re-analysis with inclusion of the outlier - such a process
may be performed in any case in order to see its effect.
Example. During a measurement it may be suspected that a mistake has been made, for
example in counting the number of swings of a pendulum, in starting/stopping a timer or
in the settings applied to an instrument. If it is known, or suspected at the time of
performing the measurement, that an error was made then the data point or set of points
can be safely discarded. However, if the measurement only becomes suspect as a result of
the values obtained then it is not valid to discard them out of hand, they then fall into the
category of “outliers”.
In both of the above examples the issue is best resolved by performing repeat
measurements (not often possible in years 0 and 1 but required from year 2 onwards).
There will be very little further consideration of illegitimate errors in this document.
2.3 Systematic errors
Systematic errors can arise in an experiment in a number of ways. For example :
 Zero error: from use of a ruler that is worn at the end, or a voltmeter that reads a
non-zero value even when no voltage is applied across its terminals.
 Calibration error: an incorrectly marked ruler can produce a systematic error
which may vary along its length. Wooden rulers are good to about 1/2 mm in 1 metre.
Even expensive steel standards must be used at the correct temperature to avoid a systematic
error.
 Parallax error: this may occur when reading the position of an object or a pointer
against a scale (e.g. a ruler) from which it is separated. The reading can depend on the
viewing angle.
Timing errors are a common example of systematic errors. Apart from errors introduced
by a clock running too slowly there is also the tendency of a human operator (or indeed
electronics) to start a clock consistently too soon or too late (which may show up as a zero
error).
To achieve good results systematic errors must be carefully considered and reduced so
that they become insignificant (in most cases it is impossible to remove them entirely).
Two tricks that can be useful here: (i) compare the results to another experiment made
using different apparatus and using a different method; (ii) where possible use the
equipment to make measurements of known values. In both cases, if there is good
agreement there is greater confidence that the systematic error is insignificant and results
can be trusted.
2.4. Random errors
These, as mentioned, arise from fluctuations in observations so that results differ from
experiment to experiment. It is easy to see that these will arise when experiments are
performed by hand, as human factors mean that the way the experiment is performed is not
exactly the same each time. But in a similar fashion measuring instruments are also prone to variation, for
example: both mechanical and electrical instruments will vary with the ambient
temperature (and other factors), both analogue and digital instruments suffer from
rounding errors, low signal measurements are prone to the effects of noise etc.
The reduction of random errors can be achieved in three ways: improvement of the
experiment, refinement of technique and repeating the experiment.
2.5 The interplay between systematic and random errors
Illustrated in figure 1 are the results of a number of measurements of a quantity x (which
could be a length, voltage, temperature etc.).
Figure 1 (a) Random errors only, any systematic error is insignificant. (b) Significant random and
systematic errors present.
In this figure the position of the true value is marked and each small vertical line marks
the result of experimental determinations of x. In figure 1a the results are scattered about
the true value with no bias for low or high values, so you would expect the average of all
the results to be close to the true value. This is the case where random errors dominate -
any systematic errors are negligible. In figure 1b there is, in addition to random errors, a
systematic error which means that the average value is shifted to a value smaller than the true
value.
From the above it is clear that:
 Measured values close to the true value are obtained if the systematic error is small
 A small systematic error will only be revealed when the random error is small.
Less obviously:
 It is possible to have a small random error even with a large spread of data points - this is addressed later in the section on multiple measurements.
 Systematic and random errors are always present. However, systematic errors are
ignored when they are small compared to random errors.
2.6 A note on experimental skill and personal judgement
Experimental skill and personal judgement are both important. Students should find this
statement both worrying and reassuring at the same time. Worrying because simply
following a set of instructions often produces bad results, reassuring because there are
rewards for practical ability and training. Bad results can be understood to be the
consequence of having significantly larger random and systematic errors. So how can this
come about?
Example: The error in a length measured with a rule will be influenced by the fineness of
the graduations on the scale, but the position of the scale relative to the object and how the
system is viewed are important (for both random and systematic errors) as is the ability to
interpolate between graduations (mainly for random errors).
Generally, experimenters should understand the equipment in use, acquire a feel for it
and, based on this, subsequently use their judgement. This applies equally to experiments
in which the data acquisition is handled by a computer. There is a tendency for students
to have a greater trust in results obtained via a computer. This is dangerous and it is better
to treat all equipment with the same initial (healthy) mistrust.
3. Presentation of measured values
Knowing about classes of errors it is now possible to discuss the presentation of measured
values in greater detail, starting with more of the terminology that accompanies it.
3.1 Accuracy and precision
As with “errors” the terms "accuracy" and "precision" have distinct meanings in
experimental science. In fact, accuracy is closely linked to both systematic and random
errors whilst precision relates only to the random error.
 Accuracy - The accuracy of an experiment is determined by how close the
measurement is to the true value, in other words how correct the measurement is.
From the above sections it should be clear that a value can only be accurate if the
systematic error is small, however, even with a small systematic error a measurement
will lose accuracy if the random error increases.
 Precision - The precision of an experiment is determined by the size of the spread of
values obtained in repeated measurements regardless of its accuracy. As illustrated in
figure 2 a smaller spread of values corresponds to a more precise measurement. From
the above sections, a value can only be highly precise if the random error is small.
Precision and random error are essentially equivalent - the random error is often
termed the precision of a measurement.
Figure 2 Two groups of measurements of x with different precisions (for a small systematic error the values
are distributed about the true value).
Some examples may serve to illustrate these definitions:
Example: Supposing a steel rod is measured to be 1.2031+/- 0.0001 m in length, i.e. its
length has been expressed to the nearest 0.1mm. This measurement implies a precision of
0.1 mm. But suppose that, due to wear at the end of the ruler used to measure the rod, this
figure is in error by 1mm. Then, despite the quoted precision, the measurement is
inaccurate.
Note: The precision quoted here is more formally known as the “absolute precision”.
This is distinct from the “relative precision” which is given in terms of the fraction (or
percentage) of the value of the result. In this case the relative precision is 0.0001/1.2031
= 8x10-5 (or 0.008%).
Example: Suppose that the true value of a temperature of an object is 20·3440 oC: a
measurement of 20·3 +/-0.1oC is accurate (it agrees with the true value within errors); a
measurement of 20·33+/-0.02 oC is both accurate and more precise (and could be claimed
to be “more accurate”); a measurement of 20·322 +/- 0.005oC is more precise but now
must be stated to be inaccurate because it does not agree with the true value within error.
The terms “accuracy” and “precision” as defined allow results and experiments to be
considered more meaningfully. The second example illustrates that as the random error is
reduced and precision improves, systematic errors, previously hidden, start to emerge.
When systematic errors are evident there is usually little point in improving the
precision further - steps should first be taken to reduce systematic errors.
In the rest of this guidance it will be implicitly assumed that systematic errors are
negligible compared to random errors. This will allow the discussion to be presented
such that when a more precise measurement is made, the accuracy will also be greater.
Bear in mind that in real experiments this will not always be true.
3.2 Significant figures
In the previous section it was seen that as the precision of the experiment improved the
number of significant figures (s.f.s), used to quote the result, increased. By contrast, by
their nature errors are estimates (i.e. imprecisely known) and so can only be quoted to 1
or 2 s.f.s. This can be a little confusing at first and, perhaps not surprisingly, a common
mistake that students make is to use an incorrect number of significant figures. This
section uses two examples in an attempt to clarify the situation - ultimately it is simply
common sense.
3.2.1 The use of significant figures
Example: A measurement of distance can be correctly quoted as (4.85 +/- 0.02) mm or
(0.485 +/- 0.002) cm or (0.00485 +/- 0.00002) m. These values are equivalent, all we’ve
done is change the units:
 The significant figures (s.f.s) are 4, 8 and 5, hence in this case all measured values are
quoted to 3 s.f.s.
 The leading figure (4 in the above example) is the most significant figure and the
final figure (5 here) is the least significant figure.
 The position of the decimal point therefore has no bearing on the number of s.f.’s.
 The error here is quoted to one s.f..
 The number of significant figures used for the measured value is determined by the
least s.f. in the error. This is also the (fixed in this example) precision of the
measurement.
Example: To illustrate this further take the temperatures given in the example in section 3.1
- (20·3 +/-0.1)oC, (20·33+/-0.02)oC, (20·322 +/- 0.005)oC. These measured values are
quoted to 3,4 and 5 significant figures (s.f.) respectively, this contrasts with their errors
(here) always quoted to 1 s.f. (remember that a maximum of 2 s.f.s are allowed for errors).
In all cases, the size/decimal place of the least significant figure in the error determines
the least significant figure in the value and therefore the precision of the measurement.
The 3 values quoted are therefore of different precisions.
Finally, it would be wrong to quote these values in the following ways:
(20·33 +/-0.1)oC (value more precise than error)
(20·322 +/- 0.0005)oC (error more precise than value)
(20·322 +/- 0.125)oC (too many s.f.s in the error)
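This rounding can be automated. A minimal sketch (not part of the manual), assuming the error is to be quoted to one significant figure and the value rounded to the same decimal place:

    import math

    def format_measurement(value, error, sig_figs_in_error=1):
        """Round the (positive) error to the requested significant figures and the value
        to the same decimal place, returning the '(value +/- error)' part of the format."""
        exponent = math.floor(math.log10(error)) - (sig_figs_in_error - 1)
        decimals = max(0, -exponent)
        return f"({round(value, -exponent):.{decimals}f} +/- {round(error, -exponent):.{decimals}f})"

    print(format_measurement(20.322, 0.125))   # (20.3 +/- 0.1)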
3.3 Acceptable ways of presenting measured values
3.3.1 Required format for undergraduates
Reminder: the format required by the School has already been given as (measured value
+/- error) units. The subtleties of the required format will be addressed using an example,
the value of a distance S:
S = (2.36 +/- 0.04) km   [2]
 The value and error are enclosed in brackets because the units apply to both.
 The form above allows easy use and appreciation of both numbers and units.
 The alternative form (23650 +/- 40) m is equally as good.
 The alternative form (2365000 +/- 4000) cm is less easily appreciated.
 Using powers of 10 instead of prefixes (such as k for kilo) is certainly allowed.
 If a power of 10 is quoted, rather than incorporated in the units, it must go outside the
brackets, e.g. S = (2.36 +/- 0.04) x10³ m.
 If a power of 10 is quoted then the exponent will be a positive or negative integer, n.
(Some publications may insist that the exponent should be an integer multiple of 3,
i.e. use 10³ⁿ, but this is not something that we insist upon for undergraduate lab
diaries or reports.)
 The value of the quantity and its error should be quoted to the same power of 10 and
in the same units, so that they can be compared easily (e.g. 2.36 km +/- 40 m would
not be acceptable).
3.3.2 Alternative forms that may be met
The required format above is an unambiguous style of presentation but other formats are
used in which the error is not given explicitly. Students should be aware of the different
ways of presenting data as they should always be clear of the errors associated with any
experimental values that they meet.
Alternatives to the required format: The simplest way of indicating the precision of a
measurement is through the number of significant figures quoted (as is done in the
required format). Here though no error is given and an error (or precision) of 1 in the
final figure is inferred. For example, if presented with a length given as 1.23 m, the
inference is that in the required format it would be given as (1.23 +/- 0.01) m.
Clearly there is potential for ambiguity here. For example, if there was a requirement to
present all lengths in mm then with the above example there is a temptation to quote the
value as 1230 mm, which is clearly wrong as the zero is not significant. The value could
instead be quoted as 1.23 x 10³ mm.
Although not recommended here scientists often quote one more figure than is justified by
the error. In the required format this might appear as (1.232 +/- 0.01) m and it is clear that
the last figure is not significant. Where the error is not quoted it is necessary to
distinguish between figures that are significant and those that are not, and this can be done
by placing the insignificant figure in brackets or as a subscript, i.e. 1.23(2) m or 1.23₂ m.
The reason for quoting an extra figure is to avoid introducing (a form of illegitimate) error
if the value is used in subsequent calculations (see section 4 below “Calculating with
measured values..”).
Fundamental constants and material parameters: Almost certainly the most common
measured parameters that students are exposed to are the fundamental constants quoted in
textbooks, lab books, data books etc. Following that may be material properties such as
the speed of sound in air or the density of water. It can be forgotten that these parameters
are (almost always) measured parameters and so are known to limited precision. So what
to make of the values presented?
It is a fact of life that the presentation of these “known”* or “accepted”* values does lack
consistency, although in many cases it is clear what has been done. For example in the
School’s “Mathematical Formulae and Physical Constants” handbook fundamental
constants are quoted to (mostly) 3 s.f.s. Since the constants are known to much greater
precision than this, here it is obvious that the values have been rounded - and because of
this the final figure has a precision (error) of 1. In addition, constants handbooks
generally indicate the associated errors and often reference the source of the information.
The situation is less clear for example when values are rounded but not obviously so, and
it should be remembered that values quoted in old publications may be out of date.
* Undergraduate experiments often measure parameters that have well “known” or
“accepted” values. The precision with which they are established lends itself to thinking
that these are “true” values and they may reasonably be used this way in teaching
laboratories. However, bear in mind that at the limits of their precision there may well be
disagreements between the different laboratories or experiments used to determine them.
4. Calculating with measured parameters and finding overall errors (error
propagation)
Sometimes in science finding the parameter that we measure directly is the main point of
the experiment, sometimes it is necessary to incorporate it into a function, combine it with
known constants or combine a number of measured parameters and constants. For
example, the value of a resistor R can be found by measuring the current I through it and
the voltage V across it and using R = V/I.
The process of using functions or combining values is usually straightforward. However,
it is not obvious how the corresponding errors are determined, a process commonly
known as “error propagation”. (Reminder - only random errors are being considered
here.)
This section starts by considering the general case before presenting the outcomes for
commonly occurring special cases.
4.1 Error propagation: the general case
The problem here is to find the overall change of a function due to (small) changes in its
component parts. The answer can be found using calculus, if a value z is a function of x
and y, (i.e. z = f(x,y)) partial differentiation can be used to find the effect of a small change
in either x or y. (Partial differentiation is taught in the first year and the process is
essentially one of differentiating with respect to (w.r.t.) one variable whilst holding all the
others constant).
The partial differential of z with respect to x (holding y constant) is written ∂z/∂x, so that
the change in z (i.e. Δz) due to a small change in x (i.e. Δx) is:
Δz ≈ (∂z/∂x) Δx   [3]
There is a similar expression for changes in z due to changes in y and the total change in z,
i.e. the “total differential” is then given by
Δz = (∂z/∂x) Δx + (∂z/∂y) Δy   [4]
The above equation concerns two variables but clearly the number of terms on the right
hand side would increase to match the number of variables in an arbitrary function. Even
so, Δz in the above equation cannot be used as the combined error arising from the errors,
Δx and Δy, in x and y respectively. The reason is that in the above equation the signs of
both the derivatives and the errors are important. As presented then the signs of multiple
terms (2 here) could lead to the situation where two large but opposite contributions
cancel each other, resulting in an underestimated error.
One way to resolve this issue would be to add the magnitudes of the terms on the rhs of the equation. However, this is equivalent to the error contributions due to x and y always reinforcing each other, which is not realistic either. Instead, the conventional
solution is to square all of the terms, i.e.:
(\Delta z)^2 = \left(\frac{\partial z}{\partial x}\right)^2 (\Delta x)^2 + \left(\frac{\partial z}{\partial y}\right)^2 (\Delta y)^2    [5]
Δz in this equation is the overall error. The resulting errors are realistic and are often said
to have been combined in “quadrature” (quadrature is often used to mean squaring).
Example. Resistance, R = f(V,I) = V/I.
The aim is to show how the overall error for resistance is found using the values and
errors for voltage and current.
First consider the total differential:

\Delta R = \frac{\partial R}{\partial V}\,\Delta V + \frac{\partial R}{\partial I}\,\Delta I = \frac{1}{I}\,\Delta V - \frac{V}{I^2}\,\Delta I

Dividing through by R = V/I and rearranging:

\frac{\Delta R}{R} = \frac{\Delta V}{V} - \frac{\Delta I}{I}

Squaring each term and combining in quadrature:

\left(\frac{\Delta R}{R}\right)^2 = \left(\frac{\Delta V}{V}\right)^2 + \left(\frac{\Delta I}{I}\right)^2
The methodology used here for a quotient can be applied generally; the more common results are given in the next section.
4.2 Commonly occurring special cases
In the table below one or two measured parameters (A and B) and a constant k are combined through addition, subtraction etc. to produce a result Z. The error ΔZ in Z is then expressed in terms of the errors, ΔA and ΔB, in A and B respectively.

Table 1. Rules for finding errors when values are combined or functions used

Z = A + B  or  Z = A - B      (ΔZ)^2 = (ΔA)^2 + (ΔB)^2
Z = AB  or  Z = A/B           (ΔZ/Z)^2 = (ΔA/A)^2 + (ΔB/B)^2
Z = kA                        ΔZ = kΔA
Z = k/A                       ΔZ = kΔA/A^2
Z = A^n                       ΔZ/Z = nΔA/A
Z = ln A                      ΔZ = ΔA/A
Z = e^A                       ΔZ/Z = ΔA
Note: to find the error when constants are present simply consider that the error in the
constant is zero.
Example: If the length of a rectangle is (1.24 ± 0.02) m and its breadth is (0.61 ± 0.01) m, what are its area and the error in the area?
Here A = 1.24 m, ΔA = 0.02 m, B = 0.61 m, ΔB = 0.01 m, Z is the area and ΔZ is the error in the area, found by combining errors.
The area Z is the product of A and B, i.e. Z = AB = 0.7564 m^2. The appropriate rule is

(ΔZ/Z)^2 = (ΔA/A)^2 + (ΔB/B)^2
         = (0.02/1.24)^2 + (0.01/0.61)^2
         = 2.602 x 10^-4 + 2.687 x 10^-4 = 5.289 x 10^-4

so that ΔZ/Z = 0.023, or ΔZ = 0.023 x 0.7564 = 0.0174 m^2.
So the area can be expressed as (0.756 ± 0.017) m^2 or as (0.76 ± 0.02) m^2.
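Such calculations are easy to check numerically. Below is a minimal Python sketch (the function name is purely illustrative) that applies the quadrature rule for a product to the rectangle example above:

from math import sqrt

def product_error(A, dA, B, dB):
    """Error in Z = A*B, combining fractional errors in quadrature."""
    Z = A * B
    dZ = Z * sqrt((dA / A) ** 2 + (dB / B) ** 2)
    return Z, dZ

# Rectangle example: length (1.24 +/- 0.02) m, breadth (0.61 +/- 0.01) m
area, d_area = product_error(1.24, 0.02, 0.61, 0.01)
print(f"Area = ({area:.3f} +/- {d_area:.3f}) m^2")   # (0.756 +/- 0.017) m^2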
4.3 Important notes on performing error calculations
Performing error calculations can be tedious and time consuming. But it has to be done
and it is worth paying attention to the numbers. It is inevitably true that different
parameters will have different contributions to the final error. Being aware of this can be
useful in at least two ways:
• Error contributions that are significantly smaller than others may reasonably be left out of calculations, saving time. This is easily performed by comparing the relative precision of the contributions, i.e. comparing ΔA/A with ΔB/B etc.
• The relative precision of the different contributions is instructive in indicating weaknesses in the overall experiment, e.g. where to spend effort to find improvements.
5. Multiple measurements (of a single parameter)
5.1 Introduction
As has already been mentioned, repeated or multiple measurements are important in
experimental work associated with the reduction of random errors. In fact one of the
cardinal rules of experimental work is that whenever possible repeat measurements should
be made.
This section is concerned with repeated measurement of a single parameter. The more common situation in physics labs, where a variable is changed and the resulting x, y data set is plotted on a (preferably straight-line) graph, is dealt with later.
5.2 Importance of repeat or multiple measurements (of a single parameter)
A single measurement of a parameter relies on (often personal) estimates of an error based
on the equipment being used (for example on the smallest graduation of a meter or rule).
When repeated measurements are made:
• The second measurement acts as a check that the first one is reasonable, i.e. not subject to gross error through carelessness.
• A relatively small number of repeats indicates the range within which the true value lies.
• A relatively large number of repeats indicates the range and the distribution of measurements - and allows the (random) error of the measurement to be reduced, so improving its precision.
If an estimate is made of the random error then repeated measurements can act as a test of
whether this was correct and therefore that the measurement was understood.
As the number of measurements n increases from 1 to infinity, the way that the data is handled and the error determined changes; however, the mathematics follows statistically accepted rules*. In the following discussion attention will be paid to the number of measurements, as this has clear experimental relevance. In teaching laboratories many experiments involve n ~ 8 and it is possible to get away with a superficial understanding of
statistics. In research the number of measurements tends to relate to the research field. In
astronomy there are large numbers of stars and galaxies to examine, n can be large and
there’s no escaping statistics.
* The terminology of statistics will be introduced without its mathematical justification in
this document (see statistics books or further reading for more maths).
5.3 Introduction to statistics (distributions, populations and samples)
In this section the terminology of statistics relating to data distributions is introduced and
related to experimental error analysis/determination.
5.3.1 Distributions
As the number of measurements increases, in the absence of systematic errors, we expect the
mean to become closer to the true value. In other words it will always be the case that the
mean of a set of values is the best estimate of the true value (more on this below). It is
also reasonable to expect more values close to the true value than further away, i.e. the
distribution of measurements has a central tendency and is expected to peak at or close to
the true value. With a reasonable number of points the distribution can be plotted by
plotting the number of points that occur in a certain interval against the measurement
value. As the number of points increases the interval used can get smaller until, for an
infinite number (the limiting case), the distribution is continuous and is known as the
“limiting distribution”. An example of a (close to) limiting distribution is shown in
figure 3 below.
In figure 3 the y-axis shows the number of measurements having a given value
(continuous line) or number of measurements in a certain interval (bars). Often the y axis
shows either the fraction of measurements in a certain interval (bar charts) or the
probability of having a certain value (limiting distribution). This is achieved by
normalisation - dividing by the total number of measurements. The result of
normalisation is that the sum of all probabilities or the integral over all measured values
will be unity in both cases.
Figure 3. Distribution of a set of data. A continuous line and three bars are shown to represent a large
number of data points.
5.3.2 (Spectroscopic) line-shapes
Very closely related to the distributions described in the previous section are line-shapes
of various origins, for example the intensity of atomic emission lines versus wavelength
or the amplitude of oscillation of a resonant mechanical system versus frequency.
Different although related terminology can be used to describe the two cases. The
statistical terminology for distributions will be discussed later but the general terminology
for line-shapes will be introduced here.
Figure 4 shows an intensity versus frequency line shape (actually the same shape as the
distribution in figure 3). On the assumption (as it is not shown) that the intensity falls to zero well away from the "peak", the "full maximum" of the intensity is shown along with its full width at half maximum (FWHM). The FWHM, being independent of the intensity of the peak, is a convenient and often quoted way to describe line-shape features. A peak
that is symmetrical will often be characterised by its peak intensity, position (a frequency
in this case) and its FWHM. Note: The term “Half width” is sometimes used and has the
same meaning as FWHM - it can be understood to mean the width at half height.
An asymmetric peak (as figure 4 is) might be additionally characterised by its half width
at half maximum (HWHM) values either side of the peak position (i.e. that of the
maximum of the peak).
[Figure: intensity against frequency for an asymmetric line shape, with the "full" maximum, the FWHM and an HWHM marked]
Figure 4. A (slightly) asymmetric line shape, perhaps of a spectroscopic feature, with its full maximum (i.e. peak intensity), its full width at half maximum (FWHM) and its half width at half maximum (HWHM).
5.3.3 Terminology: “Populations”, “samples” and real experiments
Returning to distributions, although it is the limiting distribution that characterises an
experiment, real experiments have a finite number of data points and the role of statistics
is to extract the best estimates of true values and associated errors. How this is achieved
will be discussed later, for now only the general principles will be of concern.
If the limiting distribution is viewed as resulting from all possible measurements then a
real experiment may be viewed as a limited “sample of all possible measurements”. A
single measurement then may take any value within the distribution and is more likely to
be found near to the peak, i.e. the mean or true value. In many experiments it’s possible
to conceive of an infinite number of repeats and this set of data is known as the
“population”. In other words a real experiment takes a “sample” of a “population” of
measurements.
The origin of the term population may be understood by thinking of statistics more
widely. For example surveys may be made of political views in Wales. Not all people
will be included, those that are constitute the “sample” whereas all possible people in
Wales constitute the “population”. Likewise, in astronomy a survey may consider a
sample of the (finite) population of galaxies.
5.3.4 Experimental information found from a distribution
Experimentally what is required from a sample is the best estimate of the true value,
sometimes also the shape of the limiting distribution but especially its (random) error:
• The best estimate of the true value is easy - it is simply the mean value of the "sample".
• The shape of the limiting distribution is clearly of interest because its width corresponds to the "precision of the apparatus"* or the "experimental precision"*, i.e. in some sense it is a measure of how good the experiment is, independent of the sample size (although a large sample size is required to find it reliably).
• The random error ("precision of the experiment/measurement"*) not only improves with increasing sample size but is also estimated differently depending on the sample size.
* With two types of precision and wording that is ambiguous it is very easy to get confused. The trick here is probably to be clear about the concept and not worry about the terminology (if you come across wording that is not ambiguous please let us know).
5.3.5 Extraction of random error as a function of number of measurements (sample
size)
It is important to emphasise that here the concern is with cases where more than one
measurement is made and the random error is determined by analysing the distribution or
spread of data.
The following discussion concerns an increasing number of measurements (samples) of an arbitrary experiment.
As mentioned previously a single measurement (n = 1) provides one sample of the
limiting distribution and although it is more likely to be close to the true value (rather than
out in the wings) occasionally the experimentalist will be unlucky.
Very quickly with n = 2,3,4.. averaging gives a lot more confidence in our estimate of the
true value and more importantly for errors starts to give an idea of the limiting
distribution. At this point the error will almost certainly be taken to be half the range or
spread of the values (because we quote ± error).
With a few more measurements a dilemma arises. The range/spread of values is likely to
increase whereas the random error should sensibly decrease. One valid approach is to use the range in which 50% of the values fall to indicate (twice) the error; this is known as the "probable error". This approach is illustrated in figure 5; it is convenient for 8 or 12 data points, where the outer 4 or 6 points respectively can be discarded.
[Figure: eight data points spread along a value axis, with the average (best estimate of the true value) marked and a band of width 2Δx containing the central 50% of the points]
Figure 5. Average value and probable error range from a set of eight data points.
The probable error, however, suffers a similar limitation to the range: it does not progressively decrease with increasing n. Neither is it a required step, as statistical techniques (described below) may be used. More importantly, experimental work always requires choices to be made, and a good experimentalist will be clear on the method and the logic applied in deciding on the approach used.
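As an illustration of the probable-error recipe above, the following minimal Python sketch (the eight readings are invented) sorts the measurements, keeps the central 50% and takes half their range as the error:

measurements = [9.78, 9.91, 9.83, 9.75, 9.88, 9.95, 9.81, 9.86]  # invented repeat readings

values = sorted(measurements)
n = len(values)
central = values[n // 4 : n - n // 4]       # keep the central 50% (here the middle 4 of 8)

best_estimate = sum(values) / n             # the mean of all the values
probable_error = (max(central) - min(central)) / 2

print(f"best estimate = {best_estimate:.3f}, probable error = +/- {probable_error:.3f}")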
With large numbers of measurements (let's say n >> 10), and even before a well-defined distribution emerges, statistical techniques are used - although cautiously, because this is the regime of small-number statistics. With very high n and a well defined
distribution it is clear that its mean (our best estimate of the true value) can be found to
high precision. In fact its error approaches zero as the number of measurements
approaches infinity. What this is saying is that, even when the precision of the experiment is low, with enough measurements a value can be found with a low error. But, as you would expect, it is easier to get a low error (i.e. using fewer measurements) when the experimental precision is high - the precision of the experiment does matter. The next
section introduces the formal mathematics of this process.
Note: it isn’t easy to say how large n needs to be in order for a distribution to become well
defined. However, as a guide, with n ~ 50 it would be reasonable to draw a distribution
split into 4 or 5 intervals. If nothing else it should be clear from this that in order to
approach a limiting distribution n needs to be very large indeed.
5.4 Formal statistics (of distributions)
All experimental results are affected by random errors. In practice it turns out that in the
majority of cases the distribution function which best describes these random errors is the
“normal” or “Gaussian” distribution. Other mathematically described distributions
include “Poisson”, “Binomial” and “Lorentzian”. Distributions such as the one
presented in figure 3 may not have a basis in mathematics. However, all can be treated
with the same statistics.
Reminder: statistics work well with large but not small numbers of measurements - the
term “small number statistics” doesn’t have a poor reputation for nothing.
5.4.1 The mean
If n measurements of a quantity x are made and these are labelled x1, x2, x3,….xn then the
mean is given by:
87
xn 
1
1 n
( x1  x2  x3  ...  xn )   xi
n
n i 1
[6]
Often used alternative symbols for the mean, x n include x , Xn and μ.
5.4.2 Mean square deviation (variance) and standard deviation(s)
Clearly individual values x_i will differ from \bar{x}_n and these differences are intrinsically linked to the nature of the distribution. The deviation of a particular measurement x_i from \bar{x}_n is given by

\delta_i = x_i - \bar{x}_n    [7]
Clearly deviations may be either positive or negative, and both the sum and the mean of the deviations will be zero. To avoid this the absolute value of the deviations could be used, but it makes more sense mathematically to use the square of the deviations. The sum of square deviations would simply increase with the number of measurements, whereas the mean value would be expected to converge to a value representative of the limiting distribution. The mean square deviation (variance) of n measurements, \sigma_n(x)^2, is given by

\sigma_n(x)^2 = \frac{1}{n}\sum_{i=1}^{n}\delta_i^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x}_n)^2    [8]

From this it is a short step to the root mean square deviation, normally known as the "sample standard deviation", \sigma_n(x):

\sigma_n(x) = \left[\frac{1}{n}\sum_{i=1}^{n}\delta_i^2\right]^{1/2} = \left[\frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x}_n)^2\right]^{1/2}    [9]

The term sample standard deviation is used since it is calculated from a sample of n measurements - it is important to include the subscript. It is sometimes written simply as \sigma_n. Note: although standard deviations can be calculated for small numbers of values, it doesn't make sense to do so, as discussed earlier.
The standard deviation is a useful quantity: it has the same units as the measured value, relates to the width of the distribution, and is often described as the precision of the measurement. However, as hinted above, there is more to this story.
In the same way as it is the limiting value of the mean that represents the true value, it is
the limiting value of the sample standard deviation that is the standard deviation (and
represents the precision) of the experiment. It is also possible to conceive of a correction
to the sample standard deviation, σn(x) to get a better estimate for the population standard
deviation σ(x). This best estimate of the standard deviation is usually denoted sn(x).
(Again, because it is easy to confuse them) here are the three versions of standard deviation with their meanings:
Sample standard deviation, σn(x) - The standard deviation that can be calculated from n
measurements.
Standard deviation, σ(x) - The (unattainable) limiting or “true” value of standard
deviation, also quoted as the true precision of the experiment.
Best estimate (or adjusted) standard deviation, s_n(x) - a variation of the sample standard deviation, using \sigma_n(x) and n to get a best estimate of \sigma(x). s_n(x) is given by

s_n(x) = \left(\frac{n}{n-1}\right)^{1/2} \sigma_n(x)    [10]
5.4.3 Standard error (standard deviation of the mean), \sigma(\bar{x}_n)
As discussed above, the standard deviation gives a measure of the width of a distribution,
whereas what is required is the error in the mean value, a value that can become very
small as the distribution is better known (through increasing the number of measurements
n).
The error in the mean will be taken as given by the “standard error”. Mathematically, the
standard error is found by finding the standard deviation of a number of samples of the
mean value. This explains why the symbol used appears very similar to that for standard
deviation.
If the limiting or true standard deviation, \sigma(x), is known, then the standard error for n measurements, \sigma(\bar{x}_n), is given by

\sigma(\bar{x}_n) = \frac{\sigma(x)}{n^{1/2}}    [11]
However, the true standard deviation cannot be known, and so similar expressions may be considered using either \sigma_n(x) or s_n(x). Since the labelling is getting tricky/confusing, the same symbol will be used for the standard error below, but with words of explanation attached:

\sigma(\bar{x}_n) = \frac{\sigma_n(x)}{n^{1/2}}    (standard error using the sample standard deviation)    [12]

\sigma(\bar{x}_n) = \frac{s_n(x)}{n^{1/2}} = \frac{1}{n^{1/2}}\left(\frac{n}{n-1}\right)^{1/2}\sigma_n(x) = \frac{\sigma_n(x)}{(n-1)^{1/2}}    (best estimate of the standard error)    [13]
Given that n will be quite large where it is applicable to use standard errors (i.e. when
distributions have emerged) there is little difference between the two expressions.
However, it is now possible to state that the value for a measurement, X, can be expressed as

X = \bar{x}_n \pm \sigma(\bar{x}_n)    [14]
In experimental terms the 1/n^{1/2} dependence of the standard error (for large n) indicates
that although it is possible to use repeats to find a value to high precision/small error this
is hard work and it is often better to work on improving the precision of the measurement.
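The quantities in equations [6]-[14] are easily evaluated with numpy; in this minimal sketch (the repeat readings are invented) the ddof argument switches between the sample and best-estimate standard deviations:

import numpy as np

x = np.array([9.78, 9.91, 9.83, 9.75, 9.88, 9.95, 9.81, 9.86])  # invented repeat readings
n = len(x)

mean = x.mean()                      # best estimate of the true value, eq [6]
sigma_n = x.std(ddof=0)              # sample standard deviation, eq [9]
s_n = x.std(ddof=1)                  # best-estimate standard deviation, eq [10]
std_error = s_n / np.sqrt(n)         # best estimate of the standard error, eq [13]

print(f"X = {mean:.3f} +/- {std_error:.3f}")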
5.5 Summary - what to use as the random error (precision) as a function of n

• Single measurement - estimate of the error.
• Small number of measurements - whilst using best judgement: the range of the data might be used for a very small number of measurements; with a few more measurements (and possibly taking convenience into account) choose between the probable error and possibly the standard deviation.
• Large number of measurements - with the distribution emerging, use the standard error.
Some of the first year lab experiments are designed to illustrate how this works in
practice. However a guiding principle is to be open and clear about what error is chosen
and why.
6. Multiple measurements: straight line graphs (y = mx +c)
6.1 Introduction
The previous section discussed multiple measurements of the same value. However this
is not how laboratory physics experiments are usually performed. If a quantity y depends
upon another x, then rather than fixing on a value of x and making repeated measurements
of the corresponding value of y, it is usually much more revealing to vary x. The form of
the dependence of y upon x is then most simply demonstrated by plotting a graph. The
statistics of repeat measurement in section 5 still applies but in a modified form - think of
the different points as being in some sense a repeat.
The understanding and use of graphs is an essential skill. Teaching laboratories
concentrate on using straight line graphs, which are by far the easiest to analyse, and great
efforts are made to ensure that graphs emerge in this form.
6.2 Presenting experimental data on graphs
Scientific experiments examine cause and effect relationships where changing one
variable (known as the independent variable) causes a change in a second (dependent)
variable, both of which are measurable.
(Important: Conventionally the independent variable is plotted on the horizontal (x) and
the dependent variable is plotted on the vertical (y) axes of the graph respectively.)
For example, how the length of a spring depends upon the weight hung from its end may
be studied. The length is the dependent variable so it is plotted on the y axis, as in figure
6.
length / m
0.4
0.3
0.2
0.1
weight / N
0
0
1
2
3
4
5
6
7
8
9
Figure 6. Example graph, spring length versus weight (the line through the data is a “best fit” line).
On the graph, as is quite common, a line through the data is shown. The meaning of any
such line should be made clear, in this case the figure caption indicates that the line is a
“best fit”. In other words it is the straight line that best represents the data and from
which information is extracted. In this case, from the gradient a value for the spring
constant may be determined. The alternative is that a line is a "guide to the eye": a line with no scientific meaning. In a lab diary this information can be given at any convenient place on the graph; in a report, inclusion in the figure caption is usually best.
Error bars can also be included on graphs, this is discussed in a later section.
6.3 Finding the Slope and Intercept (and their errors)
The equation for a straight line is given by:
y = mx + c    [15]
where m is the gradient (or slope) of the line and c is the intercept with the y axis. It is
necessary to find values and errors for both, and two approaches are possible.
6.3.1 The two approaches
By hand, where a graph (drawn in a lab diary) is analysed using the judgement of the
experimentalist. This approach, although subjective, gives students an understanding of
the process of data analysis and it keeps students “close” to the data. Both of these are an
essential part of the process of equipping students with the skills and experience to
develop as a scientist. It is used by preference in the first year laboratory (and still would
be even if there were enough PCs readily available to use).
By computer, where the data is fed into software (such as EXCEL or Python) that graphs
and analyses the data. This approach has the advantage of using well defined statistical
techniques and in these terms at least giving “correct” answers. There are a number of
disadvantages: students lose their critical faculties and tend to believe any number emerging from a PC or calculator (regardless of the quality of the data entered); and extracting usable error information can often be more troublesome than working by hand.
6.3.2 Finding gradient, intercept and their errors by hand
The approach is illustrated in figure 7. Having drawn the best straight line by eye, the gradient m and the intercept c can be determined. Two well separated arbitrary points on the best fit line are chosen ((x1,y1) and (x2,y2)). This is a statement that it is the best fit
line that represents the experiment (students are often tempted to use extreme measured
data points - this is incorrect). From the two selected points the gradient can be
calculated:
dy y 2  y1
[16]
m

dx x 2  x1
c can then be found using the straight line equation, m and either of the two points (or
indeed any point on the best fit line):
c  y  mx
[17]
For clarity a right angled triangle is drawn linking the two chosen points on the best fit
line.
[Figure: a best fit line through x-y data, with two well separated points (x1,y1) and (x2,y2) marked on the line and a right-angled triangle showing dy = y2 - y1 and dx = x2 - x1]
Figure 7. Determining m (= dy/dx) from a best fit line. Note that (x1,y1) and (x2,y2) are points on the best fit line, i.e. they are not data points.
Finding the errors is achieved by repeating the above procedure for one or two other
straight lines which are as far away in gradient (one larger, one smaller) from the data as
possible, but which are judged to be nevertheless still reasonably consistent with the
data. These are known as “worst possible fit lines” or “worst fit lines”. As shown in
figure 8 the lines should pivot about the approximate centre of the data points. These
lines provide two further values for m and c from which errors in m and c can be
estimated. In practice it is allowable to use just one worst fit line; this saves time and is justified since it is error estimates that are found.
However, remembering back to Gaussian distributions arising from repeated
measurements of the same value there is clearly a problem with this approach. With more
measurements the errors in m and c must decrease, whereas with this simplistic approach
more measurements are likely to sample a larger spread about the best fit line and
therefore result in slowly increasing errors.
[Figure: x-y data with a best fit line and two worst fit lines of larger and smaller gradient pivoting about the centre of the data]
Figure 8. Best and worst-possible fit lines used to estimate errors. The lines pivot about the centre of the data range.
In effect the worst fit lines provide estimates of the standard deviations in m and c. Estimates of the standard errors in m and c can be found by dividing these values by n^{1/2}, where n is the number of data points (dividing by (n-2)^{0.5} is probably better, but the worst fit lines are generated by eye so let's not worry). The errors then decrease (as must be
expected to happen) with the number of data points and match better to cases where
averages of repeat measurements at different points are taken (e.g. timing an event 3 times
for each point) and also when errors are calculated by computer fitting packages (see next
section).
Summary:

estimated standard error in m:    \sigma(m_n) = \frac{m_{worst fit} - m_{best fit}}{n^{0.5}}    [18]

estimated standard error in c:    \sigma(c_n) = \frac{c_{worst fit} - c_{best fit}}{n^{0.5}}    [19]
6.3.3 Finding gradient, intercept and their errors by computation
This section gives the mathematics for determining gradients, intercepts and their errors
using a linear regression technique known as least squares fitting of a straight line. It may
be useful to think of the best fit line as the “true value” with points distributed about it.
Given n pairs of experimental measurements (x1,y1), (x2,y2), ..., (xn,yn), which have (the
same) errors in the y-values only*, the gradient (m) and intercept on the y axis (c) of the
best straight line (y = mx + c) through these points can be found by minimising the
squares of the distances of the points from the line in the Oy direction. The minimum is
found by differentiation and this leads to the analytical expressions that follow.
With the summations running from i = 1 to i = n, and defining (following Squires):

the "residual" for the ith data point (the deviation in y of each data point from the best fit line):
d_i = y_i - m x_i - c

\bar{x} = \frac{1}{n}\sum x_i,    \bar{y} = \frac{1}{n}\sum y_i

D = \sum x_i^2 - \frac{1}{n}\left(\sum x_i\right)^2
E = \sum x_i y_i - \frac{1}{n}\sum x_i \sum y_i
F = \sum y_i^2 - \frac{1}{n}\left(\sum y_i\right)^2

Then

m = \frac{E}{D}
c = \bar{y} - m\bar{x}

(\Delta m)^2 = \frac{1}{n-2}\,\frac{\sum d_i^2}{D} = \frac{1}{n-2}\,\frac{DF - E^2}{D^2}

(\Delta c)^2 = \frac{1}{n-2}\left(\frac{D}{n} + \bar{x}^2\right)\frac{\sum d_i^2}{D} = \frac{1}{n-2}\left(\frac{D}{n} + \bar{x}^2\right)\frac{DF - E^2}{D^2}
Mathematical software might have this programmed in, but many packages, EXCEL for example, give the "product-moment correlation coefficient" R (actually R^2 is usually given), which is a measure of the quality of fit (with R = ±1, or R^2 = 1.0, representing a perfect fit/correlation). This is insufficient, as error values are required. In terms of the quantities above,

R^2 = \frac{E^2}{DF}
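The expressions above translate directly into a few lines of numpy; the following is a minimal sketch (the function name and example data are invented for illustration):

import numpy as np

def least_squares_line(x, y):
    """Best fit y = m*x + c with standard errors, following the expressions above."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    D = np.sum(x**2) - np.sum(x)**2 / n
    E = np.sum(x*y) - np.sum(x)*np.sum(y) / n
    F = np.sum(y**2) - np.sum(y)**2 / n

    m = E / D
    c = y.mean() - m * x.mean()
    dm = np.sqrt((D*F - E**2) / D**2 / (n - 2))
    dc = np.sqrt((D/n + x.mean()**2) * (D*F - E**2) / D**2 / (n - 2))
    return m, dm, c, dc

# invented example data
x = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
y = np.array([0.3, 2.1, 4.4, 5.8, 8.3, 10.1, 12.2, 13.8, 16.4, 18.1])
m, dm, c, dc = least_squares_line(x, y)
print(f"m = {m:.3f} +/- {dm:.3f},  c = {c:.3f} +/- {dc:.3f}")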
With the constraint that the straight line is required to pass through the origin (0,0), i.e. c = 0, the best value for m is

m = \frac{\sum x_i y_i}{\sum x_i^2}

with error

\Delta m = \left[\frac{\sum y_i^2 - 2m\sum x_i y_i + m^2\sum x_i^2}{(n-1)\sum x_i^2}\right]^{1/2}

However it isn't at all clear when this may be used. It certainly should not be used on the basis that an equation indicates that a straight line graph is expected to go through the origin. A systematic error in the experiment might shift the data such that the gradient is unaltered but the line does not pass through the origin. Then the consequences of forcing the line through the origin are to lose information on the presence of systematic errors and at the same time to introduce a systematic error into the gradient.
* This draws attention to an important point concerning statistical analysis. Insignificant errors in the independent variable are often the case experimentally (where the value of x is set and the value of y measured), but this is also a necessary condition for the commonly used statistical treatment of errors in gradient and intercept (software that calculates errors in gradient and intercept almost certainly makes this assumption). Treatments are much
more involved if the errors in both y and x are significant or if the error in individual
points varies.
6.4 Error bars (and outliers)
When plotting graphs it can sometimes be useful to include “error bars”. An error bar is
a way of drawing (an estimate of) the (random) error in the measured value of each data
point on the graph. It is illustrated in figure 9 for the case where only the errors in y are
significant and it is implied that the errors on x are insignificant. If the x error is
significant a horizontal bar should be included.
[Figure: x-y data points with vertical (y) error bars and a best fit line; one point lies well below the line]
Figure 9. Example use of error bars. The line is a best fit that excludes the outlier (the point significantly below the best fit line and therefore ignored in the analysis).
Error bars are generally only included where there is a clear benefit compared to their absence: not only do they take time to insert but they also complicate graphs (especially a problem in lab diaries, where best fit and worst fit lines, if drawn by hand, are also present).
Before discussing the cases where there are “clear benefits of error bars” it is worth
dwelling on what they represent. Whilst it is possible to use error bars to represent
systematic errors the convention is that they represent random errors. Deviation from
convention is permitted provided it is clearly explained. Random errors are best
determined from repeated measurements; however, it is often the case that points on a graph correspond to single measurements. It is often possible to estimate the random error for a single measurement (for example from the minimum graduation of a meter or rule), but students are notoriously pessimistic about (i.e. they overestimate) random error sizes, perhaps confusing them or mixing them up with possible systematic errors.
6.4.1 When to use error bars
Testing understanding of the measurement
Suppose that the error bars in figure 9 were estimated from single measurements for each
point. The fact that the scatter in the data points about the best fit line is of the same size
as the error bars supports the view that the experimental errors are well understood. It
should be a concern when error bars are significantly larger than the scatter.
Significance of deviations from theoretical curves
The theoretical curve that the data is compared to here is a straight line. Here error bars
make it easier to decide whether deviations from a straight line are significant or not.
(In scientific jargon anything that is “insignificant” is small enough to be ignored)
This is illustrated in figure 10 a and b which show the same set of data but with different
error bars.
[Figure: y values /a.u. against x values /a.u.; two versions (a) and (b) of the same scattered data with a best fit line, (a) with large error bars and (b), shifted down, with small error bars]
Figure 10. (a) Data with best fit line and large error bars; (b) the same data shifted (down) with small error bars (a.u. - arbitrary units).
As with any experiment there is scatter in the data. In figure 10a the error bars all encompass the straight line and therefore the deviations from the best fit line cannot be considered significant. By contrast, in figure 10b, with smaller error bars, the deviations must be considered significant, implying that either: (i) the theoretical model is incorrect, or (ii) there are additional unknown or unconsidered experimental factors causing a deviation.
The above discussion illustrates both the importance of careful consideration of errors and
also that extra information is revealed as errors are reduced.
Final note: here the deviation of a number of data points was considered. The significant
deviation of a single data point is treated a little differently (see also outliers below).
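Where error bars are judged worthwhile, they are straightforward to add when plotting by computer. A minimal matplotlib sketch, with invented data and error sizes:

import matplotlib.pyplot as plt

x = [1, 2, 3, 4, 5, 6]                      # invented data
y = [2.1, 3.9, 6.2, 8.1, 9.7, 12.2]
y_err = [0.4, 0.4, 0.4, 0.4, 0.4, 0.4]      # estimated random error on each y value

# fmt='o' plots markers only; capsize draws small caps on each error bar
plt.errorbar(x, y, yerr=y_err, fmt='o', capsize=3)
plt.xlabel('x values /a.u.')
plt.ylabel('y values /a.u.')
plt.show()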
Significant errors in both y and x and a variation of size of error bars
Since the commonly used analytical method of determining the line of best fit and the errors in m and c is based on the errors in each point being significant only in y, the cases where this does not apply need to be treated with care. A first step towards dealing with (or at least acknowledging) this is to provide x as well as y error bars when appropriate.
The error analysis required when the errors are significant in both x and y is beyond the
scope of this document.
Similarly the commonly used analysis assumes that the y errors are the same for each data
point and a first step towards acknowledging when this is not so might be to show these
varying error bars.
Situations where varying errors may occur:
• Errors based on repeat measurements will vary if the number of repeats is varied.
• Some experimental conditions might naturally lead to varying errors (for example, the determination of frequency from a fixed number of oscillations).
• When combining measurements to obtain a "y" value.
6.4.2 Outliers
Returning to figure 9 in drawing the best fit line only 5 points were taken into
consideration, whilst the 6th (the point below the line) was excluded. An excluded point
is known as an “outlier” and clearly points should not be categorised as outliers lightly.
Potential outliers may sometimes occur due to a mistake in a reading or in the setting of an experimental condition, and care must be taken when dealing with them. Working on the assumption that the first indication of its presence was on plotting a graph (probably in a lab diary):
• First check that all arithmetic and the plotting of the data point were performed correctly.
• Do not rub the point out or ignore it - apart from anything else it may in fact be correct.
• Make a decision about whether to include or exclude the point from the analysis (i.e. whether it is treated as an outlier or not) and indicate this clearly.
• If possible determine whether an error was made in the measurement - by going back and performing repeats (this isn't usually possible in year 0 and 1 labs, is often possible in year 2 and is essential in year 3 and 4 projects).
• The earlier an outlier is spotted the easier it is to perform repeat measurements. This is aided by drawing graphs as quickly as possible; the ultimate is to draw graphs as you go along. Computers are very useful here, but very rough sketch graphs are a useful alternative.
Consideration of whether a point should be considered as an outlier takes us back to error
bars. In figure 9 it is somehow reassuring that the line of best fit passes through the 5
good data points within their error range as indicated by their error bars. It appears
reasonable to ignore the outlier in the determination of the best fit line because it would be
impossible to include this point on the same basis (although with much larger error bars
the outlier might be included). However, the scatter in the data is also sufficient to make
this judgement and in reality the error bars do not add anything.
6.4.3 Dealing with a small number of data points
Clearly it is better to have many data points rather than few, but what are the implications when this isn't possible? Return to figure 9 and consider having not 6 but only 4 or even 3 data points, one of which is the outlier:
• The scatter in the data is not obvious from the points alone.
• (Correct) error bars become more important.
• It is difficult or impossible to identify outliers.
• The values obtained for m and c are (almost always) less accurate and their errors larger.
6.5 Forcing lines to be straight
It is almost always possible to manipulate the mathematical form of data such that an
easily analysed straight line results when it is plotted. Essentially the approach is to
obtain a relationship in the form y = mx + c. A simple example and two experimentally
very important examples are given in table 1.
Table 1. Example methods for making straight line plots

Function: y = 2x^2
Plot (y = mx + c): y vs x^2
Comments: a very simple example.

Function: W = kT^n
Plot (y = mx + c): log10 W vs log10 T  (log10 W = log10(kT^n) = n log10 T + log10 k)
Comments: used in determining unknown power relationships (finding n).

Function: y = A e^(-E/kT)
Plot (y = mx + c): ln y vs 1/T
Comments: known as an "Arrhenius plot"; it is used when considering thermally activated processes with an activation energy (E).
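As an illustration of the power-law entry in the table, here is a minimal Python sketch (the data are generated artificially to obey W = kT^n with k = 3 and n = 2.5) that recovers n from the gradient of a log-log plot:

import numpy as np

T = np.array([1.0, 2.0, 3.0, 4.0, 5.0])        # invented independent variable
W = 3.0 * T**2.5                               # invented data obeying W = k*T^n with k = 3, n = 2.5

# straight line fit of log10(W) against log10(T): gradient = n, intercept = log10(k)
n_fit, log10k = np.polyfit(np.log10(T), np.log10(W), 1)
print(f"n = {n_fit:.2f}, k = {10**log10k:.2f}")   # should recover n = 2.5, k = 3.0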
7. Some experimental considerations
It is too large a subject to consider what constitutes a good experiment, i.e. one that can be
believed. Here a flavour will be provided by first introducing some of the terminology
that is used before providing two useful examples making use of what has gone before.
7.1 Terminology
The “reliability” of a measurement relates to its consistency. Otherwise known as the
“repeatability” of a measurement, it is the extent to which an instrument can provide the
same value for nominally the same measurement (i.e. the same subject under the same
conditions).
The "validity" of the findings of an experiment refers to the extent to which the findings can be believed to be right. For a particular experiment this depends on the rigour with which the study was conducted (as assessed through the experimental design, its reliability and the care in its execution), but also on the extent to which alternative explanations were considered.
7.2 Comparing results with accepted values
In the year 0,1 and 2 teaching laboratories, it is common for measurements to be made of
known values (such as g), allowing a comparison with the results obtained. A downside of this is that students may perceive that the result (being already known) is not important and that the point is instead the practice of a technique and seeing physics in action. This is incorrect: whatever the result, it sheds light on the experiment.
Remember that any result is presented as: (measured value +/- error) units. This allows
comparison with the known values and if the two agree within errors (i.e. within the error
range of the measured value) then there is nothing more to say. However, if the two do
not agree within errors there must be a reason and it is necessary to consider what this
might be.
Candidates include:
• Systematic errors in the measurement or equipment.
• Misjudged random errors.
• Poor experimental technique.
• Poor or inappropriate (possibly oversimplified) theory.
If the reason for the discrepancy is properly understood and subsequently included then
agreement should be possible. Whilst such an extra analysis is likely to be beyond the
expectations for 0 and 1st year labs it is important that students think about the situation,
and it is often true that the reason for the discrepancy is known in principle.
A link can also be made to more advanced work where it is essential that accurate
measurements of unknown values are made. If measurements of known values (possibly
standard samples or “standards”) are made first then any systematic errors can be
corrected for. The known samples provide a way of calibrating the instrument.
7.3 y = mx relationships
Previous discussion of straight line graphs has been concerned with the general case (y = mx + c relationships). However, many expected relationships are of the form y = mx; in other words the graph produced is expected to go through the origin. This is worth special consideration as it often causes confusion for inexperienced experimentalists. The main issue is that students not only include the origin as a data point but also give it special significance by forcing the best fit line to go through it (whether by hand or on a computer).
One of the classic systematic errors is a zero offset, the effect of which is to produce a constant (solid) shift of all data points either up or down whilst leaving the gradient (from which most information is found) unaffected. Excluding the origin from the analysis allows the y intercept to be compared to zero, and so the significance of a possible zero offset to be considered. The alternative, forcing the best fit line through the origin, both removes evidence for a possible zero offset and, if there is one, alters the gradient, so introducing an (illegitimate) error into the gradient.
8. Some important distributions
A number of distributions are observed in experiments; three important ones described here are the Gaussian (or Normal), the Poisson and the Lorentzian. The first two can be related to the Binomial distribution and so this is introduced first.
In all cases the probability function P is given using x, μ and σ as the measured value, the mean and the standard deviation of the distribution respectively. The functions are normalised such that \int P(x)\,dx = 1.
8.1 Binomial statistics
Binomial statistics describe certain situations where results of physical measurements can
have one of a number of well-defined values - such as when tossing coins or throwing
dice. Consider a situation where the result of one physical measurement of a system has a
probability p of giving a particular result. If an experiment is carried out on n such
systems, then the probability that x of the systems will produce the required result is given
by:

P(x, n, p) = \frac{n!}{x!\,(n - x)!}\,p^x (1 - p)^{n - x}
An example: the probability of throwing a six with one die is 1/6. If we throw 4 dice we may obtain 0, 1, 2, 3 or 4 sixes. The probability of obtaining zero sixes is given by substituting into the binomial formula above, so that

probability of zero sixes with 4 dice = P(0, 4, 1/6) = \frac{4!}{0!\,(4-0)!}\left(\frac{1}{6}\right)^0\left(\frac{5}{6}\right)^{4-0}

Similarly the probability of throwing one six is

P(1, 4, 1/6) = \frac{4!}{1!\,(4-1)!}\left(\frac{1}{6}\right)^1\left(\frac{5}{6}\right)^{4-1}

etc.
For this distribution the mean value is np and the standard deviation is \sqrt{np(1 - p)}.
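The dice example can be checked numerically. A minimal Python sketch of the binomial formula above (math.comb supplies n!/(x!(n-x)!)):

from math import comb

def binomial_probability(x, n, p):
    """P(x, n, p): probability of exactly x 'successes' in n trials."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

# probability of 0, 1, 2, 3 or 4 sixes when throwing 4 dice
for x in range(5):
    print(x, round(binomial_probability(x, 4, 1/6), 4))
# the five probabilities sum to 1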
8.2 The normal (or Gaussian) distribution
As already mentioned the distribution function which best describes random errors in
experiments is the “normal” or “Gaussian” distribution. This distribution is an
approximation to the binomial distribution for the special limiting case where the number
of possible different observations is infinite and each has a finite probability so that
np>>1.
The normalised probability function P(x) is given by:

P(x) = \frac{1}{\sqrt{2\pi}\,\sigma_n(x)}\exp\left[\frac{-(x - \bar{x}_n)^2}{2\sigma_n(x)^2}\right]

where, as before, x is the measured value, \bar{x}_n is the mean of the sample and \sigma_n(x) is the sample standard deviation, and the function is normalised such that \int P(x)\,dx = 1. As figure 11 shows, the function is (characteristically) bell shaped and symmetrical.
[Figure: P(x) against x, a bell-shaped curve peaking at x = 0, with the FWHM marked]
Figure 11. Gaussian probability function generated using \bar{x}_n = 0 and σ(x) = 1, resulting in the x-axis being in units of standard deviation. The FWHM of the distribution is also shown and can be seen to be wider than 2σ(x).
If \bar{x}_n and \sigma_n(x) are known, the whole distribution function can be drawn and the probability of measurements occurring in a given range can be determined. The integral of the Gaussian function cannot be performed analytically and so many statistics books will contain look-up tables, a summary version of which is presented in table 2.
Table 2. The integral Gaussian or Normal probability

Range either side of mean, ±m σn(x)    Expected percentage of values in range
m = 0                                  0%
m = 1                                  68.3%
m = 2                                  95.4%
m = 3                                  99.73%
m = 4                                  99.994%
From this table it can be seen that quoting an error of ±σn(x) would cover a range in which ~68% of the values fall, which therefore gives a similar estimate of the error to the "probable error", in which 50% of the values fall. The FWHM is also worth considering in this context, as experimentally it is often more direct and convenient to deal with than the standard deviation. It is clear from figure 11 that the FWHM covers a little more than the range of ±σn(x) (in fact FWHM = 2\sqrt{2\ln 2}\,\sigma_n(x) \approx 2.355\,\sigma_n(x)); this corresponds to a range in which ~76% of the values fall. Any of these three might be used as an estimate of the error in the case where a small number of measurements have been performed.
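The percentages in table 2 can be checked numerically: the fraction of a Gaussian lying within ±m standard deviations of the mean is erf(m/√2). A minimal Python check:

from math import erf, sqrt

# fraction of a Gaussian distribution lying within +/- m standard deviations of the mean
for m in range(5):
    fraction = erf(m / sqrt(2))
    print(f"m = {m}: {100 * fraction:.3f}%")
# reproduces (to rounding) the percentages in table 2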
8.3 Poisson distribution
The Poisson distribution is the limiting case of a binomial distribution when the possible
number of events (n) tends to infinity and the probability of any one event (p) tends to
zero in such a way that np is a constant.
Poisson distributions are often appropriate for counting experiments where the data
represents the number of events observed per unit time interval. A gram of radioactive material may contain ~10^22 nuclei, whereas the number that disintegrate in each time interval is many orders of magnitude smaller.
This covers a very wide range of physics experiments:
• In the teaching labs - radioactive decay, X-ray absorption and fluorescence.
• More widely - spectroscopy, particle physics (such as at the LHC), astronomy.
The normalised distribution is given by:

P(x) = \frac{\mu^x e^{-\mu}}{x!}
where P(x) is the probability of obtaining a value x when the mean value is μ. The standard deviation for a Poisson distribution is \sqrt{\mu}. This distribution is unlike the normal or Gaussian distribution in that it becomes highly asymmetrical as the mean value approaches zero.
Counting experiments: the “signal to noise” ratio
In all counting experiments the “quality” of the data is expected to “improve” with
increasing counting time and counts. This can be understood as follows: the mean
number of counts in the experiment, μ, is the “signal” whilst statistical variations in this
signal are represented by the standard deviation σ(x) and can be thought of as “noise”.
In Poisson statistics \sigma(x) = \sqrt{\mu}, therefore the signal/noise ratio = \mu/\sqrt{\mu} = \sqrt{\mu}, i.e. the ratio increases with the square root of the number of counts. This is an often quoted and very important finding for understanding and designing experiments.
Put another way, if in a particular counting period an average of N counts is obtained, the associated standard deviation is \sqrt{N} (ignoring any errors introduced by timing
source and geometrical arrangement, however, N can be increased only by counting for
longer periods of time.
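The √N behaviour is easy to demonstrate by simulation. A minimal sketch using numpy's Poisson random number generator; the count rate, counting times and number of repeats are invented for illustration:

import numpy as np

rng = np.random.default_rng()
rate = 10.0                                   # invented mean count rate (counts per second)

for counting_time in [1, 10, 100, 1000]:      # seconds
    mean_counts = rate * counting_time
    # simulate many repeat counting periods and measure the scatter
    counts = rng.poisson(mean_counts, size=10000)
    signal_to_noise = counts.mean() / counts.std()
    print(counting_time, round(signal_to_noise, 1), round(np.sqrt(mean_counts), 1))
# the simulated signal/noise tracks sqrt(N) as the counting time increases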
8.4 Lorentzian distribution
This distribution is important as it describes data corresponding to resonance behaviour.
This includes mechanical and electrical systems but also the shape of spectral lines
occurring in atomic and nuclear spectroscopy.
The Lorentzian distribution is symmetric about the mean, is usually characterised by its full width at half maximum, Γ (aka "half width"), rather than by its standard deviation, and is given by

P(x, \mu, \Gamma) = \frac{1}{\pi}\,\frac{\Gamma/2}{(x - \mu)^2 + (\Gamma/2)^2}
A characteristic of the distribution is that it has "heavy tails", i.e. it falls away slowly for large deviations. A consequence of this is that it is not possible to define a standard deviation for this function.
It should be noted that a number of broadening mechanisms may be at work in spectroscopic experiments, and some of these, such as Doppler broadening and also the resolution of the system, may be Gaussian in nature. What is measured may therefore be a convolution of a Lorentzian and a Gaussian function, resulting in a so-called "Voigt" profile. Experimentally, it is usual to start by assuming a Gaussian line shape; deviations away from this in the tails are often good evidence of a Lorentzian contribution.
III.3 USING EXCEL
1. Determining errors from straight line graphs using EXCEL
Instructions
• Input the data to be analysed into an EXCEL spreadsheet in column form.
• Select a 2x3 array of cells anywhere in the spreadsheet (these are the ones highlighted in the figure below).
• In the function/command line type "=linest( " - presumably "linest" stands for line statistics.
• Opening the bracket leads EXCEL to prompt for:
  known_y's - simply select using the mouse, then insert a comma.
  known_x's - simply select using the mouse, then insert a comma.
  const - input 1 (using 0 would force the line through the origin), then a comma.
  stats - input 1 (this sets the correct statistics) and close the bracket.
• The command line should look something like:
  =LINEST(A5:A14,B5:B14,1,1)
• To execute the calculation press CTRL, SHIFT and ENTER.
• Values for m and c and their errors should appear in the selected 2x3 array in the format shown in the figure below. The "m", "c", "errors", "R^2" and "reg error" labels have been added for clarity.
• In this case the gradient is m = 2.60 ± 0.04 and the intercept is c = -1.2 ± 1.6, i.e. the straight line passes through the origin within the (standard) error.
• R^2 is the same value as appears on graphs when adding trend lines: it is a correlation coefficient indicating how good a straight line the data represents.
• "Reg error" is short for "regression error"; it is the standard error of the measured y values compared to the best fit y values. It is analogous to the standard error for repeated measurements of the same value, where values are then compared to the mean of the values.
[Figure: EXCEL spreadsheet, "Least squares fitting of straight line data". The data columns are:
x:   0, 1, 2, 3, 4, 5, 6, 7, 8, 9
x^2: 0, 1, 4, 9, 16, 25, 36, 49, 64, 81
y:   0, 2, 11, 21, 42, 63, 93, 120, 162, 216
The returned 2x3 array (rows: m and c; their errors; R^2 and reg error) reads:
2.60301     -1.18577
0.042074    1.647517
0.997914    3.572721]
Figure. Appearance of the EXCEL spreadsheet when determining errors in a straight line graph. The selected 2x3 array of cells (in which values were eventually returned) is highlighted.
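If you prefer Python to EXCEL, scipy's linregress function returns the same gradient, intercept and standard errors (the intercept_stderr attribute needs a reasonably recent SciPy, 1.6 or later). Applied to the data above (fitting y against x^2) it should reproduce the LINEST values shown:

from scipy.stats import linregress

x_squared = [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
y = [0, 2, 11, 21, 42, 63, 93, 120, 162, 216]

fit = linregress(x_squared, y)
print(f"m = {fit.slope:.3f} +/- {fit.stderr:.3f}")
print(f"c = {fit.intercept:.3f} +/- {fit.intercept_stderr:.3f}")
print(f"R^2 = {fit.rvalue**2:.4f}")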
2. Making graphs in EXCEL 2007
EXCEL 2007 is substantially different from previous versions and this has caused
students (and staff) some problems: there are more options so things are generally a bit
more difficult to find.
To help, some guidance on basic graphing tasks is given below.
To make a basic graph
• Select two or more columns of data, either by clicking and dragging or by selecting a column, holding down control and selecting additional columns. The left hand column will be the data for the x-axis no matter what order the data is selected.
• Select "insert" on the toolbar.
• Select the type of graph (usually "scatter").
To add titles*
• With the graph selected, in "chart tools" click on "Layout".
• Here click on "axis title". For the y axis (primary vertical axis title) it is probably best to use "rotated title".
• You may also want to add a "chart title" (for your diary but not for inclusion in reports!).
*You don’t seem to be able to add equations to titles but you can use Word-like
formatting: “CTRL =” for subscripts, “CTRL +” for superscripts.
To change the range of data shown
• Either select the axis or choose "format axis".
• Or, under "Layout" choose "Axes", then the axis of interest, then (at the bottom of the list) "More… axis options".
• Under "axis options" change minimum and/or maximum to fixed (from auto) and select the desired value(s).
Formatting data series (line and marker)
• Right click on the required data series on the graph, then choose "format data series" and choose from the "series options".
• For example, to change marker size choose "marker options", set marker type to "built in" then set "size".
• Alternatively, with the graph selected: under "layout" the required data series can be selected by use of the drop down box in "current selection" (on the left of the toolbar).
III.4 REPORTING ON EXPERIMENTAL WORK
AN EXAMPLE OF HOW TO WRITE A LONG REPORT
1. Introduction
Scientific report writing is a skill: the application of numerous rigid conventions, combined with a surprising degree of freedom in structure, to achieve clarity of presentation.
Physics students will write such reports at a rate of approximately one per semester
throughout their undergraduate University career. For many students the feedback this
provides may be insufficient for them to efficiently get to grips with what is required and
expected. The document is based around a specimen report, the examination of which is intended to help students in writing long reports.
“Galileo’s Rolling Ball Experiment” is a Preliminary (Year 0) experiment and also a
classic experiment of physics. It is performed in a three hour laboratory session in which
students are required to both take and analyse their data (diaries are handed in at the end
of the session). It is a simple experiment used to help develop data handling and error
analysis for people some of whom are new to performing physics experiments for
themselves. Consequently the report is rather basic.
Following this introduction, the main body of the report is split into three sections:
2. Teaching Laboratory instructions for the experiment
3. The specimen report based on students’ laboratory diaries
4. A final section on report writing that discusses some of the finer points and the
School’s changing expectations of students as they progress through their Physics courses.
2. Teaching Laboratory instructions for the experiment
G2
GALILEO'S ROLLING BALL EXPERIMENT
Reference: Duncan, Chapter 7, Statics and Dynamics, Chapter 8 Circular motion and
gravitation
Equipment List: Metal channel, retort stand, ball bearings and box, stopwatch, metre rule.
Introduction
Galileo Galilei made observations in astronomy and mechanics that were of major
importance to the development of 17th century science. Perhaps Galileo's most famous
experiment, which was supposed to involve the leaning tower of Pisa, was his verification
that all bodies, independent of their mass, fall at the same rate (if the bodies are heavy
enough that air resistance is negligible). We shall look here at one of Galileo's less famous but closely related experiments, which conveniently does not require dropping weights from the tower of Pisa!
Galileo performed an experiment on a falling body that 'diluted' the effects of gravity, by
letting the body roll down a slope. Galileo predicted and was able to show experimentally
that in this case:
1) No matter what the angle θ (this is the Greek letter theta) of the slope, the speed of the
object at the bottom of the slope depends only on the total height h it has fallen through.
2) The speed of the object increases in proportion to the time it has travelled.
3) For a given angle of the slope, the vertical height h fallen is proportional to the square
of the time it has travelled.
Since this was true for all the slopes that Galileo was able to measure, by imagining the
steepness of the slope to be increased until it was vertical he predicted that these rules
would be true for a freely falling body.
Imagine yourself in Galileo's position. Mechanical watches had not yet been invented. He
had to use 'water clocks' in which time was measured by water escaping from the bottom
of a conical container. Standards of length differed across Europe. Also, he calculated, not
with decimal fractions, but with whole number ratios. (See the article by S Drake in the
American Journal of Physics, p302, volume 54, April 1986, if you are interested in the
historical details). Your experiment here will be rather easier than Galileo's!
[Diagram: the ball starts at rest (t = 0) at a vertical height h above the bottom of a slope inclined at angle θ, and rolls down to the finish at the bottom.]
In this experiment we shall be concerned with investigating the third statement only.
Referring to the above diagram, Galileo's third statement can be expressed mathematically
as
h ∝ t²   (if θ is fixed)   (Eq. 1)
Here t is the time for the object to roll from the start to the finish, and the symbol ∝ means "is proportional to". (The constant of proportionality depends on the strength of the Earth's gravity and the angle of the slope.) The aim of this experiment is therefore to check the above relation.
The experiment provides a good introduction to taking measurements, presenting
information in tabular and graphical form, and the consideration of errors of
measurement. Additionally, you will need to relate your experimental data to theory
presented in a mathematical form.
Experiment (read this to the end before you start)
You are provided with a channel which can be inclined at any angle. You should use the
following procedure, making sure you record all the details in your laboratory notebook.
STEP 1 - First fix the value of θ at a value between 2 and 15 degrees. (If θ is too large
then it is difficult to time the fast-moving ball, whilst if it is too small the effects of
friction will be more important).
Measure sin θ for the slope and estimate its error (see below). Since all your
measurements will be made at the same angle it is very important to perform this
carefully. In subsequent calculations you will use sin θ and its error, but you should also record the value of θ itself.
STEP 2 - Hold the ball at a convenient position along the channel and measure h.
STEP 3 - Measure the time t that it takes the ball to roll down the slope for a starting
height h. Repeat the measurement 3 times and record each result.
STEP 4 - Repeat steps 2 and 3 for eight different values of the starting height h. Make
sure that you neatly tabulate every measurement that you make (not just the averages).
Your table should have the following columns:
Height h /m | t1 /s | t2 /s | t3 /s | t1² /s² | t2² /s² | t3² /s² | t² (average) /s²
    ...     |  ...  |  ...  |  ...  |   ...   |   ...   |   ...   |       ...
Always include the units when you write down any numerical value.
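As a purely illustrative aside (not part of the prescribed write-up, and using hypothetical placeholder timings), the averaged columns of the table might be checked on a computer as follows:

import statistics

# Hypothetical repeat timings /s for one release height
times = [2.31, 2.42, 2.35]

mean_t = statistics.mean(times)                   # average of the three timings
spread = statistics.stdev(times)                  # sample standard deviation of the repeats
mean_t2 = statistics.mean(t**2 for t in times)    # average of t^2, as entered in the final column

print(f"mean t = {mean_t:.2f} s, spread = {spread:.2f} s, mean t^2 = {mean_t2:.2f} s^2")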
Some suggestions
It is difficult to accurately measure the angle θ with a protractor! The best way to find it
is to measure H (the change in height of the end of the channel above the bench) and D
(the total length of the channel) shown in the diagram below. (Do not confuse the symbol
h with H or d with D, also shown on the diagram!) Then sin θ = H/D, so you can calculate θ. Remember to tabulate all the measurements you make, not just θ.
[Diagram: the channel of total length D is raised so that its upper end is a height H above the bench; the ball is released a distance d along the channel, at a height h above the bottom.]
Precision Estimates
In all measurements you make, you should write down the precision of the measurement, i.e. could you measure h, H and D to the nearest millimetre, centimetre, or metre? (This depends on how you measure the quantity as well as the fineness of divisions on the metre rule. For example, can you tell exactly where the centre of the ball bearing is, and can you position the ruler easily?) The golden rule is: use common sense when estimating the precision of a measurement.
Analysis
Equation 1 can be written in another, exactly equivalent, form:
t² = K × h   (if θ is fixed)   (Eq. 2)
Because t² is proportional to h, a graph of t² (plotted on the vertical axis) against h (plotted
on the horizontal axis) should give a straight line, which passes through the origin, with a
gradient equal to the constant K.
STEP 1 - From your data in the tables of t² and h, plot a graph for your value of θ .
STEP 2 - Draw a straight line, which best fits the data points. Work out the gradient of
this line (don't forget the units). Draw the 'error lines' and so work out the error in the
gradient. Does your best fit line pass through the origin?
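The hand-drawn best-fit and error lines are the prescribed method here. Purely as an optional cross-check, a least-squares straight-line fit can also be done on a computer; the short Python sketch below is illustrative only, the numbers in it are hypothetical placeholders rather than real measurements, and numpy is assumed to be available.

# Optional cross-check of a hand-drawn fit: least-squares straight line of t^2 against h.
import numpy as np

# Hypothetical placeholder data: release heights /m and average squared times /s^2
h = np.array([0.035, 0.040, 0.045, 0.050, 0.055, 0.060, 0.065, 0.070])
t2 = np.array([3.9, 4.6, 5.1, 5.7, 6.1, 6.9, 7.3, 8.0])

# Fit t^2 = K*h + c; cov=True also returns the covariance matrix of the fit,
# whose diagonal elements give the variances of the fitted parameters.
(K, c), cov = np.polyfit(h, t2, 1, cov=True)
K_err = np.sqrt(cov[0, 0])

print(f"gradient K = {K:.0f} +/- {K_err:.0f} s^2 m^-1, intercept c = {c:.2f} s^2")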
The data you took can be used to work out the acceleration due to gravity, g. This can be
done since the constant K in equations 1 and 2 is, according to theory (see appendix),
K = 2 / (g sin² θ)   (Eq. 3)
So, to find g, just do the following: Work out sin θ (it's just equal to H/D) and K (the
gradient of the corresponding graph you plotted) and substitute into equation 3, after
rearranging it to make g the subject of the equation. Be careful to make sure you know
what units K is measured in.
What value of g do you get? Even taking errors into account*, the value is probably around half the accepted value of 9.8 m s⁻². Can you think of any reason why this should be so?
(* If you need to, ask a demonstrator to explain how to calculate the error in g - you will need to estimate the experimental error in each of the quantities used to find g, i.e. the individual errors in sin θ and K, and then combine the errors. In practice you will probably find there is comparatively little error in sin θ, so that most of the error is in finding K.)
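If you want to check your hand calculation, the following minimal Python sketch shows one way of rearranging Eq. 3 for g and combining the independent fractional errors in K and sin θ in quadrature. The numbers are hypothetical placeholders, not data from any real run.

# Illustrative sketch only: computing g from the gradient K and sin(theta),
# propagating errors by combining fractional uncertainties in quadrature.
import math

K, K_err = 115.0, 12.0             # gradient of t^2 vs h /s^2 m^-1, and its error (placeholders)
sin_theta, sin_err = 0.061, 0.001  # sin(theta) and its estimated error (placeholders)

g = 2.0 / (K * sin_theta**2)       # Eq. 3 rearranged to make g the subject

# Fractional errors: one contribution from K, and twice the fractional error
# of sin(theta) because it appears squared.
frac = math.sqrt((K_err / K)**2 + (2 * sin_err / sin_theta)**2)
g_err = g * frac

print(f"g = {g:.2f} +/- {g_err:.2f} m s^-2")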
Appendix
Read this at home, not in the laboratory class. You may find it useful in conjunction with
your Mechanics lectures.
Suppose a body slides, without friction, down a slope of inclination θ:
[Diagram: a body of mass m on the slope; its weight mg acts vertically downwards with component mg sin θ parallel to the slope, and it descends a vertical height h to the finish at the bottom.]
The component of the force on the mass m parallel to the slope is mg sin θ, so the acceleration of the body parallel to the slope is
a = F/m = g sin θ
Using the formula "s = ut + at²/2" means that the distance moved to the bottom of the slope in a time t is just (u = 0 if the body starts at rest)
d = g sin θ × t²/2
But sin θ = h/d, or d = h/sin θ, so we finally get
h = g sin² θ × t²/2   (Eq. 4)
This equation is therefore the same as equation 2, since we can re-arrange it as
t² = 2h / (g sin² θ)   (Eq. 5)
So, comparing directly to equation 3, we have K = 2/(g sin² θ), as stated earlier.
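If you later typeset this derivation (for example in a formal report), the chain of steps above can be written compactly; the following LaTeX fragment is one possible sketch, not a required format:

% Compact restatement of the appendix derivation (one possible typesetting).
\begin{align}
  a &= \frac{F}{m} = g\sin\theta, \\
  d &= \tfrac{1}{2} a t^{2} = \tfrac{1}{2} g\sin\theta \, t^{2}, \\
  \sin\theta &= \frac{h}{d}
  \;\Rightarrow\;
  h = \tfrac{1}{2} g \sin^{2}\theta \, t^{2}
  \;\Rightarrow\;
  t^{2} = \frac{2}{g\sin^{2}\theta}\, h .
\end{align}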
3. The specimen report based on students’ laboratory diaries
(A report based on measurements made by a Foundation Engineering Student taking
PX0102 in October 2006)
Galileo’s Rolling Ball Experiment
Date January 2007
Author: Cardiff University, School of
Physics and Astronomy
Abstract
Galileo’s rolling ball experiment was performed, in which the motion of a ball bearing down a shallow incline, of angle θ = 3.52 +/- 0.03 degrees, was timed as a function of the starting height of the ball. Starting heights between 0.035 and 0.070 m resulted in travel times in the range 1.90 – 2.90 s. As expected, a graph of the square of the time of travel versus starting height was a straight line that passed through the origin. The gradient would be expected to be 2/(g sin² θ), where g is the acceleration due to gravity, assuming that the gravitational potential energy was entirely converted to translational kinetic energy. The value of the gradient was found to be 113 +/- 19 s² m⁻¹, from which a value for g of 4.76 +/- 0.12 m s⁻² was determined, approximately a factor of two lower than the accepted value of 9.81 m s⁻². The discrepancy can be attributed to the fact that as the ball rolls down the incline gravitational potential energy is converted not only into translational but also into rotational kinetic energy.
1. Introduction
Galileo Galilei was a seventeenth century Italian scientist who made many important
observations in astronomy and mechanics [1]. His most famous experiment on the effects
of gravity involved dropping weights from the tower of Pisa and showed that all bodies
fall at the same rate independent of their mass. In the rolling ball experiment [2] in which
a ball rolls down an incline, the effects of gravity are easier to quantify since the travel
times are increased.
Using this experiment Galileo showed that: (i) the speed of the object at the bottom of the
slope depends only on the height it has fallen through, (ii) that the speed of the object
increases in proportion to the time it has travelled and (iii) for a given angle of slope, the vertical height fallen through is proportional to the square of the time it has travelled.
The experiment performed here was concerned only with the last statement.
2. Background Theory
A schematic of the experiment, in which an object of mass m on an incline is acted upon by gravity (acceleration due to gravity g), is shown in figure 1 below.
[Diagram: an object of mass m on the slope, a distance d from the bottom and at height h; its weight m.g acts vertically downwards with component m.g.sin θ along the slope.]
Figure 1. Schematic of an object on an inclined plane. The plane is at an angle θ to the horizontal and the force due to gravity acting down the slope is m.g.sin θ.
For an incline at an angle θ, although the force vertically downwards is m.g, the force parallel to the slope is m.g.sin θ. This is the force that accelerates the body down the slope, the acceleration a being given by:
a = force/mass = m.g.sin θ / m = g.sin θ
(1)
If the body starts at rest (initial velocity zero) and travels a distance d (for example to the bottom of the slope), the relationship between the time taken and the distance travelled is given by the well-known equation of motion:
d = ½ a.t²   or   d = ½ g.sin θ.t²
(2)
In addition, if h is the change in height the object undergoes by travelling a distance d
down the slope then it is clear from figure 1 that:
sin θ = h/d
(3)
Note that, as h is defined in figure 1, the object would start at the top of the slope. Substituting for d from equation 3 into equation 2 and rearranging gives:
t² = [2 / (g.sin² θ)].h
(4)
Equation 4 confirms Galileo’s third statement and indicates that a graph of the square of
the travel time versus height should be a straight line that passes through the origin. In
addition, if the angle of the slope is known the value of the gradient can be used to
determine a value for the acceleration due to gravity.
This is the experiment that has been performed. Whilst Galileo performed the experiment
for a range of slope angles, here only one has been used.
3. Description of the Experiment
The “slope” was provided by a right-angled channel, held by a retort stand, down which a ball bearing could roll. After fixing the slope, its angle was found (by measuring its elevation and length) to be 3.52 +/- 0.03 degrees. The ball bearing was placed on the slope at a particular height and its time to travel down the slope was measured by hand with a stopwatch. The measurement was performed three times for each height and at eight different heights. One person released the ball at the set height and a second person timed the descent. The timing error for a single measurement was initially estimated to be +/- 0.5 s; however, the spread of times found in the repeated measurements was usually only +/- 0.1 s. The error in the release height of the ball bearing was +/- 1 mm. The range of heights used was 0.035 to 0.070 m, resulting in travel times in the range ~1.9 to 2.9 s.
4. Results
A graph of the average squared travel time against release height is shown in figure 2. The data form a reasonable straight line with some scatter about the best fit line. By drawing best and worst possible fits by hand, the gradient of the line was found to be 113 +/- 19 s² m⁻¹. These lines indicated that, within errors, the data lie on a straight line through the origin [3], as expected from equation 4, indicating that any systematic errors are small compared to random errors.
[Graph omitted: time squared /s² (vertical axis, 3 to 9 s²) plotted against height /m (horizontal axis, 0.03 to 0.08 m).]
Figure 2. Graph of the average of the travel times squared versus the release height. The straight line here is a computer-generated best fit to the data [3].
From the gradient and the angle of slope a value for the acceleration due to gravity, g, was determined (using equation 4) to be 4.76 +/- 0.12 m s⁻².
5. Discussion
Although the results of the experiment do show that, for the single angle of slope used, the vertical height fallen through is proportional to the square of the time travelled, the derived value for g does not agree with the accepted value of 9.81 m s⁻² within graphical errors. The obtained value of g is approximately half of the expected value, whereas the error is only ~10%. The discrepancy is therefore much larger than can apparently be explained by random errors associated with the measurement and therefore needs to be considered further.
The sources of measurement error include distances (for the height of release and the angle of the slope) and timing (for the travel time). Neither the metre rule nor the stopwatch is likely to have appreciable intrinsic errors associated with it. The use of the rule to determine heights and angles has relatively small errors, as discussed above, and no errors have been found in the calculations. The estimated absolute timing error (+/- 0.5 s) arose from consideration of matching the start of the stopwatch with the release of the ball bearing and its stop with the ball reaching the bottom of the slope. The fact that this error appears significantly larger than the spread of travel times (0.1 s) obtained from repeat measurements indicates that there may be a systematic error in starting and stopping the watch. However, a systematic error of up to +/- 0.5 s would do little to improve the agreement between the measured acceleration and g.
The explanation for the results lies in the realization that, whereas it is true that it is the translational acceleration down the slope that is measured by this experiment, it is not true that the gravitational force acting down the slope is converted only into this form of motion. As the title of the experiment states, the ball rolls down the slope, implying that it has both translational and rotational motion. In other words, the gravitational potential energy of the ball is converted into both translational and rotational kinetic energy. It should be possible to reanalyze the results incorporating the effects of rotational motion, but this is beyond the scope of this report.
6. Conclusions
Galileo’s rolling ball experiment has been performed, in which the motion of a ball bearing down a shallow incline of angle 3.52 +/- 0.03 degrees was timed as a function of release height. Assuming that gravitational potential energy is entirely converted to translational kinetic energy of the ball, the value for g was determined to be g = 4.76 +/- 0.12 m s⁻². This value is approximately a factor of two lower than the expected value. The discrepancy is almost certainly caused mainly by the fact that gravitational potential energy is converted into rotational as well as translational kinetic energy as the ball rolls, rather than slides, down the slope.
References
[1] “Galileo’s physical measurements”, Stillman Drake, Am. J. Phys., 54, pp. 302-306, 1986.
[2] Experiment G2 (Galileo’s Rolling Ball Experiment) in Preliminary/Foundation Year Laboratory Course Booklet (2006-7).
[3] The computer-generated best fit gave a gradient of 127 s² m⁻¹.
Aside: This value is at the high end of the values quoted in the text. Looking more closely, it appears almost certain that the student forced the best fit line to go through the origin. This is a commonly made mistake. In doing so the student has assumed not that t² = 0, h = 0 is an experimental point but that it is a point known with absolute certainty. While this may at first seem reasonable (after all, changing the height by zero amount will take zero time), the trouble is that it hides the effects of any systematic errors from the data analysis. For example, it is quite feasible that a systematic error could have been made in measuring the release height or in timing the motion. This might result in a straight line that does not go through the origin but that has a perfectly valid gradient. The result is that the student has both hidden any systematic errors and introduced an error into the gradient, and consequently into the calculated value of g. If the student had spotted the error it would not be valid to present erroneous results; the data would need to be reanalyzed. However, giving the benefit of (a very small) doubt, this report has been written on the assumption that the student did not force the best fit line through the origin, and should be read with this in mind.
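The point can be illustrated with a short Python sketch using hypothetical numbers (not the student's data): when the data contain a small systematic offset, a fit forced through the origin returns a noticeably different gradient from a free fit.

# Illustrative sketch (hypothetical data): comparing a free straight-line fit
# with a fit forced through the origin, to show how fixing the intercept to
# zero can bias the gradient when a systematic offset is present.
import numpy as np

h = np.array([0.035, 0.040, 0.045, 0.050, 0.055, 0.060, 0.065, 0.070])
# Simulated t^2 data with a deliberate offset of 0.4 s^2 plus a little noise
t2 = 113.0 * h + 0.4 + np.random.default_rng(0).normal(0, 0.15, h.size)

# Free fit: t^2 = K*h + c
K_free, c_free = np.polyfit(h, t2, 1)

# Forced through the origin: t^2 = K*h (least squares with no intercept term)
K_forced = np.linalg.lstsq(h[:, None], t2, rcond=None)[0][0]

print(f"free fit:   K = {K_free:.1f} s^2 m^-1, intercept = {c_free:.2f} s^2")
print(f"forced fit: K = {K_forced:.1f} s^2 m^-1 (intercept fixed at 0)")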
4. Report writing.
The style of a formal report is intended to be very similar to that of a paper submitted to a scientific journal, but the level at which it is written should be such that another student with a similar background, but unfamiliar with the experiment, would be able to understand what you have done, why, and what it all means. Reports are separated into sections, the expected contents of which are described below. This is followed by some general advice and comments on changing expectations through the undergraduate course.
4.1 Contents of the different sections of a scientific report
Abstract
This summarizes the experiment in a single paragraph of ~150 words, featuring particularly the (numerical) results and principal conclusions. It is entirely separate from the rest of the report; hence concepts introduced in the abstract need to be introduced again in the main part of the report.
1. Introduction
Describes the background to, and aim(s) of, the experiment and whatever theoretical
background is needed to make sense of your own work being presented.
There is an expectation that the student reads around the subject before writing the report.
This should be reflected in “Introductory”/”Theory” sections that are not solely derived
from the laboratory handbooks. The source material for this should be quoted and
obviously re-written to fit in with the requirements of the report and to avoid plagiarism.
At the same time the “Introductory”/”Theory” sections should be appropriate for the
report and not overwhelm it.
If necessary, for example if the introduction becomes large and difficult to read, the
section can be split in order to have a distinct "Background Theory" section following
on from the more general introduction.
Unfamiliar or obscure derivations may be included, but trivial steps should be excluded.
The theory section may include a number of equations. These should be on a separate
line, numbered and each of the symbols used should be explained the first time they
appear, e.g.:
E = mc²
(1)
where E is energy (J), m is mass (kg) and c is the speed of light (m s⁻¹).
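For reports typed in LaTeX (one option among several; no particular system is required by the manual), this numbered-equation convention might be set out as in the following sketch:

% A numbered display equation with its symbols defined in the following text.
\begin{equation}
  E = mc^{2}
\end{equation}
where $E$ is energy (J), $m$ is mass (kg) and $c$ is the speed of light (m\,s$^{-1}$).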
2. Description of the experiment and 3. Results
These sections are very flexible and tend to cause the most trouble for students in years
0,1 and 2.
There should be descriptions of the main features of the equipment and general
descriptions of how it was set up and used. These should be written in paragraph rather
than point form, should not be in the form of lists and should not be an instruction set for
the experiment. Greater detail should be included where non-standard/unfamiliar
equipment has been used, where subjective interpretations or procedures were employed
or where significant or systematic errors or uncertainties may have occurred.
If only one experiment was performed the logical flow of the report is clear. However, if
the experiment had two or more parts then things can get complicated. Many students fall
into the trap of separating important procedural information from results: e.g. presenting procedure 1, procedure 2, results 1 and then results 2, etc. Reports using this format are very difficult to read.
Much better is: procedure 1, results 1, procedure 2, results 2 etc. A question to consider
then is how much common experimental information can be placed upfront before getting
deeply into the experiments?
Large amounts of data are usually best presented in either tabular or graphical form; choose the most appropriate (but usually not both forms). Diagrams and graphs should be labelled Figure 1, Figure 2, etc. underneath the figure (see example above), tables as Table 1, Table 2, etc. above the table (see example below), and all should have an explanatory title.
Explain how the original data were analyzed, for example indicate whether a value is the
average of a number of measurements and/or refer (by number) to the mathematical
equations used (see notes below). However, the actual mathematical working should not
be included. Graphs should show the best fit straight line (but not the error fits) if
applicable and numerical values should always be quoted with their associated errors.
Again, do not show the mathematical working used to obtain errors.
4. Discussion
The discussion section is very important in that it both brings together the previous
sections and is the point at which students can demonstrate “critical awareness” through
interpretation of the meaning of the previously described results.
Other items that might be discussed are: consistency of readings, accuracy, limitations of
apparatus or measurements, suggestions for improvements of apparatus, comparison of
results obtained by different methods, comparison with theoretical behaviour or accepted
values, unexpected behaviour, future work. However it is clear that some of these are
experimental considerations that could equally well be placed in the previous sections in
the case of a complicated/multi-experiment report.
5. Conclusions
Reports should end with a conclusions section. This should summarize the main results and findings.
6. References
References should be numbered and placed in the correct order in the text (i.e. the Vancouver system). They can be denoted by a superscript¹, in square brackets [1], or by other (logical) systems.
The procedure can be stated in words in the following way:
• At the point in the report at which it is necessary to make the reference, insert a number in square brackets, e.g. [1]; the numbers should start with [1] and be in the order in which they appear in the report.
• At the end of the report, in the section headed “References”, the full reference is given as follows:
In the case of a book:
Author list, title, publisher, place published, year and if relevant, page number.
e.g. [1] H.D. Young, R.A. Freedman, University Physics, Pearson, San Francisco,
2004.
In the case of a journal paper:
Author list, title of article, journal title, vol no., page no.s, year.
e.g. [2] M.S. Bigelow, N.N. Lepeshkin & R.W. Boyd, “Ultra-slow and superluminal
light propagation in solids at room temperature”, Journal of Physics: Condensed
Matter, 16, pp.1321-1340, 2004.
In the case of a webpage (note: use carefully as information is sometimes incorrect):
Title, institution responsible, web address, date accessed.
e.g. [3] “How Hearing Works”, HowStuffWorks inc.,
http://science.howstuffworks.com/hearing.htm, accessed 13th July 2005
Different publications are likely to insist on one particular system (e.g. Vancouver as done
here or Harvard – authors name and year of publication in text). Lecturing staff may
express a preference.
Appendices
This section is not compulsory but can be used to provide information that does not fit into, or is not vital to, the report but that the author still wants or needs to present (possibly as evidence of work carried out). The main text should reference the appendix, but it should not be necessary for the reader to read the appendix to understand the report.
Examples of material included in appendices are: long, non-standard derivations, computer code, the author's detailed designs for apparatus, results not included in the report, and risk assessments (if required). The appendix should include sufficient explanation to make sense of this extra information.
Appendices are not usually necessary for year 0,1 and 2 reports but are more common in
years 3 and 4 because of the desire to demonstrate project work.
4.2 General advice
• The report should be written in your own words, i.e. do not plagiarize other people's work (including laboratory books, other students' reports, the web or textbooks).
• Apart from the abstract and conclusions there should be little repetition in reports.
• The past tense is most appropriate and the most commonly used.
• The report should be impersonal (avoid “I”, “we”, “you”, etc.).
• A well-labelled diagram can be more informative than several paragraphs of prose.
• All diagrams, pictures, graphs and figures should be labelled figure 1, figure 2, etc. in the order they appear and should have a descriptive figure caption.
• Tables should be labelled as table 1, table 2, etc. in the order they appear and have a descriptive table caption.
• Readers will naturally work through the text of the report. This text should therefore refer to and explain figures, tables, equations, etc. when appropriate. For example, “Figure x shows…”.
• Related to the last point, figures and tables should appear at an appropriate place in the text and be of an appropriate size. The electronic generation of reports means that there should be no need for full-page hand-drawn graphs (although these are still allowed at Year 0 level).
• It is not necessary to include a risk assessment with your final report; the purpose of that was to ensure your safety when you performed the experiment. However, it may be required as part of longer reports in the third or fourth years, in which case it should be presented in an appendix as proof of its existence.
• Pages should be numbered and longer reports (3rd and 4th year project reports) should have a contents page.
4.3 Differentiation between years
1. Style
In essence very little change of style is expected through the academic years. The aim is to instil the scientific style of writing from the beginning. Such changes as do occur reflect the changing content of the report and the audience (reader).
2. Length of reports
Typical report lengths are shown in table 1 for different student years.
Table 1. Typical lengths of reports (pages assumed to be typed and to include diagrams and tables).
Student Year    | Typical word length
0               | 1500-2000
1               | 2000-3000
2               | 2000-3000
3 (interim)     | ~3000
3 (final)       | up to 6000
4 (interim)     | ~3000
4 (final)       | up to 6000
3. Scientific content
• Experiments in years 0 and 1 are highly prescriptive with well-defined aims. In year 2 some of the experiments are likely to allow genuine student enquiry. In years 3 and 4 the two-semester projects are open ended, student led and with undetermined outcomes. At the same time the techniques will likely become more sophisticated, the physics more advanced (and distinct from taught modules) and the results more numerous.
• Early-year reports will inevitably be heavily influenced by the laboratory books provided. Third and fourth year reports will have no such guidance to fall back on, and 2nd year reports sit somewhere in between.
• Early reports may use laboratory books and textbooks as reference sources, whereas 3rd and 4th year reports should make increasingly extensive reference to research papers.
• Since longer reports are expected in the 3rd and 4th years, the style is perhaps less similar to that of scientific papers and closer to that of a Masters or Ph.D. thesis. Ultimately, though, it remains “scientific”.
DIARY (LAB BOOK) CHECKLIST (also see page 6)
Date
Experiment Title and Number
Risk Analysis
Brief Introduction
Brief description of what you did and how you did it
Results (indicating errors in readings)
Graphs (where applicable)
Error calculations
Final statement of results with errors
Discussion/Conclusion (including a comparison with accepted results if
applicable)
FORMAL REPORT CHECKLIST (also see page 8)
Date
Experiment Title and Number
Abstract
Introduction
Method
Results: Use graphs – and don’t forget to describe them.
Indication of how errors were determined
Final results with errors
Discussion
Conclusion (including a comparison with accepted results if applicable)
Use Appendices if necessary
A risk assessment is unnecessary.