Autumn2011manual - School of Physics and Astronomy

SCHOOL OF PHYSICS AND
ASTRONOMY
FIRST YEAR LABORATORY
PX 1123
Introductory Practical Physics I
Academic Year 2011 - 2012
If found please return to:
Email:
Welcome to the first (Autumn) semester of the 1st year laboratory, module
PX1123 Introductory Practical Physics I. This module will be followed in
the Spring semester by PX1223 Introductory Practical Physics II. This is the
manual for PX1123 only. You will need to bring it with you to every
laboratory session, as you will find in it all the relevant information you need for
the laboratory classes. It is essential that you read carefully through the
manual as it contains: the instructions that you will need to follow in order to
undertake the individual experiments; logistical information; tips on how to
keep your laboratory diary and how to write up your end-of-term reports;
background notes on fundamental topics with which you need to be familiar;
and health & safety issues that relate to the experiments themselves. You are
expected to have pre-read each relevant section prior to coming to your
weekly laboratory session.
This manual is divided into 3 sections, described in more detail overleaf, and
should be your first port of call for any information about the laboratory
work. If you cannot find the information that you are looking for, please ask
any member of the teaching team - your Lab Supervisor, the demonstrators
or the module organizer (Dr. C. Tucker, room N1.15).
Lab Supervisor:
Contact email:
Demonstrators:
CONTENTS:

I:    Introduction and logistics of the 1st Year laboratory             3
         Organisation and administration of the laboratory              3
         Recording experimental results in your lab notebook            6
         Writing-up full reports of experiments                         8
         Safety in the Laboratory: Risk Assessment                     11
         Code of Practice                                              12

II:   Experiments                                                      13
         Timetable and list of experiments                             13
         Check list for experiments                                    14
         Laboratory notes for experiments                          15 - 74

III:  Background notes                                                 75
      III.1  Background notes to experiments                           75
                Introduction to electronics experiments                75
                How to use a Vernier scale                             82
      III.2  Analysis of experimental data: Errors in Measurement      85
      III.3  Reporting on experimental work                           116
                An example of how to write a long report              121
                Notes on writing long reports                         125
                Checklists                                            129
I:
INTRODUCTION AND LOGISTICS OF THE 1ST YEAR
LABORATORY
ORGANISATION AND ADMINISTRATION OF THE LABORATORY
INTRODUCTION
There are 11 laboratory sessions in the Autumn Semester and 11 in the Spring Semester.
They are designed with several objectives.
1. To demonstrate theoretical ideas in physics, which you will encounter in your
lecture courses.
2. To provide familiarity and build confidence with a range of apparatus.
3. To provide training in how to perform experiments and teach you the techniques
of scientific measurement.
4. To give you practice in recording your observations and communicating your
findings to others.
The majority of the work you will do in the laboratory will be experimental, and will be
performed individually. However, there will be 1 or 2 sessions designed to give you
practice in experimental technique, the handling of errors, and a small number of group
experiments.
ATTENDANCE
Class Times. Labs run from 13:30 to 17:30 on Monday, Tuesday and Thursday
afternoons. Students will be assigned one laboratory afternoon.
Attendance is compulsory; absence requires a self-certificate or medical certificate.
Registration. Attendance will be recorded. Students are expected to sign out of the
laboratory if leaving before the end of the session.
GEOGRAPHY AND MANNING OF THE LABORATORY
The main laboratory suite consists of room N1.34. In addition, there are two dark rooms
which are used for the optics experiments and for the experiment using gases. The far end
of the laboratory is set aside for tea-time refreshments.
The laboratory is maintained by a technician, Mr. Nic Tripp, from whom you can buy your
laboratory diary.
ORGANISATION AND SUPERVISION OF PRACTICAL WORK
The lecturer in charge of the teaching of your laboratory is the Lab Supervisor. In
addition there will be 3 demonstrators who, between them, are familiar with all of the
experiments you undertake. These people are there to help you, and answer any
questions associated with your experiment. In addition they will assess, mark and provide
feedback on your work. Use them!
All observations made during an experiment should be entered in your laboratory diary
(available from Mr. Nic Tripp at the price of £1.80). Each week you will be allocated an
experiment and you will normally be expected to complete this, performing appropriate
calculations, drawing graphs etc. by 17:30hrs of that day. You will then be given until
16:00 hrs the following day to complete any analysis and draw conclusions on your work,
ready for handing in. The hand in deadline of 16:00 of the day following your
laboratory session is hard and fast! If you have extenuating circumstances as to why
you cannot attend a laboratory session or cannot make the hand-in deadline, you are to
inform the Lab Supervisor prior to this and make alternative arrangements. Further
details on the handing in of laboratory diaries will be given at the beginning of the session
and are laid out below.
It is essential that you put aside about ½ hour before you come to the practical
class in order to read through some of the experimental notes associated with the practical
that you will be undertaking. It is anticipated that you should read all of the introductory
sections up to the experimental part itself. This will enable you to gain familiarity with
the physics behind the experiment – you should not worry so much about the new lectured
material but refresh your understanding from A-level and school studies. Make sure you
are comfortable with what is expected of you so you can plan your experiment, which will save you
time on the day. Also you must think about the safety considerations that are required for
your experimental work and write a risk assessment, which will be signed off prior to
commencing any practical work.
ASSESSMENT OF PRACTICAL WORK
The responsibility for handing your work in at the correct time is yours, and failure to do
so will usually mean that your work will be marked to provide you with feedback, but that
a mark of zero will be recorded. Exceptions to this rule will normally be made only for
illness for which you have notified the School. If you do think you have another valid
reason for missing the hand-in time, or for not attending the laboratory class in the first
place, you should discuss this with the Lab Supervisor running the laboratory prior to
your absence.
In addition to your weekly lab-diary assessment, in each of the two semesters you will be
required to write up one experiment in the form of a formal report. This will be allocated
by your Lab Supervisor towards the end of each semester. Formal reports should NOT be
written in your lab diary but word-processed on sheets of paper that are either bound or
stapled. Marked reports will be returned to you, with feedback, and you should keep these as
they should provide a basis for the reports you will have to write in subsequent years.
Each experiment and each report will be marked out of 20 in accordance with the scheme:
16+ = very good; 12+ = good performance which could be improved; 10+ = competent
performance but with some key omissions; 8+ = bare pass; 7- = fail. Your final module
mark (see Undergraduate Handbook) will be made up as follows:
Formal report               33.3%
Experimental lab diaries    66.7%
While the experimental notes of all experiments and reports will be assessed and
individual marks logged, your total marks will normally be obtained by expressing the
total marks you obtain during the session as a percentage of the total which you could
have obtained during the session. Exceptions will normally be made in the cases of
absence due to illness for which a medical certificate has been supplied; absence for an
unavoidable reason of which you notified a member of staff; difficulty with an experiment
for reasons which were not your responsibility and which you discussed with the
demonstrator.
REFRESHMENT ARRANGEMENTS
Tea, coffee, squash and chocolate will be available in the laboratory about halfway
through the afternoon and provide a mid-point break.
Tea and coffee: Payment for these must be made at the beginning of the semester and will
cover the whole semester. Prices will be announced at the first laboratory class.
Snacks/chocolate: Payment individually at the time of purchase.
RECORDING EXPERIMENTS IN YOUR LAB. BOOK / DIARY
AIM: to RECORD the results of your work
The aim of keeping a good laboratory diary is to record your work in a manner clear
enough that you or a colleague could understand and attempt to repeat the experiment. It
is a record of your observations, measurements and understanding of the experiment. It is
not a neat essay containing the background theory or paragraphs copied from other
sources, but a real-time account of your experimental method and findings.
When assessing your laboratory write-up, the demonstrator is interested in your
measurements, observations, results and conclusions. You should aim to present to
him/her a set of measurements and results taken and recorded in such a way that they can
understand easily what each number means, what results you have derived, and what
conclusions you have drawn. You should also make notes of any difficulties experienced
and sources of uncertainty or error. Ideally the record should be such that you could
yourself reconstruct the course of the experiment later - perhaps 20 years later - without
difficulty. The measurements presented to the demonstrator should be those taken during
the performance of the experiment; they should not be rewritten before presentation.
A full written report of the background physics, purpose and extent of the experiment is
not required with the experimental results; that task is performed once a semester when
you are asked to produce a full report for a single experiment only.
A successful and quality record of experimental work is within the reach of all students,
providing:
1) all the measurements needed, or which you think might be needed, are
made at the time the experiment is performed;
• Before you begin the collection of data, decide what you are going to do and how
  you are going to do it. To achieve this you need to have thought about the
  experiment before you begin it, to try out the apparatus and perhaps to have made
  some trial measurements.
2) the measurements are recorded clearly and completely;
• A sketch of the apparatus, or of parts of the apparatus, labelled to correspond with
  the measurements, often helps, and serves as a very useful reminder of the
  experimental arrangement. You will find that the equipment you use has unique
  identification numbers; make a note of these in your lab diary, as these will allow
  the teaching team to keep track of acceptable results and any systematic errors.

• Make brief, succinct notes of what you have done, rather than long and detailed
  prose. Mention any specific problems and how you have overcome them. Mention
  good experimental practice.

• Record measurements systematically and concisely and, whenever possible,
  tabulate them.
• Always record first the actual measurements made and only then derive the values
  of other quantities from them, e.g. if you are measuring the distance between two
  points, record first the position of the two points against a scale, then subtract
  the readings and also record the result. This minimizes mistakes and allows you to
  check results at a later date.

• Record units, and remember that a statement of precision is an essential part of
  every measurement. A typical complete observation is d = 8.69 ± 0.01 mm.

• Do not clutter the layout of measurements with arithmetic calculations - do these
  on a separate page or part of the page.

• If during the experiment you make a mistake, neatly cross out the incorrect values
  and repeat them. NEVER rip out a page of a lab diary.

• Whenever possible, plot graphs as the measurements are made – outlier/rogue
  data points can be identified readily, enabling repeat measurements to be made as
  required. Any trends in the data can also be identified – e.g. peaks, discontinuities
  etc. – in time for the experimenter to take more frequent/closely sampled readings
  to confirm the observed behaviour.

• Label the axes of graphs. Choose scales for the axes which make plotting easy
  and, if possible, which allow the experimental precisions to be recorded sensibly.
  Axes do not have to start at the origin; “zoom in” sensibly to best display the
  results.
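The advice above about recording raw readings first and only then deriving quantities can be sketched in a few lines of Python. The readings and the 0.5 mm reading precision below are invented purely for illustration:

```python
import math

# Hypothetical scale readings (mm) for the two end points, each read
# three times; the 0.5 mm figure is an assumed single-reading precision.
left_readings = [102.0, 102.5, 101.5]
right_readings = [348.5, 349.0, 348.5]
reading_error = 0.5  # mm, assumed

def mean(values):
    return sum(values) / len(values)

# Record the raw positions first, then derive the distance from them.
x_left = mean(left_readings)
x_right = mean(right_readings)
distance = x_right - x_left

# For the difference of two readings the errors combine in quadrature,
# giving sqrt(2) times the single-reading error.
distance_error = math.sqrt(2) * reading_error

print(f"d = {distance:.1f} +/- {distance_error:.1f} mm")
```

Keeping the raw readings alongside the derived distance means a mistake in the subtraction can be spotted and corrected later.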
3) the results and conclusions are presented clearly. These in their turn will
be achieved by attention to the following points.
• Present the results with a statement of precision and units. Always check that the
  results that you have are sensible – are they “in the ball park” that you might
  expect?

• Quote the generally accepted value of the quantity you have measured, for
  example from Kaye and Laby's tables, and try to account for any difference that
  you see.

• Comment briefly on the experiment and results, and discuss how you might
  extend and improve the experiment. This is important, as it demonstrates that
  you have both thought about and understood well what you have been doing.
WRITING UP FULL REPORTS OF EXPERIMENTS
AIM: to PRESENT the results of your work
The person marking your full report is interested in your description of the experiment.
They are not concerned with the actual measurements or quality of the results but are
concerned with the way these are presented in the report. You should aim to present a
clear, concise, report of the experiment you have performed, at a level able to be
understood by a fellow 1st Year student, who does not have expert knowledge of your
experiment. An example of a full report and further advice are given in section III. Very
importantly, your report must be original and not a copy of any part of the notes
provided with the experiment. It should be a report of what you did; not of what you
would like to have done or of what you think you should have done. That said, credit will
be given for discussions on how one might extend and improve an experiment, and what
might be done if the experiment were to be repeated.
It is normal practice in writing scientific papers to omit all details of calculations, and you
should also do this. Providing your report includes a statement of the basic theory which
you used, together with a record of your experimental observations (summarized if
appropriate) and the parameters which you obtain as a result of your calculations, it will
be possible for anyone who so wishes to check the calculations you perform.
The principles of report writing are simple: give the report a sensible structure; write in
proper, concise English; use the past tense passive voice, for example "... the
potentiometer was balanced ...". The following structure is suggested. It is not mandatory,
but you are strongly recommended to adopt it.
1) Follow the title with an abstract. Head this section “Abstract".
• An abstract is a very brief (~50-100 words) synopsis of the experiment
  performed. An example is "The speed of sound in a gas has been measured using the
  standing wave cavity method for one gas (air) for a range of temperatures near room
  temperature and for gases of different molecular weights (air, argon, carbon dioxide)
  at room temperature. The speed in air near room temperature was found to be
  proportional to T½, where T is the gas temperature in kelvin, and the ratio Cp/Cv for
  air, argon and carbon dioxide at room temperature was found to be 1.402 ± 0.003,
  1.668 ± 0.003 and 1.300 ± 0.003 respectively".
2) Follow the abstract, on a separate page, with an introduction to the
experiment. Head this section “Introduction”.
• Here, you should state the purpose of the experiment, and outline the
  principles upon which it was based. This section is often the most difficult to write.
  On many occasions it is convenient to draft all the rest of the report and write this last.
  Remember that the reader will, in general, not be as familiar with the subject matter as
  the author. Start with a brief general survey of the particular area of physics under
  investigation before plunging into details of the work performed.
• Important formulae and equations to be used later in the report can often, with
  advantage, be mentioned in the introduction as, by showing what quantities are to be
  measured, their presence helps in the understanding of the experiment. Formulae or
  equations should only be quoted at this stage. Derivations of formulae or equations
  should be given either by references to sources, for example text books, or in full in
  appendices. References should be given in the way described below.
3) Follow this with a description of the experimental procedure. Head this
“Experimental Procedure”.

• Write the experimental procedure as concisely as possible: give only the
  essentials, but do mention any difficulties you experienced and how they were
  overcome. Division of the description of the experimental procedure into sections,
  each one dealing with the measurement of one quantity, is often convenient. If the
  introduction to the experiment has been well designed this division will occur
  naturally. Relegate any matters which can be treated separately, such as proofs of
  formulae, to numbered appendices. Give references in the way described below.

• All diagrams, graphs or figures should be labelled as figures. Give each a
  consecutive number (as Figure 1 etc.), a brief title and, where possible, a brief caption.
  Give each group or table of measurements a number (as Table 1 etc.) and a brief title,
  and use the numbers for reference from the text, e.g. “the data in Figure 1 exhibits a
  straight….”
4) Follow this section with the results of the experiment, discussion of them and
comments. Head this “Results and discussion”.

• The result of the experiment can be stated quite briefly as "The value of X
  obtained was N ± E(N) UNITS". For example "The viscosity of water at 20°C was
  found to be (1.002 ± 0.001) × 10⁻³ N m⁻² s".

• Discussion of the result, or of measurements, method etc., can be cross-referenced
  by quoting the figure, table or report section numbers.
5) Follow this section with your conclusions. Head this “Conclusions”.

• The conclusions should restate, concisely, what you have achieved, including the
  results and associated uncertainties. Point the way forward for how you believe the
  experiment could be improved.
6) Follow this section with references. Head this “References” or “Bibliography”.

• The last section of the main body of the report is the bibliography, or list of
  references. It is essential to provide references. There are two main styles used (along
  with many subtle variations) to detail references. In the Harvard method, the name
  of the first author along with the year of publication is inserted in the text, with full
  details given, in alphabetical order, at the end of the document. The second style,
  favoured here, known as the Vancouver approach, is slightly different. At the point
  in your report at which you wish to make the reference, insert a number in square
  brackets, e.g. [1]. Numbers should start with [1] and be in the order in which they
  appear in the report. References should be given in the reference or bibliography
  section, and should be listed in the order in which they appear in the report.
Where referencing a book, give the author list, title, publisher, place published, year
and, if relevant, page number, e.g. [1] H.D. Young, R.A. Freedman, University Physics,
Pearson, San Francisco, 2004.
In the case of a journal paper, give the author list, title of article, journal title, volume
number, page numbers and year, e.g. [2] M.S. Bigelow, N.N. Lepeshkin & R.W. Boyd, “Ultra-slow
and superluminal light propagation in solids at room temperature”, Journal of Physics:
Condensed Matter, 16, pp.1321-1340, 2004.
In the case of a webpage (note: use webpages carefully as information is sometimes
incorrect), give title, institution responsible, web address, and very importantly the
date on which the website was accessed, e.g. [3] “How Hearing Works”,
HowStuffWorks inc., http://science.howstuffworks.com/hearing.htm, accessed 13th
July 2008.
7) Follow this section with any appendices. Head this “Appendices”.

• Use the appendices to treat matters of detail which are not essential to the main
  part of the report, but that help to clarify or expand on points made. Give each
  appendix a different number to help cross-referencing from other parts of the report,
  and note that to be useful appendices must be mentioned in the main body of the
  report.
Health Warning: In subsequent years it may be necessary to develop this standard
report layout to deal with complex experiments or series of experiments.
SAFETY IN THE LABORATORY
The 1974 Health and Safety at Work Act places, on all workers, the legal obligation to
guard themselves and others against hazards arising from their work. This act applies to
students and teachers in university laboratories.
Maintaining a safe working environment in the laboratory is paramount. The following
points supplement those contained in "School of Physics Safety Regulations for
Undergraduates", a copy of which was given to you when you registered in the School.
1. It is your responsibility to ensure that at all times you work in such a way as to
   ensure your own safety and that of other persons in the laboratory.

2. The treatment of serious injuries must take precedence over all other action,
   including the containment or cleaning up of radioactive contamination.

3. None of the experiments in the laboratory is dangerous provided that normal
   practices are followed. However, particular care should be exercised in those
   experiments involving cryogenic fluids, lasers, gases and radioactive materials
   (Experiments 4, 6 and 9). Relevant safety information will be found in the scripts
   for these experiments.

4. If you are uncertain about any safety matter for any of the experiments, you
   MUST consult a demonstrator.

5. All accidents must be reported to a laboratory supervisor or technician who will
   take the necessary action.

6. After an accident, a report form, which can be obtained from the technician, must
   be completed and given to the laboratory supervisor.
UNDERGRADUATE EXPERIMENT RISK ASSESSMENT
The experiments you will perform in the first year Physics Laboratory are relatively free
of danger to health and safety. Nevertheless, an important element of your training in
laboratory work will be to introduce you to the need to assess carefully any risks
associated with a given experimental situation. As an aid towards this end, a sheet entitled
Code of Practice for Teaching Laboratories follows. At the commencement of each
experiment, you are asked to use the material on this sheet to arrive at a risk
assessment of the experiment you are about to perform. A statement (which may, in
some cases, be brief) of any risk(s) you perceive in the work should be recorded as an
additional item in your laboratory diary account of the experiment.
SCHOOL OF PHYSICS & ASTRONOMY: CODE OF PRACTICE FOR
TEACHING LABORATORIES
Electricity
Supplies to circuits using voltages greater than 25 V ac or 60 V dc
should be "hardwired" via plugs and sockets. Supplies of 25 V ac, 60 V
dc or less should be connected using 4 mm plugs and insulated leads,
the only exceptions being "breadboards". It is forbidden to open 13 A
plugs.
Chemicals
Before handling chemicals, the relevant Chemical Risk Assessment
forms must be obtained and read carefully.
Radioactive Sources
Gloves must be worn and tweezers used when handling.
Lasers
Never look directly into a laser beam. Experiments should be
arranged to minimise reflected beams.
X-Rays
The X-ray generators in the teaching laboratories are inherently safe,
but the safety procedures given must be strictly followed.
Waste Disposal
"Sharps", i.e. hypodermic needles, broken glass and sharp metal
pieces should be put in the yellow containers provided. Photographic
chemicals may be washed down the drain with plenty of water. Other
chemicals should be given to the Technician or Demonstrator for
disposal.
Liquid Nitrogen
Great care should be taken when using it, as contact with skin can cause
"cold burns". Goggles and gloves must be worn when pouring.
Natural Gas
Only approved apparatus can be connected to the gas supplies and
these should be turned off when not in use.
Compressed Air
This can be dangerous if mis-handled and should be used with care.
Any flexible tubing connected must be secured to stop it moving
when the supply is turned on.
Gas Cylinders
Must be properly secured by clamping to a bench or placed in
cylinder stands. The correct regulators must be fitted.
Machines
When using machines, eg, lathe and drill, eye protection must be
worn and guards in place. Long hair and loose clothing especially ties
should be secured so that they cannot be caught in rotating parts.
Machines can only be used under supervision.
Hand Tools
Care should be taken when using tools and hands kept away from the
cutting edges.
Hot Plates
Can cause burns. The temperature should be checked before
handling.
Ultrasonic Baths
Avoid direct bodily contact with the bath when in operation.
Vacuum Equipment
If glassware is evacuated, implosion guarding must be used in
order to contain the glass in the event of an accident.
II:
EXPERIMENTS
TIMETABLE AND LIST OF EXPERIMENTS

Autumn Semester (PX1123)

Week         Experiment   Title                                                     Page

0            1            Induction including Exercise 1. Handouts and talks.
                          Straight line graphs, including log graphs, errors
                          and how to combine them.                                    15

1            1a           Group Experiment: Young’s Modulus                           17
             1b           Group Experiment: Coefficients of Friction                  19

2 – 6        2            Statistics of Experimental Data (Gaussian Distribution)     23
(see list)   3            Computer Data Logging and RC Circuits                       28
             4            Variation of Resistance with Temperature                    35
             5            The Cathode Ray Oscilloscope and Circuit Construction       39
             6            Optical Diffraction                                         45

7 – 11       7            X-rays                                                      49
(see list)   8            Large Scale Structure of The Universe                       55
             9            Propagation of Sound in Gases                               60
             10           AC-to-DC Conversion Using Diode Circuits                    63
             11           Microwaves                                                  68
CHECKLIST

• Read through the notes on the experiment that you will be doing BEFORE coming to
  the practical class. You will be expected to have read all the introductory notes and
  refreshed any knowledge of the subject taught in school.

• Read carefully through any additional sections that might be useful in Section III – e.g.
  use of electronic equipment, statistics – and also the diary checklist given at the end of
  this manual.

• Think about the safety considerations that might be associated with the practical,
  having read through the lab notes. This can then be discussed with your demonstrator
  prior to writing your risk assessment.

• On turning up to the lab, listen carefully to any briefing that is given by your
  demonstrator: he/she will give you tips on how to do the experiment as well as
  detailing any safety considerations relevant to your experiment.

• Write up the safety considerations.

• Check that the size of any quantities that you have been asked to derive/calculate is
  sensible - i.e. are they the right order of magnitude?

• Read through your account of your experiment before handing it in, checking that you
  have included errors/error calculations, that you are quoting numbers to the correct
  number of significant figures and that you have included units.

• Staple any loose paper (e.g. graphs, computer print-outs, questionnaires etc.) into your
  lab book.
Exercise 1: Interpreting data

1. A series of experimental results is given below. In each case the mean value of the
   experimentally determined variable is given, together with the error.

   (a) R = 0.732 Ω            E(R) = 0.003 Ω
   (b) C = 9.993 μF           E(C) = 0.018 μF
   (c) T½ = 2.354 min         E(T½) = 11 sec
   (d) R = 2.436 MΩ           E(R) = 23 Ω
   (e) ωc = 11.562935 kHz     E(ωc) = 3.1 Hz
   (f) d = 62165.551 m        E(d) = 26 cm
   (g) f = 20 cm              E(f) = 0.03 cm
   For each quantity, write down:

   • the best final statement of the result of each experimental determination;
   • the percentage error in each mean value.

2. In the following questions the values of Z1, Z2 . . . are the given functions of the
   independently measured quantities A, B and C. Calculate the values of, and errors
   in, Z1, Z2 etc. from the given values of, and errors in, A, B and C.
   (a) Z1 = C/A         A = 100    E(A) = 0.1
   (b) Z2 = A − B       B = 0.1    E(B) = 0.005
   (c) Z3 = 2AB²/C      C = 50     E(C) = 2
   (d) Z4 = B ln C
3. The variation of resistance, R, of a length of copper wire with temperature, T, is given
   by:

   R = Ro(1 + αT)

   where Ro and α are constants.

   Experimental data from a particular investigation (similar to Experiment 4) are given
   in Table 1.3.
   T (K):   300    320    340    360    380    400    420    440    460    480    500    520
   R (Ω):   2415   2490   2585   2625   2710   2755   2820   2910   3050   3030   3115   3155

   Table 1.3: Data for question 3
   a) Which are the dependent and independent variables?
   b) Plot a graph to show the variation of R with T.
   c) Determine Ro and estimate the likely error.
   d) Determine α and estimate the likely error.
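Extracting the constants from data like these rests on the fact that R = Ro(1 + αT) = Ro + (Roα)T is a straight line in T, so its intercept gives Ro and its slope divided by the intercept gives α. A minimal sketch in Python, using invented numbers rather than the Table 1.3 data:

```python
import numpy as np

# Synthetic data obeying R = R0*(1 + alpha*T); these values are
# invented for illustration and are NOT the Table 1.3 measurements.
R0_true, alpha_true = 2000.0, 0.0008  # ohm, 1/K
T = np.array([300.0, 350.0, 400.0, 450.0, 500.0])
R = R0_true * (1 + alpha_true * T)

# R = R0 + (R0*alpha)*T is linear in T: fit slope and intercept.
slope, intercept = np.polyfit(T, R, 1)

R0_fit = intercept          # intercept at T = 0 gives R0
alpha_fit = slope / intercept  # slope = R0*alpha, so alpha = slope/R0
print(f"R0 = {R0_fit:.1f} ohm, alpha = {alpha_fit:.2e} /K")
```

In the lab the same slope and intercept would be read off a hand-drawn graph, with the errors estimated from lines of maximum and minimum slope.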
4. The activity, A, of a radioactive source is given by

   A = Ao e^(−λt)

   where Ao is the activity at time t = 0 and λ is the disintegration constant. Data
   obtained by a 1st year student undertaking Experiment 6 are given in Table 1.4.
   t (mins):              0.5    2.5    4.5    6.5    8.5    10.5
   A (counts in 10 sec):  5768   3391   1963   1231   718    415

   Table 1.4: Data for question 4
   a) Plot a graph on linear paper showing the variation of A with t.
   b) Plot a suitable graph on linear graph paper to determine λ and Ao.
   c) Plot a suitable graph on semi-log paper to determine λ and Ao.
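The semi-log trick works because taking natural logs of A = Ao e^(−λt) gives ln A = ln Ao − λt: a plot of ln A against t is a straight line with slope −λ and intercept ln Ao. A sketch with invented numbers (not the Table 1.4 data):

```python
import numpy as np

# Invented decay data obeying A = A0 * exp(-lam * t); not Table 1.4.
A0_true, lam_true = 6000.0, 0.27  # counts, 1/min
t = np.array([0.5, 2.5, 4.5, 6.5, 8.5, 10.5])
A = A0_true * np.exp(-lam_true * t)

# ln A = ln A0 - lam*t, so a straight-line fit of ln A vs t
# gives -lam as the slope and ln A0 as the intercept.
slope, intercept = np.polyfit(t, np.log(A), 1)
lam_fit = -slope
A0_fit = np.exp(intercept)
print(f"lambda = {lam_fit:.3f} /min, A0 = {A0_fit:.0f} counts")
```

On semi-log paper the logarithm is taken by the paper itself, so the straight line can be drawn and its slope read off directly.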
5. In one 1st Year experiment, measurements are made of the velocity of sound in a gas,
   c. This can be related to γ, the ratio of the principal specific heats of the gas, by

   γ = c²m / (kT),

   where m is the mass of one molecule of gas, k is the Boltzmann constant and T is the
   absolute temperature. Determine a value for γ from the following data, which were
   obtained from an experiment with nitrogen:

   c = (344 ± 20) m s⁻¹;  T = (292 ± 1) K
Experiment 1a: Measuring Young’s Modulus
Note: This experiment is carried out in pairs.
Outline
Most students will be familiar with the concept of Young's Modulus from A level studies.
It is an extremely important characteristic of a material: the constant of proportionality in
Hooke's law, namely the ratio of stress to strain (a measure of resistance to elastic
deformation). You will design a basic experiment to verify Hooke’s law and determine
Young’s modulus for a bar of wood.
Experimental skills
• Making and recording basic measurements of lengths, distances (and their
  uncertainties/errors).
• Making use of repeated measurements to reduce error.
• Careful experimental observation and recording of results.
Wider Applications
• Young's modulus, E, is a material property that describes its stiffness and is therefore
  one of the most important properties in engineering design.
• Young's modulus is not always the same in all orientations of a material. Most metals
  and ceramics are isotropic, and their mechanical properties are the same in all
  orientations. However, anisotropy can be seen in some treated metals, many composite
  materials, wood and reinforced concrete. Engineers can use this directional
  phenomenon to their advantage in creating structures.
• Young's modulus is the most common elastic modulus used, but there are other elastic
  moduli measured too, such as the bulk modulus and the shear modulus.
1. Introduction
The depression, d, produced at the end of a horizontal, weightless rule by the application
of a vertical force, F, as represented in Figure 1.1, is given by:

d = FL³ / (3EIa)     [1]

where L is the projecting length, E is Young's modulus for the material of the rule and Ia
is the geometrical moment of inertia of the cross-section.
For the rectangularly-sectioned rule provided, which has width a and thickness b,
Ia 
ab 3
12
[2]
Figure 1.1 : Representation of the deflection of horizontal rule by force, F
2. Experiment
 Clamp the metre rule to the bench so that part of its length projects horizontally
beyond the bench edge.
 Make suitable measurements to explore the validity of equation [1] and to measure E
for wood.
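One possible analysis, sketched below with purely illustrative numbers (the force, rule dimensions and depression values are hypothetical, not measurements): since equation [1] predicts d = [F/(3EIa)]L³, a plot of d against L³ should be a straight line whose slope gives E.

```python
import numpy as np

# Hypothetical values - none of these are real measurements
F = 1.0 * 9.81                 # applied force from a 1 kg load / N
a, b = 25.4e-3, 6.0e-3         # rule width and thickness / m
Ia = a * b**3 / 12             # geometrical moment of inertia, eq. [2]

L = np.array([0.30, 0.40, 0.50, 0.60, 0.70])   # projecting lengths / m
E_true = 10e9                                  # Pa, a typical order for wood
d = F * L**3 / (3 * E_true * Ia)               # simulated depressions, eq. [1]

# eq. [1] gives d = [F/(3*E*Ia)] * L^3, so d against L^3 is a straight line
slope, intercept = np.polyfit(L**3, d, 1)
E = F / (3 * Ia * slope)
print(f"E = {E / 1e9:.1f} GPa")
```

With real data the scatter of the points about the fitted line, and the size of the intercept, indicate how well equation [1] is obeyed.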
Reminder: Concluding remarks
Note: This reminder and the advice below are given since this is an early experiment - do
not expect to see such prompts in the future.
 Summarise the main numerical findings (as always with errors), important
observations and what is understood and not understood at this time.
Experiment 1b: Coefficients of Friction
Note: This experiment is carried out in pairs.
Outline
Most students are probably familiar with the mathematics of friction as applied to static
and moving bodies on the flat and on slopes. In this experiment the behaviour of a real (if
a little contrived) system of a short length of dowel travelling down a slope of variable
angle is investigated. Experience indicates that the system can behave unusually,
requiring the experimentalist to take data reproducibly and carefully note down their
observations.
Experimental skills
 Making and recording basic measurements: angles and times (and their errors).
 Making use of trial/survey experiments.
 Careful experimental observation and systematic approach to data taking.
Wider Applications
 Funny thing friction, sometimes you want it, sometimes you don’t; the rotation of
wheels on a car should be as frictionless as possible, but friction between tyres and the
road is absolutely essential.
 The difference between the coefficients of friction in the limiting and kinetic cases leads to
“stick-slip” effects, where systems, once they start moving, move quickly, e.g. in
hydraulic cylinders and earthquakes.
1. Introduction
The motion of a body down a slope is a classic mechanics problem. In elementary texts
two types of system are considered: zero and non-zero friction. The friction between two
surfaces is characterised by a dimensionless constant called the coefficient of friction, μ,
which can often be related to the frictional force FF by
FF = μFN ,                [1]
where FN is the normal or reaction force between the body and the surface. Two types are
considered: limiting (or static) friction (μL) that prevents a static body from beginning to
move; and kinetic friction (μK) that acts on moving bodies. Usually μK is thought to be
slightly lower than μL, but near enough that the two are often taken to be equal in calculations.
This is illustrated in Figure 1.2, for a body initially at rest on a surface and subject to a
driving force that increases with time. The frictional force increases and matches the
driving force until the limiting condition is met, then the body starts to move and the
kinetic friction, which is slightly less than the limiting friction, operates always in the
opposite direction to that of the motion.
[Figure omitted: graph of friction force against time, rising to the limiting value μLFN while there is no motion, then dropping to μKFN once motion begins.]
Figure 1.2. The frictional force acting on a body as the driving force is increased from zero.
1.1 Body on a slope
A body on a slope is an interesting system as there is no need to introduce external forces
in order to observe the effects of friction. In the following discussion, the angle of the
slope to the horizontal is given by θ, the mass by m and the acceleration due to gravity by
g.
[Figure omitted: diagram of a body on a slope of angle θ, showing the weight mg resolved into a component FS = mg sin θ along the slope and mg cos θ perpendicular to it, the reaction force FN = mg cos θ and the frictional force FF.]
Figure 1.3. Forces acting on a body on a slope. The weight of the body can be resolved
perpendicular and parallel to the slope. The perpendicular component is exactly balanced by a
reaction force, FN.
As the angle of the slope increases, the force on the body due to gravity acting down the
slope, FS, increases as

FS = mg sin θ .                [2]

At the same time the reaction force decreases as

FN = mg cos θ .                [3]

This is important because, from equation [1], the reaction force determines the frictional
force.
The critical angle, θC
With no external forces acting, the frictional force always acts up the slope, and a critical
angle, θC, can be defined at which the forces down and up the slope are identical and
beyond which the body starts to move down the slope. At the critical angle

mg sin θC = μL mg cos θC ,

or

tan θC = μL .                [4]

Therefore a simple experiment to find the angle at which the body starts to move reveals μL.
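As a numerical illustration (the angle and its error below are hypothetical, not measured values), μL follows directly from equation [4], and its error from the derivative of tan θ:

```python
import math

theta_c = math.radians(21.0)   # hypothetical critical angle
dtheta  = math.radians(1.0)    # hypothetical error in the angle

mu_L = math.tan(theta_c)                  # eq. [4]
dmu_L = dtheta / math.cos(theta_c)**2     # since d(tan x)/dx = 1/cos^2 x

print(f"mu_L = {mu_L:.2f} +/- {dmu_L:.2f}")
```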
Angles greater than the critical angle
Since in this regime the body is moving, it is the coefficient of kinetic friction that applies.
Now there is an imbalance between the forces and the body accelerates; the acceleration,
a, down the slope is given by:

a = g sin θ − g μK cos θ = g(sin θ − μK cos θ) .                [5]

Since this acceleration is constant (in ideal conditions) the familiar equations of motion
can be used. For example, the time, t, a body starting from rest takes to move down a
slope of length s is given by

s = 0.5at² .                [6]
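Putting equations [5] and [6] together gives a route to μK from a timed run: a = 2s/t², and rearranging [5] gives μK = (g sin θ − a)/(g cos θ). A sketch with hypothetical numbers:

```python
import math

g = 9.81                     # m s^-2
theta = math.radians(25.0)   # hypothetical slope angle
s = 1.20                     # hypothetical slope length / m
t = 1.95                     # hypothetical mean descent time / s

a = 2 * s / t**2                                          # from eq. [6]
mu_K = (g * math.sin(theta) - a) / (g * math.cos(theta))  # rearranged eq. [5]

print(f"a = {a:.2f} m/s^2, mu_K = {mu_K:.2f}")
```

In practice the error in t dominates, which is why Part 1 asks you to characterise your timing reproducibility first.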
2. Experiment
2.1 Apparatus
The simple apparatus used here consists of a channel, a stand to support it, a length of
dowel and a stop watch. The arrangement of the support and channel should be as
follows:
 The support should be placed on the upper bench and the bottom of the channel on the
lower bench.
 The channel should be supported so that it is “L” shaped, with a slight angle so that
the dowel remains close to the upright. (A “V” shaped arrangement should not be
used as it has been found that the dowel becomes easily wedged).
 Running the forks on the support through the holes in the channel ~30 cm from the top
of the channel seems a secure, stable and convenient method.
Note: The maximum angle of the slope permitted in this experiment is 30°.
2.2 Part 1. Survey/trial experiments (including timing errors)
Survey (or trial) experiments are a vital part of performing any new procedure; they are
used to get a feel for the behaviour of the system, to determine the most appropriate
methodology, to understand the important measuring ranges etc. In many first year
experiments, these trials are hidden from the students, in order to make best use of the
available time and apparatus. Nonetheless they will have been carried out by
demonstrators and supervisors in order to generate the lab scripts.
Therefore, this part of the experiment is being used as an opportunity to take students
through the surveying process.
 So, spend ~10 minutes “playing” with the equipment, making a note of your
observations and some measurements if appropriate.
 Pick suitable conditions to perform a study of the reproducibility of “your” timing.
Note that this is not as easy as it sounds since an aim is to be able to later distinguish
between your timing error and real variations within the experiment.
2.3 Part 2. Determine the coefficient of limiting friction, μL.
 Use the experience you have gained to design and perform an experiment to determine
μL.
Your diary entry will need to describe your methodology, how the error was
determined and what you think it corresponds to.
2.4 Part 3. Determine the coefficient of kinetic friction, μK.
 Use the experience you have gained to design and perform experiments to determine
μK, exploring angles between θC and 30°.
 There are no obvious straight-line graphs here; instead it is suggested that a graph of
μK against angle is plotted.
Reminder: Concluding remarks
Note: This reminder and the advice below are given since this is an early experiment - do
not expect to see such prompts in the future.
 Summarise the main numerical findings (as always with errors), important
observations and what is understood and not understood at this time.
Experiment 2: The statistics of experimental data; the Gaussian
distribution.
Outline
The statistical nature of measured data is examined using an experiment in which ball
bearings are randomly deflected as they roll down an incline. Random behaviour is
expected to result in a “Gaussian” distribution, the most common mathematical
distribution in experimental physics. The experiment dwells on the progression from
small to large data sets, the emergence of the well known shape of the distribution and the
implications for data analysis and error estimation (i.e. the relationship to “accuracy and
precision” and “random and systematic errors”).
Experimental skills
 Statistical analysis of data in general.
 Analysis using the Gaussian distribution in particular.
Wider Applications
This experiment illustrates the unseen statistics behind all practical physics:
 When dealing with a small number (say ~12) of data points, as you often do in these
laboratory experiments, it should always be remembered that the measurements
represent “samples” of an underlying data “distribution”.
 The majority of physics experiments result in underlying data distributions that are
Gaussian.
 Other important distributions include Poisson, Lorentzian and Binomial. The
distribution is governed by the underlying physics and/or statistics.
1. Introduction
Virtually all experiments are influenced by statistical considerations and have underlying
distributions of various types. However in most cases either not enough data is collected
or the data is not analysed in such a way as to reveal this fact. Consequently it is entirely
possible to perform crude but quite reasonable data analysis with little understanding of
its context. Clearly the training of physicists should progress them beyond such a
superficial level. This experiment plays a very important role in that training by taking you
through the techniques used when dealing with small, medium and large sets of data.
The experimental set up chosen uses random processes to produce a distribution that
consequently should be Gaussian and is appropriate here since most experiments produce
such distributions. What is rare is the opportunity for students to observe the emergence
of a distribution and consider the effect on data and error analysis.
Ultimately though, always remember that the concern of an experiment is to express a
measurement as “(value +/- error) units”. Statistics is simply the tool by which the
“value” and the “error” are determined. Reminder:
 Systematic errors - the result of a defect either in the apparatus or experimental
procedure leading to a (usually) constant error throughout a set of readings.
 Random errors - the result of a lack of consistency in either the apparatus or the
experimental procedure, leading to a distribution of results.
 Accuracy - determined by how close the measured value is to the true value, in other words
how correct the measurement is. A value can only be accurate if the systematic error
is small.
 Precision - determined by how “exactly” a measurement can be made regardless of its
accuracy. Precision relates directly to the random error - a value can only be precise if
the random error is small (high precision means low random error, low precision
means high random error).
1.1. Simple statistical concepts
In all the experiments a series of values x1, x2, ..., xn is obtained. Often the experimental
values differ, mainly due to the fact that some variable in the experiments has been
changed (usually the aim would then be to plot the data on a straight line graph). In the
discussion and the experiments that follow the measurements will be of nominally the
same value, represent a sample of all the possible measurements and the differences are
due to variations in the systems being measured, the equipment used for measuring, or the
operator.
From such measurements (taking xi as the ith value of x and n as the total number of
measurements) a number of statistical values can be found that are of relevance to the
understanding of the experiment:
Arithmetic mean
μ = (1/n) Σ xi (summing over i = 1 to n)                [1]
The arithmetic mean has a special significance as this represents the best estimate of the
“true value” of the measurement. The error in an experiment can then be understood to
reflect the possible discrepancy between the arithmetic mean and the true value.
Superficially and practically for small n an estimate of (twice) the error might involve:
Data range:  xmax − xmin
Probable error:  the range in which 50% of the values fall
With larger n (a larger sample) formal statistical terms such as “standard deviation”
become appropriate. The standard deviation, σ(x) of an experiment is a value that reflects
the inherent dispersion or spread of the data (an experiment with high precision will have
a low standard deviation) and so is, like the “true value”, an unattainable idealised
parameter. Practically, the available sample can be used to obtain a “sample standard
deviation”, σn(x) (the equivalent of finding the arithmetic mean of the measurements) and
this can be modified to give the “best estimate of the standard deviation”, sn(x):
sample standard deviation
σn(x) = [ (1/n) Σ (xi − μ)² ]^(1/2)                [2]

best estimate of the standard deviation
sn(x) = [ n/(n − 1) ]^(1/2) σn(x)                [3]
Whilst standard deviations are related to errors and may be reasonable to use in some
circumstances they are not appropriate when there are a large number of measurements
and the distribution is well defined (see below for more on distributions). Here the
accepted error is the (best estimate of the) standard error:
Best estimate of standard error
σ(x̄) = sn(x)/n^(1/2) = σn(x)/(n − 1)^(1/2)                [4]
Note: All of the above values can be found without reference to the particular distribution
of the data.
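As a worked check of equations [1]-[4], the sketch below computes each quantity for a small hypothetical set of twelve readings (the values are invented for illustration):

```python
import math

# Twelve hypothetical readings of nominally the same value
x = [1, -2, 0, 3, 1, -1, 0, 2, -1, 1, 0, -2]
n = len(x)

mean = sum(x) / n                                          # eq. [1]
rng = max(x) - min(x)                                      # data range
sigma_n = math.sqrt(sum((xi - mean)**2 for xi in x) / n)   # eq. [2]
s_n = sigma_n * math.sqrt(n / (n - 1))                     # eq. [3]
std_err = s_n / math.sqrt(n)                               # eq. [4]

print(f"mean = {mean:.2f}, range = {rng}")
print(f"sigma_n = {sigma_n:.2f}, s_n = {s_n:.2f}, standard error = {std_err:.2f}")
```

Hand-working these sums once, then comparing with a calculator's statistical functions, is exactly the check asked for later in the experiment.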
1.2. Distributions
If measurements occur in discrete values (as they will in the following experiments) the
distribution can be drawn by plotting the number of times (frequency) a value is recorded
versus the value itself. (If the measurements are continuous then the values can be split
up into data ranges (eg x to x + dx) and then the frequency counted.)
However, the frequency of occurrence clearly depends on the number of attempts which
are made. A more fundamental property is the probability which experimentally is given
by
probability, P = (number of occurrences)/(total number of events, n)                [5]

It should be clear from this that the sums of probabilities should equal one. The
mathematical functions that describe distributions are always probability functions.
1.3 The Gaussian (or Normal) distribution
All experimental results are affected by random errors. In practice it turns out that in
many cases the distribution function which best describes these random errors is the
Gaussian distribution given by:
P(x) = [1/(σ(2π)^(1/2))] exp[ −(x − μ)²/(2σ²) ]                [6]
where μ is the mean value of x and σ is the standard deviation. An example of a Gaussian
distribution is shown in Figure 1; it is symmetrical about the mean, has a characteristic bell
shape, and ~68% of the measured values are expected to lie within ±1σ of the mean (this range
is slightly larger than that covered by the “probable error”).
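The ~68% figure can be checked by integrating equation [6] over ±1σ, which reduces to the error function; the same calculation gives the familiar ~95% and ~99.7% for ±2σ and ±3σ:

```python
import math

# Integrating eq. [6] from mu - z*sigma to mu + z*sigma gives erf(z / sqrt(2))
for z in (1, 2, 3):
    frac = math.erf(z / math.sqrt(2))
    print(f"within +/-{z} sigma: {100 * frac:.1f}%")
```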
[Figure omitted: bell-shaped curve of P(x), peak value ~0.4, plotted for x from -4 to 4, with the FWHM marked.]
Figure 1: Gaussian probability function generated using x̄ = 0 and σ(x) = 1, so that the
x-axis is in units of standard deviation. The FWHM is wider than 2σ(x).
2. Experimental
2.1 Apparatus
 The apparatus used here consists of a pin board, down which steel balls are rolled
individually (so that they do not interfere with each other). There is a row of 23 “bins”
at the base numbered from -11 to 0 to +11 (the discrete values representing the results
of this experiment).
 The pins are intended to induce a random motion of the balls so that the balls have a
distribution about their “true value” that is Gaussian.
 The design is such that the true value (ideal result) of the experiment is zero.
However, various biases can be imagined that might affect this and lead to a
systematic error (overall bias) that will be constant provided the equipment is not
disturbed.
 Approximately 50 balls are supplied and these constitute a “batch”.
2.2 Procedure
Although split into two parts it should be considered as a single continuous experiment in
which the number of trials, n, increases. In order to be able to monitor the “result”, and
the emerging Gaussian distribution, it is necessary to keep track of the results in the order
in which they are obtained. It would be impractical to note the result in order for every
ball (trial); however, it is really only necessary to pay close attention to the first few trials.
 The first part of the experiment pays close attention to the “first batch” of ~50 trials.
 In the second part a further 4 batches are recorded and allow the accumulation of a
large data set. The total number of trials is then ~250.
2.2.1 Small-medium number statistics (n = 1 to ~50)
Note: In order to mimic the low n experiments that students usually perform the first
batch must be undertaken in stages; this ensures that unprejudiced decisions about errors
are made at each stage. Note: it will be very easy for diaries to become unintelligible
whilst working through this section - use headings, notes and comments to avoid this.
(i) First roll one ball down the slope and note its position.
 Clearly this “measurement” is our current best estimate of the “true value”.
 What is the “result” of the experiment at this stage (i.e. value +/- error)? Is it in fact
possible to estimate an error (note - it must be non zero) at this stage? If it is not
possible then what are the implications for deciding on the size of the error bars that
are often drawn on graphs based on single measurements?
(ii) Roll another two balls down the slope (total = 3) and note their positions
 The best estimate of the “true value” is now the average of three measurements
(relevance: e.g. timing experiments are often performed three times).
 Realistically the estimated error here is obtained from the data range.
 Write down the result of the experiment at this stage (value +/- error).
Remember each trial should be performed identically - you should be aware of and write
down the details of the procedure at this point. It would be entirely reasonable to change
(improve) the methodology. This would entail repeating the first three trials (for
consistency later) and the diary entry should be clear.
(iii) Roll a further nine balls down the slope (total = 12) and note their positions
 The best estimate of the “true value” is now the average/mean of a total of twelve
measurements (relevance: experiments in which straight line graphs are generated
often have approximately this number of data points).
 The estimated error. With 12 measurements, simply using the data range to obtain an
error value ought to be too pessimistic, and statistical techniques can start to be used
(even though there are not enough data values for the shape of the distribution to have
emerged). Calculate and compare values for (0.5 ×) range, the probable error, the
standard deviations and the standard error described above.
(Note: the above calculations can be performed using the statistical functions of a
calculator. This will save time later, but at this point students must confirm that the
correct method is being used by showing hand working and comparing with calculator +
statistical functions).
(iv) Roll the remainder of the batch down the slope and note their positions in order.
 For totals of 24 and ~50 trials calculate and compare values for (0.5 ×) range, the
probable error, the standard deviations and the standard error.
 Use the values for n = 50 to draw a histogram and compare it with the shape of the Gaussian
distribution shown in Figure 1. How well defined is the Gaussian distribution?
2.2.2 Large number statistics (n up to ~250)
In order to be able to monitor the further development of the experimental “result” and the
data distribution a further 4 batches of balls will be used. It would be impractical to note
the result in order for every ball (trial); instead, send the balls down in batches (of ~50),
recording the distribution for each batch.
 Draw a suitable table in which to record the measurements.
 Perform and record the measurements.
Data distribution
 Draw a second table in which to record the calculated cumulative distributions for the
totals of 1 (from section 2.2.1), 3 and 5 batches of measurements.
 For each case calculate the mean and sample/best estimate of the standard deviation
and standard error.
 Use the values for n ~ 250 and equation 6 to calculate the corresponding Gaussian
distribution and plot this on top of the measured distribution. Comment on the
agreement between them.
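The comparison in the last step amounts to evaluating equation [6] at each bin position and scaling by n (the bin width here is 1). A sketch, using hypothetical values of μ, σ and n rather than real results:

```python
import math

# Hypothetical summary statistics for ~250 trials
mu, sigma, n = 0.3, 2.1, 250

def gauss(x, mu, sigma):
    """Gaussian probability density, eq. [6]."""
    return math.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

# Expected frequency in each unit-wide bin is n * P(x) evaluated at the bin centre
expected = {b: n * gauss(b, mu, sigma) for b in range(-11, 12)}
for b in (-2, -1, 0, 1, 2):
    print(b, round(expected[b], 1))
```

Plotting these expected frequencies over the measured histogram makes any disagreement (bias, non-Gaussian tails) immediately visible.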
2.3 Analysis of the “result” of the experiment as a function of n
This section considers all of the results obtained.
 Consider (giving an explanation/justification) what is the most appropriate error value
to use for n = 3, 12, 24, 50, 150 and 250. One decision here is: at what n does it
become appropriate to use the standard error?
 Summarise the above in a table with columns for “value”, “most appropriate error
value” and “error type” (e.g. range, standard error etc).
 Plot a graph(s) of mean value, μ against n (for n = 3, 12, 24, 50, 150, and 250) using
the chosen error for the error bar.
 Finally, for the concluding remarks and drawing on the previous graph, summarise
what has been learnt about the systematic and random errors and accuracy and
precision of the experiment as n was increased. Is there any evidence for a bias
(systematic error) in the experimental set up? (Note: just in case you’ve missed it so
far - the mean value alone provides no evidence for a bias (systematic error); it must be
considered together with an appropriate error.)
Experiment 3: Computer Data Acquisition (and RC Circuits)
Use of computers in this experiment
Because these computers have multiple users, do not save copies of your work onto the PC
hard-drive. Ensure that you obtain hard copies of your data as you go along, by using the
printer.
Outline
Although most experiments in the first year laboratory involve taking measurements by
hand, the use of data-loggers, often run by computers, is ubiquitous at research level.
This experiment demonstrates both the advantages and potential pitfalls when one uses a
computer to digitally capture an oscillating (analogue) voltage produced by a signal
generator. “Under-sampling” of the signal, which leads to “aliasing”, is revealed by using
Fourier transformations of the data to extract the frequency components. Finally the
system is used to measure the discharge of a resistance-capacitor (RC) circuit and
determine its “time constant”.
Experimental skills
 Basic use of data loggers and signal generators.
 Awareness of digital signal processing effects: (under) sampling leading to aliasing.
 Use of analysis functions built into software.
 A very basic introduction to the use of Fourier transformations in the analysis of
periodic functions.
 Very simple wiring (of an RC circuit).
Wider Applications
 Computer based data acquisition and associated signal processing is everywhere!
 Fourier transforms are not restricted to signal processing/analysis but appear in many
fields of physics. For example, undergraduates are often taught the mathematics of
Fourier transforms in optics courses. Closely related, and performed in 1st year
laboratory, x-ray diffraction patterns can be considered to represent the 3D Fourier
transform of crystal lattices. You will not be taught the maths of Fourier theory yet,
but an appreciation of its uses will be advantageous in future discussions.
 RC circuits are widely used, for example to modify analogue voltage signals. A later
experiment considers their use in rectifying circuits, which convert ac voltages to dc -
the RC component appears in the final “smoothing” of the signal.
1. Introduction
Note: A data logger will be taken to be a device that records measurements to file usually
as a function of time. Computer systems may also control an experiment by both setting
an independent variable and measuring a dependent variable, but can equally act simply
as a data logger.
Performing experiments and taking data automatically via use of data loggers or computer
controlled equipment is almost universal at research level. The advantages of computers
are most obvious in the cases where there are large amounts of data to be handled
(acquired, processed and stored) and/or where values change at a high rate. However,
there are significant disadvantages. A major one is the perception by many people that
all errors disappear when measurements are made by computers - they most definitely do
not! This is perhaps a consequence of the inscrutability of computers - it can be very difficult to
figure out what they are doing and how they arrive at their answers.
The above limitations would be acceptable if our undergraduate courses were aimed at
training operators, but they are not. As a scientist it is necessary to understand and trust
the results of experiments, and to do this it is necessary to understand the equipment and
its associated limitations.
Apparatus
PC, Pocket CASSY with U,I sensors, Signal generator, Cyclon cell, 5000µF capacitor, 20
kΩ resistor.
1.1 Data acquisition
Systems used to acquire data in the form of software files can superficially come in
many forms but ultimately are all very similar. Whilst not attempting to be exhaustive,
such systems have the following features:
 Starting with the parameter to be measured: this may come in many different forms,
e.g. temperature, displacement, speed, light intensity, etc. However, in all cases a
“transducer” is used to measure the parameter and convert it, with a well-known
conversion factor, into a voltage.
 A voltage from a transducer that varies continuously with time is known as an
analogue signal.
 This analogue signal must be converted to a digital signal in order to be read by and
make sense to a computer and this is achieved by an analogue to digital converter
(A→D converter).
 A→D converters output values with a fixed number, n, of bits and consequently have a
fixed resolution. The CASSY system uses a 12-bit converter and so the signal for a
particular range is split into 2^n values. A voltage range of +/-3 V therefore has a
resolution of 6/2^12 V ~ 1.46 mV (this is what produces steps in the data).
 The time axis too will be subject to limitations. Data is collected or “sampled” at set
time intervals. Acquisition systems can be limited by the minimum time interval
allowed and/or by the maximum number of points that can be taken. Here a maximum of
16000 points can be collected, with a minimum sampling interval of 100 μs.
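The resolution figure quoted above can be reproduced directly, and the same arithmetic shows how an A→D converter produces the characteristic “steps” in recorded data. A sketch (the quantisation scheme is an idealisation; the real CASSY converter may round differently):

```python
# 12-bit A->D converter on a +/-3 V range: the 6 V span splits into 2^12 steps
n_bits = 12
v_min, v_max = -3.0, 3.0

resolution = (v_max - v_min) / 2**n_bits
print(f"resolution = {1000 * resolution:.2f} mV")   # ~1.46 mV

def quantise(v):
    """Idealised quantisation: round to the nearest converter step."""
    step = round((v - v_min) / resolution)
    return v_min + step * resolution

print(f"{quantise(1.2345):.5f} V")   # the recorded, stepped value
```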
1.2 Controlling experiments
In the set up used in this experiment the computer is simply involved in passively taking
data, i.e. in data logging. However, the CASSY system can also control experiments, for
example by setting a variable and measuring a dependent variable. There are a number of
experiments run under the CASSY system in the second year laboratories.
1.3 Sampling data
The importance of the sampling interval is illustrated in Figure 3.1 in which an analogue
sinusoidal signal is sampled at an interval that is slightly less than the period of
oscillation.
Sampled in this way the data would appear on a computer screen as a succession of points
that bear no relation to the original data. Clearly a good representation of the original data
simply requires a large (or sufficient) number of points per period of oscillation - or
equivalently a sample frequency much higher than the signal frequency.
[Figure omitted: a sinusoidal signal of amplitude 1 plotted against time from -1 s to 1 s, with the sparse sample points marked as dots.]
Figure 3.1. A sinusoidal signal with a period of 0.5 s (frequency of 2 Hz) sampled
(indicated by dots) at a sampling interval of ~0.32 s (sampling frequency, fsample ~3.1 Hz).
Of interest in this experiment is identifying the sampling frequency sufficient to allow
the extraction of meaningful information, and what happens when the sampling frequency
is insufficient, i.e. when the signal is “under-sampled”.
It turns out that to extract the frequency of a signal (see next section) the minimum
requirement is a sampling frequency of twice the signal frequency - a useful “rule of
thumb” for experimentalists. However, to obtain an accurate signal shape much higher
sampling frequencies are required.
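The effect of under-sampling can be previewed numerically: the sketch below samples a 2 Hz sine at the 3.1 Hz rate of Figure 3.1 and locates the largest peak in the Fourier transform, which appears at the alias frequency |fsignal − fsample| = 1.1 Hz rather than at 2 Hz (numpy is assumed here; the CASSY software performs the equivalent FFT internally).

```python
import numpy as np

f_signal = 2.0    # Hz, signal frequency as in Figure 3.1
f_sample = 3.1    # Hz - below 2*f_signal, so the signal is under-sampled
n = 512

t = np.arange(n) / f_sample
x = np.sin(2 * np.pi * f_signal * t)

# Fourier transform of the sampled data; largest peak excluding f = 0
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(n, d=1 / f_sample)
f_apparent = freqs[np.argmax(spectrum[1:]) + 1]

print(f"apparent frequency = {f_apparent:.2f} Hz")   # ~1.1 Hz, not 2 Hz
```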
1.4 Fourier transforms and aliasing
Fourier transforms (FTs) will be used as a “black-box” signal processing/analysis tool in
this experiment. The mathematics of how they work will come later in taught modules.
This section only aims to give an introduction. The term “Fast Fourier Transform (FFT)”
will be seen and this refers to how the transform is performed, here Fourier and Fast
Fourier Transforms will be taken to mean the same thing.
In Figure 3.1 the signal is displayed in the “time (t) domain”, i.e. the signal is plotted
against time on the x-axis. However, it is also possible to represent the signal in the
“frequency (1/t or f) domain”, i.e. the x-axis is in terms of frequency. Since the signal in
Figure 3.1 is composed of a function oscillating at a single frequency, when represented in
the frequency domain it would be expected to appear as a single peak at this frequency. “Fourier
transform” is the term used to describe both the process of converting data between
domains and the new representation of the data.
The power of FTs in signal processing arises from their ability to take a complicated signal
made up of the sum of a range of frequencies and split it into its components. For
example, taking a musical chord, a FT would display the notes that make it up.
A particular effect of under-sampling a signal that will be examined using FTs is that of
aliasing. Aliasing generally refers to effects that result in different signals becoming
indistinguishable, i.e. “aliases” of one another.
2. Experiment
The practice of computer data acquisition is addressed initially by examining the
importance of the sampling frequency in measuring sinusoidal signals from a signal
generator, aided by the use of Fourier transforms. This is followed by a study of the
discharge of a capacitor through a resistor.
2.1 Computer and software
 If not already started, turn on the computer. As the computer is not networked, log on
using the “student” account (no password required).
 To start the CASSY software, from the “start” button click on “programs”, “CASSY
Lab” and then “CASSY Lab” again.
 You should now see the “Settings” window. An icon representing the “pocket
CASSY” interface box and its U, I sensor should be visible. It is from here that the
settings associated with the acquisition and display of data are set.
 Since we want to use the U input, which is on the left hand side of the sensor, left-click
on this part of the icon. Three windows will now appear: “sensor input settings”,
“measuring parameters” and “voltage U1”. The changes that need to be made from
the default settings are described below:
sensor input settings - the default measuring range is +/-10 V; this needs to be changed to
+/-3 V. This range is large enough to cope with the signal generator output and battery
voltage (~2 V) and has the required (millivolt) precision.
measuring parameters - the “measuring interval” (dt), the time between points and the
“number” of points (n) are set in this window. The product of these two values (n.dt)
gives the total measuring time and this value is also displayed. Note that if n is not
specified the system will collect data until instructed to stop, and the graph will keep
resetting the time axis to accommodate the readings.
voltage U1 - simply gives a reading of the voltage value, both as a digital value and a
pointer.
2.2 Examination of sinusoidal waveforms
Methodology
Setting up for a measurement
 Connect the signal generator, from the 50Ω output, across the “U” input connections
of the small CASSY interface box.
 A signal generator amplitude of ~1 V and zero dc offset is required. This is
conveniently achieved by: selecting a square wave from the signal generator (SG);
setting a frequency fSG ~ 5Hz, making sure DC offset is turned off (button out), then
adjusting the output level and observing the pointer.
 Select a sinusoidal output on the signal generator.
 Before acquiring data, in the “measuring parameters” window (if you can’t see this,
click the “toolbox” icon) set the measuring interval as dt = 10 ms (fsample = 100 Hz) and n
= 250 (measuring time 2.5 s). These parameters will remain unchanged for the
following examination of sinusoidal waveforms.
 Click the “stopwatch” icon to start taking measurements. Right click on the y-axis of
the displayed graph and choose an appropriate scale. To take another set of data, click
on the clock icon again; collection will start as soon as “no” is clicked in the save data?
dialogue (and yes, this is slightly quirky).
Survey measurements at 5, 20 and 100 Hz
 Qualitatively describe the recorded fSG = 5 Hz signal (20 measurements per cycle).
Are there sufficient samples for the recorded data to be a good representation of a
sinusoidal waveform of constant amplitude and frequency?
 Repeat for fSG = 20 Hz (5 measurements per cycle) and 100 Hz (1 measurement per
cycle) signals.
 Print out the graph (not the data table) for the final frequency, fSG = 100 Hz.
The above should have made the point that data acquisition can go wrong; the next task is
to examine what is happening more closely.
Quantitative frequency measurements (fSG =10-100 Hz)
 Acquire data at a frequency of fSG =10 Hz.
 As accurately as possible, determine the period and so the frequency of the signal
from the graph, fgraph (right click on the graph and “display coordinates” of the
cross-hair). Note that from here on measurements are being made, so errors are required.
(Fast) Fourier Transforms will now be introduced. To set these up:
 Click on the toolbox icon and select “Parameter/Formula/FFT”.
 Click on the “New Quantity” button and select “Fast Fourier Transform” half way
down the settings box.
 A “Frequency spectrum” tab should have appeared above the data table on the left of
the screen, click on this tab to show the FT. Since the signal is a single frequency a
single peak should have appeared.
 Measure this frequency, fFT and compare with fSG and fgraph.
For the first measurement performed the frequency will be determined by examining both
the V(t) data and its FT. Subsequently only the FT method will be used.
 Returning to 10 Hz, acquire a measurement. From the V(t) data determine the period
and so the frequency (with an estimate of its error).
 Look at the Fourier transform (frequency spectrum). As described above there should
be a single peak corresponding to the signal frequency (fFT). Use the cursors to find
the frequency (with an estimate of its error).
 Compare the three frequencies: as set on the signal generator (fSG), measured from
the signal versus time graph (fgraph), and measured from the FT (fFT).
 Now acquire data for fSG = 20 to 100 Hz in 10 Hz steps. Look at and briefly describe
the V(t) data at each frequency but only use the FT to measure the frequency, noting
your values in a table.
 Plot a graph of fFT/fSG versus fSG.
 Comment on the meaning of the previous graph (e.g. the “rule of thumb” implies that
fFT = fSG should be observed for fsample ≥ 2fSG; what is the agreement between fFT and
fSG at low fSG?).
Note that as a result of the sampling frequency a signal may have a different, lower
(measured) frequency assigned to it; it therefore has an “alias”.
2.3 Measurement of the discharge of a capacitor through a resistor.
This is a convenient experiment to perform both as an example of computer data
acquisition and as preparation for later experiments with electrical circuits.
Resistor-capacitor circuits (a reminder)
A capacitor is a device which may store electrical charge. Equal positive, +Q, and
negative, -Q, charges are held on conductors inside the capacitor, so that there is overall
charge neutrality. The greater the charge Q that is stored in the capacitor, the greater is the
potential difference V between its two terminals:
V Q/C
[1]
where the constant C is called the capacitance of the capacitor.
By connecting a capacitor across a battery, charge will flow onto the conducting plates of
the capacitor until the voltage across the capacitor equals that across the battery, so
preventing any further charge flow. A schematic of the circuit used in this experiment is
shown in Figure 3.2. The rate at which the capacitor charges depends on the battery
voltage V0, the capacitance C, and the resistance R of the circuit.
R
C
Figure 3.2: Circuit arrangement
The voltage across the capacitor, charging from 0 V and starting at t = 0 s is given by the
equation
V  V0 ( 1  e t / RC )
[2]
A charged capacitor may be discharged by connecting a wire across its terminals, and the
rate of discharge again depends on V, C and the resistance R. The voltage across the
capacitor, discharging from V0 and starting at t = 0 s is given by the equation:
V  V0e t / RC
[3]
The quantity RC must have the dimensions of time if equations [2] and [3] are to be
correct. The product RC is known as the "time constant" of the circuit: the higher the RC,
the slower the circuit charges and discharges.
In the experiment the charging of a capacitor will be used to determine suitable
parameters for the CASSY system, but only the discharge of the capacitor will be
analysed. Equation [3] for the capacitor discharge may be written by taking natural logs
as:
ln( V )  ln V0  t / RC
[4]
Thus either C or R can be determined graphically if the other quantity is known. (τ = RC is
known as the time constant and gives us a measure of the rate of charging of a particular
RC combination.)
Experimental apparatus and technique
 Be very careful to ensure that you do not short circuit either the battery or the
(charged) capacitor; this can result in the flow of very large currents.
 You are provided with capacitors and resistors whose values have a quoted tolerance
of ±10% and ±5% respectively.
 The 5000 μF capacitor and 20 kΩ resistor will be measured. For this combination
calculate the expected time constant, τ = CR.
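As a numerical sketch of this calculation, using the quoted nominal component values:

```python
# Expected time constant for the quoted nominal component values.
C = 5000e-6    # farads: 5000 uF capacitor (tolerance +/-10%)
R = 20e3       # ohms: 20 kOhm resistor (tolerance +/-5%)

tau = R * C    # seconds
print(tau)     # 100 s: the discharge is conveniently slow to record
```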
Initial measurements
 Before making up the circuit in Figure 3.2, use the system to measure the e.m.f. of the
battery.
 In the same way check that the capacitor is fully discharged. If it isn’t, connect it to a
1 kΩ resistor until it is fully discharged.
 Without making the connection to the battery, set up the circuit in Figure 3.2 with the
data logger connected across the capacitor.
 With the measuring (sample) interval still set to 100 ms and the number of points not
specified, start taking data then make the connection to the battery. Whilst data is
being taken: by right clicking on the y-axis set a suitable scale (the x (time) axis
should re-scale itself).
 With a limit of 125* on the number of data points decide on suitable measuring
interval and time to acquire the discharge data. Ensure the capacitor is fully charged
before acquiring these data.
* The data will be exported to EXCEL and a hundred or so points are more convenient than thousands.
Measuring the capacitor discharge
 Set the measurement parameters determined above.
 When ready, start data acquisition then start the discharge by removing both
connections from the battery (to remove it entirely from the circuit) before shorting
them together.
 Print out the graph, but do not print out the table.
Analysing the data
As is often the case, although there are a lot of analysis functions built into the data
acquisition software, they are not necessarily the best tools to use. Here, although the
software will find the logarithm of data it only uses log10, whereas equation [4] requires
loge. Although the conversion is not difficult, it is easier and more instructive to export the
data. (Another good reason to export data is that the graphs produced by CASSY are not
of good enough quality for formal reports.)
 Open EXCEL
 Right click on the data table and copy/paste it to EXCEL.
 Use the graph function on EXCEL to make an appropriate graph in order to determine
the time constant (RC).
 There is now a problem. Although EXCEL will give a quality of fit number it does
not give an error; and for any value determined in an experiment to have meaning
there must be an associated error.
 To determine errors print out the graph and do it by hand in the usual way.
 Compare the time constant and V0 values with expectations.
 Use a multi-meter to determine an accurate resistance and use this to calculate the best
value for the capacitance. Within tolerance, are your values in agreement with those
quoted?
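The same straight-line analysis can be sketched in Python rather than EXCEL. The data below are illustrative only (generated from equation [3], not measured); a linear fit of ln(V) against t supplies the gradient -1/RC and intercept ln(V0) of equation [4]:

```python
import numpy as np

# Illustrative discharge data only (generated, not measured): V = V0 e^(-t/RC).
RC_true, V0_true = 100.0, 2.0            # seconds, volts
t = np.linspace(0.0, 250.0, 26)          # seconds
V = V0_true * np.exp(-t / RC_true)       # volts

# Equation [4]: ln(V) = ln(V0) - t/RC, a straight line in t.
slope, intercept = np.polyfit(t, np.log(V), 1)
RC_fit = -1.0 / slope                    # seconds
V0_fit = np.exp(intercept)               # volts
print(RC_fit, V0_fit)                    # recovers ~100 s and ~2 V
```

With real (noisy) data the fit will not be exact, and the error on the gradient still has to be estimated by hand as described above.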
Experiment 4: Variation of Resistance with Temperature.
Safety Aspects: In this experiment you will use the cryogen liquid nitrogen (boiling point
77.3K). Please ensure that you read the safety precautions, write a risk assessment AND
seek the assistance of a demonstrator before using this.
SAFETY PRECAUTIONS IN THE HANDLING OF LIQUID NITROGEN
Avoid contact with the fluid, and therefore avoid splashing of the liquid when transferring
it from one vessel to another. Remember that when filling a "warm" dewar, excessive
boil-off occurs and therefore a slow and careful transfer is necessary. Do not permit the
liquid to become trapped in an unvented system. If you do not wear spectacles, safety
glasses (which are provided) must be worn when liquid nitrogen is being transferred from
one vessel to another.
FIRST AID
If liquid nitrogen contacts the skin, flush the affected area with water. If any visible
“burn” results, contact a member of staff.
Outline
All materials can be broadly separated into 3 classes according to their electrical
resistance: metals, insulators and semiconductors. This resistance to the flow of charge is
temperature dependent but the dependence is not the same for all material classes, because
of the physical processes involved. In this experiment you will determine the behaviour
of electrical resistance as a function of temperature for a metal and a semiconductor. You
will confirm the linearity or otherwise of these behaviours.
Experimental skills
 Ability to keep a clear head and organize a one-off experiment, paying careful
attention to safety aspects.
 Make and record simultaneous measurements of a number of time-varying quantities.
 Determine realistic errors in these quantities and combine them.
 Gain experience of liquid cryogens.
 Fit measured data to linear, polynomial and logarithmic expressions.
Wider Applications
 Many branches of physics and its applications involve the study and use of materials
at cryogenic temperatures (those below ~ 150K). By understanding the temperature
dependence of material behaviour, we can use it to our advantage.
 Modern imaging and communication systems rely on the sensitive, noiseless and
reproducible detection and transfer of electrical information. This is often achieved by
using cooled semiconductor devices.
 Some materials become superconducting at cryogenic temperatures (i.e. a temperature
somewhat above absolute zero). This phenomenon has found application in medical
imaging (MRI scanners depend on the huge magnetic fields achievable only by using
superconducting coils); astronomical imaging (superconducting detectors are used to
count 13-billion-year-old photons); and transport (MAGLEV trains).
1. Introduction
In this experiment you will investigate the variation of the resistance of: 1) a
semiconductor (a thermistor); 2) a metal (copper), in the temperature range from ~120 to
290 K.
For a metal the following equation [1] describes the linear behaviour of resistance R with
temperature T.
RT = R273(1 + (T)) ,
[1]
Where RT is the resistance at temperature T (in Kelvin), R273 is the resistance at 273K
and is a constant known as the temperature coefficient of resistance, which depends
only on the material being considered.
However the behaviour may be more closely described by a 2nd order polynomial fit:
RT = R273{1 + αΔT + β(ΔT)^2} ,
[2]
where β is another constant.
For a typical intrinsic semiconductor the electrical resistance obeys an exponential
relationship with temperature. It takes the form of equation [3].
RT = a e^(b/T) ,
[3]
where RT is the resistance at T and a and b are constants.
By using equations [1], [2] and [3], you are to find suitable graphical ways to verify or
disprove these relationships. You may use Excel (or another plotting package familiar to
you) to plot your data, BUT remember to take care with axes, apply suitable error bars
and think about what your results mean.
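One possible graphical approach for equation [3]: taking logs gives ln(RT) = ln(a) + b/T, so a plot of ln(RT) against 1/T should be a straight line of gradient b and intercept ln(a). A minimal Python sketch with made-up constants (the values of a and b below are illustrative, not measured):

```python
import numpy as np

# Equation [3], RT = a e^(b/T), gives ln(RT) = ln(a) + b/T: linear in 1/T.
# Illustrative constants only, not real thermistor values.
a_true, b_true = 0.05, 3000.0            # ohms, kelvin
T = np.linspace(120.0, 290.0, 18)        # kelvin
R = a_true * np.exp(b_true / T)          # ohms

slope, intercept = np.polyfit(1.0 / T, np.log(R), 1)
b_fit = slope                            # kelvin
a_fit = np.exp(intercept)                # ohms
print(b_fit, a_fit)                      # recovers ~3000 K and ~0.05 ohm
```

The same linearisation idea applies to equations [1] and [2]: plot RT against T (or ΔT) and test whether a straight line or a quadratic fits better.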
2. Experiment
2.1 Apparatus
The metal you will test is in the form of a coil of fine wire. The semiconductor is a
thermistor. Both of these are attached to the top of a copper rod. They are held in good
thermal contact with it by a low-temperature varnish.
The temperature of the specimens can be reduced by immersing the copper rod to various
depths in liquid nitrogen, which boils at 77.3 K. The liquid nitrogen is poured into a
Dewar flask contained in the box which supports the copper-rod assembly. The
liquid-nitrogen level is gradually increased by adding liquid nitrogen through the funnel.
An insulating cap is provided which, when placed over the top of the rod, thermally
isolates the specimens from the surroundings and allows their temperature to fall to a
value determined by the depth of immersion of the rod in the liquid nitrogen.
The temperature of the specimens is measured with a thermocouple. This consists of two
junctions of dissimilar metals arranged as shown in Figure 4.1.
If the two junctions are at different temperatures an e.m.f. is generated which, to a
good approximation, is proportional to the temperature difference between the two
junctions. By calibrating such a thermocouple, temperature differences can be determined
by voltage measurements and these can be used to measure temperature if one standard
junction is held at a well-defined fixed temperature.
Figure 4.1: Representation of back-to-back thermocouple junctions and circuit
In this experiment we use a copper-constantan thermocouple. One junction of this is
embedded with the specimens in the varnish; the other, the standard, is kept at 77.3 K by
immersion in liquid nitrogen contained in a separate Dewar flask. You will calibrate the
thermocouple with the standard junction in liquid nitrogen while that attached to the metal
rod remains at room temperature.
The resistances of the copper and thermistor are read from multimeters suitably
connected. The voltage across the thermocouple is also read by a multimeter. Ensure you
can read all 3 scales simultaneously.
2.2 Calibration of the thermocouple
Connect a multimeter to the appropriate thermocouple terminals on top of the rod.
Immerse the free junction in liquid nitrogen and record a voltage. Take another voltage
reading when the junction is at room temperature. You can now calibrate the
thermocouple scale by assuming that the voltage is linearly related to temperature
difference. (This is not strictly true but will suffice for our purposes.) Check your
calibration with a demonstrator and ensure that you know how to use the thermocouple as
a thermometer for the rest of the experiment.
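The two-point calibration amounts to a linear interpolation between the liquid-nitrogen and room-temperature readings. A minimal sketch, with made-up voltages (real copper-constantan readings will differ):

```python
# Two-point linear calibration (illustrative voltages, not real readings).
T_room, T_ln2 = 293.0, 77.3      # kelvin: room temperature and boiling nitrogen
V_room, V_ln2 = 8.6e-3, 0.0      # volts: made-up thermocouple readings

def temperature(V):
    """Convert a thermocouple voltage to kelvin by linear interpolation."""
    return T_ln2 + (V - V_ln2) * (T_room - T_ln2) / (V_room - V_ln2)

print(temperature(8.6e-3))   # 293.0 K by construction
print(temperature(4.3e-3))   # half way in voltage: ~185 K
```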
2.3 Resistance measurements
The magnitudes of the coil and thermistor resistances will be determined using
multimeters set to the ohms range.
Measure RC (the resistance of the copper coil) and RTh (the resistance of the thermistor) at
room temperature.
Place the insulating cap on top of the rod and start to add liquid nitrogen through the
funnel. Note the readings on the 3 multimeters (thermocouple voltage, RC and RTh).
Gradually add more liquid nitrogen and repeat. The object of the experiment is to obtain
as many measurements of RC and RTh as possible over as wide a temperature range as
possible.
Remember to ensure that you have a simple diagram of your apparatus that would allow
you to set the experiment up again.
Experimental Notes
 You must work quickly and efficiently if you are to obtain sufficient experimental
points on the graphs.
 Handle the Dewar flasks carefully.
 DO NOT touch the copper rod when it has been immersed in liquid nitrogen. If
you do, you may freeze to the cold metal and give yourself a severe burn.
 You will find that there will be little change in temperature of the coil and the
thermistor when liquid nitrogen is added initially, but take care not to add too
much liquid nitrogen at any one time or a large temperature drop may result. Once
the rod has been cooled, it is not easy to raise the temperature again in the course
of the experiment. This is a one-hit experiment!
 The lowest temperature you are likely to reach will be at best ~120 K.
 Make notes in your lab diaries of anything that happens during the experiment,
e.g. where you note a change of range on the multimeter.
 Make a note in your lab diary of the specific pieces of equipment that you have
used.
3. Data analysis
Plot suitable graphs of your data and investigate the validity of equations [1] and [2] for
the metal and equation [3] for the thermistor, finding values of α, β, a and b.
You may use a computer package (Excel is recommended) to fit the equations but be
careful to check your axes, show error information and quote gradients and results to a
sensible number of significant figures.
Does the resistance of the metal vary linearly with temperature? Which equation gives the
best fit to the data? What do you notice about the variation for a semiconductor? Is the
exponential fit of equation [3] good enough?
How might the experiment, errors in the data, or your experimental method be improved?
Experiment 5: The Cathode Ray Oscilloscope and Circuit Construction
Before attending the laboratory you are recommended to read the "Introduction to
Electronic Experiments", section III.1, for additional information on the equipment that
will be used.
Outline
A cathode ray oscilloscope (CRO) is a common piece of electronic test equipment. It
allows observation of constantly varying signal voltages, usually as a two-dimensional
graph of one or more electrical potential differences using the vertical axis, plotted as a
function of time on the horizontal.
The purpose of the first part of this experiment is for you to gain familiarity with such a
useful piece of equipment.
In the second part of the experiment, you will determine the I-V characteristics of a diode
and resistor.
Experimental skills
 An introduction to using an oscilloscope.
 To determine the I-V characteristics of a resistor.
 To determine the I-V characteristics of a diode.
 Familiarisation with electrical prototype boards and multimeters.
Wider Applications
 Oscilloscopes are used in the sciences, medicine, engineering, and
telecommunications industry. General-purpose instruments are used for maintenance
of electronic equipment and laboratory work. Special-purpose oscilloscopes may be
used for such purposes as analyzing an automotive ignition system, or to display the
waveform of the heartbeat as an electrocardiogram.
 Although an oscilloscope displays voltage on its vertical axis, any other quantity that
can be converted to a voltage can be displayed as well.
 Oscilloscopes are commonly used to observe the exact wave shape of an electrical
signal. In addition to the amplitude of the signal, an oscilloscope can show distortion,
the time between two events (such as pulse width, period, or rise time) and relative
timing between two signals.
1. Introduction to the CRO
In its simplest form, the cathode-ray oscilloscope (CRO) consists of an electron gun, a
deflection system and a display system (as shown in Figure 5.1).
Figure 5.1: Schematic diagram of cathode ray oscilloscope
The electron gun comprises:
(i) a heated cathode which emits electrons that are drawn to the first anode,
(ii) a grid whose potential is made negative with respect to the cathode and whose
function is to control the electron flow (thus controlling the display brightness) and
(iii) the focusing and final anodes, the combined electric fields of which focus the beam to
a fine point on the screen.
The deflection system comprises the X- plates and the Y- plates: two sets of parallel
plates so arranged that the field created between opposite plates causes a corresponding
deflection of the electron beam. The displacements caused by the X- and the Y- plates
may be considered independent.
The display system is a screen coated with a fluorescent material or phosphor which,
when struck by the electron beam, emits visible light.
2. Experiments with the CRO
2.1 Application of a p.d. across the Y- plates
Note that some of the basic functions of the scope are described in Section III.1 of the
"Introduction to Electronic Experiments" section. In particular see Figure 3.
Set up the Kikusui oscilloscope controls as follows:
POWER          ON
TIME/CM        EXT (fully anti-clockwise)
VOLTS S/C      1 V cm-1
DC/AC switch   DC position
Movement of the X-shift and Y-shift controls allows horizontal and vertical movement of
the spot. Set both of these controls to their centre position to find the spot. If the spot has
not appeared, increase the intensity control.
Having located the spot on the screen reduce its brightness immediately to avoid
damage to the screen and adjust the focus to produce a sharply defined spot.
For many purposes the deflection sensitivity of the cathode-ray tube is inadequate
(typically requiring 15 V for a 1 cm deflection of the spot on the screen) and so, to
produce a suitable deflection, the input signal is usually amplified before application to
the Y- plates. The amplifiers in the oscilloscope are calibrated, their gain being set by the
volts cm-1 control (labelled volts/div on the oscilloscope front panel).
Check that the yellow knobs in the centre of the Volts/div control are turned fully clockwise until they click.
Connect a Leclanché cell to the input terminal (CHANNEL 1). The Leclanché cell is used
for calibration and its e.m.f. is a standard 1.50 V. Select a suitable sensitivity range on the
volts cm-1 switch and note the deflections produced by the cells. Investigate the accuracy
of the calibration of the oscilloscope. Now connect the cell you are given and determine
its e.m.f. Estimate the precision of the result. Do you think that you are justified in calling
the measured potential difference an e.m.f.? [Hint: think about the voltage measured
across the terminals of a battery of e.m.f. E and internal resistance r when connected into
a circuit of resistance R.]
Change the DC/AC switch to the AC position while a cell is connected to the input
terminals and note the result.
How useful is the calibration procedure just outlined? What happens if we need to use
another volts cm-1 range?
Replace the cells by an oscillator (set to a frequency of about 1 kHz) taking the output
from the “50 Ω output”. Explain the form of the trace. On the oscilloscope DC setting,
investigate the effect of applying the sum of an AC and a DC signal by pressing the DC
Offset switch on the oscillator and varying the Offset level. Repeat on the AC
oscilloscope setting. Summarise the effect of the DC and AC settings on the oscilloscope.
2.2. The Time Base
The oscilloscope is generally used to display a stationary trace representing some portion
of the waveform of a time-varying voltage. Usually, voltage is plotted on the Y- axis and
time plotted on the X- axis of the screen. This "time-axis" is created by moving the spot
across the screen at a constant, predetermined rate. The spot, having reached the end of
the axis, must then be returned rapidly to the origin of the axes to allow the trace to be
repeated indefinitely. The eye sees a continuous trace.
To deflect the trace in the manner described above, a sawtooth waveform, similar in form
to that shown in Figure 5.2, is applied to the X-plates. This time-varying voltage is called
the "time base".
Figure 5.2: Waveform applied to X-plates
Remove all connections to the oscilloscope input. Set the trigger level control to its
central position (at which point the marker line on the knob will be uppermost). Set the
TIME/CM to 0.2 ms cm-1. Describe the resulting trace on the screen.
More information about the oscilloscope can be found in Section III.1, Section 2, page 76.
3 Introduction to Circuit Construction
Electronic components can be classified as linear or non-linear. Resistors, capacitors and
inductors are linear devices, because the current flow (I) through them is proportional to
the applied potential difference (V). Diodes and transistors have more exotic I-V
characteristics.
The CRO will be used as a high-impedance VOLTMETER and the multimeter will be
used as a MILLIAMMETER.
In part 4.1 of the experiment, you will determine the I-V characteristics of a diode. A
diode is formed by the junction of a p-type and an n-type semiconductor. Electron
movement in the region of the junction forms a DEPLETION layer, in which there are no
charge carriers, i.e., an insulating layer. When FORWARD BIASED, the layer narrows,
and at about 0.6 V the layer vanishes, and the diode then offers very little resistance to the
flow of current (Figure 5.3). When the diode is REVERSE BIASED, the depletion layer
becomes wider and little current flows. The diode is a RECTIFIER, allowing current to
flow in one direction only, as demonstrated in part 4.2 of the experiment.
In comparison with the diode, the properties of a resistor may appear uninteresting.
However, it is undoubtedly one of the most popular circuit elements. Its principal uses are
for limiting the flow of current and as a current-to-voltage converter.
Figure 5.3: Forward- and Reverse-biased diode. Note convention for supply polarity
4. Experiments in Circuit Construction.
4.1 I-V Characteristics of a Resistor (Ohm's Law)
Familiarise yourself with the prototype board. Plug the board into the mains.
Build the circuit (Figure 5.4) on the prototype board using a resistor with the colour code
yellow, purple, red (and gold) for R. Use your multimeter set on the "1 mA" scale for
measuring the current and your scope set on "dc" for measuring voltages. When making
connections between the scope and the prototype board, ENSURE THAT EARTH
CONNECTIONS ARE COMMON. The earth lead of the scope is colour-coded with the
conventional green and yellow bands common to domestic cables. Vary the input voltage
and measure the current (I) flowing through the resistor for various values of V. Plot these
values directly on to a graph of I vs V.
Figure 5.4: I vs V for a resistor. Note that the ground connection for the -5V to +5 V supply is made
internally. You need only connect one wire from the variable supply to the circuit (i.e to the
milliammeter).
The gradient of your graph is equal to 1/R. Measure this and calculate R. Compare this
value with the colour-coded markings on the resistor. Use your multimeter set on the
"ohm range" to verify your deductions. Which is the "best" value?
Briefly discuss any sources of error. How could the experiment be improved?
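The slope extraction can also be sketched numerically. The readings below are illustrative (an ideal ohmic response for the nominal 4.7 kΩ implied by the yellow, purple, red code), not real measurements:

```python
import numpy as np

# Illustrative I-V readings for a nominally 4.7 kOhm resistor
# (yellow, purple, red bands); these are not real measurements.
V = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])    # volts
I = V / 4.7e3                                    # amps: ideal ohmic behaviour

slope = np.polyfit(V, I, 1)[0]   # gradient of I vs V, equal to 1/R
R = 1.0 / slope                  # ohms
print(R)                         # ~4700 ohm
```

With real readings the points will scatter about the line, and the spread of the gradient gives the error on R.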
4.2 I-V Characteristics of a diode
Repeat the above experiment using the circuit of Figure 5.5 to determine the I-V
characteristics of a diode, a non-linear circuit element. The input voltage (Vin) is again
provided by the -5 V to +5 V variable dc supply. Note that the 1 kΩ resistor is necessary to
limit the flow of current through the diode, which might otherwise overheat and be
destroyed.
Figure 5.5: I vs V for a diode
As in the previous experiment, vary Vin and make measurements of the current flowing
through the diode (I) as a function of the potential drop across the diode (V). For part of
the characteristic you will have to increase the sensitivity of your scope. Check the
position of zero volts on the screen after changing ranges. Plot a graph of I vs V but be
selective – it may not make sense to plot the whole of the measured range i.e. if
something is not changing at all it may be sufficient to describe this in words. For small
values of V, you may find that you have to increase the sensitivity of the milliammeter. Be
sure to take plenty of readings in regions where the graph is non-linear (this is why you
must plot the graph in the lab) and you will probably have to plot the non-linear region
to a greater sensitivity.
From your graph, describe the action of the diode. Note that it "switches on" at about 0.6
V. Determine the approximate values of the diode's "resistance" in these forward
(conducting) and reverse biased regions. The forward resistance is very low, which is why
the 1 kΩ limiting resistor was required in the circuit of Figure 5.5.
Experiment 6: Optical Diffraction
Safety Aspects: You must take great care when using the laser to avoid damage to your
eyes. In no circumstances must you look along the main beam. You must also take care
that specularly reflected beams do not enter your eye when you are adjusting the various
components. Check with a demonstrator before starting the experiment.
Before coming to the lab, remind yourself about optical diffraction. Use an A level
reference or read some of Chapter 36 (p990) of The Wiley Plus “Principles of Physics”.
Outline
In optics Fraunhofer (or far-field) diffraction is a form of wave diffraction that occurs
when field waves are passed through an aperture or slit. In this experiment you will study
quantitatively and qualitatively various diffracting objects and their diffraction patterns,
by using a laser as a source of monochromatic light and a series of apertures, aligned on
an optical bench.
Experimental skills
 Using a HeNe laser, and taking relevant safety considerations.
 Careful experimental alignment and set-up using an optical bench.
 Making use of observations and trial/survey experiments (as mentioned in Experiment
3) prior to taking detailed measurements.
Wider Applications
 Any real optical system (a microscope, a telescope, a camera) contains finite sized
components and apertures. These give rise to diffraction effects and fundamentally
limit the obtainable resolution of any optical device. (There may be other optical
imperfections too, such as scratches or misalignment.)
 Thus, the resolution of a given instrument is proportional to the size of its objective,
and inversely proportional to the wavelength of the light being observed.
 An optical system with the ability to produce images with angular resolution as good
as the instrument's theoretical limit is said to be diffraction limited. In astronomy, a
diffraction-limited observation is achievable with space-based telescopes, of suitable
size.
1. Introduction
Diffraction is the name given to the modification of a wavefront as it passes through some
region in which there is a diffracting object. The object is usually an obstacle or an
aperture in an opaque sheet of material. Huygens’ Principle postulates that all points on
the modified wavefront act as secondary sources of radiation. According to Figure 6.1, at
any point P beyond the object the secondary waves superpose, or interfere, to give a
resulting disturbance which is characteristic of the diffracting object. This resulting
disturbance is usually referred to as the diffraction pattern of the object, although
interference pattern would be a better name.
Figure 6.1: Diffraction through a slit
The form of the diffraction pattern also depends on the distance, D, of the observation
plane from the object. Diffraction effects can be divided conveniently into two
categories.
(1) Near-field, or Fresnel diffraction, for which D is fairly small
(2) Far-field, or Fraunhofer diffraction, for which D >> a²/λ, where a is the size of the
diffracting unit and λ is the wavelength of the scattered radiation.
In this experiment you will be concerned only with Fraunhofer diffraction effects. The
experiment consists of studying, either quantitatively or qualitatively or both, various
diffracting objects and their diffraction patterns.
2. Experimental set-up and adjustment of the apparatus
2.1 The laser
The source of radiation is a 1 mW helium-neon (HeNe) laser which emits a coherent
beam of light of approximately 4 mm2 cross-sectional area.
Switch on the laser and adjust it so that the beam is travelling parallel to the longitudinal
axis of the optical bench. Make a crude adjustment first by standing back and using your
eye to judge how parallel the axis of the laser is to the optical bench. Then, fine
adjustment can be made by checking the beam position on a piece of white card as it is
moved along the optical bench. Hold the white card in one of the holders provided and
check that the beam strikes the card at the same point, which may be marked with a cross,
wherever along the bench it is. Make adjustments using the vertical and transverse fine
adjustment knobs on the laser baseplate. Don’t spend too much time doing this; if you’re
having trouble, talk to a demonstrator.
2.2 Objects and holder
Mount the three-jaw slide holder in a saddle positioned close to the laser.
You are provided with a series of mounted 2” x 2” slides, etched into which are various
diffracting objects. These slides are unprotected and must only be handled by their edges
to avoid damage.
SLIDE 1: One-dimensional diffraction grating.
SLIDE 2: Double slits.
SLIDE 3: A series of single slits of different widths.
SLIDE 4: Two-dimensional diffraction grating.
SLIDE 5: One-dimensional diffraction grating.
3. Measurement of the width of the central peak
Place slide 3 in the slide holder and mount it close to the laser at one end of the bench.
Adjust it horizontally until the light is passing through slit C and displaying a clear
diffraction pattern on the wall. Always look along the bench, away from the laser when
making adjustments.
Measure the distance, D, between the slide and the wall. Observe the pattern on the wall
and sketch it, to scale, in your lab book. Is the pattern what you expect? What is the
diffracting object?
Accurately measure the width of the central peak, W.
The peak width W is given by:
W = Ka^n,    [1]
where K depends on D and λ, and a is the width of the slit (Figure 6.1). Repeat this
measurement for slits D, E, F and G. Compare the width of the central peak with the slit
widths, which are given in μm on the packet containing the slides. (Record all
measurements in metres!) Rearrange equation [1] so that a plot of W as a function of a
will give you a straight-line graph and, using appropriate graph paper, plot a graph to find
the integer n. What do you think is the relationship between K, D and λ? (Hint: use
dimensional analysis to work it out and then refer to the literature to check the correct
equation.)
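The graphical analysis can also be cross-checked numerically. Taking logs of W = Ka^n turns the power law into a straight line whose slope is n. The slit widths and peak widths below are invented placeholders, not real data; substitute your own readings:

```python
import numpy as np

# Hypothetical single-slit data: slit widths a (m) and measured
# central-peak widths W (m) -- replace with your own readings.
a = np.array([0.1e-3, 0.2e-3, 0.3e-3, 0.4e-3])
W = np.array([25.3e-3, 12.8e-3, 8.4e-3, 6.3e-3])

# Taking logs of W = K * a**n gives log W = log K + n * log a,
# so the slope of the log-log fit is the integer n.
n, logK = np.polyfit(np.log(a), np.log(W), 1)
print(f"n = {n:.2f}  (expect an integer)")
print(f"K = {np.exp(logK):.3e}")
```

The fitted slope should come out close to a whole number; compare it with the value you read off your graph paper.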
4. Determination of the wavelength of the laser light
Now use SLIDE 1 to obtain the diffraction pattern as illustrated in Figure 6.2. Using the
travelling microscope and the Rayleigh mean method (if in doubt, ask a demonstrator),
determine the repeat distance d of this one-dimensional grating. Place the slide in the
slide holder so that the grating is illuminated by the laser and the diffracted beams lie
approximately in a horizontal plane. Maximise the size of this pattern so that you can
easily determine the zeroth order (centre) and as many higher orders as possible. Sketch
and describe the pattern.
Now, by careful experimental measurement it should be possible to determine the
wavelength of the laser light.
The wavelength λ of the light from the laser is given by
λ = (d sin θm) / m,    [2]
where the angle θm is indicated in Figure 6.2.
Figure 6.2: Defining d and θm
Because θm is small, sin θm ≈ tan θm ≈ x(m)/D, and [2] becomes
λ = d x(m) / (D m)    [3]
Note x(m) is the distance between the centre of the pattern and the mth diffraction spot.
Rearrange the equation to plot a suitable straight line graph in order to determine  , the
wavelength of the HeNe laser. Check that your answer is sensible!
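Rearranging equation [3] as x(m) = (λD/d)m shows that a plot of x against m is a straight line through the origin with slope λD/d. The sketch below uses invented values of d, D and x(m) as placeholders for your own measurements:

```python
import numpy as np

# Hypothetical grating data -- substitute your own measurements.
d = 2.5e-5      # grating repeat distance from travelling microscope (m)
D = 2.0         # slide-to-wall distance (m)
m = np.array([1, 2, 3, 4])                      # diffraction order
x = np.array([0.051, 0.101, 0.152, 0.203])      # spot distance from centre (m)

# Equation [3] rearranged: x = (lambda * D / d) * m, a straight line,
# so lambda = slope * d / D.
slope = np.polyfit(m, x, 1)[0]
lam = slope * d / D
print(f"wavelength = {lam*1e9:.0f} nm")   # a HeNe laser should give ~633 nm
```

If your fitted wavelength is far from the red part of the visible spectrum, check your unit conversions first.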
5. Two dimensional grating
SLIDE 4, is a two-dimensional diffraction grating. Use any convenient diffraction
method to find the ratio of the repeat distances in the two principal directions.
Remember to sketch your observations and discuss.
Experiment 7: X-rays
1. Introduction
Safety Aspects: Intense X-ray beams are harmful to human tissue. The protective cover of
the equipment is interlocked such that the X-ray beam is switched off when the cover is
opened.
THE CRYSTALS ARE FRAGILE - TREAT THEM WITH CARE.
X-rays are electromagnetic radiation of shorter wavelength than light. X-rays have
wavelengths of about 0.1 nm whereas light has wavelengths of about 500 nm. X-radiation
has many important uses, for example the study of the structure of solids. The use of X-rays
for this purpose will be explored in Semester 2. However, in this experiment some of the
properties of X-rays and their interaction with matter are studied.
The experiment consists of the following parts.
(a) Measurement of the X-radiation spectrum emitted by a copper-target X-ray tube.
(b) Study of the effect of passage through the foils of various elements on the emission
spectrum of the X-ray tube.
(c) Measurement of the absorption characteristics of various elements.
The wavelength of electromagnetic radiation is usually measured by means of a
diffraction grating. In order to obtain reasonable angles of diffraction the spacing of the
grating elements must be of the same order of size as the wavelength of the radiation. The
spacing of the atoms in simple crystals is typically of the order of 0.1 nm and the
electrons in the atoms scatter the X-rays. Consequently simple crystals make convenient
diffraction gratings for X-rays. The grating in this experiment is a crystal of sodium
chloride cut in the form of a plate. The atomic-scale structure of sodium chloride consists
of alternate sodium and chlorine ions arranged in a face-centred cubic arrangement. The
arrangement of ions and its orientation with respect to the plane of the crystal plate used
here is shown in Figure 7.1, where a is the basic cube repeat distance.
Figure 7.1: Arrangement of ions in plate of sodium chloride
The relation between the spacing of the ions for the above arrangement, the wavelength λ
of the X-rays and the angle 2θ of diffraction is
mλ = 2a sin θ    (1)
where m is the order of diffraction. For the face-centred-cubic arrangement of sodium
chloride the lowest non-zero-intensity order of diffraction (other than m = 0) is m = 2.
2. The X-ray apparatus
This consists mainly of a copper-anode X-ray tube, specimen holder and detector carriage
enclosed in an X-ray proof housing (Figure 7.2).
Figure 7.2: The X-ray apparatus
The white indicator light indicates power on (mainly the supply to the X-ray tube
filament); the red indicator, and the audible warning, indicate high voltage and X-rays on.
The X-rays can be produced only when the X-ray-proof cover is closed. The whole
apparatus is interlocked and is entirely safe. Its use will be outlined by the demonstrator.
3. Measurement of the X-ray tube emission spectrum
The wavelength of the X-rays is measured by measuring the angle 2θ at which they are
diffracted in the second order (the lowest non-zero-intensity order) from the sodium
chloride crystal. For sodium chloride a = 0.564 nm; with m = 2, show that equation
(1) becomes
λ = 0.564 sin θ (nm)    (2)
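Equation (2) is quick to apply at any carriage position; remember that the carriage scale reads 2θ, so halve it before taking the sine. The angles below are merely illustrative of where the copper peaks might fall:

```python
import math

# Quick check of equation (2).  The detector carriage reads 2*theta,
# so halve the scale reading before taking the sine.
def wavelength_nm(two_theta_deg):
    theta = math.radians(two_theta_deg / 2)
    return 0.564 * math.sin(theta)   # nm, from m*lambda = 2a*sin(theta), m = 2

for two_theta in (29.0, 32.0):       # illustrative angles near the Cu peaks
    print(f"2theta = {two_theta:4.1f} deg  ->  lambda = {wavelength_nm(two_theta):.4f} nm")
```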
Clamp the sodium chloride crystal in the specimen holder (Figure 7.2) with its long axis
vertical and with the largest ground face of the crystal in the X-ray beam. Insert the 1 mm
slit primary-beam collimator into the X-ray tube housing with the slit vertical and place
the 1 mm slit diffracted-beam collimator in the detector carriage position 18. View the
crystal face, and the primary-beam collimator slot, through the slit in the diffracted-beam
collimator. If necessary, rotate the primary-beam collimator until its slit is parallel to that
of the diffracted-beam collimator and to the crystal face.
Place the circular-aperture slide in detector carriage position 17 and the Geiger-Muller
detector in its holder in position 26 (Figure 7.3)
Figure 7.3: Plan view of component arrangement
Set the scaler unit high voltage to 400 V. Move the detector carriage from 25° to 40° in
steps of 1° between 25-28°, 1/6° between 28-33° and 1° between 33-40°. The 1/6° steps
can be made with the thumb wheel. At each position count for 10s and record the count
and angular position 2θ (Figure 7.4). Estimate the count error by repeating several times
at a fixed angle (a) on a peak (b) away from a peak.
Figure 7.4: Plan view of arrangement during measurement of count rate
Plot the count against 2θ to give the spectrum. It will be seen that this consists of a
general background of radiation with two prominent peaks. The longer-wavelength peak
is called the Kα line, the shorter the Kβ line. Calculate their wavelengths.
What do the peaks represent? What are the widths of the peaks? What might cause this
width?
4. Characteristic X-ray spectra
X-ray spectra can be considered to arise from transitions between energy levels
characterized by a quantum number n1 and levels with quantum number n2. The energy
associated with each level is given by
En = -Rhc(Z - σ)^2 / n^2
where:
R is a universal constant, 1.0968 × 10^7 m^-1;
Z is the atomic number, which for Cu = 29;
h = 6.626 × 10^-34 J s;
σ is the screening constant.
This relationship is identical to that which is applied to the hydrogen spectrum apart from
the appearance of the screening constant. This arises because, unlike the case of hydrogen
where there is only one electron, a given electron experiences a field due to the charge on
the nucleus modified by the field due to the other electrons.
The energy emitted when the atom changes from a state defined by n1 to that defined by
n2 is observed as a quantum of frequency ν such that
ΔE = hν = Rhc(Z - σ)^2 (1/n2^2 - 1/n1^2)
The K line results from a change of n from 2 to 1; the K line results from n = 3 to 1.
Use your wavelength data to do the following.
(a) Calculate values for σ for the Cu transitions. Comment on any differences you
observe.
(b) Draw an energy-level diagram for copper (lines separated by a distance proportional to
the energy separation and clearly labelled). Use σ calculated for Kα. What energy would
be needed to remove an innermost electron entirely from a copper atom?
The K series of lines is so called because the final state is the n = 1 or K shell of the atom.
Other shells exist for n = 2, 3 etc. and are called the L, M, etc. shells.
The L series of lines results from transitions which finish at n = 2, so that the Lα line is
produced when n = 3 → n = 2.
What wavelength would you predict for this line? (Ignore the screening effect). Could you
detect such radiation with your apparatus? [Consider equations 1 and 2].
5. Effect of Passage through Foils
When X-radiation passes through a foil its intensity is reduced according to the equation
Ix = I0 exp(-μx)
where:
μ is the linear absorption coefficient;
x is the thickness of the foil;
I0 is the incident intensity;
Ix is the transmitted intensity.
Replace the circular aperture slide in the detector carriage position 17 by a copper foil and
determine the magnitude of μ for absorption of (a) Kα and (b) Kβ radiation.
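The absorption coefficient follows directly from rearranging the exponential law; the counts and foil thickness below are invented placeholders for your own readings:

```python
import math

# Illustrative numbers -- replace with your own peak counts.
I0 = 1250        # count in 10 s without the foil
Ix = 310         # count in 10 s with the copper foil in place
x  = 0.05e-3     # foil thickness (m), an assumed value

# Ix = I0 * exp(-mu * x)  =>  mu = ln(I0/Ix) / x
mu = math.log(I0 / Ix) / x
print(f"mu = {mu:.3e} per metre")
```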
6. X-ray Absorption edges
The absorption coefficient is heavily dependent on the wavelength of the X-rays. This can
be best understood if we consider the physics of the absorption process.
As X-rays pass through matter they interact with the atoms and lose energy. The main
energy-loss process is ionization. The X-rays interact with the electrons which are bound
to atoms of the absorber and lose the energy which is required to remove the electrons
from the atom. This process is described by
h '  h  B.E.
where ' is the frequency of the X-rays after absorption;  is the incident frequency
and B.E. is the “binding” energy required to ionize the electron concerned.
Thus the energy lost in any one such ionization process will depend on:
a) the absorbing atomic species;
b) the shell from which the electron is removed, as the B.E. for an electron in, say, the L
shell will differ from that for an electron in the K shell.
Consider a simple case where X-rays pass through an absorber which is composed of
atoms which have electrons only in the K and L shells. X-rays of the smallest energies
(lowest frequency, largest wavelength) will be heavily absorbed by ionizing electrons in
the L shell, but will not have enough energy to ionize electrons in the K shell. This
process is described by
h '  h  E L
where EL is the B.E. of the electron in L shell.
If the energy of the X-rays is increased, the X-rays become more penetrating and the
magnitude of the absorption coefficient falls rapidly. In general, μ ∝ λ^3, so that the
variation is as shown in figure 7.5.
Figure 7.5: Variation of absorption coefficient with wavelength and frequency.
At a certain critical frequency (and equivalent wavelength), the X-ray energy has been
increased to such an extent that the X-rays are now able to ionize not only electrons in the
L shell, but also electrons in the K shell. Thus the absorption coefficient increases very
rapidly as shown in figure 7.6.
Figure 7.6: Variation of absorption coefficient with wavelength and frequency
for two transitions
This discontinuity is termed a K absorption edge and will occur at well defined
wavelengths which are characteristic of the absorber concerned. To a good approximation
the frequency νK associated with the K absorption edge is given by
hνK = EK,
where EK is the binding energy of the electron in the K shell. Similar absorption edges
may occur for L,M, etc shells.
Where on a wavelength scale would you expect to find the copper K absorption edge in
relation to the Kα and Kβ lines? Is this supported by your values for the linear absorption
coefficients?
Determine the position of the copper absorption edge. Insert the copper foil in position 17
and investigate the intensity of transmitted radiation over a wide range of angles.
Plot log10 (I0/Ix ) against 2 and hence determine the position and wavelength of the edge.
How does this compare with your estimate in 4(b)?
Experiment 8: Large Scale Structure of The Universe
1. Introduction
In this experiment you will be using a simulation of an optical telescope to make a 3D
map of the galaxies in a small section of sky. This is then used to infer the large-scale
structure of the Universe. The telescope is equipped with two instruments:
 a TV camera for locating the 2D position of a galaxy
 a spectrograph for taking the spectra of the light detected from the galaxy.
Mapping the universe in 3 dimensions is not a simple task. Finding two of the coordinates
of a distant galaxy is trivial; they are just the position of the galaxy on the sky. For the
third coordinate we need to rely on the fact that the entire universe is expanding at a
constant rate, and therefore an object which is twice as far away as a second object is
travelling away from us at twice the rate. This relationship (the Hubble law) was
discovered by Edwin Hubble in the 1920s. The rate of recession (and therefore the
distance to the object) can be measured using the Doppler effect.
The galaxies you will be mapping are from a small section of sky as shown in Figure 8.1,
stretching from Right Ascension 12h – 16h, and spanning 5 degrees in Declination, (see
below). You will assume that all the galaxies lie in the same plane in declination, so you
will only consider two of the three coordinates for a galaxy.
Right Ascension (RA) is the East-West coordinate, and is made up of 24 hours with each
hour split into 60 minutes of 60 seconds. Declination (Dec) is the North-South
coordinate, and runs from –90 to 90 degrees, with each degree split into 60 arcminutes,
and each arcminute split into 60 arcseconds. (Note that 1 arcsecond in Dec is NOT
equivalent to 1 second of RA).
Figure 8.1: Portion of the sky used in the survey.
2. Aims
 To find galaxies in a restricted area of the sky using a list compiled by earlier
astronomers.
 To take spectra of these galaxies using simulated telescopes and spectrometers.
 To measure the wavelengths of features in the spectra, and use them to calculate the
radial velocity of the galaxy.
 To plot the galaxies onto a quasi-3D plot, and measure distances to key features.
3. Theory
3.1 Calculating Redshifts
You are going to use the measured wavelength of three spectral lines to calculate the
radial recessional velocity (i.e. how fast the galaxy is travelling away from us) of galaxies.
Because the galaxies are moving away from us, the light from them is redshifted by an
amount which is proportional to the recessional velocity. This is caused by the Doppler
effect. The three spectral lines are the Calcium K, H, and G bands, which normally have
wavelengths of 3933.67, 3968.85 and 4305 Å respectively (an angstrom (Å) is 10^-10 m).
The fractional redshift (z) can be calculated using equation 1.
z = (λmeasured - λlaboratory) / λlaboratory
[1]
The recessional velocity of the galaxy can then be calculated using equation 2, where c is
the speed of light measured in km s^-1.
v = cz
[2]
3.2 Hubble’s law
In order to calculate the distance to a distant object, Hubble's redshift-distance relation
(equation 3) is used, where v is the recessional velocity of the object, D is the distance to the
object and H is the Hubble constant. The value of H is not well known, but a value of 71
km s^-1 Mpc^-1 is a reasonable figure.
v = HD
[3]
The units of distance in this equation are Mpc, the conversion to metres is:
1 Mpc = 3.1 × 10^22 m
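Equations [1]-[3] chain together as follows. The measured wavelengths below are invented for illustration; in the experiment you will average the redshift over the three calcium lines before applying the Hubble law:

```python
# Worked example of equations [1]-[3] for a single made-up galaxy.
c = 3.0e5            # speed of light (km/s)
H = 71.0             # Hubble constant (km/s/Mpc)

lam_lab = {"CaK": 3933.67, "CaH": 3968.85, "G": 4305.0}   # angstroms
lam_meas = {"CaK": 4026.0, "CaH": 4062.0, "G": 4406.0}    # hypothetical readings

# z = (measured - laboratory) / laboratory, averaged over the three lines
zs = [(lam_meas[k] - lam_lab[k]) / lam_lab[k] for k in lam_lab]
z = sum(zs) / len(zs)

v = c * z            # recessional velocity (km/s), equation [2]
D = v / H            # distance (Mpc), equation [3]
print(f"z = {z:.4f}, v = {v:.0f} km/s, D = {D:.0f} Mpc")
```

A spreadsheet laid out the same way will let you re-enter the three measured wavelengths for each new galaxy, as described later in the procedure.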
As you read through the lab manual make sure you answer all the questions. Ask the
demonstrator if you are unsure about any of the answers.
4. Experiment
4.1 Getting Started
First, ask the Lab demonstrator for a list of sources to look at.
Click on the Clea_lss icon.
Select Login from the File menu.
Enter your name and computer number (ask the lab demonstrator for the number).
Select Run from the File menu.
4.2 The Telescope Control Panel
First open the dome by clicking on the Dome button. You are presented with a view of
the sky containing many stars and galaxies. Notice how the objects move across the field
of view; this is caused by the rotation of the Earth. This needs to be counteracted by
turning the tracking on. This engages the motors that continually adjust the telescope to
stay locked onto an object.
When you have turned on the tracking, you are ready to point the telescope at an object.
This can be achieved in two ways, either use the N, S, E, W, buttons to slew the telescope
a short distance (note that the slew rate can be adjusted), or click on Set Coordinates to
move the telescope directly to a specific position.
There are two different views you can access. The initial view is the Finder View, which
has a large field of view, and is used for finding objects to look at. Move the telescope so
that one of the galaxies in your list is in the red square, and click on the Change View
button. This changes to the Instrument View, which shows a magnified image of objects
that were in the red square in the last view. There are also 2 red lines in this view; these
show the position of the spectrometer slit. In order to take a spectrum of the galaxy,
position the telescope so that part or all of the galaxy lies between the two lines.
4.3 Galactic Spectra
When you have done this, click Take Reading; you will be presented with a blank
spectrum. Clicking Start/Resume Count begins to take the spectrum. The amount of time
you need to integrate (i.e. collect light) for depends on the brightness of the source and
also the telescope you are using (see later). Clicking Stop Count shows you the spectrum
you have generated. Note that you can start integrating again if you are unhappy with the
quality of the spectra, by clicking on the Start/Resume Count button. The most important
piece of information in the spectrum is the position of each of the three spectral lines.
You can measure this by holding the mouse button down on the spectrum and moving the
cursor to the peak of the line.
Return to the viewfinder and locate the galaxy with co-ordinates:
R.A.= 12 25 48.2
Dec = 28 54 00.0
In the spectrum view, important information about the galaxy is written below the
spectrum. An important number is the galaxy's magnitude; this indicates how bright a
galaxy is. The brighter a galaxy, the lower (or more negative) its magnitude.
1. What is the name and magnitude of this galaxy?
2. Sketch the spectrum of this galaxy, making sure you label the axes correctly (don't
forget units). Label the positions of the three spectral lines.
3. Why are there dips (absorption lines) in the galaxy spectra?
4. How can these lines be used to determine the distance to a galaxy?
Locate the galaxy at:
R.A.= 12 06 30.0
Dec = 29 30 34.0
5. What is the name of this galaxy?
6. Which of these two galaxies is the brightest?
7. Which of these two galaxies is the furthest away?
8. Give two reasons why one galaxy could be brighter than another; what is the most
likely reason in this case?
To accurately measure the spectrum of a galaxy, the positions of the tips of the absorption
lines must be clearly defined. One way to quantify this is the signal to noise ratio (SNR)
of the observation. This gives the ratio of the amount of radiation measured from the
galaxy to the amount of unwanted noise; the relative noise decreases with integration
time. Choose any three galaxies and find the minimum SNR needed to reliably measure
the position of the three lines. Calculate the average of these values. This is the SNR you
should use for the rest of the experiment.
4.4 Different Telescopes
There are three different telescopes available to you, with the following primary mirror
diameters, 0.4m, 0.9m and 4.0m. You start off at the 0.9m telescope. You can change the
telescope by using the Telescope menu when in the Finder View. Notice that initially you
cannot select the 4m telescope; this is to simulate the high demand on large telescopes.
You can request time on this telescope in the Telescope menu, if you are successful, you
will be allowed to use the telescope 13 times before you need to reapply. If you are
unsuccessful, you have to wait for 10 minutes before you are allowed to reapply. Make a
table of the integration time needed to reach your required SNR for each telescope.
9. How is the integration time related to the size of each telescope? Explain why this
is the case.
10. Which telescope is the best to use for locating the galaxies?
4.5 Mapping the Sky
Now you can begin to map the sky. Make sure that you are using one of the 2 larger
telescopes, and select one of the galaxies from your list. Integrate on this source, until
you have a sufficiently high signal to noise. In your lab book, record the positions of the
3 lines, and also the telescope used, the object name, apparent magnitude, and the signal
to noise achieved. Use the wavelengths of each line to calculate the recessional velocity
of the galaxy, as described earlier. Click on Record Meas and enter the three wavelengths
and velocities, then click Verify/Average to check your results. If there are no problems,
click OK and then Return to return to the telescope instrument view. Clicking on Data>
Review in the File menu allows you to view all the information you have just entered, and
if necessary amend it.
Repeat this for the other sources in your list, remembering to save the data regularly using
the save command in the File > Data menu. It would take too long to calculate the
velocity of each of these spectra by hand. Instead make an EXCEL spreadsheet in which
you can calculate the velocity of each galaxy for each of the three lines. You can then just
re-enter the measured position of the spectral lines for each new galaxy. Ask the
demonstrator if you are unsure how to use an EXCEL spreadsheet to do this.
When you have completed the sources, save your dataset, and also export the data to the
plotting program by selecting Save Results for Plot in the File menu.
4.6 Plotting your Results
Open the file ‘plot.txt’ and plot the points by clicking Plot the current file in the Plot
menu. Because the sample of galaxies covers a very small range of Dec, you have plotted
RA against recessional velocity; you can assume that all the galaxies lie in a plane of
constant Dec. You will need to print out both the plot and the save file (called
‘name.csv’) and stick them in your lab book.
5. Additional Questions
Figure 8.2 is a diagram showing the combined results from all the galaxies measured by
another lab group. Use this to answer the following questions.
11. Does matter in the universe appear to be randomly distributed on the large scale,
or are there clumps and voids?
12. The most densely populated region of the diagram (which appears like a stick
figure of a man) is the core of the coma cluster of galaxies. What are the
approximate Right Ascension and Velocity of this feature?
13. Use Hubble’s redshift-distance relation to determine the approximate distance to
the Coma Cluster. Give your answer in both megaparsecs and light years (1 pc =
3.26 ly).
14. The actual value of Hubble's constant is (71 ± 4) km s^-1 Mpc^-1. Use this to find the
error on your distance to the Coma cluster. Assume that there is no error in the
measurement of the redshift of the cluster.
15. How far away is the farthest galaxy included in this study? How does this
distance compare to the limit of the observable universe, which is about 4.6 × 10^9
pc?
16. Discuss the problem of completeness of the sample, which is based on a catalogue
of galaxies identified on photographic plates. What sorts of objects might be
missing from the survey? How could we improve the completeness?
Figure 8.2 Diagram of combined previous results from all the galaxies.
Experiment 9: Propagation of Sound in Gases.
Note: This experiment is performed in the dark room.
SAFETY ASPECTS: MAKE SURE THAT THE ROOM FAN IS SWITCHED TO
EXTRACT AND IS WORKING.
Outline
The term "speed of sound" commonly refers to the speed of sound waves in air, although
the speed of sound can be measured in virtually any substance and will vary. The speed
of sound in other gases will depend on the compressibility, density and temperature of
the medium. You will investigate these dependencies by studying the sound
waves set up in various gases contained in a gas cavity.
Experimental skills
 Observation of longitudinal waves.
 Understand the use of a microphone as an acoustic to electric transducer.
 Hence the use of an oscilloscope to study non-electrical waves.
 Careful use of gases and gas cylinders.
Wider Applications
 In dry air at 20°C, the speed of sound is 343 metres per second. This equates to about
1,235 kilometres per hour, or about one kilometre in three seconds. The speed of sound in
air is referred to as Mach 1 by aerospace professionals (i.e the ratio of air speed to
local speed of sound =1).
 The physics of sound propagation, reflection and detection is used extensively for
underwater locating (SONAR), robot navigation, atmospheric investigations and
medical imaging (Ultrasound).
 The high speed of sound in helium is responsible for the amusing "Donald Duck" voice
which occurs when someone has breathed in helium from a balloon!
1. Introduction
The speed of propagation of a sound disturbance in a gas depends upon the speed of the
atoms or molecules that make up the gas, even though the movement of the atoms or
molecules is localised. The r.m.s. speed of molecules of mass m in a gas at Kelvin-scale
temperature T is given by
c = (3kT/m)^(1/2),
where k is the Boltzmann constant. The sound is not propagated exactly at the speed c
but at (γ/3)^(1/2) times it, where γ is the ratio of the principal heat capacities of the gas.
Thus
csound = (γkT/m)^(1/2)    [1]
Measurement of csound for known T and m therefore enables γ to be determined¹.
In this experiment the speed of sound in gaseous argon, air (mainly nitrogen) and carbon
dioxide is measured by analysing the standing waves in a cavity.
2. Experiment
2.1 Apparatus
The standing wave cavity is shown schematically in Figure 9.1.
Figure 9.1: Standing wave cavity
The loudspeaker, driven from an oscillator, directs sound into the tube; standing waves
are obtained by adjustment of the piston and detected by the microphone insert at the end
of the tube. The output from the microphone is amplified and displayed on the
oscilloscope. Ensure that the amplifier is turned off when you have finished this
experiment.
Consider and write down the relationship between the length of the tube and the
wavelength of sound for standing waves in closed and open tubes. Revise these
expressions having considered this material using reference 1 or another source. Should
you treat your equipment as having two closed ends or one open and one closed? Why?
Show that the length of the tube L is related to the wavelength as L = λ/4, 3λ/4, 5λ/4,
7λ/4, ..., i.e. L = (2n - 1)λ/4, where n is an integer.
¹ H.D. Young and R.A. Freedman, "University Physics", Pearson, San Francisco, 2004.
Note. The volume of sound coming from the speaker should be made as small as possible.
Use the most sensitive Volts/Div setting on your oscilloscope.
2.2. Experimental procedure
There may be traces of carbon dioxide in the tube from the previous experiment. This
must be removed by pushing the piston in and out of the tube over its full travel several
times.
Switch on the oscillator, and set it to give a sound at 1000 Hz. Find the approximate
positions of the maxima in the signal amplitudes. Plot the signal amplitude as a function
of piston position for all the accessible maxima (you will need to select a suitable step
size). Now plot the piston position for each maximum on a graph and deduce the
wavelength λ from the gradient. Calculate csound from the relation csound = fλ, where f is
the frequency of the sound. Repeat the measurement for a number of other frequencies up
to 5000 Hz. Consider whether there is any significant variation in your results, and
attempt to account for it. Record the atmospheric temperature. Consider what effect the
temperature might have on the measured speed of sound.
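The gradient analysis can be cross-checked numerically: successive maxima are half a wavelength apart, so piston position plotted against maximum number has slope λ/2. The piston positions below are invented placeholders for your own readings:

```python
import numpy as np

# Hypothetical piston positions of successive maxima at f = 1000 Hz.
f = 1000.0                                   # Hz
pos = np.array([0.085, 0.257, 0.429, 0.601]) # m, assumed readings

# Successive maxima are half a wavelength apart, so a plot of position
# against maximum number has gradient lambda/2.
n = np.arange(len(pos))
slope = np.polyfit(n, pos, 1)[0]
lam = 2.0 * slope
print(f"lambda = {lam:.3f} m, c_sound = {f*lam:.0f} m/s")
```

For air at room temperature the product fλ should come out close to the 343 m/s quoted in the Outline.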
Repeat the experiment at one of the higher frequencies with the monatomic gas argon in
the tube. Before attempting this, liaise with the demonstrator, who will arrange for the
supply of the gas from the gas cylinder.
Repeat the measurements at one frequency with carbon dioxide in the tube. Note any
differences in the quality of the signal obtained. Why does this happen?
Use your results to calculate the value of γ, the ratio of the principal specific heats of each
of the three gases, from equation [1].
In equation [1],
k = Boltzmann constant = 1.38 × 10^-23 J K^-1
T = temperature in kelvin
m = mass of one gas molecule, i.e. relative molecular mass × 1.66 × 10^-27 kg
The relative molecular masses of argon, nitrogen and carbon dioxide are 40.0, 28.0 and
44.0 respectively.
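Rearranging equation [1] gives γ = m·csound²/(kT). A minimal sketch, using an assumed measured speed for argon (replace with your own values):

```python
k = 1.38e-23        # Boltzmann constant (J/K)
u = 1.66e-27        # unified atomic mass unit (kg)

# Rearranging equation [1]: gamma = m * csound**2 / (k * T).
# The speed and temperature below are illustrative, not real data.
T = 293.0           # room temperature (K)
c_argon = 319.0     # m/s, an assumed measured speed for argon
m_argon = 40.0 * u  # mass of one argon atom (kg)

gamma = m_argon * c_argon**2 / (k * T)
print(f"gamma(argon) = {gamma:.2f}")   # kinetic theory predicts 5/3 for a monatomic gas
```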
Tabulate the values of γ you obtain, together with the values given by the kinetic theory of
gases.
Experiment 10: AC to DC Conversion using Diode Circuits.
Outline
Almost all electronic circuits require a DC source of power. For portable low-power
systems batteries may be used. Usually, however, electronic equipment is energized by a
power supply, a device which converts the alternating waveform from the power lines into
an essentially direct voltage. The study of ac-to-dc conversion is the subject of this
experiment.
Experimental skills
 Become familiar with the computer simulation package Electronics Workbench.
 To investigate the operation of a full-wave rectifier circuit.
 To investigate the operation of a simple power supply and examine its regulation
under various loads.
 To build real circuits, if time permits.
Wider Applications
• Endless! The process of rectification (AC to DC conversion) is used extensively in
power supplies. While almost everything in your home runs off standard AC power,
many devices actually use DC internally.
• Two indicators that a device actually uses DC are: the ability of the device to run on
batteries; and the presence of a device outside the unit that powers it. These small
"bricks" with one plug for the wall and another for the device are often called AC
adapters.
1. Introduction
The emphasis in this experiment is on simulating diode circuits on the computer.
You may then build the same circuits using actual components.
Write up your diary for this experiment in just the same way as you would for any of the
other experiments. Record clearly any readings you take. Try and interpret results
whenever possible. If you make printouts of any circuits, stick them securely to your lab.
book and label them clearly.
1.1 Starting up Electronics Workbench
Electronics Workbench is a computer simulation program to enable you to experiment
interactively with the fundamental theories of electronic circuits. The program is easy
(and fun) to use and can simulate quite complicated circuits. Don't forget to use the help
facility if you get stuck (just press F1).
1. Login as usual
2. On start menu, go to:
Networked applications/Departmental Software/Physx/Multisim 7
(if there is an error message about ‘registry’ – click OK)
3. You are now in the workbench environment.
2. A Half-Wave Rectifier
To get some practice using Electronics Workbench, you'll begin by constructing a circuit
you have already met: see “The diode as a rectifier” in experiment 5.
2.1 Clear the workspace and assemble the circuit (see Figure 10.1) using 1 diode, 1
resistor, 2 earths, oscilloscope and function generator. The diode, resistor, connectors and
earth can be selected and dragged from the parts bin (on left hand side of the workspace).
Choose the resistance value and diode type, when inserting the components. The resistor
should have a value of 1 kΩ and the diode should be of type 1N4001GP. You can rotate
or flip components by selecting them (with the RH mouse button) and then choosing 90°
clockwise or 90° counter-clockwise. The oscilloscope and function generator icons
should be selected from the equipment shelf (on right hand side of screen) and placed on
the workspace. The function generator will be used to provide an ac voltage source.
Attach wires to the icons themselves. Connect the + and - terminals of the function
generator to the circuit. Connect the A channel of the oscilloscope to record the input
voltage and the B channel to record the voltage across the load resistor. Double click on
the icon to zoom open the face of the oscilloscope or the function generator.
Figure 10.1: Half-wave Rectifier Circuit.
2.2 On the function generator face (zoom open by double-clicking on the icon with the
mouse), you can input sine, square or triangular wave forms by pressing on the
appropriate symbol. Select a sine wave in this way. Choose a frequency of 50Hz. Leave
the duty cycle and offset values unchanged at 50 and 0 respectively. Change the
amplitude value to the required input voltage, say 10 V. (If the output comes from the +
and - terminals, the peak-to-peak value will be four times the amplitude value shown.)
2.3 On the oscilloscope face set the time base to 5.00 ms/div and select Y/T. Set both
channel A and channel B to 10 V/DIV and click on DC in both cases. Set the trigger to
AUTO. Attach a wire from the oscilloscope icon’s ground terminal to a ground
component.
2.4 Go to Simulate, select Analyses and choose Transient Analysis. Click Apply.
2.5 Click the ‘switch’ in the top right-hand corner of the screen to activate the circuit.
You should be able to see both the sine-wave input signal and the rectified output signal
on the CRO. Identify which trace is which. It may help to move one of the traces
vertically by changing the y position from zero on the oscilloscope face. Sketch the output
characteristic (current (y) versus voltage (x); see experiment 5) of a diode. From your
sketch explain the operation of this circuit and account for the traces you obtained.
2.6 Make a print-out of your circuit and output traces by choosing the Print Circuit
option from the File Menu. Before printing, click the reverse button on the oscilloscope to
change the oscilloscope screen to a white background. To print instruments choose Print
Instruments from the File Menu then select the instrument you wish to print. Remember
to attach any print-outs securely to your lab. book and explain clearly what they represent.
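Before moving on, the output trace you should expect can be sketched numerically. This is a simplified model, not the Multisim simulation itself: the diode is treated as ideal apart from an assumed 0.7 V forward drop (a typical silicon value, not a measured parameter of the 1N4001GP).

```python
import numpy as np

# Minimal numerical sketch of the half-wave rectifier traces: the diode
# conducts only when the input exceeds its forward drop, taken here as
# 0.7 V (an assumed typical silicon value).

f = 50.0                               # Hz, as set on the function generator
amplitude = 10.0                       # V
t = np.linspace(0, 3 / f, 600)         # three cycles
v_in = amplitude * np.sin(2 * np.pi * f * t)
v_out = np.where(v_in > 0.7, v_in - 0.7, 0.0)   # conducts on positive half-cycles only

print(f"input swings {v_in.min():.1f} .. {v_in.max():.1f} V")
print(f"output swings {v_out.min():.1f} .. {v_out.max():.1f} V")
```

The output is zero for the whole negative half-cycle, which is exactly the shape you should see on channel B of the simulated CRO.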
3. The Full-Wave Bridge Rectifier
3.1 Connect the bridge rectifier circuit shown in Figure 10.2. You can use the same
circuit you assembled in section 2 above but now include 4 diodes instead of 1 in the
arrangement shown. Note that now you cannot observe the input and output of the circuit
simultaneously. Explain why. (Think about the earth connections.) Keep the same value
for the resistor you had before (i.e. 1 kΩ).
3.2 Observe the output waveform on the CRO. Sketch or print out the output waveform.
Sketch the circuit in your diary and draw on the current path during both the positive and
negative halves of the cycle. Use this to explain the shape of the output waveform.
Figure 10.2: Full-wave Bridge Rectifier Circuit.
4. A Mains-Operated DC Power Supply
Although the output from the bridge rectifier circuit is always of the same polarity
(positive in the case above), it varies with time. You can smooth out the time-varying
parts of the signal by simply connecting a capacitor across the load resistance.
4.1 Select an electrolytic capacitor of value 10 μF from the parts bin and then from the
components family list. Connect it across the load resistor (Figure 10.3). Set the load
resistance to 1 kΩ. How does the output trace differ from that obtained without the capacitor?
Change the value of the capacitance to 1 μF and then to 100 μF. How does the trace change
in each case?
To change the component value, double-click on it and choose replace from the menu.
4.2 The degree of rectification can be estimated by measuring the peak-to-mean ratio. See
Appendix. You can use the AC and DC buttons on the CRO here in exactly the same way
as you did before on the real oscilloscope in Experiment 5. If V(ac) is the voltage
amplitude measured on ac and V(dc) is the voltage amplitude measured on dc then the
peak to mean ratio P is given by:
P = V(dc) / [V(dc) + V(ac)]
Wait until the trace has settled down into a steady state, after 1 or 2 cycles, before taking
any readings. Estimate the peak to mean ratio for load resistances of 1 kΩ, 10 kΩ and
100 kΩ and for capacitance values of 1 μF, 10 μF and 100 μF so that you have 9 values
in total.
Figure 10.3: DC Power Supply Circuit.
4.3 Look at your results carefully. How do both R and C affect the peak to mean ratio?
What do you notice about the peak to mean ratio for different combinations of R and C
for which the product of R and C is the same? What is the significance of the RC product?
(Look in Measurement of Capacitance, Experiment 7 if you’re not sure.) Use this
knowledge to explain the operation of this circuit and to account for the traces you
obtained.
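The role of the RC product can also be checked numerically. The sketch below uses a simplified model in which the capacitor charges instantly at each full-wave peak and then decays exponentially through R; this is an approximation, not the simulated circuit, but it shows why combinations with the same RC give the same peak-to-mean ratio P = V(dc)/[V(dc)+V(ac)].

```python
import numpy as np

# Sketch of why only the product RC matters for smoothing: between peaks of
# the full-wave rectified input (period T = 1/(2f) for 50 Hz mains) the
# capacitor is modelled as discharging through R as V = Vp * exp(-t/RC).
# Instant recharge at each peak is an assumption of this simplified model.

def peak_to_mean(R, C, f=50.0, Vp=10.0):
    T = 1.0 / (2 * f)                        # time between full-wave peaks
    t = np.linspace(0, T, 1000)
    v = Vp * np.exp(-t / (R * C))            # decay between recharges
    v_dc = v.mean()                          # what the scope shows on DC
    v_ac = (v.max() - v.min()) / 2           # ripple amplitude, on AC
    return v_dc / (v_dc + v_ac)              # P as defined in section 4.2

print(peak_to_mean(1e3, 100e-6))   # R = 1 kOhm,   C = 100 uF
print(peak_to_mean(100e3, 1e-6))   # R = 100 kOhm, C = 1 uF -> same RC, same P
```

Any two (R, C) pairs with the same product give identical P in this model, which is the trend your table of 9 values should show.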
EXTENSION
5. Real-life experiment
5.1 Now construct the circuits in parts 3 and 4 for real. The p.d. V (a few volts) is
obtained from an isolating transformer which is fed by the signal generator. Use your
oscilloscope to verify that the output from the transformer is 50Hz. Note that since you
are using an isolating transformer the output is “floating” i.e. not tied to earth (zero volts).
Refer to Introduction to Electronics Experiments (Section III.I, page 75) in this Lab
Manual. Make sure you connect the diodes in the correct direction (see Figure 10.4). Use
a resistance substitution box as the variable load resistance. As before, don't try to observe
both the input and output at the same time. Without the capacitor, sketch the output
waveform you observe on the CRO. Look especially at the waveform in the region near
zero volts. Why are there flat regions? (Think about the two output characteristics you
sketched in section 2.6). Observe and record what happens as you vary the load
resistance.
Figure 10.4: Diode Polarity
5.2 Now connect a 10 μF electrolytic capacitor across the load resistance as shown in
Figure 10.5. Observe the correct polarity of the electrolytic capacitor; if connected
incorrectly it could overheat and rupture. Set R to 10 kΩ, sketch the output waveform and
compare it with the one obtained without the capacitor.
Figure 10.5: Electrolytic capacitor connected across the load resistance
5.3 Vary R in powers of 10 from 1000 Ω to 10 MΩ. For each value of R, calculate the
peak-to-mean ratio.
5.4 Remove the 10 μF capacitor and replace it with a 1000 μF electrolytic capacitor.
Repeat the procedure in point 5.3 above.
5.5 Compare your results with your earlier simulated results. Do you observe the same
trends in both cases? How useful do you think it is to simulate the circuit on a computer
before doing the real experiment?
Experiment 11: Microwaves
Safety
Although the microwave power used in this experiment is very low, students should take
care not to look directly into the source when it is switched on.
The resistor mounted on the back of the transmitter does get hot after extended use.
Outline
The properties of waves in general and electromagnetic waves in particular are examined
by using microwaves of wavelength ~2.8 cm. The properties examined include
polarization, diffraction and interference. The interference experiments are similar to
those performed with visible light at much shorter wavelengths (and sound with similar
wavelengths). However, the macroscopic wavelength of microwaves is exploited to
reveal behaviour not readily accessible at short wavelengths, in particular phase changes
on reflection and edge diffraction effects.
Experimental skills
• Experience of handling microwave radiation, sources and detectors.
• Experience of polarized electromagnetic radiation.
Wider Applications
• Microwave radiation is used in communications, astronomy, radar and cooking.
Mobile phones use two frequency bands at ~950 MHz and ~1850 MHz.
Astronomy - the cosmic microwave background radiation peaks at λ = 1.9 mm.
Microwave ovens use a frequency of 2.45 GHz (wavelength 12.2 cm). The
oscillating electric field interacts with the electric dipole in water molecules so that
they rotate, have more energy and so get “hotter”. Since water molecules in solid
form cannot rotate, ice is an inefficient absorber of microwave radiation.
• The manipulation of polarization is an important way to exploit electromagnetic
radiation. This is not restricted to plane polarization. For example “circularly”
polarized light is exploited in the latest 3D films shown at cinemas.
• Electromagnetic radiation detection is common to many branches of physics. For
example, with an array of detectors similar to the ones used here and some optics,
astronomical imaging becomes possible - this is a very active research area within this
School.
Equipment List: Microwave generator, two detectors (point probe and horn), Multimeter (using mV or V scale, depending on equipment), metal plates and grid.
1. Introduction
The name “microwave” is generally given to that part of the electromagnetic spectrum
with wavelengths in the approximate range 1 mm - 100 cm (10^-3 - 1 m). This compares
with the visible region, with wavelengths of 4 to 8 × 10^-7 m. Microwaves therefore have a
wavelength which is >20,000 times longer than light waves. Because of this difference it
is easier in many cases to demonstrate the wave properties of electromagnetic radiation
using microwaves.
1.1 Electromagnetic Waves
An electromagnetic wave is a transverse variation of electric and magnetic fields as
shown in Figure 11.1 and travels through space with the velocity of light (3 × 10^8 m s^-1).
Because it is a transverse wave it can be “polarized”, meaning that there is a definite
orientation for its oscillations. As shown in Figure 11.1 an electromagnetic wave is
composed of electric and magnetic fields oscillating at right angles. The direction of
polarization is defined to be the direction in which the electric field is vibrating. (This is
an arbitrary matter; the magnetic field could equally well have been chosen to define the
direction of polarization). Plane polarized radiation means that the electric field (or the
magnetic field) oscillates in one direction only.
Figure 11.1 The electric and magnetic fields in an electromagnetic wave. E is the
electric field strength, B is the magnetic flux density. The wave propagates with a
velocity of 3 × 10^8 m s^-1.
The microwave transmitter provided emits monochromatic plane polarized radiation. A
normal light source is a mixture of many different directions of polarization so that its
average polarization is zero.
An electric field is defined in terms of both an amplitude and direction and is therefore a
vector. It is useful to think of polarized radiation in terms of vectors. The detectors of
(microwave) electromagnetic radiation used in this experiment are polarization sensitive
(some detectors are not). In this case the relative orientation of the transmitter (and electric field)
and the detector (receiver) is important and is illustrated in Figure 11.2.
[Figure: the electric field direction of polarised electromagnetic radiation shown at angle θ to the orientation of a polarisation-sensitive detector.]
Figure 11.2. Plane polarised radiation incident at an angle θ with respect to the
sensitive direction of the detector.
In Figure 11.2, if the amplitude of the electric field of the incident radiation is E₀ the
component that is experienced by the detector is E₀cosθ. Some detectors give an output
that is proportional to the amplitude of the electric field; however, many have an output
proportional to the intensity, I (or power). Intensity is proportional to the square of the
electric field, so for an aligned field and detector
I = I₀ = kE₀²
whereas at an angle θ,
I = kE₀²cos²θ = I₀cos²θ.
From the above, the angular dependence of the signal can reveal something
about how the detector/receiver operates.
“Diffraction” and “interference” both relate to the superposition of waves and are
essentially the same physical effect. Custom and practice dictates which term is used in a
particular circumstance. The essential principles should be familiar to 1st year physics
students and will not be repeated here.
2. Experimental
2.1 Apparatus: The Microwave Equipment
• The transmitter, incorporating a Gunn diode in a waveguide and a horn, gives plane
polarized* radiation and is operated at 10 V, fed by a power supply.
• There are two receivers*: one is a feed horn receiver, the other is a probe.
• The feed horn receiver is the most sensitive and is both polarization dependent* and
directional.
• The probe is non-directional, but is still polarization dependent and is less sensitive.
• The receivers are connected to a voltmeter on its mV range.
*The polarization of the transmitter and horn receiver is vertical if the writing on the back
of the units is horizontal. The probe receiver, supported by its stand on the bench, is
sensitive to vertically polarized radiation.
Important:
• Reminder: Do not look into the transmitter when it is turned on.
• Neither receiver should be placed nearer than 10 cm to the transmitter.
• Stray reflections are a big problem when undertaking microwave experiments.
To minimise these, the experiment should be carried out on the top level of the
bench and all objects (bags, hands and arms etc) should be kept out of the beam
whilst taking measurements.
2.2. Standing waves and the determination of wavelength
To create a stationary (standing) wave a reflecting surface is placed in the path of a
progressive wave to reflect the wave along its own path. The resulting waveform should
be similar to that shown in Figure 11.3 where the distance between successive nodes (or
antinodes) is half a wavelength.
[Figure: incident wave (velocity c) and reflected wave (velocity c) forming a standing wave of field E, with nodes and antinodes; successive nodes (or antinodes) are separated by λ/2.]
Figure 11.3. Depiction of the standing waves set up when a wave is reflected off a
surface.
• The (aluminium) reflector plate should be approximately 1 metre from the microwave
source.
• Place the probe in the region of the standing waves and move the reflector plate either
towards or away from the transmitter. (A very similar experiment can be performed
by moving the detector with the reflector plate fixed.)
• The probe will pass through the wave form given in Figure 11.3 and when the probe is
connected to the meter in the receiver it will display successive maxima and minima.
• Determine the wavelength of the microwave by recording the positions of several
maxima and plotting a graph of position versus maximum number (the slope will give a
value for half a wavelength). Does the wavelength agree with the value written on the
back of the transmitter horn?
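The slope method can be sketched as a short least-squares fit. The positions below are invented for illustration (spaced roughly 1.4 cm apart, as expected for λ ≈ 2.8 cm); substitute your own readings.

```python
import numpy as np

# Sketch of extracting the wavelength from the slope of maximum position
# versus maximum number. The positions below are illustrative, spaced
# ~1.4 cm apart with a little scatter, as expected for lambda ~ 2.8 cm.

n = np.arange(6)                                         # maximum number
pos_cm = np.array([10.0, 11.4, 12.8, 14.3, 15.6, 17.0])  # plate positions, cm (illustrative)

slope, intercept = np.polyfit(n, pos_cm, 1)              # slope = lambda / 2
print(f"lambda = {2 * slope:.2f} cm")
```

Fitting a line through several maxima averages out the reading error on any single position, which is why it beats measuring one node-to-node spacing.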
2.3 Plane polarised electromagnetic radiation
This section consists of a number of experiments to reveal the behaviour of the
microwave source and receivers/detectors as well as some of the properties of plane
polarized radiation.
Plane polarization and receiver sensitivities
• Position the transmitter and horn receiver 0.5 m apart with both oriented for vertically
polarized radiation. Align the transmitter and detector by maximising the signal and
make a note of the signal.
• The polarization of the emitted radiation and polarization sensitivity of the receiver can
be demonstrated by rotating the transmitter through 90°. Find the minimum possible
signal and record it.
• Repeat for the probe receiver and compare the properties of the two receivers.
• Return the transmitter and horn to their vertical position. Place the large metal grid
between the two, rotate it and observe the variation in the received signal. What effect
does the grid have? Why?
Detection of polarized radiation: angular dependence
Either by using the metal grid or by rotation of the transmitter, deduce the dependence of
the measured power on the angle of polarization. (This may be quite tricky.)
• Find a suitable way of measuring the angle of rotation and vary this in 15 degree steps
from 0° to 180°. Record the measured signal.
• Tabulate the signal measurements along with the expected values for cosθ and cos²θ
dependencies. What do the results imply?
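The expected columns can be generated rather than computed by hand. A detector responding to the field amplitude should follow cosθ, while one responding to intensity should follow cos²θ (Malus's law); comparing your measured signal against both columns tells you which kind of detector you have.

```python
import numpy as np

# Expected relative responses at each angle for a detector proportional to
# the field amplitude (cos theta) versus one proportional to intensity
# (cos^2 theta, Malus's law). Compare your measured column against both.

angles_deg = np.arange(0, 181, 15)
for a in angles_deg:
    c = np.cos(np.radians(a))
    print(f"{a:4d} deg   cos = {c:+.3f}   cos^2 = {c*c:.3f}")
```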
2.4 Demonstration of interference effects
This part of the experiment builds up a microwave analogue of the single slit optical
interference experiments. By concentrating on the straight through beam the experiment
complements optical diffraction experiments. The general arrangement is shown in
Figure 11.4.
[Figure: transmitter facing the plane AA’, with the probe (connected to the meter on the receiver) at position x along AA’.]
Figure 11.4. Schematic of the experimental arrangement for interference from a
single slit (the transmitter is shown relatively much closer to the slit than is required).
The experiment is performed in four parts whilst keeping the distance between the front of
the transmitter and the plane AA’ constant (at ~0.6 m). This will allow all results to be
compared.
(i) No slits in place
This section gives an indication of the spread of microwaves emitted from the source.
• Position a 1 m rule on the bench top to provide an indication of position in the AA’
plane.
• Moving the probe in 2 cm steps between measurements, take 8 measurements either
side of the centre line, i.e. 17 measurements in all.
• Plot the data. Note: The graph shows the distribution of microwave power in the
“beam” emitted from the transmitter.
(ii) Single slit: variable slit width probe fixed in straight through position
This section investigates the effect of slit width on the straight through beam.
• Position the two large plates equidistant from the front of the transmitter and the plane
AA’, with a slit width of 3 cm.
• Keeping the centre of the slit on the line between transmitter and probe, take
measurements as the separation of the plates (width of the slit) is increased in 2 cm
steps up to ~21 cm and then in 1 cm steps up to ~35 cm.
• Plot the data and compare with (i).
Note: The above results have all the hallmarks of interference.
(iii) Single plate: variable plate position, probe fixed in straight through position
This section seeks to provide an explanation for the results found in (ii).
• Position one large plate as above but with one of its edges directly in the line of sight
between the source and the detector. Make a note of this position and then move it
across a further 5 cm to obscure the detector.
• From this starting position take readings as the plate is moved out of the beam. Take
readings every ~2 cm for the first 10 cm and every 1 cm for the final 10 cm (20 cm
movement in total). (You can always add more readings if you need to.)
• Plot the data and consider whether two such single plates can explain the results in
(ii).
Note: There is very little scattering of radiation behind the plate.
The origin of interference
If all has gone well, the two-plate/single-slit interference behaviour of the straight
through beam can now be understood to arise from the addition of the effect of two single
plates. The single plate behaviour is better considered to be an example of “straight edge
diffraction” where the straight through beam from the emitter interferes with a secondary
source of radiation reflected from the edge of the plate.
As the plate is moved away from the centre line the path difference, between the straight
through and reflected beams, increases. From this argument it might be expected that the
first turning point, corresponding to a path difference of λ/2 (phase difference of π), would
be a minimum, whereas clearly it is a maximum. This is explained by the reflection at the
edge producing a (negative) phase shift in the re-emitted radiation.
• Use Pythagoras’ theorem to determine the phase shift** caused by reflection at the
edge. See Appendix at end.
** A simple reflection (as in 2.2) would be expected to result in a -π phase shift, however
with this geometry the Gouy effect is reported to result in a further -π/4 phase shift giving
a total of -3π/4.
(iv) Single slit diffraction pattern: fixed width
This section seeks to illustrate the fundamental equivalence of light and microwaves by
generating a (familiar) single slit diffraction pattern.
• Position the two large plates as in (ii) but with a separation of 11 cm.
• Moving the probe in 2 cm steps between measurements, take 8 measurements either
side of the centre line, i.e. 17 measurements in all.
• Plot the data and compare the first minimum with its expected position (given λ = 2.8
cm).
(Note: Here, due to diffraction, minima are expected at nλ = d sinθ, where d is the slit
width.)
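The expected position of the first minimum can be estimated from nλ = d sinθ. The slit-to-probe distance used for the lateral offset below is an assumed 0.3 m (half of the ~0.6 m transmitter-to-AA’ distance), purely for illustration.

```python
import math

# First diffraction minimum for the 11 cm slit, using n*lambda = d*sin(theta).
lam = 2.8e-2          # m
d = 11e-2             # m
theta = math.asin(lam / d)               # first minimum (n = 1)
print(f"theta = {math.degrees(theta):.1f} deg")

# Lateral position in the AA' plane, assuming the slit sits ~0.3 m from the
# probe (half of the ~0.6 m transmitter-to-AA' distance - an assumption).
L = 0.3
print(f"x = {L * math.tan(theta) * 100:.1f} cm from the centre line")
```

This predicted offset is comfortably within the ±16 cm scanned range, so the first minimum should be visible in your plot.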
Appendix
The experimental arrangement is shown in Figure 6 where the source is considered to be a
point - a parallel beam would be more appropriate for a visible laser/edge arrangement.
The distance from plane of sheet to the source and detector is the same.
[Figure: source and detector either side of the metal sheet, whose edge lies a distance d from the direct line; L is the distance from the plane of the sheet to the source and to the detector, and δ is the extra path via the edge.]
Figure 6. Schematic of experimental arrangement for edge interference. The paths for microwaves
travelling directly between source and detector and via the edge are shown.
The geometric path difference (found using Pythagoras) is 2δ, where
δ = (d² + L²)^(1/2) - L
Extrema (i.e. maxima and minima) in intensity occur, taking into account the Gouy effect,
when:
(m - 1)λ/2 = 2(d² + L²)^(1/2) - 2L - 3λ/8
where m is a positive integer. Note that half-wavelength path differences give alternating
maxima and minima, hence the “extrema”.
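The extremum condition can be solved for the plate-edge offsets d at which maxima and minima should appear. This sketch assumes the condition takes the form (m - 1)λ/2 = 2(d² + L²)^(1/2) - 2L - 3λ/8 and takes L = 0.3 m purely for illustration.

```python
import math

# Plate-edge offsets d at which extrema appear, assuming the extrema
# condition has the form (m - 1)*lam/2 = 2*sqrt(d**2 + L**2) - 2*L - 3*lam/8
# (geometric path difference 2*delta, with a -3*lam/8 shift from reflection
# plus the Gouy effect). L = 0.3 m is an illustrative value, not a measurement.

lam = 2.8e-2
L = 0.3

for m in range(1, 5):
    rhs = (m - 1) * lam / 2 + 3 * lam / 8 + 2 * L     # = 2*sqrt(d^2 + L^2)
    d = math.sqrt((rhs / 2) ** 2 - L ** 2)
    print(f"m = {m}: d = {d * 100:.1f} cm")
```

Note that the spacing between successive extrema shrinks slowly with m because the path difference grows roughly quadratically in d for small offsets.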
III: BACKGROUND NOTES
III.1: Experimental Notes:
INTRODUCTION TO ELECTRONICS EXPERIMENTS
In these experiments you will be required to build a variety of analogue electrical circuits
and to make measurements of potential differences, current flows etc. The following notes
give advice on building circuits and how to use test equipment, such as oscilloscopes,
multimeters and signal generators. The final section gives advice on eliminating faults in
electrical circuits.
1. Building Circuits
BREADBOARDS are used to make circuits in some experiments. This is a purpose-built
board which allows you to make all the necessary connections between components by
means of plugs and sockets and eliminates the need for soldering. Figure 1 shows a
diagram of a breadboard of the type you will use.
Figure 1: The breadboard you will use in Yr 1 experiments with details of connections.
At the top of the breadboard is a set of connections which can be made with 4 mm
connectors or with bare wire if the highlighted tab is pushed in. There is a choice of having
a variable DC voltage or a constant voltage, given by the yellow/green/blue and red/black
sockets respectively. The green plug is the ground socket, and the range of voltages offered by
the variable power supply is between 11.5V.
The grid of blue sockets has its own systematic layout too. Sets of 5 horizontal sockets
are connected within themselves, but are independent of the sets above and below.
Furthermore sockets within a vertical column are connected, as there are four of these
vertical sets, it can be useful to set one to 0V, one to positive voltages and one to negative
voltages. As a result, you must think about the points at which you connect a wire, as it
needs to be in the appropriate row or column in order to complete the circuit.
You are advised to construct circuits so that they resemble as near as possible the circuit
diagrams in the script. You will find this of great benefit when trying to locate faults. Note
that two interconnecting wires are indicated by a dot placed at their intersection in a
circuit diagram. Wires which simply cross each other are not connected.
2. The Oscilloscope
The basic functions of the scope are shown in Figure 2. Most of the functions are self
explanatory. In addition, you should note the following:
(i) VOLTS/DIV. Ensure that the central yellow knob is turned fully clockwise to the
CAL position. The markings then represent VOLTS/cm.
(ii) AC-DC-GND SWITCH. The normal setting of this switch should be to the DC
position. The input is then directly coupled to the input amplifier of the scope.
When switched to GND the input is shorted to ground and the scope displays zero
volts. When the switch is set to AC, a capacitor is introduced between the input
and input amplifier. The capacitor blocks dc but passes ac. It is useful for
displaying ripple voltages which are superimposed on large dc voltages.
(iii) TRIGGER LEVEL. This controls the scope's ability to reproduce a steady trace
on the screen. If the trace flickers, first check that the switch above the CH2 input
(INT TRIG) is set to either CH1 or CH2, depending on which channel you are
using to display your signal. Next check that the TRIGGER LEVEL is set to
AUTO: first set the SWEEP MODE on AUTO and then rotate the knob marked
LEVEL until the trace becomes steady (probably best in the LOCK position). If
the trace still continues to flicker, the signal is probably too small to operate the
internal circuit and your only recourse is to amplify the signal further.
Figure 2: The Oscilloscope
Additional Notes on Timebase trigger
For the analysis of time varying voltages the trace on the oscilloscope screen must be
stationary. If the timebase were "free-running", that is, not synchronised to some multiple
of the repeat-time or period of the input waveform, then the trace on the screen would not
be stable.
To synchronise the timebase to the repeat time or period of the input waveform a "trigger"
is used. The trigger circuit in the C.R.O. 'fires' or emits a pulse when the input voltage
passes a set threshold level. This pulse is then used to initiate the timebase cycle. In this
experiment the input to the trigger circuitry is normally taken from the Y- input amplifier.
Sometimes it is found necessary to apply an alternative, externally-derived voltage direct
to the trigger circuit via the external trigger input.
The trigger is sensitive to both slope and polarity of the input waveform and can be set to
fire on a particular slope and on positive or negative polarity. Hence, if a periodic
waveform such as a sinusoid is applied to the input terminals, the trigger can be set to fire
once every cycle at a fixed point in the cycle (Figure 3). The timebase cycle shown would
lead to a stationary trace representing one cycle of the input waveform.
Figure 3: Understanding the timebase
Notes on the AC and DC components of the oscilloscope waveform.
[Figures 4(a)-(c): a general time-varying waveform (a), its D.C. component (b) and its A.C. component (c).]
A general time-varying voltage such as that shown in Figure 4(a) may be divided into two
components:
(i) a D.C. component, equal in magnitude to the mean value (ie, the average over all
time) of the waveform (Figure 4(b)) and
(ii) an A.C. component which remains when the D.C. component has been removed
from the waveform (Figure 4(c)).
The oscilloscope amplifiers may be D.C. or A.C. coupled by use of the D.C./A.C. switch
on the panel. Try this on the waveform you are observing. When the switch is set to
D.C. the trace represents both the D.C. and A.C. components as shown in Figure 4(a).
Setting the switch to A.C. removes the D.C. component just leaving the A.C. component
as in Figure 4(c).
If the switch is moved from D.C. to A.C. the trace will be seen to shift up or down, the
amount by which it moves being equal to the D.C. component of the waveform. So, to
find the ratio of the peak value to the mean value,
(i) set the buttons to D.C. and measure the peak voltage Vp,
(ii) depress the A.C. button noting the voltage Vm by which the trace drops, and
(iii) calculate the ratio Vp/Vm.
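The same decomposition can be sketched numerically, using a synthetic full-wave-rectified sine as the input waveform.

```python
import numpy as np

# Sketch of splitting a waveform into DC and AC components, as the scope's
# AC/DC switch does, using a synthetic full-wave-rectified sine as input.

t = np.linspace(0, 0.04, 4000)                 # two 50 Hz cycles
v = np.abs(10 * np.sin(2 * np.pi * 50 * t))    # full-wave rectified, Vp = 10 V

v_dc = v.mean()                # the shift seen when switching D.C. -> A.C.
v_ac = v - v_dc                # the component left on the A.C. setting
v_p = v.max()                  # peak read on the D.C. setting

print(f"Vp = {v_p:.2f} V, mean Vm = {v_dc:.2f} V, Vp/Vm = {v_p / v_dc:.2f}")
```

For an unsmoothed full-wave-rectified sine the mean is 2Vp/π, so Vp/Vm is about 1.57; a well-smoothed supply would give a ratio close to 1.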
3. The Multimeter
The multimeter you will encounter in your first year experiments (and many subsequent)
is a hand-held digital device shown in Figure 5. It is capable of measuring direct and
alternating voltages and currents, resistance, and diode operation. You must select the
mode of operation on a central switch, apply your terminals correctly and select the
appropriate measuring range.
Display
Range Button
Rotary Switch
Terminals
Figure 5: The Multimeter
4. The Signal Generator
The output from the oscillator is available from the bottom right BNC socket. The signal
amplitude can be varied by means of the attenuator (0 dB or -20 dB) and the variable
output level. Three different waveforms are available: sine, triangular and square. The
OFFSET knob works only when the DC OFFSET button is depressed.
5. Resistance Colour Codes
Resistors are colour-coded to indicate their resistance, tolerance and power-handling
capacity. The background colour indicates the maximum power of the device. You will
use only 0.5 W resistors (dark red background). The four coloured bands can be read as
described below to determine the resistance and tolerance.
The final gold or silver band gives the tolerance as follows:
gold ± 5%
silver ± 10%
Colour   Digit   Multiplier   No. of zeros
silver    -       0.01         -2
gold      -       0.1          -1
black     0       1             0
brown     1       10            1
red       2       100           2
orange    3       1 k           3
yellow    4       10 k          4
green     5       100 k         5
blue      6       1 M           6
violet    7       10 M          7
grey      8       -             -
white     9       -             -
Table 1.1: Resistor colour-codes
Example: red-yellow-orange-gold is a 24 kΩ, 5% resistor.
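Decoding a four-band resistor can be expressed as a short lookup following Table 1.1 (the function name `decode` is just for illustration).

```python
# Sketch of decoding a four-band resistor, following Table 1.1.

DIGITS = {"black": 0, "brown": 1, "red": 2, "orange": 3, "yellow": 4,
          "green": 5, "blue": 6, "violet": 7, "grey": 8, "white": 9}
MULTIPLIERS = {"silver": 0.01, "gold": 0.1, "black": 1, "brown": 10,
               "red": 100, "orange": 1e3, "yellow": 1e4, "green": 1e5,
               "blue": 1e6, "violet": 1e7}
TOLERANCES = {"gold": 5, "silver": 10}

def decode(band1, band2, band3, band4):
    """Return (resistance in ohms, tolerance in %)."""
    value = (10 * DIGITS[band1] + DIGITS[band2]) * MULTIPLIERS[band3]
    return value, TOLERANCES[band4]

print(decode("red", "yellow", "orange", "gold"))   # the 24 kOhm, 5% example
```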
6. Finding Faults in Electronic Circuits
During the course of the laboratory work you will probably encounter practical
difficulties. You should always try to solve these problems yourself, but if you are unable
then you should call on the assistance of the demonstrator.
Occasionally, a circuit will fail to operate because of a faulty component, but more often
than not problems arise from the incorrect use of test equipment, the omission of power
supplies from circuits, or the use of broken test leads. Faults are not usually apparent to
the naked eye, but they may be detected quite easily by following a systematic checking
procedure such as that outlined below. If after following these procedures your circuit
still doesn't work, then DO NOT HESITATE TO ASK THE DEMONSTRATOR FOR
HELP.
(i) Ensure that you understand how to use each piece of test equipment. If in doubt,
consult the demonstrator.
(ii) Examine the circuit for any obvious faults. Is the circuit identical to the circuit
diagram in the script? Are the components the correct values? Are there any loose
wires or connectors which could short out part of the circuit?
(iii) The fault may lie in the circuit itself, in the signal generator which supplies the input
signal, or in the measuring equipment. Switch on the power supply to the circuit and
apply the input signal. Use both channels of a double-beam scope to measure
simultaneously the input and output signals of the circuit. Check at this stage to see
whether the scope leads are faulty. Ensuring that you do not earth any signals (see
next section), connect the scope to the input and output of the test circuit. If there is
no input signal, disconnect the signal generator and test it on its own. If the
generator functions only when disconnected from the circuit, it implies that the fault
lies in the circuit and that it is possibly some type of short circuit, most likely
associated with incorrect earthing. If there is an input signal but no output signal,
the fault lies in the circuit.
(iv) A common fault which occurs when using more than one piece of mains-powered
equipment is the incorrect connection of earth lines. ALL EARTHS MUST BE
CONNECTED TO A COMMON POINT, otherwise the signal may be shorted out.
(v) If you have established that the fault lies in the circuitry, use your scope to examine
the passage of the signal through the circuit. Components which you regard as
faulty should be isolated or removed from the circuit for further testing.
(vi) If you trace a fault to a piece of mains-powered equipment, DO NOT ATTEMPT
TO REPAIR THE FAULT YOURSELF. Report the fault to the demonstrator or
technician and ask for replacement equipment.
HOW TO USE A VERNIER SCALE
Vernier scales are used on many measuring instruments including the travelling
microscope that we will use in the laboratory. We will begin by looking at the general
principle of a vernier scale and then look at the particular scale we will use.
Figure 5 shows a vernier scale reading zero. Note that the 10 divisions of the vernier have
the same length as 9 divisions of the main scale. If the smallest division on the main scale
is 1 mm then the smallest division on the vernier must be 0.9 mm. This vernier then has
a precision of 0.1 mm and results should be quoted to ±0.1 mm.
[Diagram: a main scale above a 10-division vernier, both reading zero.]
Figure 5: Vernier Scale
Let us see how it works. Examine figure 6. The position of the zero on the vernier scale
gives us the reading. Here it is just beyond 2mm so the first part of the reading is 2mm.
The second part (to the nearest 0.1mm) is read off at the first point at which the lines on
the main scale and the vernier coincide. Here it is the 4th mark on the vernier (don’t count
the zero mark). The reading is therefore 2.4 mm.
82
10
0
0
Figure 6: using the vernier
To see why this works, examine figure 7, which is an alternative version of figure 6.
[Diagram: as figure 6, with the distances x, D1 and D2 marked.]
Figure 7: why a vernier works
In essence we have been finding the distance X, which is simply given by:
X = D1 - D2 = 4 × 1 mm - 4 × 0.9 mm = 4 × 0.1 mm = 0.4 mm
So that is the general principle. Let us see how the travelling microscope scale works.
In this case the smallest division on the main scale is again 1 mm, but the vernier has 50
divisions spanning 49 mm of the main scale. The smallest division on the vernier is
therefore 49/50 mm = 0.98 mm, giving the instrument a precision of 1/50 mm = 0.02 mm.
As an example the reading in figure 8 is 113.68 mm.
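The reading procedure above reduces to a tiny calculation: main-scale value below the vernier zero, plus the number of the coinciding vernier line times the vernier precision. A minimal Python sketch (the function name and argument breakdown are ours):

```python
def vernier_reading(main_mm, coinciding_line, precision_mm=0.1):
    """main_mm: main-scale graduation just below the vernier zero (mm).
    coinciding_line: which vernier mark lines up (the zero mark not counted).
    precision_mm: smallest step the vernier resolves (0.1 mm in figures 5-7).
    """
    return main_mm + coinciding_line * precision_mm

# Figure 6: zero just beyond 2 mm, 4th vernier mark coincides
print(vernier_reading(2, 4))                       # 2.4
# Travelling microscope (50-division vernier, precision 0.02 mm);
# these inputs are consistent with the example reading of figure 8:
print(round(vernier_reading(113, 34, 0.02), 2))    # 113.68
```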
Figure 8: example reading = 113.68mm.
Note: unlike the examples in figures 5-7 the vernier is above the main scale.
III.2 ANALYSIS OF EXPERIMENTAL DATA: ERRORS IN
MEASUREMENT
Contents
1. Introduction
1.1 Important concepts of measurements and their associated “errors”
1.2. The importance of estimating errors (with examples)
2. The nature of errors (a discussion in terms of single measurements)
2.1. Classes of errors
2.2 Illegitimate errors
2.2.1 Mistakes in calculations
2.2.2 Mistakes in measurement
2.3 Systematic errors
2.4 Random errors
2.5 The interplay between systematic and random errors
2.6 A note on experimental skill and personal judgement
3. Presentation of measured values
3.1 Accuracy and precision
3.2 Significant figures
3.2.1 How many significant figures should be used for a value?
3.3 The acceptable ways of presenting measured values
3.3.1 Required format for undergraduates
3.3.2 Alternative forms that may be met
4. Calculating with measured parameters and combining errors
4.1 Error propagation: the general case
4.2 Commonly occurring special cases
4.3 Notes on performing error calculations
5. Multiple measurements (of a single parameter)
5.1 Introduction
5.2 Importance of repeat or multiple measurements (of a single value)
5.3 Introduction to statistics (distributions, populations and samples)
5.3.1 Distributions
5.3.2 Line-shapes
5.3.3 Terminology: “Populations”, “samples” and real experiments
5.3.4 Experimental information found from a distribution
5.3.5 Extraction of information as a function of sample size
5.4 The statistics of distributions
5.4.1 Mean
5.4.2 Variance (mean square deviation) and standard deviation
5.4.3 Standard error
5.5 Summary - what to use as the random error as a function of n
6. Multiple measurements: straight line graphs
6.1 Introduction
6.2 Presenting experimental data on graphs
6.3 Finding the Slope and Intercept (and their errors)
6.3.1 The two approaches
6.3.2 Finding gradient, intercept and their errors by hand
6.3.3 Finding gradient, intercept and their errors by computation
6.4 Error bars (and outliers)
6.4.1 When to use error bars
6.4.2 Outliers
6.4.3 Dealing with a small number of data points
6.5 Forcing lines to be straight
7. Some experimental considerations
7.1 Terminology
7.2 Comparing results with accepted values
7.3 y = mx relationships
8. Some important distributions
8.1 Binomial statistics
8.2. The normal (or Gaussian) distribution
8.3 Poisson distribution
8.4 Lorentzian distribution
Additional reading
These notes are intended to be just a brief guide to errors in measurement. For further
details the following books are recommended:
G.L. Squires "Practical Physics" 3rd ed Cambridge University Press (1985)
N.C.Barford "Experimental Measurements: Precision, Error and Truth" 2nd ed J.Wiley
(1985)
P.R. Bevington "Data Reduction and Error Analysis for the Physical Sciences" McGraw-Hill (1969)
“Squires” is a very good, very accessible book that is available in the library. It has a
strong emphasis on the relationship to experiment, was referred to extensively when rewriting these notes and is highly recommended.
1. Introduction
This document is intended as a reference guide for undergraduates in all years of physics
degrees at Cardiff University. Most of the concepts in it are covered in 1st year courses
and may be considered an essential basis for any experimentalist.
There are many more sophisticated and specialist approaches that may be met during an
undergraduate degree course that are beyond the scope of this document.
As its title indicates, this document is concerned with a particular aspect of the analysis
of experimental data. A good start is therefore to consider what is meant by
analysis:
“Analysis” generally is the detailed examination of “something” (in this case data). It is
performed by a process of breaking up “something” that is initially complex into smaller
parts to gain a better understanding of it.
(Data) analysis is therefore a type of problem that needs to be solved. With any type of
problem often the most difficult part is finding a way to start addressing it. One place to
start is by considering “errors”. But before that, some terminology.
1.1 Important concepts of measurements and their associated “errors”
The “true value” (of the physical quantity being measured) is as its name suggests.
Determining the best estimate of the “true value” of something is usually an important
aim of physics experiments.
The above statement causes a problem: it is not usually* possible to be certain of “true
values”; experiments can only ever provide “measured values”, and discrepancies are
expected.
The word “error” in scientific terminology usually means “deviation from the true value”
or “uncertainty in the true value”; it is not the same as “mistake”.
Consequently it is the “measured values” or the “best estimate of the true value” that
must be expressed along with their associated errors. Undergraduates in this School are
asked to do this using the form**:
(measured value +/- error) units
[1]
The measured value and its error clearly define an interval (from value - error to value +
error). The situation isn’t entirely straightforward so for now all that will be claimed is
that the experiment suggests that the “true value” lies within this interval.
This document is mainly concerned with methods of deciding upon reasonable/realistic
estimates for the error. It will reveal the underlying importance of statistics and explain a
method of combining errors whilst avoiding becoming a course in mathematics.
Although there will be some discussion of how errors arise in different experimental
circumstances and their importance in extracting meaning from experiments these are not
of primary concern. However, whilst ignoring specifics, it should be recognised that to
improve understanding (our ultimate aim) it is often necessary to obtain “better”
measurements with smaller errors achieved through use of better instruments and/or
experimental technique.
* It would be wrong to say that there aren’t cases where exact true values can be found,
for example:
 How many electrons are allowed to exist in a particular atomic orbital?
 How many legs does a bird have?
** There is more on this and some alternative forms in usage later.
1.2. The importance of estimating errors
In order to get any meaning from measurements it is essential that the value obtained is
quoted with a reasonable estimate of its error. Put the other way around, measurements
without errors are meaningless.
Since the determination of errors is a time-consuming process and the bane of students’
experimental lives, this requirement needs some justification.
Example: Suppose a student measures the resistance of a coil of wire and writes down:
"The resistance of the coil of wire was 200·025 Ω at 10 °C and 200·034 Ω at 20 °C, so the
resistance increases with temperature".
Without more information, the student's statement is not justified. We must know the
errors in the measurements to say whether the difference between the two figures is
significant. If the error is ± 0·001 Ω, i.e. each value might be up to 0·001 Ω higher or
lower than the stated value, then the difference between the two resistances is significant.
But if the error is ± 0·01 Ω the two values agree within errors and the difference is not
significant.
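The comparison being made here - do the two quoted intervals overlap? - can be sketched in a few lines of Python (the function name is ours, and "agree" below simply means that the two error intervals overlap):

```python
def agree(value1, error1, value2, error2):
    """True if the intervals (value +/- error) overlap."""
    return abs(value1 - value2) <= error1 + error2

# The coil resistances from the example, in ohms:
print(agree(200.025, 0.001, 200.034, 0.001))  # False: the difference is significant
print(agree(200.025, 0.01, 200.034, 0.01))    # True: the values agree within errors
```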
Example: Two students perform an identical experiment to determine the acceleration
due to gravity, g (on the Earth’s surface this has a value of (9.80 +/- 0.02) m/s2 - note that
the error in g here arises from the variation in its value over the Earth’s surface).
The first student returns g = (11 +/- 2) m/s2 and the second student g = (10.2 +/- 0.3) m/s2.
What can be said about these results?
 Without considering errors, all that can be said is that the results from the second
student “appear” better than those from the first.
 With errors, only the first student’s result agrees with the known value.
 But then again, the smaller error quoted by the second student implies that this data
set is “better” in some sense (possibly resulting from more careful or skilful
experimentation) and hints that there may be an underlying problem with the
equipment or with the way the experiment was carried out.
Clearly there are problems with both data sets and it is not possible to get to the bottom of
this just by looking at the numbers. However, errors are necessary in order to start to get
an understanding of what is happening.
The next step in this case would be to go back to the original data to see if there were
problems with the analysis carried out. If the analysis was reasonable in both cases it may
well be that the second student has unearthed an issue with the experiment.
It would be highly unlikely in this case that some new physics has been unearthed but
with a different experiment this is one way that science works.
2. The nature of errors (a discussion in terms of single measurements)
Initially restricting discussion to single measurements of a physical parameter allows a
sensible progression through the subject. However, almost all of what is included here
applies equally to the more complicated cases with multiple measurements.
2.1 Classes of Error
The term "error" represents a finite uncertainty in a measurement due to intrinsic
experimental limitations. These limitations can arise from a number of causes, here they
will be considered as being of two distinct classes. These are:
 Systematic errors - these are the result of a defect either in the apparatus or
experimental procedure leading to a (usually) constant error throughout a set of
readings.
This type of error can be difficult to track down. One test is to perform measurements
of a well-known value; if there is a discrepancy there may well be a significant systematic
error present.
 Random errors - these are the result of a lack of consistency in either the apparatus
or experimental procedure leading to a distribution of results (if/when they are
repeated) that is equally positive and negative.
This is the type of error usually responsible for the spread of results when
measurements are repeated.
Good results are only obtained by eliminating illegitimate errors and minimising both
systematic and random errors.
In addition to the above, another type of error needs to be mentioned. It is different
because its errors are not intrinsic to the experiment, and so it is often ignored when errors
are discussed.
 Illegitimate errors (or mistakes) - these are the result of mistakes in computation or
measurement. This class of error is worthy of consideration because mistakes happen
and have to be dealt with ethically and with scientific integrity. Such errors are
usually (but not always) easily identified as obviously incorrect data points or values
far from expected.
The rest of section 2 discusses these classes of errors in turn and in more detail.
2.2 Illegitimate errors (mistakes)
Reminder: this class is usually ignored, since definitions of scientific errors exclude it.
One way of viewing this is that science works on the implicit assumption that every effort
has been made to eradicate all mistakes from experimental results before they are
presented. Scientists being human, mistakes will get through (some are really difficult to
identify) but published work is open to being checked by others.
At this point it is a good idea to distinguish between mistakes in calculations and
measurement.
2.2.1 Mistakes in calculations
These are simple to deal with (when identified) as there is no judgement involved, either a
mistake has been made or it hasn’t. Students are generally poor at going back to their
original data and checking calculations even when faced with values that are out by orders
of magnitude. You will make mistakes with calculations and you will need to go back
over your numbers to figure out where. Hint, if you are out by factors of ~10, 100, 1000
etc the place to start is any conversion between units (e.g. millimetres to metres).
Example: Subtle calculation errors can arise through the number of significant figures
used in performing a calculation. In some contexts you might be fully aware - in “back of
the envelope” calculations rounding approximations such as g = 10 m/s2 or e = 10^-19 C
might be made in order to facilitate the quick combination of values, and this is fine when
order of magnitude results are adequate. However, when accurate values are required,
premature rounding can introduce illegitimate errors.
2.2.2 Mistakes in measurement
These are far more contentious as there is a danger of consciously or sub-consciously
manipulating results possibly to fit certain pre-conceived expectations. This is scientific
fraud. But, it is also true that mistakes can be made - with a subsequent need to ignore
otherwise misleading results.
So how is this handled with scientific integrity? The general principle is not to let
yourself get into a situation where you might be tempted to fiddle results.
Example. After data collection it may become apparent that an individual data point lies
far removed from all the others.
Partly based on how far out this point lies a decision may then be made to ignore this data
point in further analysis. However, in the analysis it should be made clear that such a
decision has been made and why (if it isn’t clear), the point should be labelled as an
“outlier”. This process allows re-analysis with inclusion of the outlier - such a process
may be performed in any case in order to see its effect.
Example. During a measurement it may be suspected that a mistake has been made, for
example in counting the number of swings of a pendulum, in starting/stopping a timer or
in the settings applied to an instrument. If it is known, or suspected at the time of
performing the measurement, that an error was made then the data point or set of points
can be safely discarded. However, if the measurement only becomes suspect as a result of
the values obtained then it is not valid to discard them out of hand, they then fall into the
category of “outliers”.
In both of the above examples the issue is best resolved by performing repeat
measurements (not often possible in years 0 and 1 but required from year 2 onwards).
There will be very little further consideration of illegitimate errors in this document.
2.3 Systematic errors
Systematic errors can arise in an experiment in a number of ways. For example :
 Zero error: from use of a ruler that is worn at the end, or a voltmeter that reads a
non-zero value even when no voltage is applied across its terminals.
 Calibration error: an incorrectly marked ruler can produce a systematic error
which may vary along its length. Wooden rulers are good to about 1/2 mm in 1 metre.
Even expensive steel standards must be used at the correct temperature to avoid a
systematic error.
 Parallax error: this may occur when reading the position of an object or a pointer
against a scale (e.g. a ruler) from which it is separated. The reading can depend on the
viewing angle.
Timing errors are a common example of systematic errors. Apart from errors introduced
by a clock running too slowly there is also the tendency of a human operator (or indeed
electronics) to start a clock consistently too soon or too late (which may show up as a zero
error).
To achieve good results systematic errors must be carefully considered and reduced so
that they become insignificant (in most cases it is impossible to remove them entirely).
Two tricks that can be useful here: (i) compare the results to another experiment made
using different apparatus and using a different method; (ii) where possible use the
equipment to make measurements of known values. In both cases, if there is good
agreement there is greater confidence that the systematic error is insignificant and results
can be trusted.
2.4. Random errors
These, as mentioned, arise from fluctuations in observations so that results differ from
experiment to experiment. It is easy to see that these will arise when experiments are
performed by hand, as human factors mean that the way the experiment is performed is
never exactly the same. But in a similar fashion measuring instruments are also prone to
variation, for example: both mechanical and electrical instruments will vary with the
ambient temperature (and other factors), both analogue and digital instruments suffer
from rounding errors, and low-signal measurements are prone to the effects of noise.
The reduction of random errors can be achieved in three ways: improvement of the
experiment, refinement of technique and repeating the experiment.
2.5 The interplay between systematic and random errors
Illustrated in figure 1 are the results of a number of measurements of a quantity x (which
could be a length, voltage, temperature etc.).
x
(a)
true value
x
(b)
Figure 1 (a) Random errors only, any systematic error is insignificant. (b) Significant random and
systematic errors present.
In this figure the position of the true value is marked and each small vertical line marks
the result of an experimental determination of x. In figure 1a the results are scattered
about the true value with no bias for low or high values, so you would expect the average
of all the results to be close to the true value. This is the case where random errors
dominate - any systematic errors are negligible. In figure 1b there is, in addition to
random errors, a systematic error which means that the average value is shifted to a value
smaller than the true value.
From the above it is clear that:
 Measured values close to the true value are obtained if the systematic error is small
 A small systematic error will only be revealed when the random error is small.
Less obviously:
 It is possible to have a small random error even with a large spread of data points -
this is addressed later in the section on multiple measurements.
 Systematic and random errors are always present. However, systematic errors are
ignored when they are small compared to random errors.
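The situation in figure 1 is easy to reproduce numerically. The short Python simulation below (all numbers illustrative only, not from any real experiment) draws measurements with a random error alone, then adds a constant systematic shift, and compares the averages:

```python
import random

random.seed(1)                 # make the illustration repeatable
true_value = 10.0

# Figure 1(a): random error only (Gaussian spread about the true value)
random_only = [true_value + random.gauss(0, 0.1) for _ in range(1000)]
# Figure 1(b): the same random error plus a constant systematic shift
with_systematic = [x - 0.5 for x in random_only]

mean_a = sum(random_only) / len(random_only)
mean_b = sum(with_systematic) / len(with_systematic)
print(round(mean_a, 1))   # close to the true value, 10.0
print(round(mean_b, 1))   # shifted low, close to 9.5
```

Averaging reduces the random scatter but does nothing to the systematic shift, which is exactly the point of figure 1b.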
2.6 A note on experimental skill and personal judgement
Experimental skill and personal judgement are both important. Students should find this
statement both worrying and reassuring at the same time. Worrying because simply
following a set of instructions often produces bad results, reassuring because there are
rewards for practical ability and training. Bad results can be understood to be the
consequence of having significantly larger random and systematic errors. So how can this
come about?
Example: The error in a length measured with a rule will be influenced by the fineness of
the graduations on the scale, but the position of the scale relative to the object and how the
system is viewed are important (for both random and systematic errors) as is the ability to
interpolate between graduations (mainly for random errors).
Generally, experimenters should understand the equipment in use, acquire a feel for it
and, based on this, subsequently use their judgement. This applies equally to experiments
in which the data acquisition is handled by a computer. There is a tendency for students
to have a greater trust in results obtained via a computer. This is dangerous and it is better
to treat all equipment with the same initial (healthy) mistrust.
3. Presentation of measured values
Knowing about classes of errors it is now possible to discuss the presentation of measured
values in greater detail, starting with more of the terminology that accompanies it.
3.1 Accuracy and precision
As with “errors” the terms "accuracy" and "precision" have distinct meanings in
experimental science. In fact, accuracy is closely linked to both systematic and random
errors whilst precision relates only to the random error.
 Accuracy - The accuracy of an experiment is determined by how close the
measurement is to the true value, in other words how correct the measurement is.
From the above sections it should be clear that a value can only be accurate if the
systematic error is small, however, even with a small systematic error a measurement
will lose accuracy if the random error increases.
 Precision - The precision of an experiment is determined by the size of the spread of
values obtained in repeated measurements regardless of its accuracy. As illustrated in
figure 2 a smaller spread of values corresponds to a more precise measurement. From
the above sections, a value can only be highly precise if the random error is small.
Precision and random error are essentially equivalent - the random error is often
termed the precision of a measurement.
Figure 2 Two groups of measurements of x with different precisions (for a small systematic error the values
are distributed about the true value).
Some examples may serve to illustrate these definitions:
Example: Suppose a steel rod is measured to be (1.2031 +/- 0.0001) m in length, i.e. its
length has been expressed to the nearest 0.1 mm. This measurement implies a precision of
0.1 mm. But suppose that, due to wear at the end of the ruler used to measure the rod, this
figure is in error by 1mm. Then, despite the quoted precision, the measurement is
inaccurate.
Note: The precision quoted here is more formally known as the “absolute precision”.
This is distinct from the “relative precision” which is given in terms of the fraction (or
percentage) of the value of the result. In this case the relative precision is 0.0001/1.2031
≈ 8 x 10^-5 (or 0.008%).
Example: Suppose that the true value of the temperature of an object is 20·3440 °C: a
measurement of (20·3 +/- 0.1) °C is accurate (it agrees with the true value within errors); a
measurement of (20·33 +/- 0.02) °C is both accurate and more precise (and could be claimed
to be “more accurate”); a measurement of (20·322 +/- 0.005) °C is more precise but now
must be stated to be inaccurate because it does not agree with the true value within error.
The terms “accuracy” and “precision” as defined allow results and experiments to be
considered more meaningfully. The second example illustrates that as the random error is
reduced and precision improves, previously hidden systematic errors start to emerge.
When systematic errors are evident there is usually little point in improving the
precision further - steps should first be taken to reduce the systematic errors.
In the rest of this guidance it will be implicitly assumed that systematic errors are
negligible compared to random errors. This will allow the discussion to be presented
such that when a more precise measurement is made, the accuracy will also be greater.
Bear in mind that in real experiments this will not always be true.
3.2 Significant figures
In the previous section it was seen that as the precision of the experiment improved, the
number of significant figures (s.f.s) used to quote the result increased. By contrast, by
their nature errors are estimates (i.e. imprecisely known) and so can only be quoted to 1
or 2 s.f.s. This can be a little confusing at first and, perhaps not surprisingly, a common
mistake that students make is to use an incorrect number of significant figures. This
section uses two examples in an attempt to clarify the situation - ultimately it is simply
common sense.
3.2.1 The use of significant figures
Example: A measurement of distance can be correctly quoted as (4.85 +/- 0.02) mm or
(0.485 +/- 0.002) cm or (0.00485 +/- 0.00002) m. These values are equivalent, all we’ve
done is change the units:
 The significant figures (s.f.s) are 4, 8 and 5, hence in this case all measured values are
quoted to 3 s.f.s.
 The leftmost figure (4 in the above example) is the most significant figure and the
rightmost (5 here) is the least significant figure.
 The position of the decimal point therefore has no bearing on the number of s.f.s.
 The error here is quoted to one s.f.
 The number of significant figures used for the measured value is determined by the
least significant figure in the error. This is also the (fixed in this example) precision of
the measurement.
Example: To illustrate this further take the temperatures given in the example in section 3.1
- (20·3 +/- 0.1) °C, (20·33 +/- 0.02) °C, (20·322 +/- 0.005) °C. These measured values are
quoted to 3, 4 and 5 significant figures (s.f.) respectively; this contrasts with their errors,
(here) always quoted to 1 s.f. (remember that a maximum of 2 s.f.s is allowed for errors).
In all cases, the size/decimal place of the least significant figure in the error determines
the least significant figure in the value and therefore the precision of the measurement.
The 3 values quoted are therefore of different precisions.
Finally, it would be wrong to quote these values in the following ways:
(20·33 +/- 0.1) °C (value more precise than error)
(20·322 +/- 0.0005) °C (error more precise than value)
(20·322 +/- 0.125) °C (too many s.f.s in the error)
3.3 Acceptable ways of presenting measured values
3.3.1 Required format for undergraduates
Reminder: the format required by the School has already been given as (measured value
+/- error) units. The subtleties of the required format will be addressed using an example,
the value of a distance S:
S = (2.36 +/- 0.04) km
[2]
 The value and error are enclosed in brackets because the units apply to both.
 The form above allows easy use and appreciation of both numbers and units.
 The alternative form (23650 +/- 40) m is equally as good.
 The alternative form (2365000 +/- 4000) cm is less easily appreciated.
 Using powers of 10 instead of prefixes (such as k for kilo) is certainly allowed.
 If a power of 10 is quoted, rather than incorporated in the units, it must go outside the
brackets, e.g. R = (2.36 +/- 0.04) x10^3 m.
 If a power of 10 is quoted then the exponent will be a positive or negative integer, n.
(Some publications may insist that the exponent be an integer multiple of 3, i.e. use
10^3n, but this is not something that is insisted upon for undergraduate lab diaries or
reports.)
 The value of the quantity and its error should be quoted to the same power of 10 and
in the same units so that they can be compared easily (e.g. "2.36 km +/- 40 m" would
not be acceptable).
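The rounding behind the required format can be automated. A minimal Python sketch (the function name is ours) rounds the error to one significant figure and then quotes the value to the same decimal place:

```python
import math

def quote(value, error, units=""):
    """Format (value +/- error) units with the error to 1 s.f. and the
    value rounded to the same decimal place."""
    # decimal place of the error's single significant figure
    place = -int(math.floor(math.log10(abs(error))))
    p = max(place, 0)   # digits to print after the decimal point
    return f"({round(value, place):.{p}f} +/- {round(error, place):.{p}f}) {units}".strip()

print(quote(2.3641, 0.0421, "km"))   # (2.36 +/- 0.04) km
print(quote(113.684, 0.02, "mm"))    # (113.68 +/- 0.02) mm
```

Note this sketch always keeps 1 s.f. in the error; the rule above permits 2, which would need a small extension.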
3.3.2 Alternative forms that may be met
The required format above is an unambiguous style of presentation but other formats are
used in which the error is not given explicitly. Students should be aware of the different
ways of presenting data as they should always be clear of the errors associated with any
experimental values that they meet.
Alternatives to the required format: The simplest way of indicating the precision of a
measurement is through the number of significant figures quoted (as is done in the
required format). Here though no error is given and an error (or precision) of 1 in the
final figure is inferred. For example, if presented with a length given as 1.23 m, the
inference is that in the required format it would be given as (1.23 +/- 0.01) m.
Clearly there is potential for ambiguity here. For example, if there were a requirement to
present all lengths in mm, then with the above example there is a temptation to quote the
value as 1230 mm, which is clearly wrong as the zero is not significant. The value could
instead be quoted as 1.23 x 10^3 mm.
Although not recommended here, scientists often quote one more figure than is justified by
the error. In the required format this might appear as (1.232 +/- 0.01) m, where it is clear that
the last figure is not significant. Where the error is not quoted it is necessary to
distinguish between figures that are significant and those that are not, and this can be done
by placing the insignificant figure in brackets or as a subscript, i.e. 1.23(2) m or 1.23_2 m.
The reason for quoting an extra figure is to avoid introducing (a form of illegitimate) error
if the value is used in subsequent calculations (see section 4 below, “Calculating with
measured values..”).
Fundamental constants and material parameters: Almost certainly the most common
measured parameters that students are exposed to are the fundamental constants quoted in
textbooks, lab books, data books etc. Following that may be material properties such as
the speed of sound in air or the density of water. It can be forgotten that these parameters
are (almost always) measured parameters and so are known to limited precision. So what
to make of the values presented?
It is a fact of life that the presentation of these “known”* or “accepted”* values does lack
consistency, although in many cases it is clear what has been done. For example in the
School’s “Mathematical Formulae and Physical Constants” handbook fundamental
constants are quoted to (mostly) 3 s.f.s. Since the constants are known to much greater
precision than this, here it is obvious that the values have been rounded - and because of
this the final figure has a precision (error) of 1. In addition, constants handbooks
generally indicate the associated errors and often reference the source of the information.
The situation is less clear for example when values are rounded but not obviously so, and
it should be remembered that values quoted in old publications may be out of date.
* Undergraduate experiments often measure parameters that have well “known” or
“accepted” values. The precision with which they are established lends itself to thinking
that these are “true” values and they may reasonably be used this way in teaching
laboratories. However, bear in mind that at the limits of their precision there may well be
disagreements between the different laboratories or experiments used to determine them.
4. Calculating with measured parameters and finding overall errors (error
propagation)
Sometimes in science finding the parameter that we measure directly is the main point of
the experiment; sometimes it is necessary to incorporate it into a function, combine it with
known constants, or combine a number of measured parameters and constants. For
example, the value of a resistor R can be found by measuring the current I through it and
the voltage V across it and using R = V/I.
The process of using functions or combining values is usually straightforward. However,
it is not obvious how the corresponding errors are determined, a process commonly
known as “error propagation”. (Reminder - only random errors are being considered
here.)
This section starts by considering the general case before presenting the outcomes for
commonly occurring special cases.
4.1 Error propagation: the general case
The problem here is to find the overall change of a function due to (small) changes in its
component parts. The answer can be found using calculus, if a value z is a function of x
and y, (i.e. z = f(x,y)) partial differentiation can be used to find the effect of a small change
in either x or y. (Partial differentiation is taught in the first year and the process is
essentially one of differentiating with respect to (w.r.t.) one variable whilst holding all the
others constant).
The partial differential of z with respect to x (holding y constant) is written ∂z/∂x, so that
the change in z (i.e. Δz) due to a small change in x (i.e. Δx) is:

Δz = (∂z/∂x) Δx   [3]
There is a similar expression for changes in z due to changes in y, and the total change in z,
i.e. the "total differential", is then given by

Δz = (∂z/∂x) Δx + (∂z/∂y) Δy   [4]
The above equation concerns two variables but clearly the number of terms on the right
hand side would increase to match the number of variables in an arbitrary function. Even
so, Δz in the above equation cannot be used as the combined error arising from the errors,
Δx and Δy, in x and y respectively. The reason is that in the above equation the signs of
both the derivatives and the errors are important. As presented, the signs of the multiple
terms (two here) could lead to a situation where two large but opposite contributions
cancel each other, resulting in an underestimated error.
One way to resolve this issue would be to add the magnitudes of the terms on the right
hand side of the equation. However, this is equivalent to the error contributions due to x
and y always reinforcing each other, which is not realistic either. Instead, the conventional
solution is to square all of the terms, i.e.:
(Δz)² = (∂z/∂x)² (Δx)² + (∂z/∂y)² (Δy)²   [5]
Δz in this equation is the overall error. The resulting errors are realistic and are often said
to have been combined in “quadrature” (quadrature is often used to mean squaring).
Example. Resistance, R = f(V,I) = V/I.
The aim is to show how the overall error for resistance is found using the values and
errors for voltage and current.
First consider the total differential:

ΔR = (∂R/∂V) ΔV + (∂R/∂I) ΔI = (1/I) ΔV - (V/I²) ΔI

Rearranging (dividing through by R = V/I):

ΔR/R = ΔV/V - ΔI/I

Squaring each term:

(ΔR/R)² = (ΔV/V)² + (ΔI/I)²
The methodology used here for a quotient can be used generally, and the more common
results are given in the next section.
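Equation [5] can also be evaluated numerically. The sketch below (assuming Python; the function name and step size are illustrative, not from the manual) estimates each partial derivative with a forward difference and combines the contributions in quadrature.

```python
# Combine errors in quadrature (equation [5]) using numerical
# forward-difference estimates of the partial derivatives.
def propagate_error(f, values, errors, h=1e-6):
    z = f(*values)
    variance = 0.0
    for i, (v, dv) in enumerate(zip(values, errors)):
        step = h * max(abs(v), 1.0)
        shifted = list(values)
        shifted[i] = v + step
        dz_dx = (f(*shifted) - z) / step  # approximates the partial derivative
        variance += (dz_dx * dv) ** 2
    return z, variance ** 0.5

# Resistance example: R = V/I with V = (5.00 +/- 0.10) V, I = (2.00 +/- 0.05) A
R, dR = propagate_error(lambda V, I: V / I, [5.0, 2.0], [0.1, 0.05])
```

The analytic rule derived above gives ΔR/R = [(0.1/5)² + (0.05/2)²]^(1/2), i.e. ΔR ≈ 0.08 Ω, which the numerical estimate reproduces.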
4.2 Commonly occurring special cases
In the table below one or two measured parameters (A and B) and a constant k are
combined through addition, subtraction etc. to produce a result Z. The error ΔZ in Z is
then expressed in terms of the errors, ΔA and ΔB, in A and B respectively.

Table 1. Rules for finding errors when values are combined or functions used

Z = A + B or Z = A - B :  (ΔZ)² = (ΔA)² + (ΔB)²
Z = AB or Z = A/B      :  (ΔZ/Z)² = (ΔA/A)² + (ΔB/B)²
Z = kA                 :  ΔZ = kΔA
Z = k/A                :  ΔZ = kΔA/A²
Z = A^n                :  ΔZ/Z = nΔA/A
Z = ln A               :  ΔZ = ΔA/A
Z = e^A                :  ΔZ/Z = ΔA

Note: to find the error when constants are present simply consider that the error in the
constant is zero.
Example: If the length of a rectangle is (1.24 ± 0.02) m and its breadth is (0.61 ± 0.01) m,
what is its area and the error in the area?
Here A = 1.24 m, ΔA = 0.02 m, B = 0.61 m, ΔB = 0.01 m, Z is the area and ΔZ is the
error in the area, found by combining errors.
The area Z is the product of A and B, i.e. Z = AB = 0.7564 m². The appropriate rule is

(ΔZ/Z)² = (ΔA/A)² + (ΔB/B)²
        = (0.02/1.24)² + (0.01/0.61)²
        = 2.602 × 10⁻⁴ + 2.687 × 10⁻⁴ = 5.289 × 10⁻⁴

so that ΔZ/Z = 0.023, or ΔZ = 0.023 × 0.7564 = 0.0174 m².
So the area can be expressed as (0.756 ± 0.017) m² or as (0.76 ± 0.02) m².
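The worked example can be checked with a few lines of Python (a sketch; the variable names are ours, not the manual's):

```python
# Area example checked with the product rule from Table 1:
# (dZ/Z)^2 = (dA/A)^2 + (dB/B)^2, using the values in the text.
A, dA = 1.24, 0.02   # length and its error, in m
B, dB = 0.61, 0.01   # breadth and its error, in m

Z = A * B
dZ = Z * ((dA / A) ** 2 + (dB / B) ** 2) ** 0.5
print(f"({Z:.3f} +/- {dZ:.3f}) m^2")  # (0.756 +/- 0.017) m^2
```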
4.3 Important notes on performing error calculations
Performing error calculations can be tedious and time consuming. But it has to be done
and it is worth paying attention to the numbers. It is inevitably true that different
parameters will have different contributions to the final error. Being aware of this can be
useful in at least two ways:
• Error contributions that are significantly smaller than others may reasonably be left
out of calculations, saving time. This is easily done by comparing the relative
precision of the contributions, i.e. comparing ΔA/A with ΔB/B etc.
• The relative precision of the different contributions is instructive in indicating
weaknesses in the overall experiment, e.g. where to spend effort to find
improvements.
5. Multiple measurements (of a single parameter)
5.1 Introduction
As has already been mentioned, repeated or multiple measurements are important in
experimental work associated with the reduction of random errors. In fact one of the
cardinal rules of experimental work is that whenever possible repeat measurements should
be made.
This section is concerned with repeated measurement of a single parameter. The more
common situation in physics labs, where a variable is changed and the resulting x, y
data set is plotted on a (preferably straight line) graph, is dealt with later.
5.2 Importance of repeat or multiple measurements (of a single parameter)
A single measurement of a parameter relies on (often personal) estimates of an error based
on the equipment being used (for example on the smallest graduation of a meter or rule).
When repeated measurements are made:
• The second measurement acts as a check that the first one is reasonable, i.e. not
subject to gross error through carelessness.
• A relatively small number of repeats indicates the range within which the true value
lies.
• A relatively large number of repeats indicates the range and the distribution of
measurements - and allows the (random) error of the measurement to be reduced, so
improving its precision.
If an estimate is made of the random error then repeated measurements can act as a test of
whether this was correct and therefore that the measurement was understood.
As the number of measurements, n, increases from 1 to infinity, the way that the data is
handled and the error determined changes; however, the mathematics follows statistically
accepted rules*. In the following discussion attention will be paid to the number of
measurements as this has clear experimental relevance. In teaching laboratories many
experiments involve n ~ 8 and it is possible to get away with a superficial understanding of
statistics. In research the number of measurements tends to relate to the research field. In
astronomy there are large numbers of stars and galaxies to examine, n can be large and
there's no escaping statistics.
* The terminology of statistics will be introduced without its mathematical justification in
this document (see statistics books or further reading for more maths).
5.3 Introduction to statistics (distributions, populations and samples)
In this section the terminology of statistics relating to data distributions is introduced and
related to experimental error analysis/determination.
5.3.1 Distributions
As the number of measurements increases, in the absence of systematic errors, we expect the
mean to become closer to the true value. In other words it will always be the case that the
mean of a set of values is the best estimate of the true value (more on this below). It is
also reasonable to expect more values close to the true value than further away, i.e. the
distribution of measurements has a central tendency and is expected to peak at or close to
the true value. With a reasonable number of points the distribution can be plotted by
plotting the number of points that occur in a certain interval against the measurement
value. As the number of points increases the interval used can get smaller until, for an
infinite number (the limiting case), the distribution is continuous and is known as the
“limiting distribution”. An example of a (close to) limiting distribution is shown in
figure 3 below.
In figure 3 the y-axis shows the number of measurements having a given value
(continuous line) or number of measurements in a certain interval (bars). Often the y axis
shows either the fraction of measurements in a certain interval (bar charts) or the
probability of having a certain value (limiting distribution). This is achieved by
normalisation - dividing by the total number of measurements. The result of
normalisation is that the sum of all probabilities or the integral over all measured values
will be unity in both cases.
Figure 3. Distribution of a set of data. A continuous line and three bars are shown to represent a large
number of data points.
5.3.2 (Spectroscopic) line-shapes
Very closely related to the distributions described in the previous section are line-shapes
of various origins, for example the intensity of atomic emission lines versus wavelength
or the amplitude of oscillation of a resonant mechanical system versus frequency.
Different although related terminology can be used to describe the two cases. The
statistical terminology for distributions will be discussed later, but the general terminology
for line-shapes will be introduced here.
Figure 4 shows an intensity versus frequency line shape (actually the same shape as the
distribution in figure 3). On the assumption (as it is not shown) that the intensity falls to
zero well away from the "peak", the "full maximum" of the intensity is shown along with
its full width at half maximum (FWHM). The FWHM, being independent of the intensity
of the peak, is a convenient and often quoted way to describe line-shape features. A peak
that is symmetrical will often be characterised by its peak intensity, position (a frequency
in this case) and its FWHM. Note: the term "half width" is sometimes used and has the
same meaning as FWHM - it can be understood to mean the width at half height.
An asymmetric peak (as figure 4 is) might be additionally characterised by its half width
at half maximum (HWHM) values either side of the peak position (i.e. that of the
maximum of the peak).
Figure 4. A (slightly) asymmetric line shape, perhaps of a spectroscopic feature, with its full maximum (i.e.
peak intensity), its full width at half maximum (FWHM) and its half width at half maximum (HWHM).
5.3.3 Terminology: “Populations”, “samples” and real experiments
Returning to distributions, although it is the limiting distribution that characterises an
experiment, real experiments have a finite number of data points and the role of statistics
is to extract the best estimates of true values and associated errors. How this is achieved
will be discussed later, for now only the general principles will be of concern.
If the limiting distribution is viewed as resulting from all possible measurements then a
real experiment may be viewed as a limited “sample of all possible measurements”. A
single measurement then may take any value within the distribution and is more likely to
be found near to the peak, i.e. the mean or true value. In many experiments it’s possible
to conceive of an infinite number of repeats and this set of data is known as the
“population”. In other words a real experiment takes a “sample” of a “population” of
measurements.
The origin of the term population may be understood by thinking of statistics more
widely. For example surveys may be made of political views in Wales. Not all people
will be included, those that are constitute the “sample” whereas all possible people in
Wales constitute the “population”. Likewise, in astronomy a survey may consider a
sample of the (finite) population of galaxies.
5.3.4 Experimental information found from a distribution
Experimentally what is required from a sample is the best estimate of the true value,
sometimes also the shape of the limiting distribution but especially its (random) error:
• The best estimate of the true value is easy - it is simply the mean value of the
"sample".
• The shape of the limiting distribution is clearly of interest because its width
corresponds to the "precision of the apparatus"* or the "experimental precision"*, i.e.
in some sense it is a measure of how good the experiment is independent of the
sample size (although a large sample size is required to find it reliably).
• The random error ("precision of the experiment/measurement"*) not only improves
with increasing sample size but is also estimated differently depending on sample size.
* With two types of precision and wording that is ambiguous it is very easy to get
confused. The trick here is probably to be clear of the concept and don’t worry about the
terminology (if you come across wording that is not ambiguous please let us know).
5.3.5 Extraction of random error as a function of number of measurements (sample
size)
It is important to emphasise that here the concern is with cases where more than one
measurement is made and the random error is determined by analysing the distribution or
spread of data.
The following discussion concerns an increasing number of measurements (samples) of an
arbitrary experiment.
As mentioned previously a single measurement (n = 1) provides one sample of the
limiting distribution and although it is more likely to be close to the true value (rather than
out in the wings) occasionally the experimentalist will be unlucky.
Very quickly, with n = 2, 3, 4... averaging gives a lot more confidence in our estimate of the
true value and more importantly for errors starts to give an idea of the limiting
distribution. At this point the error will almost certainly be taken to be half the range or
spread of the values (because we quote ± error).
With a few more measurements a dilemma arises. The range/spread of values is likely to
increase, whereas the random error should sensibly decrease. One valid approach is
to use the range in which 50% of the values fall to indicate (twice) the error; this is known
as the "probable error". This approach is illustrated in figure 5. It is a convenient
approach to use for 8 or 12 data points, where the outer 4 or 6 points respectively can be
discarded.
Figure 5. Average value and probable error range from a set of eight data points.
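The probable-error recipe of figure 5 is easy to automate. The sketch below (illustrative Python, with invented data) sorts the measurements, discards the outer quarter at each end, and takes half the range of the central 50% as the error.

```python
# Probable error: half the range containing the central 50% of values.
data = [10.0, 9.8, 10.1, 9.6, 10.0, 10.5, 9.9, 10.3]  # invented measurements
n = len(data)

central = sorted(data)[n // 4 : n - n // 4]  # discard the outer 25% at each end
probable_error = (central[-1] - central[0]) / 2
mean = sum(data) / n  # best estimate of the true value
```

With eight points this discards the outer four, exactly as described in the text.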
The probable error, however, suffers a similar limitation to the range: it does not
progressively decrease with increasing n. Neither is it a required step, as statistical
techniques (described below) may be used. More importantly, experimental work always
requires choices to be made, and a good experimentalist will be clear on the method and
the logic applied in deciding on the approach used.
With a large number of measurements (let's say n >> 10), and even before a well
defined distribution emerges, statistical techniques are used - although cautiously, because
this is the regime of small number statistics. With very high n and a well defined
distribution it is clear that its mean (our best estimate of the true value) can be found to
high precision. In fact its error approaches zero as the number of measurements
approaches infinity. What this is saying is that even when the precision of the experiment
is low with enough measurements a value can be found with a low error. But, as you
would expect it is easier to get a low error (i.e. using less measurements) when the
experimental precision is high - the precision of the experiment does matter. The next
section introduces the formal mathematics of this process.
Note: it isn't easy to say how large n needs to be in order for a distribution to become well
defined. However, as a guide, with n ~ 50 it would be reasonable to draw a distribution
split into 4 or 5 intervals. If nothing else it should be clear from this that in order to
approach a limiting distribution n needs to be very large indeed.
5.4 Formal statistics (of distributions)
All experimental results are affected by random errors. In practice it turns out that in the
majority of cases the distribution function which best describes these random errors is the
“normal” or “Gaussian” distribution. Other mathematically described distributions
include “Poisson”, “Binomial” and “Lorentzian”. Distributions such as the one
presented in figure 3 may not have a basis in mathematics. However, all can be treated
with the same statistics.
Reminder: statistics work well with large but not small numbers of measurements - the
term “small number statistics” doesn’t have a poor reputation for nothing.
5.4.1 The mean
If n measurements of a quantity x are made and these are labelled x1, x2, x3,….xn then the
mean is given by:
x̄_n = (1/n)(x₁ + x₂ + x₃ + ... + x_n) = (1/n) Σ_{i=1}^{n} x_i   [6]

Often used alternative symbols for the mean x̄_n include x̄, X_n and μ.
5.4.2 Mean square deviation (variance) and standard deviation(s)
Clearly individual values x_i will differ from x̄_n, and these differences are intrinsically
linked to the nature of the distribution. The deviation of a particular measurement, x_i,
from x̄_n is given by

δ_i = x_i - x̄_n   [7]
Clearly deviations may be either positive or negative, and both the sum and the mean of
the deviations will be zero. To avoid this the absolute values of the deviations could be
used, but it makes more sense mathematically to use the squares of the deviations. The
sum of squared deviations would simply increase with the number of measurements,
whereas the mean value would be expected to converge to a value representative of the
limiting distribution. The mean square deviation (variance) of n measurements, σ_n(x)²,
is given by

σ_n(x)² = (1/n) Σ_{i=1}^{n} δ_i² = (1/n) Σ (x_i - x̄_n)²   [8]

From this it is a short step to the root mean square deviation, normally known as the
"sample standard deviation", σ_n(x):

σ_n(x) = [(1/n) Σ_{i=1}^{n} δ_i²]^(1/2) = [(1/n) Σ (x_i - x̄_n)²]^(1/2)   [9]

The term sample standard deviation is used since it is calculated from a sample of n
measurements - it is important to include the subscript n. It is sometimes also written
as σ_n. Note: although standard deviations can be calculated for small numbers of values,
it doesn't make sense to do so, as discussed earlier.
The standard deviation is a useful quantity: it has the same units as the measured value,
relates to the width of the distribution, and is often described as the precision of the
measurement. However, as hinted above, there is more to this story.
In the same way as it is the limiting value of the mean that represents the true value, it is
the limiting value of the sample standard deviation that is the standard deviation (and
represents the precision) of the experiment. It is also possible to conceive of a correction
to the sample standard deviation, σn(x) to get a better estimate for the population standard
deviation σ(x). This best estimate of the standard deviation is usually denoted sn(x).
(Again, because it is confusing) the three versions of standard deviation with their
meanings:
Sample standard deviation, σn(x) - The standard deviation that can be calculated from n
measurements.
Standard deviation, σ(x) - The (unattainable) limiting or “true” value of standard
deviation, also quoted as the true precision of the experiment.
Best estimate (or adjusted) standard deviation, s_n(x) - a variation of the sample
standard deviation, using σ_n(x) and n to get a best estimate of σ(x). s_n(x) is given by

s_n(x) = [n/(n - 1)]^(1/2) σ_n(x)   [10]
5.4.3 Standard error (standard deviation of the mean), σ(x̄_n)
As discussed above, the standard deviation gives a measure of the width of a distribution,
whereas what is required is the error in the mean value, a value that can become very
small as the distribution is better known (through increasing the number of measurements
n).
The error in the mean will be taken as given by the “standard error”. Mathematically, the
standard error is found by finding the standard deviation of a number of samples of the
mean value. This explains why the symbol used appears very similar to that for standard
deviation.
If the limiting or true standard deviation σ(x) is known, then the standard error for n
measurements, σ(x̄_n), is given by

σ(x̄_n) = σ(x) / n^(1/2)   [11]
However, the true standard deviation cannot be known, and so similar expressions may be
considered using either σ_n(x) or s_n(x). Since the labelling is getting tricky/confusing,
the same symbol will be used for the standard error below but with words of explanation
attached:

σ(x̄_n) = σ_n(x) / n^(1/2)   (standard error using sample standard deviation)   [12]

σ(x̄_n) = s_n(x) / n^(1/2) = (1/n^(1/2)) [n/(n - 1)]^(1/2) σ_n(x) = σ_n(x) / (n - 1)^(1/2)   (best estimate of standard error)   [13]
Given that n will be quite large where it is applicable to use standard errors (i.e. when
distributions have emerged), there is little difference between the two expressions.
It is now possible to state that the value for a measurement, X, can be expressed as

X = x̄_n ± σ(x̄_n)   [14]
In experimental terms the 1/n^(1/2) dependence of the standard error (for large n) indicates
that although it is possible to use repeats to find a value to high precision/small error, this
is hard work and it is often better to work on improving the precision of the measurement.
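In Python the best-estimate route is already in the standard library: statistics.stdev uses the n - 1 denominator, so dividing it by n^(1/2) gives the standard error of equation [13]. The data below are invented.

```python
import statistics

data = [9.8, 10.1, 9.9, 10.2, 10.0, 9.7, 10.3, 10.0]  # invented measurements

s_n = statistics.stdev(data)          # best-estimate standard deviation s_n
std_error = s_n / len(data) ** 0.5    # standard error, equation [13]
print(f"X = ({statistics.mean(data):.2f} +/- {std_error:.2f})")  # X = (10.00 +/- 0.07)
```

Note that s_n/n^(1/2) and σ_n(x)/(n - 1)^(1/2) are the same number, as the chain of equalities in equation [13] shows.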
5.5 Summary - what to use as the random error (precision) as a function of n
• Single measurement - estimate the error.
• Small number of measurements - whilst using best judgement: the range of the data
might be used for a very small number of measurements; with a few more
measurements (and possibly taking convenience into account) choose between the
probable error and possibly the standard deviation.
• Large number of measurements - with the distribution emerging, use the standard error.
Some of the first year lab experiments are designed to illustrate how this works in
practice. However a guiding principle is to be open and clear about what error is chosen
and why.
6. Multiple measurements: straight line graphs (y = mx +c)
6.1 Introduction
The previous section discussed multiple measurements of the same value. However this
is not how laboratory physics experiments are usually performed. If a quantity y depends
upon another x, then rather than fixing on a value of x and making repeated measurements
of the corresponding value of y, it is usually much more revealing to vary x. The form of
the dependence of y upon x is then most simply demonstrated by plotting a graph. The
statistics of repeat measurement in section 5 still applies but in a modified form - think of
the different points as being in some sense a repeat.
The understanding and use of graphs is an essential skill. Teaching laboratories
concentrate on using straight line graphs, which are by far the easiest to analyse, and great
efforts are made to ensure that graphs emerge in this form.
6.2 Presenting experimental data on graphs
Scientific experiments examine cause and effect relationships where changing one
variable (known as the independent variable) causes a change in a second (dependent)
variable, both of which are measurable.
(Important: Conventionally the independent variable is plotted on the horizontal (x) and
the dependent variable is plotted on the vertical (y) axes of the graph respectively.)
For example, how the length of a spring depends upon the weight hung from its end may
be studied. The length is the dependent variable so it is plotted on the y axis, as in figure
6.
Figure 6. Example graph, spring length versus weight (the line through the data is a “best fit” line).
On the graph, as is quite common, a line through the data is shown. The meaning of any
such line should be made clear, in this case the figure caption indicates that the line is a
“best fit”. In other words it is the straight line that best represents the data and from
which information is extracted. In this case, from the gradient a value for the spring
constant may be determined. The alternative is that a line is a "guide to the eye": a
line with no scientific meaning. In a lab diary this information can be given at any
convenient place on the graph; in a report, inclusion in the figure caption is usually best.
Error bars can also be included on graphs; this is discussed in a later section.
6.3 Finding the Slope and Intercept (and their errors)
The equation for a straight line is given by:
y = mx + c
[15]
where m is the gradient (or slope) of the line and c is the intercept with the y axis. It is
necessary to find values and errors for both, and two approaches are possible.
6.3.1 The two approaches
By hand, where a graph (drawn in a lab diary) is analysed using the judgement of the
experimentalist. This approach, although subjective, gives students an understanding of
the process of data analysis and it keeps students “close” to the data. Both of these are an
essential part of the process of equipping students with the skills and experience to
develop as a scientist. It is used by preference in the first year laboratory (and still would
be even if there were enough PCs readily available to use).
By computer, where the data is fed into software (such as EXCEL or Python) that graphs
and analyses the data. This approach has the advantage of using well defined statistical
techniques and in these terms at least giving "correct" answers. There are a number of
disadvantages: students can lose their critical faculties and tend to believe any number
emerging from a PC or calculator (regardless of the quality of the data entered); and
extracting usable error information can often be more troublesome than working by hand.
6.3.2 Finding gradient, intercept and their errors by hand
The approach is illustrated in figure 7. Having determined the best straight line, the
gradient m and the intercept c can be found. Two well separated arbitrary points on
the best fit line are chosen (x1,y1 and x2,y2). This is a statement that it is the best fit
line that represents the experiment (students are often tempted to use extreme measured
data points - this is incorrect). From the two selected points the gradient can be
calculated:
dy y 2  y1
[16]
m

dx x 2  x1
c can then be found using the straight line equation, m and either of the two points (or
indeed any point on the best fit line):
c  y  mx
[17]
For clarity a right angled triangle is drawn linking the two chosen points on the best fit
line.
Figure 7. Determining m (= dy/dx) from a best fit line. Note that (x1,y1) and (x2,y2) are points on the best
fit line, i.e. they are not data points.
Finding the errors is achieved by repeating the above procedure for one or two other
straight lines which are as far away in gradient (one larger, one smaller) from the data as
possible, but which are judged to be nevertheless still reasonably consistent with the
data. These are known as “worst possible fit lines” or “worst fit lines”. As shown in
figure 8 the lines should pivot about the approximate centre of the data points. These
lines provide two further values for m and c from which errors in m and c can be
estimated. In practice it is allowable to use one worst fit line, this saves time and is
justified since it is error estimates that are found.
However, remembering back to Gaussian distributions arising from repeated
measurements of the same value there is clearly a problem with this approach. With more
measurements the errors in m and c must decrease, whereas with this simplistic approach
more measurements are likely to sample a larger spread about the best fit line and
therefore result in slowly increasing errors.
Figure 8. Best and worst-possible fit lines used to estimate errors. The lines pivot about the centre of the
data range.
In effect the worst fit lines provide estimates of the standard deviations in m and c.
Estimates of the standard errors in m and c can be found by dividing these values by
n^(1/2), where n is the number of data points (dividing by (n - 2)^(1/2) is probably better,
but the worst fit lines are generated by eye so let's not worry). The errors then decrease
(as must be expected to happen) with the number of data points, and match better to cases
where averages of repeat measurements at different points are taken (e.g. timing an event
3 times for each point) and also to errors calculated by computer fitting packages (see
next section).
Summary:

estimated standard error in m:   σ(m_n) = (m_worstfit - m_bestfit) / n^(1/2)   [18]

estimated standard error in c:   σ(c_n) = (c_worstfit - c_bestfit) / n^(1/2)   [19]
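The by-hand procedure of equations [16]-[19] can be summarised in a few lines (a sketch, assuming Python; the chosen points and the number of data points are invented):

```python
# Gradient and intercept from two points on a fitted line (equations
# [16] and [17]), and error estimates from one worst fit line
# (equations [18] and [19]).
def line_from_points(x1, y1, x2, y2):
    m = (y2 - y1) / (x2 - x1)  # equation [16]
    c = y1 - m * x1            # equation [17]
    return m, c

n = 8  # number of data points on the graph (invented)
m_best, c_best = line_from_points(1.0, 2.1, 9.0, 8.5)    # best fit line
m_worst, c_worst = line_from_points(1.0, 1.8, 9.0, 8.8)  # worst fit line

dm = abs(m_worst - m_best) / n ** 0.5  # equation [18]
dc = abs(c_worst - c_best) / n ** 0.5  # equation [19]
```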
6.3.3 Finding gradient, intercept and their errors by computation
This section gives the mathematics for determining gradients, intercepts and their errors
using a linear regression technique known as least squares fitting of a straight line. It may
be useful to think of the best fit line as the “true value” with points distributed about it.
Given n pairs of experimental measurements (x1,y1), (x2,y2) ... (xn,yn), which have (the
same) errors in the y-values only*, the gradient (m) and intercept on the y axis (c) of the
best straight line (y = mx + c) through these points can be found by minimising the
squares of the distances of the points from the line in the Oy direction. The minimum is
found by differentiation and this leads to the analytical expressions that follow.
With the summations running from i = 1 to i = n, and defining (following Squires)

the "residual" for the ith data point (the deviation in y of each data point from the best
fit line):

d_i = y_i - m x_i - c

x̄ = (1/n) Σ x_i          ȳ = (1/n) Σ y_i

D = Σ x_i² - (1/n)(Σ x_i)²          E = Σ(x_i y_i) - (1/n)(Σ x_i)(Σ y_i)

F = Σ y_i² - (1/n)(Σ y_i)²

then

m = E/D

(Δm)² = (1/(n - 2)) (Σ d_i²)/D = (1/(n - 2)) (DF - E²)/D²

c = ȳ - m x̄

(Δc)² = (1/n + x̄²/D) (Σ d_i²)/(n - 2) = (1/n + x̄²/D) (DF - E²)/(D(n - 2))
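The formulas above translate directly into code. The following sketch (Python, with invented data) computes D, E and F and from them the gradient, intercept and their errors:

```python
# Least squares fit of y = mx + c following the D, E, F formulas above.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]   # invented data
ys = [2.1, 3.9, 6.2, 7.8, 10.1]
n = len(xs)

Sx, Sy = sum(xs), sum(ys)
D = sum(x * x for x in xs) - Sx ** 2 / n
E = sum(x * y for x, y in zip(xs, ys)) - Sx * Sy / n
F = sum(y * y for y in ys) - Sy ** 2 / n

m = E / D
c = Sy / n - m * Sx / n
resid_var = (D * F - E ** 2) / (D * (n - 2))   # = sum(d_i^2) / (n - 2)
dm = (resid_var / D) ** 0.5
dc = (resid_var * (1 / n + (Sx / n) ** 2 / D)) ** 0.5
```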
Mathematical software might have this programmed in, but many packages, EXCEL for
example, give only the "product-moment correlation coefficient", R (actually R² is usually
given), which is a measure of the quality of fit (with R = ±1 or R² = 1.0 representing a
perfect fit/correlation):

R² = E²/(DF)

This is insufficient as error values are required.
With the constraint that the straight line is required to pass through the origin (0,0), c = 0,
the best value for m is

m = Σ(x_i y_i) / Σ x_i²

with error

(Δm)² = [Σ y_i² - 2m Σ(x_i y_i) + m² Σ x_i²] / [Σ x_i² (n - 1)]

However it isn't at all clear when this may be used. It certainly should not be used on the
basis that an equation indicates that a straight line graph is expected to go through the
origin. A systematic error in the experiment might shift the data such that the gradient is
unaltered but the line does not pass through the origin. The consequences of forcing the
line through the origin are then to lose information on the presence of systematic errors
and at the same time to introduce a systematic error into the gradient.
* This draws attention to an important point concerning statistical analysis. Insignificant errors in the independent variable are often the case experimentally (where the value of x is set and the value of y measured), but this is also a necessary condition for the commonly used statistical treatment of errors in gradient and intercept (software that calculates errors in gradient and intercept almost certainly makes this assumption). The treatments are much more involved if the errors in both y and x are significant or if the error in individual points varies.
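The expressions above translate directly into code. The following is an illustrative sketch only (the data are invented and `fit_line` is a hypothetical helper, not part of any lab software); it assumes, as the formulas do, equal errors in y only:

```python
import math

def fit_line(xs, ys):
    """Least-squares fit of y = m*x + c, assuming equal errors in y only."""
    n = len(xs)
    D = sum(x * x for x in xs) - sum(xs) ** 2 / n
    E = sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys) / n
    F = sum(y * y for y in ys) - sum(ys) ** 2 / n
    m = E / D
    c = (sum(ys) - m * sum(xs)) / n              # c = ybar - m*xbar
    var_m = (D * F - E * E) / (D * D * (n - 2))  # sigma(m)^2
    xbar = sum(xs) / n
    var_c = (D / n + xbar ** 2) * (D * F - E * E) / (D * D * (n - 2))
    return m, c, math.sqrt(var_m), math.sqrt(var_c)

# Perfectly linear (made-up) data y = 2x + 1 returns the exact gradient
# and intercept, with zero estimated error:
m, c, dm, dc = fit_line([1.0, 2.0, 3.0, 4.0], [3.0, 5.0, 7.0, 9.0])
```

Real data with scatter would of course return non-zero σ(m) and σ(c).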
6.4 Error bars (and outliers)
When plotting graphs it can sometimes be useful to include “error bars”. An error bar is
a way of drawing (an estimate of) the (random) error in the measured value of each data
point on the graph. It is illustrated in figure 9 for the case where only the errors in y are
significant and it is implied that the errors on x are insignificant. If the x error is
significant a horizontal bar should be included.
[Graph omitted: y plotted against x with vertical error bars on each data point.]
Figure 9. Example use of error bars. The line is a best fit that excludes the outlier (the point significantly below the best fit line, which is therefore ignored in the analysis).
Error bars are generally only included where there is a clear benefit compared to their absence: not only do they take time to insert but they also complicate graphs (especially a problem in lab diaries, where best fit and worst fit lines, if drawn by hand, are also present). Before discussing the cases where there are "clear benefits of error bars" it is worth dwelling on what they represent. Whilst it is possible to use error bars to represent systematic errors, the convention is that they represent random errors. Deviation from convention is permitted provided it is clearly explained. Random errors are best determined from repeated measurements; however, it is often the case that points on a graph correspond to single measurements. It is often possible to estimate the random error for a single measurement (for example from the minimum graduation of a meter or rule), but students are notoriously pessimistic (i.e. they overestimate random error sizes), perhaps confusing them with possible systematic errors.
6.4.1 When to use error bars
Testing understanding of the measurement
Suppose that the error bars in figure 9 were estimated from single measurements for each
point. The fact that the scatter in the data points about the best fit line is of the same size
as the error bars supports the view that the experimental errors are well understood. It
should be a concern when error bars are significantly larger than the scatter.
Significance of deviations from theoretical curves
The theoretical curve that the data is compared to here is a straight line. Here error bars
make it easier to decide whether deviations from a straight line are significant or not.
(In scientific jargon anything that is “insignificant” is small enough to be ignored)
This is illustrated in figure 10 a and b which show the same set of data but with different
error bars.
[Graph omitted: two data sets labelled (a) and (b), y values /a.u. plotted against x values /a.u., each with a best fit line and error bars.]
Figure 10. (a) Data with best fit line and large error bars; (b) the same data shifted (down) with small error bars (a.u. - arbitrary units).
As with any experiment there is scatter in the data. In figure 10a the error bars all
encompass the straight line and therefore the deviations from the best fit line cannot be
considered significant. By contrast, in figure 10b, with smaller error bars, the deviations must be considered significant, implying either (i) that the theoretical model is incorrect or (ii) that there are additional unknown or unconsidered experimental factors causing a deviation.
The above discussion illustrates both the importance of careful consideration of errors and
also that extra information is revealed as errors are reduced.
Final note: here the deviation of a number of data points was considered. The significant
deviation of a single data point is treated a little differently (see also outliers below).
Significant errors in both y and x and a variation of size of error bars
Since the commonly used analytical method of determining the line of best fit and the errors in m and c is based on the errors in each point being significant only in y, the cases where this does not apply need to be treated with care. A first step towards dealing with (or at least acknowledging) this is to provide x as well as y error bars when appropriate.
The error analysis required when the errors are significant in both x and y is beyond the
scope of this document.
Similarly the commonly used analysis assumes that the y errors are the same for each data
point and a first step towards acknowledging when this is not so might be to show these
varying error bars.
Situations where varying errors may occur:
 Errors based on repeat measurements will vary if the number of repeats is varied.
 Some experimental conditions might naturally lead to varying errors (for example, the
determination of frequency from a fixed number of oscillations).
 When combining measurements to obtain a “y” value.
6.4.2 Outliers
Returning to figure 9: in drawing the best fit line only 5 points were taken into consideration, whilst the 6th (the point below the line) was excluded. An excluded point is known as an "outlier", and clearly points should not be categorised as outliers lightly.
Potential outliers may sometimes occur due to a mistake in a reading or in the setting of an experimental condition, and care must be taken when dealing with them. Working on the assumption that the first indication of its presence was on plotting a graph (probably in a lab diary):
 First check that all arithmetic and the plotting of the data point were performed correctly.
 Do not rub the point out or ignore it - apart from anything else it may in fact be correct.
 Make a decision about whether to include or exclude the point from the analysis (i.e. whether it is treated as an outlier or not) and indicate this clearly.
 If possible determine whether an error was made in the measurement - by going back and performing repeats (this isn't usually possible in year 0 and 1 labs, is often possible in year 2, and is essential in year 3 and 4 projects).
 The earlier an outlier is spotted, the easier it is to perform repeat measurements. This is aided by drawing graphs as quickly as possible; the ultimate is to draw graphs as you go along. Computers are very useful here, but very rough sketch graphs are a useful alternative.
Consideration of whether a point should be considered as an outlier takes us back to error
bars. In figure 9 it is somehow reassuring that the line of best fit passes through the 5
good data points within their error range as indicated by their error bars. It appears
reasonable to ignore the outlier in the determination of the best fit line because it would be
impossible to include this point on the same basis (although with much larger error bars
the outlier might be included). However, the scatter in the data is also sufficient to make
this judgement and in reality the error bars do not add anything.
6.4.3 Dealing with a small number of data points
Clearly it is better to have many data points rather than few but what are the implications
of cases when this isn’t possible? Return to figure 9 and consider having not 6 but 3 or
even 4 data points one of which is the outlier:
 The scatter in the data is not obvious from the points alone.
 (Correct) error bars become more important.
 It is difficult or impossible to identify outliers.
 The values obtained for m and c are (almost always) less accurate and their errors
larger.
6.5 Forcing lines to be straight
It is almost always possible to manipulate the mathematical form of data such that an
easily analysed straight line results when it is plotted. Essentially the approach is to
obtain a relationship in the form y = mx + c. A simple example and two experimentally
very important examples are given in table 1.
Table 1. Example methods for making straight line plots

    Function          Plot (y = mx + c)                     Comments
    y = 2x²           y vs x²                               A very simple example.
    W = kT^n          log10(W) vs log10(T)                  Used in determining unknown power
                      (log10(W) = log10(kT^n)               relationships (finding n).
                      = n·log10(T) + log10(k))
    y = A·e^(−E/kT)   ln(y) vs 1/T                          Known as an "Arrhenius plot", it is used
                                                            when considering thermally activated
                                                            processes with an activation energy (E).
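The power-law case can be sketched numerically. This is an illustrative example only: the data are invented (k = 2, n = 3 are assumed values), and the log-log plot is fitted with the least-squares gradient formula from earlier:

```python
import math

k_true, n_true = 2.0, 3.0                 # made-up power law W = 2*T^3
Ts = [1.0, 2.0, 4.0, 8.0]
Ws = [k_true * T ** n_true for T in Ts]

# Fit log10(W) against log10(T): gradient = n, intercept = log10(k).
lt = [math.log10(T) for T in Ts]
lw = [math.log10(W) for W in Ws]
npts = len(lt)
D = sum(x * x for x in lt) - sum(lt) ** 2 / npts
E = sum(x * y for x, y in zip(lt, lw)) - sum(lt) * sum(lw) / npts
n_fit = E / D                                        # recovers the power n
k_fit = 10 ** ((sum(lw) - n_fit * sum(lt)) / npts)   # recovers k
```

With noise-free data the fit recovers n and k exactly; with real data the scatter in the log-log plot determines the errors as in section 6.3.3.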
7. Some experimental considerations
It is too large a subject to consider fully what constitutes a good experiment, i.e. one that can be believed. Here a flavour will be provided by first introducing some of the terminology used, before providing two useful examples that make use of what has gone before.
7.1 Terminology
The “reliability” of a measurement relates to its consistency. Otherwise known as the
“repeatability” of a measurement, it is the extent to which an instrument can provide the
same value for nominally the same measurement (i.e. the same subject under the same
conditions).
The "validity" of the findings of an experiment refers to the extent to which the findings can be believed to be right. For a particular experiment this depends on the rigour with which the study was conducted (as assessed through the experimental design, its reliability and the care in its execution), but also on the extent to which alternative explanations were considered.
7.2 Comparing results with accepted values
In the year 0, 1 and 2 teaching laboratories, it is common for measurements to be made of known values (such as g), allowing a comparison with the results obtained. A downside of this is that students may perceive that the result (being already known) is not important, and that the point is instead practice of a technique and seeing physics in action. This is incorrect: whatever the result, it sheds light on the experiment.
Remember that any result is presented as: (measured value +/- error) units. This allows
comparison with the known values and if the two agree within errors (i.e. within the error
range of the measured value) then there is nothing more to say. However, if the two do
not agree within errors there must be a reason and it is necessary to consider what this
might be.
Candidates include:
 Systematic errors in the measurement or equipment.
 Misjudged random errors.
 Poor experimental technique.
 Poor or inappropriate (possibly oversimplified) theory.
If the reason for the discrepancy is properly understood and subsequently included, then agreement should be possible. Whilst such an extra analysis is likely to be beyond the expectations for year 0 and year 1 labs, it is important that students think about the situation, and it is often true that the reason for the discrepancy is known in principle.
A link can also be made to more advanced work where it is essential that accurate
measurements of unknown values are made. If measurements of known values (possibly
standard samples or “standards”) are made first then any systematic errors can be
corrected for. The known samples provide a way of calibrating the instrument.
7.3 y = mx relationships
Previous discussions of straight line graphs have been concerned with the general case (y = mx + c relationships). However, many expected relationships are of the form y = mx; in other words, the graph produced is expected to go through the origin. This is worth special consideration as it often causes confusion for inexperienced experimentalists. The main issue is that students not only include the origin as a data point but also give it special significance by forcing the best fit line to go through it (whether by hand or on a computer).
One of the classic systematic errors is a zero offset, the effect of which is to produce a constant (solid) shift of all data points either up or down, whilst leaving the gradient (from which most information is found) unaffected. Excluding the origin from the analysis allows the y intercept to be compared to zero and so the significance of a possible zero offset to be considered. The alternative of forcing the best fit line through the origin both removes the evidence for a possible zero offset and, if there is one, alters the gradient, so introducing an (illegitimate) error into the gradient.
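The effect described above can be demonstrated numerically. The data here are invented for illustration: a true y = 2x relationship with a deliberate zero offset of +1 added to every reading:

```python
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0 * x + 1.0 for x in xs]   # true gradient 2, zero offset +1

n = len(xs)
# Free fit of y = mx + c: the gradient is unaffected by the offset,
# and the intercept exposes it.
D = sum(x * x for x in xs) - sum(xs) ** 2 / n
E = sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys) / n
m_free = E / D                               # 2.0
c_free = (sum(ys) - m_free * sum(xs)) / n    # 1.0: the zero offset revealed

# Forcing the line through the origin hides the offset and biases the gradient:
m_forced = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
```

Here the forced gradient comes out above the true value of 2, illustrating the "illegitimate" error introduced.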
8. Some important distributions
A number of distributions are observed in experiments, three important ones described
here are the Gaussian (or Normal), Poisson and Lorentzian. The former two distributions
can be related to the Binomial distribution and so this is introduced first.
In all cases the probability function P is given using x, μ and σ as the measured value, the mean and the standard deviation of the distribution respectively. The functions are normalised such that ∫P(x)dx = 1.
8.1 Binomial statistics
Binomial statistics describe certain situations where results of physical measurements can
have one of a number of well-defined values - such as when tossing coins or throwing
dice. Consider a situation where the result of one physical measurement of a system has a
probability p of giving a particular result. If an experiment is carried out on n such
systems, then the probability that x of the systems will produce the required result is given
by:

    P(x; n, p) = [n! / (x!·(n − x)!)] · p^x · (1 − p)^(n−x)
An example: the probability of throwing a six with one die is 1/6. If we throw 4 dice we may obtain 0, 1, 2, 3 or 4 sixes. The probability of obtaining zero sixes is given by substituting into the equation above, so that

    P(0; 4, 1/6) = [4! / (0!·(4 − 0)!)] · (1/6)^0 · (5/6)^(4−0)

Similarly the probability of throwing one six is

    P(1; 4, 1/6) = [4! / (1!·(4 − 1)!)] · (1/6)^1 · (5/6)^(4−1)    etc.
For this distribution the mean value is np and the standard deviation is √(np(1 − p)).
8.2. The normal (or Gaussian ) distribution
As already mentioned the distribution function which best describes random errors in
experiments is the “normal” or “Gaussian” distribution. This distribution is an
approximation to the binomial distribution for the special limiting case where the number
of possible different observations is infinite and each has a finite probability so that
np>>1.
The normalised probability function P(x) is given by:
    P(x) = [1 / (√(2π)·σ_n(x))] · exp[−(x − x̄_n)² / (2σ_n²(x))]

where, as before, x is the measured value, x̄_n is the mean of the sample and σ_n(x) is the sample standard deviation, and the function is normalised such that ∫P(x)dx = 1. As the example in figure 11 shows, the function is (characteristically) bell shaped and symmetrical.
[Graph omitted: bell-shaped curve of P(x) against x over the range −4 to 4, with the FWHM marked.]
Figure 11. Gaussian probability function generated using x̄_n = 0 and σ(x) = 1, resulting in the x-axis being in units of standard deviation. The FWHM for the distribution is also shown and can be seen to be wider than 2σ(x).
If x n and  n ( x) are known the whole distribution function can be drawn and the
probability of measurements occurring in a given range can be determined. The integral
of the Gaussian function cannot be performed analytically and so many statistics books
will contain look-up tables, a summary version of which is presented in table 2.
Table 2. The integral Gaussian or Normal probability

    Range either side of mean,      Expected percentage of values in range
    in terms of +/- m·σn(x)
    m = 0                           0%
    m = 1                           68.3%
    m = 2                           95.4%
    m = 3                           99.73%
    m = 4                           99.994%
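The tabulated values follow from the Gaussian integral: the fraction of values within ± m standard deviations of the mean is erf(m/√2), which can be checked directly (an illustrative sketch using the standard library error function):

```python
import math

def fraction_within(m):
    """Fraction of a Gaussian distribution within +/- m standard deviations."""
    return math.erf(m / math.sqrt(2))

# fraction_within(1) ~ 0.683, fraction_within(2) ~ 0.954,
# fraction_within(3) ~ 0.9973
```

This avoids the look-up table when a computer is to hand.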
From this table it can be seen that quoting an error of +/- σn(x) covers a range in which ~68% of the values fall, which therefore gives a similar estimate of error to the "probable error", in which 50% of the values fall. The FWHM is also worth considering in this context, as experimentally it is often more direct and convenient to deal with than the standard deviation. It is clear from figure 11 that the FWHM covers a little more than the range of +/- σn(x) (in fact FWHM = 2√(2·ln 2)·σ(x_n) ≈ 2.355·σ(x_n)). This corresponds to a range in which ~76% of the values fall. Any of these three might be used as an estimate of the error in the case where a small number of measurements have been performed.
8.3 Poisson distribution
The Poisson distribution is the limiting case of a binomial distribution when the possible
number of events (n) tends to infinity and the probability of any one event (p) tends to
zero in such a way that np is a constant.
Poisson distributions are often appropriate for counting experiments where the data
represents the number of events observed per unit time interval. A gram of radioactive material may contain ~10^22 nuclei, whereas the number that disintegrate in each time interval is many orders of magnitude smaller.
This covers a very wide range of physics experiments:
 In the teaching labs - radioactive decay, x-ray absorption and fluorescence.
 More widely - Spectroscopy, particle physics (such as at the LHC), astronomy.
The normalised distribution is given by:

    P(x) = μ^x · e^(−μ) / x!

where P(x) is the probability of obtaining a value x when the mean value is μ. The standard deviation for a Poisson distribution is √μ. This distribution is unlike the normal or Gaussian distribution in that it becomes highly asymmetrical as the mean value approaches zero.
Counting experiments: the “signal to noise” ratio
In all counting experiments the “quality” of the data is expected to “improve” with
increasing counting time and counts. This can be understood as follows: the mean number of counts in the experiment, μ, is the "signal", whilst statistical variations in this signal are represented by the standard deviation σ(x) and can be thought of as "noise". In Poisson statistics σ(x) = √μ, therefore signal/noise = μ/√μ = √μ, i.e. the ratio increases with the square root of the number of counts. This is an often quoted and very important finding for understanding and designing experiments.
Put another way, if in a particular counting period an average of N counts is obtained, the associated standard deviation is √N (ignoring any errors introduced by timing uncertainties, etc). Clearly, the larger N the more precise the final result. For a given source and geometrical arrangement, however, N can be increased only by counting for longer periods of time.
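The √μ rule can be confirmed numerically by summing over the Poisson distribution; this is an illustrative sketch with an assumed mean of μ = 100 counts (the probabilities are computed via logs to avoid overflow at large x):

```python
import math

def poisson_p(x, mu):
    """Normalised Poisson probability of x counts when the mean is mu."""
    return math.exp(x * math.log(mu) - mu - math.lgamma(x + 1))

mu = 100.0
xs = range(400)   # far enough into the tail for the sums to converge
mean = sum(x * poisson_p(x, mu) for x in xs)
var = sum((x - mean) ** 2 * poisson_p(x, mu) for x in xs)
snr = mean / math.sqrt(var)    # signal/noise = mu / sqrt(mu) = sqrt(mu) = 10
```

The numerical mean and variance both come out equal to μ, so the signal-to-noise ratio is √μ, as claimed.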
8.4 Lorentzian distribution
This distribution is important as it describes data corresponding to resonance behaviour.
This includes mechanical and electrical systems but also the shape of spectral lines
occurring in atomic and nuclear spectroscopy.
The Lorentzian distribution is symmetric about the mean, is usually characterised by its full width at half maximum Γ (Γ/2 is known as the "half width") rather than by its standard deviation, and is given by

    P(x; μ, Γ) = (1/π) · (Γ/2) / [(x − μ)² + (Γ/2)²]
A characteristic of the distribution is that it has "heavy tails", i.e. it falls away slowly for large deviations. A consequence of this is that it is not possible to define a standard deviation for this function.
It should be noted that a number of broadening mechanisms may be effective in
spectroscopic experiments and some of these, such as Doppler broadening and also the
resolution of the system may be Gaussian in nature. What is measured may therefore be a
convolution of a Lorentzian and a Gaussian function resulting in a so called “Voigt”
profile. Experimentally, it is usual to start by assuming a Gaussian line shape; deviations away from this in the tails are often good evidence of a Lorentzian contribution.
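The "heavy tails" can be seen numerically by comparing a Lorentzian and a Gaussian of the same FWHM; this is an illustrative sketch with an assumed width Γ = 2:

```python
import math

def lorentzian(x, mu, gamma):
    """Normalised Lorentzian with mean mu and FWHM gamma."""
    return (1 / math.pi) * (gamma / 2) / ((x - mu) ** 2 + (gamma / 2) ** 2)

def gaussian(x, mu, sigma):
    """Normalised Gaussian with mean mu and standard deviation sigma."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

gamma = 2.0
sigma = gamma / (2 * math.sqrt(2 * math.log(2)))   # same FWHM for the Gaussian

# Ten half-widths from the centre the Lorentzian is still appreciable,
# while the Gaussian has essentially vanished:
ratio = lorentzian(10, 0, gamma) / gaussian(10, 0, sigma)
```

The ratio is astronomically large, which is why tail measurements are a sensitive test for a Lorentzian contribution.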
III.3 REPORTING ON EXPERIMENTAL WORK
AN EXAMPLE OF HOW TO WRITE A LONG REPORT
1. Introduction
Scientific report writing is a skill: the application of numerous rigid conventions, combined with a surprising degree of freedom in structure, serves to achieve clarity of presentation.
Physics students will write such reports at a rate of approximately one per semester
throughout their undergraduate University career. For many students the feedback this
provides may be insufficient for them to efficiently get to grips with what is required and
expected. This document is based around a specimen report, the examination of which is intended to help students in writing long reports.
"Galileo's Rolling Ball Experiment" is a Preliminary (Year 0) experiment and also a classic experiment of physics. It is performed in a three-hour laboratory session in which students are required both to take and to analyse their data (diaries are handed in at the end of the session). It is a simple experiment used to help develop data handling and error analysis for people, some of whom are new to performing physics experiments for themselves. Consequently the report is rather basic.
Following this introduction, the main body of the report is split into three sections:
2. Teaching Laboratory instructions for the experiment
3. The specimen report based on students’ laboratory diaries
4. A final section on report writing that discusses some of the finer points and the
School’s changing expectations of students as they progress through their Physics courses.
2. Teaching Laboratory instructions for the experiment
G2
GALILEO'S ROLLING BALL EXPERIMENT
Reference: Duncan, Chapter 7, Statics and Dynamics, Chapter 8 Circular motion and
gravitation
Equipment List: Metal channel, retort stand, ball bearings and box, stopwatch, metre rule.
Introduction
Galileo Galilei made observations in astronomy and mechanics that were of major
importance to the development of 17th century science. Perhaps Galileo's most famous
experiment, which was supposed to involve the leaning tower of Pisa, was his verification
that all bodies, independent of their mass, fall at the same rate (if the bodies are heavy
enough that air resistance is negligible). We shall look here at one of Galileo's less famous but closely related experiments, which conveniently does not require dropping weights from the tower of Pisa!
Galileo performed an experiment on a falling body that 'diluted' the effects of gravity, by
letting the body roll down a slope. Galileo predicted and was able to show experimentally
that in this case:
1) No matter what the angle θ (this is the Greek letter theta) of the slope, the speed of the
object at the bottom of the slope depends only on the total height h it has fallen through.
2) The speed of the object increases in proportion to the time it has travelled.
3) For a given angle of the slope, the vertical height h fallen is proportional to the square
of the time it has travelled.
Since this was true for all the slopes that Galileo was able to measure, by imagining the
steepness of the slope to be increased until it was vertical he predicted that these rules
would be true for a freely falling body.
Imagine yourself in Galileo's position. Mechanical watches had not yet been invented. He
had to use 'water clocks' in which time was measured by water escaping from the bottom
of a conical container. Standards of length differed across Europe. Also, he calculated, not
with decimal fractions, but with whole number ratios. (See the article by S Drake in the
American Journal of Physics, p302, volume 54, April 1986, if you are interested in the
historical details). Your experiment here will be rather easier than Galileo's!
[Diagram omitted: a ball starts at time t = 0 at height h on a slope of angle θ and rolls down to the finish at the bottom.]
In this experiment we shall be concerned with investigating the third statement only.
Referring to the above diagram, Galileo's third statement can be expressed mathematically
as
    h ∝ t²    (if θ is fixed)    (Eq. 1)

Here t is the time for the object to roll from the start to the finish, and the symbol ∝ means "is proportional to". (The constant of proportionality depends on the strength of the Earth's gravity and the angle of the slope.) The aim of this experiment is therefore to check the above relation.
The experiment provides a good introduction to taking measurements, presenting
information in tabular and graphical form, and the consideration of errors of
measurement. Additionally, you will need to relate your experimental data to theory
presented in a mathematical form.
Experiment (read this to the end before you start)
You are provided with a channel which can be inclined at any angle. You should use the
following procedure, making sure you record all the details in your laboratory notebook.
STEP 1 - First fix the value of θ at a value between 2 and 15 degrees. (If θ is too large
then it is difficult to time the fast-moving ball, whilst if it is too small the effects of
friction will be more important).
Measure sin θ for the slope and estimate its error (see below). Since all your
measurements will be made at the same angle it is very important to perform this
carefully. In subsequent calculations you will use sin θ and its error but you should also
STEP 2 - Hold the ball at a convenient position along the channel and measure h.
STEP 3 - Measure the time t that it takes the ball to roll down the slope for a starting
height h. Repeat the measurement 3 times and record each result.
STEP 4 - Repeat steps 2 and 3 for eight different values of the starting height h. Make
sure that you neatly tabulate every measurement that you make (not just the averages).
Your table should have the following columns:
    Height h /m | t1 /s | t2 /s | t3 /s | t1² /s² | t2² /s² | t3² /s² | t² (average) /s²

with one row, every entry filled in, for each starting height.
Always include the units when you write down any numerical value.
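As an illustration of filling in the final columns (the timing values here are invented): square each repeat timing first, then average the squares.

```python
# Hypothetical repeat timings (seconds) for one starting height:
t1, t2, t3 = 2.10, 2.05, 2.15

squares = [t1 ** 2, t2 ** 2, t3 ** 2]    # the t1^2, t2^2, t3^2 columns
t2_avg = sum(squares) / len(squares)     # the "t^2 (average)" column
```

Note that averaging the squares is not quite the same as squaring the average time; the table asks for the former.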
Some suggestions
It is difficult to accurately measure the angle θ with a protractor! The best way to find it
is to measure H (the change in height of the end of the channel above the bench) and D
(the total length of the channel) shown in the diagram below. (Do not confuse the symbol
h with H or d with D, also shown on the diagram!) Then sin θ = H/D, so you can calculate θ. Remember to tabulate all the measurements you make, not just θ.

[Diagram omitted: the channel of total length D raised through a height H at its upper end; the ball starts a distance d along the channel, at a height h, on the slope of angle θ.]
Precision Estimates
In all measurements you make, you should write down the precision of the measurement ie could you measure h, H and D to the nearest millimetre, centimetre, or metre? (This
depends on how you measure the quantity as well as the fineness of divisions on the metre
rule. For example, can you tell exactly where the centre of the ball bearing is, and can you
position the ruler easily? The golden rule is use common sense when estimating the
precision of a measurement.
Analysis
Equation 1 can be written in another, exactly equivalent, form:

    t² = K × h    (if θ is fixed)    (Eq. 2)
Because t² is proportional to h, a graph of t² (plotted on the vertical axis) against h (plotted
on the horizontal axis) should give a straight line, which passes through the origin, with a
gradient equal to the constant K.
STEP 1 - From your data in the tables of t² and h, plot a graph for your value of θ .
STEP 2 - Draw a straight line, which best fits the data points. Work out the gradient of
this line (don't forget the units). Draw the 'error lines' and so work out the error in the
gradient. Does your best fit line pass through the origin?
The data you took can be used to work out the acceleration due to gravity, g. This can be
done since the constant K in equations 1 and 2 is, according to theory (see appendix), given by the formula:

    K = 2 / (g sin² θ)    (Eq. 3)
So, to find g, just do the following: Work out sin θ (it's just equal to H/D) and K (the
gradient of the corresponding graph you plotted) and substitute into equation 3, after
rearranging it to make g the subject of the equation. Be careful to make sure you know
what units K is measured in.
What value of g do you get? Even taking errors into account*, the value is probably around half the accepted value of 9.8 m·s⁻². Can you think of any reason why this should be so?
(* If you need to, ask a demonstrator to explain how to calculate the errors in g - you will need to estimate the experimental error in each of the things that was used to find g, i.e. the individual errors in sin θ and K, and then combine the errors. Actually, you will probably find there is comparatively little error in sin θ, so that most of the error is in finding K.)
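The error combination described in the footnote can be sketched as follows, using made-up example values for K, sin θ and their errors (these numbers are assumptions for illustration, not measured values):

```python
import math

K, dK = 100.0, 5.0      # gradient of t^2 vs h and its error (s^2/m), assumed
s, ds = 0.060, 0.001    # sin(theta) and its error, assumed

g = 2 / (K * s ** 2)    # from rearranging Eq. 3

# Fractional errors add in quadrature; sin(theta) contributes twice its
# fractional error because it appears squared:
frac = math.sqrt((dK / K) ** 2 + (2 * ds / s) ** 2)
dg = g * frac
```

With these example numbers the fractional error in K dominates, as the footnote anticipates.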
Appendix
Read this at home, not in the laboratory class. You may find it useful in conjunction with
your Mechanics lectures.
Suppose a body slides, without friction, down a slope of inclination θ:

[Diagram omitted: a mass m on the slope, its weight mg acting vertically downwards, the component mg·sin θ acting parallel to the slope, and the height h fallen on reaching the finish.]
The component of the force on the mass m parallel to the slope is mg·sin θ, so the acceleration of the body parallel to the slope is

    a = F/m = g·sin θ

Using the formula "s = ut + at²/2" means that the distance moved to the bottom of the slope in a time t is just (u = 0 if the body starts at rest)

    d = g·sin θ · t²/2

But sin θ = h/d, or d = h/sin θ, so we finally get

    h = g·sin² θ · t²/2    (Eq. 4)

This equation is therefore the same as equation 2, since we can rearrange it as

    t² = 2h / (g·sin² θ)    (Eq. 5)

So, comparing directly to equation 3, we have K = 2/(g·sin² θ), as stated earlier.
3. The specimen report based on students’ laboratory diaries
(A report based on measurements made by a Foundation Engineering Student taking
PX0102 in October 2006)
Galileo’s Rolling Ball Experiment
Date January 2007
Author: Cardiff University, School of
Physics and Astronomy
Abstract
Galileo's rolling ball experiment was performed in which the motion of a ball bearing down a shallow incline, of angle θ = 3.52 +/- 0.03 degrees, was timed as a function of the starting height of the ball. Starting heights between 0.035 and 0.070 m resulted in travel times in the range 1.90 – 2.90 s. As expected, a graph of the square of the time of travel versus starting height was a straight line that passed through the origin. The gradient would be expected to be 2/(g·sin²θ), where g is the acceleration due to gravity, assuming that the gravitational potential energy was entirely converted to translational kinetic energy. The value of the gradient was found to be 113 +/- 19 s²·m⁻¹, from which a value for g of 4.76 +/- 0.12 m·s⁻² was determined, approximately a factor of two lower than the accepted value of 9.81 m·s⁻². The discrepancy can be attributed to the fact that as the ball
rolls down the incline gravitational potential energy is converted not only into
translational but also into rotational kinetic energy.
1. Introduction
Galileo Galilei was a seventeenth century Italian scientist who made many important
observations in astronomy and mechanics [1]. His most famous experiment on the effects
of gravity involved dropping weights from the tower of Pisa and showed that all bodies
fall at the same rate independent of their mass. In the rolling ball experiment [2] in which
a ball rolls down an incline, the effects of gravity are easier to quantify since the travel
times are increased.
Using this experiment Galileo showed that: (i) the speed of the object at the bottom of the
slope depends only on the height it has fallen through, (ii) that the speed of the object
increases in proportion to the time it has traveled and (iii) for a given angle of slope, the
vertical height fallen through is proportional to the square of the time it has travelled.
The experiment performed here was concerned only with the last statement.
2 Background Theory
A schematic of the experiment in which an object of mass m acted upon by gravity
(acceleration due to gravity is g) on an incline is illustrated in figure 1 below.
[Figure 1 here: diagram of an object of mass m on an inclined plane, showing the distance d along the slope, the height h, and the forces m.g and m.g.sinθ.]
Figure 1. Schematic of an object on an inclined plane. The plane is at an angle θ to the
horizontal and the force due to gravity acting down the slope is m.g.sinθ.
For an incline at an angle θ, although the force vertically downwards is m.g, the force
parallel to the slope is m.g.sinθ. This is the force that accelerates the body down the
slope, the acceleration a being given by:

a = force/mass = (m.g.sinθ)/m = g.sinθ    (1)
If the body starts at rest (initial velocity zero) and travels a distance d (for example to the
bottom of the slope) the relationship between the time taken and distance travelled is
given by the well known equation of motion:
d = ½.a.t²    or    d = ½.g.sinθ.t²    (2)
In addition, if h is the change in height the object undergoes by travelling a distance d
down the slope then it is clear from figure 1 that:
sinθ = h/d    (3)
Note that as h is defined in figure 1 the object would start at the top of the slope.
Substituting for d from equation 3 into equation 2 and rearranging gives:
t² = (2/(g.sin²θ)).h    (4)
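Written out explicitly (this step is not in the original report, but it is routine): substituting d = h/sinθ from equation 3 into equation 2 and solving for t² gives equation 4:

```latex
\frac{h}{\sin\theta} \;=\; \frac{1}{2}\, g \sin\theta \, t^{2}
\qquad\Longrightarrow\qquad
t^{2} \;=\; \frac{2}{g \sin^{2}\theta}\, h
```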
Equation 4 confirms Galileo’s third statement and indicates that a graph of the square of
the travel time versus height should be a straight line that passes through the origin. In
addition, if the angle of the slope is known the value of the gradient can be used to
determine a value for the acceleration due to gravity.
This is the experiment that has been performed. Whilst Galileo performed the experiment
for a range of slope angles, here only one has been used.
3. Description of the Experiment
The “slope” was provided by a right angled channel, held by a retort stand down which a
ball bearing could roll. After fixing the slope its angle was found (by way of measuring
its elevation and length) to be 3.52 +/- 0.03 degrees. The ball bearing was placed on the
slope at a particular height and its time to travel down the slope was measured by hand
with a stopwatch. The measurement was performed three times for each height and at
eight different heights. One person released the ball at the set height and a second person
timed the descent. The timing error for a single measurement was initially estimated to be
+/- 0.5 s; however, the spread of times found in the repeated measurements was usually
only +/-0.1 s. The error in the release height of the ball bearing was +/- 1 mm. The range
of heights used was 0.035 to 0.070 m resulting in travel times in the range ~1.9 to 2.9 s.
4. Results
A graph of the average squared travel time against release height is shown in figure 2.
The data lie close to a straight line, with some scatter about the best fit line. By drawing
best and worst possible fits by hand the gradient of the line was found to be 113 +/- 19
s2.m-1. These lines indicated that, within errors, the data form a straight line through the origin
[3], as expected from equation 4, indicating that any systematic errors are small
compared to random errors.
[Figure 2 here: graph of time squared /s² (3 to 9) against height /m (0.03 to 0.08), with best fit line.]
Figure 2. Graph of the average of the travel times squared versus the release height. The
straight line here is a computer generated best fit to the data [3].
From the gradient and the angle of slope a value for the acceleration due to gravity, g, was
determined (using equation 4) to be 4.76 +/- 0.12 m.s-2.
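As a check on the arithmetic (not part of the original report; the numbers are simply those quoted above), equation 4 gives g = 2/(gradient × sin²θ):

```python
import math

theta_deg = 3.52   # measured slope angle / degrees
gradient = 113.0   # gradient of the t^2 vs h graph / s^2 m^-1

# Equation 4: gradient = 2 / (g sin^2 theta)  =>  g = 2 / (gradient sin^2 theta)
g = 2.0 / (gradient * math.sin(math.radians(theta_deg)) ** 2)
print(f"g = {g:.2f} m.s-2")   # ~4.7 m.s-2
```

This reproduces a value close to the quoted 4.76 m.s-2; the small difference presumably comes from rounding in the quoted gradient.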
5. Discussion
Although the results of the experiment do show that for the single angle of slope used, the
vertical height fallen through is proportional to the square of the time it has traveled, the
derived value for g does not agree with the accepted value of 9.81 m.s-2 within graphical
errors. The obtained value of g is approximately half of the expected value, whereas the
error is only ~10%. The discrepancy is therefore much larger than can apparently be
explained by random errors associated with the measurement and therefore needs to be
considered further.
The sources of measurement error include distances (for the height of release and the
angle of the slope) and timing (for the travel time). Neither the metre rule nor the
stopwatch is likely to have appreciable intrinsic errors associated with it. The use of
the rule to determine heights and angles has relatively small errors as discussed above and
no errors have been found in calculations. The estimated absolute timing error (+/- 0.5 s)
arose from consideration of matching the start of the stopwatch with the release of the ball
bearing and its stop with the ball reaching the bottom of the slope. The fact that this error
appears significantly larger than the spread of travel times obtained from repeated
measurements (0.1 s) indicates that there may be a systematic error in
starting and stopping the watch. However, a systematic error of up to +/- 0.5 s would do
little to improve the agreement between the measured acceleration and g.
The explanation for the results obtained lies in the realization that, whereas it is the
translational acceleration down the slope that is measured in this experiment, it is
not true that the gravitational force acting down the slope is converted only into this form
of motion. As the title of the experiment states, the ball rolls down the hill, implying that it
has both translational and rotational motion. In other words, the gravitational potential
energy of the ball is converted into both translational and rotational kinetic energy. It
should be possible to reanalyze the results here incorporating the effects of rotational
motion, but this is beyond the scope of this report.
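An illustrative estimate of the size of this effect (not from the original report): for a uniform sphere rolling without slipping, I = (2/5)m.r², and the acceleration becomes a = g.sinθ/(1 + I/(m.r_c²)), where r_c is the rolling contact radius. On a flat plane r_c = r; in a right-angled channel the contact radius is r/√2, which increases the correction factor. A sketch:

```python
g_measured = 4.76   # value obtained in this report / m s^-2

# Rolling sphere on a flat plane: correction factor 1 + (2/5) = 7/5
g_plane = g_measured * (1 + 2/5)
print(f"corrected (flat plane): {g_plane:.2f} m.s-2")   # ~6.66

# Rolling in a right-angled channel: contact radius r/sqrt(2),
# so correction factor 1 + (2/5)/(1/2) = 1 + 4/5 = 9/5
g_channel = g_measured * (1 + 4/5)
print(f"corrected (V-channel): {g_channel:.2f} m.s-2")  # ~8.57
```

The channel correction brings the result much closer to 9.81 m.s-2; any residual disagreement could reflect slipping, rolling friction or timing systematics.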
6. Conclusions
Galileo’s rolling ball experiment has been performed, in which the motion of a ball
bearing down a shallow incline of angle 3.52 +/- 0.03 degrees was timed. Assuming that
gravitational potential energy is entirely converted to translational kinetic energy of the ball,
the value for g was determined to be g = 4.76 +/- 0.12 m.s-2. This value is
approximately a factor of two lower than the expected value. The discrepancy is almost
certainly mainly caused by the fact that gravitational potential energy is converted into
rotational as well as translational kinetic energy as the ball rolls, rather than slides, down
the hill.
References
[1]. “Galileo’s physical measurements” Stillman Drake, Am.J.Phys 54 (1986) 302-306.
[2]. Experiment G2 (Galileo’s Rolling Ball Experiment) in Preliminary/Foundation Year
Laboratory Course Booklet (2006_7).
[3]. The computer generated best fit gave a gradient of 127 s2.m-1.
Aside: This value is at the high end of the values quoted in the text. Looking more
closely it appears almost certain that the student forced the best fit line to go through the
origin. This was a (commonly made) mistake. To do this the student has assumed not
that t2 = 0, h = 0 is an experimental point but that it is a point known with absolute
certainty. While this may at first seem reasonable, after all the time taken to change
height by zero amount will take zero time, the trouble is that it hides the effects of any
systematic errors from the data analysis. For example, it is quite feasible that a systematic
error could have been made in measuring the release height or in timing the motion. This
might result in a straight line that does not go through the origin, but, a perfectly valid
gradient. The result is that the student has both hidden any systematic errors and
introduced an error into the gradient and consequently into the calculated value of g. If
the student had spotted the error it would not have been valid to present the erroneous results;
the data would have needed to be reanalyzed. However, giving the benefit of (a very small)
doubt, this report has been written on the assumption that the student did not force the best fit
line through the origin, and should be read with this in mind.
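The point can be demonstrated numerically. The sketch below uses illustrative synthetic data (not the student's measurements): fitting y = b.x forces the line through the origin and biases the gradient whenever the data carry a systematic offset, whereas fitting y = a + b.x recovers the true gradient and exposes the offset.

```python
import numpy as np

rng = np.random.default_rng(0)
h = np.linspace(0.035, 0.070, 8)                    # release heights / m
# true gradient 113, plus a hidden systematic offset of 0.5 s^2 and some noise
t2 = 0.5 + 113.0 * h + rng.normal(0, 0.1, h.size)

# Free fit y = a + b x: recovers both gradient and offset
b_free, a_free = np.polyfit(h, t2, 1)

# Fit forced through the origin y = b x: least squares gives b = sum(xy)/sum(x^2)
b_forced = np.sum(h * t2) / np.sum(h * h)

print(f"free fit:   gradient {b_free:.1f}, intercept {a_free:.2f}")
print(f"forced fit: gradient {b_forced:.1f}")   # biased upwards by the hidden offset
```

The forced gradient comes out noticeably higher than the free one, mirroring the 127 versus 113 s².m⁻¹ discrepancy discussed above.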
4. Report writing.
The style of a formal report is intended to be very similar to that of a paper submitted to a
scientific journal, but the level at which it is written should be such that another student with a
similar background, but unfamiliar with the experiment, would be able to understand what you
have done, why, and what it all means. Reports are separated into sections, the expected
contents of which are described below. This is followed by some general advice and
comments on changing expectations through the undergraduate course.
4.1 Contents of the different sections of a scientific report
Abstract
This summarizes the experiment in a single paragraph in ~150 words, featuring
particularly the (numerical) results and principal conclusions. It is entirely separate from
the rest of the report, hence concepts introduced in the abstract need to be introduced
again in the main part of the report.
1. Introduction
Describes the background to, and aim(s) of, the experiment and whatever theoretical
background is needed to make sense of your own work being presented.
There is an expectation that the student reads around the subject before writing the report.
This should be reflected in “Introductory”/”Theory” sections that are not solely derived
from the laboratory handbooks. The source material should be cited, and obviously
re-written to fit in with the requirements of the report and to avoid plagiarism.
At the same time the “Introductory”/”Theory” sections should be appropriate for the
report and not overwhelm it.
If necessary, for example if the introduction becomes large and difficult to read, the
section can be split in order to have a distinct "Background Theory" section following
on from the more general introduction.
Unfamiliar or obscure derivations may be included, but trivial steps should be excluded.
The theory section may include a number of equations. These should be on a separate
line, numbered and each of the symbols used should be explained the first time they
appear, e.g.:
E = mc²    (1)
where E is energy (J), m is mass (kg) and c is the speed of light (m.s-1).
2. Description of the experiment and 3. Results
These sections are very flexible and tend to cause the most trouble for students in years
0,1 and 2.
There should be descriptions of the main features of the equipment and general
descriptions of how it was set up and used. These should be written in paragraph rather
than point form, should not be in the form of lists and should not be an instruction set for
the experiment. Greater detail should be included where non-standard/unfamiliar
equipment has been used, where subjective interpretations or procedures were employed
or where significant or systematic errors or uncertainties may have occurred.
If only one experiment was performed the logical flow of the report is clear. However, if
the experiment had two or more parts then things can get complicated. Many students fall
into the trap of separating important procedural information from results: e.g. presenting
procedure 1, procedure 2, results 1 and then results 2. Reports using this format are
very difficult to read.
Much better is: procedure 1, results 1, procedure 2, results 2 etc. A question to consider
then is how much common experimental information can be placed up front before getting
deeply into the individual experiments.
Large amounts of data are usually best presented in either tabular or graphical form,
choose the most appropriate (but usually not both forms). Diagrams and graphs should
be labelled Figure 1, Figure 2 etc. underneath the figure (see example above), tables as
Table 1, Table 2 etc. above the table (see example below), and all should have an
explanatory title.
Explain how the original data were analyzed, for example indicate whether a value is the
average of a number of measurements and/or refer (by number) to the mathematical
equations used (see notes below). However, the actual mathematical working should not
be included. Graphs should show the best fit straight line (but not the error fits) if
applicable and numerical values should always be quoted with their associated errors.
Again, do not show the mathematical working used to obtain errors.
4. Discussion
The discussion section is very important in that it both brings together the previous
sections and is the point at which students can demonstrate “critical awareness” through
interpretation of the meaning of the previously described results.
Other items that might be discussed are: consistency of readings, accuracy, limitations of
apparatus or measurements, suggestions for improvements of apparatus, comparison of
results obtained by different methods, comparison with theoretical behaviour or accepted
values, unexpected behaviour, future work. However it is clear that some of these are
experimental considerations that could equally well be placed in the previous sections in
the case of a complicated/multi-experiment report.
5. Conclusions
Reports should end with a conclusions section, which should summarize the main results
and findings.
6. References
References should be numbered and placed in the correct order in the text (i.e. the
Vancouver system). They can be denoted by a superscript¹, by a number in square
brackets [1], or by other (logical) systems.
The procedure can be stated in words in the following way:
- At the point in the report at which it is necessary to make the reference, insert a
number in square brackets, e.g. [1]; the numbers should start with [1] and be in the
order in which they appear in the report.
- At the end of the report, in the section headed “References”, the full reference is given
as follows:
In the case of a book:
Author list, title, publisher, place published, year and if relevant, page number.
e.g. [1] H.D. Young, R.A. Freedman, University Physics, Pearson, San Francisco,
2004.
In the case of a journal paper:
Author list, title of article, journal title, vol no., page no.s, year.
e.g. [2] M.S. Bigelow, N.N. Lepeshkin & R.W. Boyd, “Ultra-slow and superluminal
light propagation in solids at room temperature”, Journal of Physics: Condensed
Matter, 16, pp.1321-1340, 2004.
In the case of a webpage (note: use carefully as information is sometimes incorrect):
Title, institution responsible, web address, date accessed.
e.g. [3] “How Hearing Works”, HowStuffWorks inc.,
http://science.howstuffworks.com/hearing.htm, accessed 13th July 2005
Different publications are likely to insist on one particular system (e.g. Vancouver as done
here or Harvard – authors name and year of publication in text). Lecturing staff may
express a preference.
Appendices
This section is not compulsory but can be used to provide information that does not fit into,
or is not vital to, the report but that the author still wants or needs to present (possibly as
evidence of work carried out). The main text should reference the appendix, but it should
not be necessary for the reader to read the appendix to understand the report.
Examples of material included in appendices include: long, non-standard derivations,
computer code, the author’s detailed designs for apparatus, results not included in the
report and risk assessments (if required). The appendix should include sufficient
explanation to make sense of this extra information.
Appendices are not usually necessary for year 0,1 and 2 reports but are more common in
years 3 and 4 because of the desire to demonstrate project work.
4.2 General advice
- The report should be written in your own words, i.e. do not plagiarize other people’s
work (including laboratory books, other students’ reports, the web or textbooks).
- Apart from the abstract and conclusions there should be little repetition in reports.
- The past tense is most appropriate and the most commonly used.
- The report should be impersonal (avoid “I”, “we”, “you” etc.).
- A well-labelled diagram can be more informative than several paragraphs of prose.
- All diagrams, pictures, graphs and figures should be labelled figure 1, figure 2 etc. in
the order they appear and should have a descriptive figure caption.
- Tables should be labelled as table 1, table 2 etc. in the order they appear and have a
descriptive table caption.
- Readers will naturally work through the text of the report. This text should therefore
refer to and explain figures, tables, equations etc. when appropriate. For example,
“Figure x shows…”.
- Related to the last point, figures and tables should appear at an appropriate place in the
text and be of an appropriate size. The electronic generation of reports means that
there should be no need for full page hand drawn graphs (although these are still allowed
at Year 0 level).
- It is not necessary to include a risk assessment with your final report; the purpose of
that was to ensure your safety when you performed the experiment. However, it may
be required as part of longer reports in the third or fourth years, in which case it should
be presented in an appendix as proof of its existence.
- Pages should be numbered, and longer reports (3rd and 4th year project reports) should
have a contents page.
4.3 Differentiation between years
1. Style
In essence, very little change of style is expected through the academic years. The aim
is to instill the scientific style of writing from the beginning. Such changes as do occur
reflect the changing content of the report and the audience (reader).
2. Length of reports
Typical report lengths are shown in table 1 for different student years.
Table 1. Typical lengths of reports (pages assumed to be typed and to include diagrams
and tables)

Student Year    Typical word length
0               1500-2000
1               2000-3000
2               2000-3000
3 (interim)     ~3000
3 (final)       up to 6000
4 (interim)     ~3000
4 (final)       up to 6000
3. Scientific content
- Experiments in years 0 and 1 are highly prescriptive with well defined aims. In year 2
some of the experiments are likely to allow genuine student enquiry. In years 3 and 4
the two semester projects are open ended, student led and with undetermined
outcomes. At the same time the techniques will likely become more sophisticated, the
physics more advanced (and distinct from taught modules) and the results more
numerous.
- Early years reports will inevitably be heavily influenced by the laboratory books
provided. Third and fourth year reports will have no such guidance to fall back on,
and 2nd year reports sit somewhere in between.
- Early reports may use laboratory books and text books as reference sources, whereas
3rd and 4th year reports should make increasingly extensive reference to research
papers.
- Since longer reports are expected in the 3rd and 4th years the style is perhaps less
similar to scientific papers and more so towards a Masters or Ph.D. thesis. Ultimately
though it remains “scientific”.
DIARY (LAB BOOK) CHECKLIST (also see page 6)
Date
Experiment Title and Number
Risk Analysis
Brief Introduction
Brief description of what you did and how you did it
Results (indicating errors in readings)
Graphs (where applicable)
Error calculations
Final statement of results with errors
Discussion/Conclusion (including a comparison with accepted results if
applicable)
FORMAL REPORT CHECKLIST (also see page 8)
Date
Experiment Title and Number
Abstract
Introduction
Method
Results: Use graphs – and don’t forget to describe them.
Indication of how errors were determined
Final results with errors
Discussion
Conclusion (including a comparison with accepted results if applicable)
Use Appendices if necessary
A risk assessment is unnecessary.