PRINCIPLES OF MEDICAL EMERGENCIES
5 mark question
1.Teaching hospital
A teaching hospital is a hospital that provides clinical education and training to
future and current doctors, nurses, and other health professionals, in addition to
delivering medical care to patients. They are generally affiliated with medical
schools or universities (hence the alternative term university hospital), and may
be owned by a university or may form part of a wider regional or national health
system.
Some teaching hospitals also have a commitment to research and are centers for
experimental, innovative and technically sophisticated services.
History
Although institutions for caring for the sick are known to have existed much
earlier in history, the first teaching hospital, where students were authorized to
methodically practice on patients under the supervision of physicians as part of
their education, was reportedly the Academy of Gundishapur in the Persian
Empire during the Sassanid era.1
Cultural references
The American television shows St. Elsewhere, Chicago Hope, ER, Scrubs, House,
and Grey's Anatomy all take place in teaching hospitals (St. Eligius Hospital,
Chicago Hope Hospital, County General Hospital, Sacred Heart Hospital,
Princeton-Plainsboro, and Seattle Grace Mercy West Hospital, respectively), as
does the Canadian show Saving Hope (Hope Zion Hospital).
In the United Kingdom, the 1980s television documentary series Jimmy's was set
in St James's University Hospital, Leeds (nicknamed Jimmy's), which formerly
claimed to be the largest teaching hospital in Europe.
Entry-level medical education programs are tertiary-level courses undertaken at a
medical school. Depending on jurisdiction and university, these may be either
undergraduate-entry (most of Europe, India, China), or graduate-entry programs
(mainly Australia, Canada, United States).
In general, initial training is taken at medical school. Traditionally initial medical
education is divided between preclinical and clinical studies. The former consists
of the basic sciences such as anatomy, physiology, biochemistry, pharmacology,
pathology. The latter consists of teaching in the various areas of clinical medicine
such as internal medicine, pediatrics, obstetrics and gynecology, psychiatry, and
surgery. However, an increasing number of medical programs use systems-based
curricula in which learning is integrated across the basic and clinical sciences.
There has been a proliferation of programmes that combine medical training with
research (D.O./Ph.D. or M.D./Ph.D.) or management programmes (D.O./MBA or
M.D./MBA), although this has been criticised.
2.Standards of quality
Reliability
Reliability can be expressed in mathematical terms as Rx = Vt/Vx, where Rx is
the reliability of the observed (test) score X, and Vt and Vx are the variances of
the ‘true’ score (i.e., the candidate’s innate performance) and the observed test
score respectively. Rx can range from 0 (completely unreliable) to 1 (completely
reliable). An Rx of 1 is rarely achieved, and an Rx of 0.8 is generally considered
reliable.11
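The formula can be sketched numerically. In classical test theory the observed-score variance is the sum of true-score and error variance (Vx = Vt + Ve), so reliability can be computed as below; the function name and the figures are illustrative, not from the source:

```python
# Reliability as the ratio of true-score variance to observed-score variance,
# Rx = Vt / Vx, where Vx = Vt + Ve (classical test theory).
def reliability(true_variance: float, error_variance: float) -> float:
    observed_variance = true_variance + error_variance  # Vx
    return true_variance / observed_variance            # Rx

# A test whose score variance is 80% true-score variance meets the usual
# 0.8 threshold for being considered reliable:
print(round(reliability(true_variance=8.0, error_variance=2.0), 2))  # 0.8
```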
Validity
A valid assessment is one which measures what it is intended to measure. For
example, it would not be valid to assess driving skills through a written test alone.
A more valid way of assessing driving skills would be through a combination of
tests that help determine what a driver knows, such as through a written test of
driving knowledge, and what a driver is able to do, such as through a performance
assessment of actual driving. Teachers frequently complain that some
examinations do not properly assess the syllabus upon which the examination is
based; they are, effectively, questioning the validity of the exam.
Validity of an assessment is generally gauged through examination of evidence in
the following categories:
1. Content – Does the content of the test measure stated objectives?
2. Criterion – Do scores correlate to an outside reference? (ex: Do high scores
on a 4th grade reading test accurately predict reading skill in future
grades?)
3. Construct – Does the assessment correspond to other significant variables?
(ex: Do ESL students consistently perform differently on a writing exam
than native English speakers?)12
4. Face – Does the item or theory make sense, and is it seemingly correct to
the expert reader?13
A good assessment has both validity and reliability, plus the other quality
attributes noted above for a specific context and purpose. In practice, an
assessment is rarely totally valid or totally reliable. A ruler which is marked wrong
will always give the same (wrong) measurements. It is very reliable, but not very
valid. Asking random individuals to tell the time without looking at a clock or
watch is sometimes used as an example of an assessment which is valid, but not
reliable. The answers will vary between individuals, but the average answer is
probably close to the actual time. In many fields, such as medical research,
educational testing, and psychology, there will often be a trade-off between
reliability and validity. A history test written for high validity will have many essay
and fill-in-the-blank questions. It will be a good measure of mastery of the
subject, but difficult to score completely accurately. A history test written for high
reliability will be entirely multiple choice. It isn't as good at measuring knowledge
of history, but can easily be scored with great precision. We may generalize from
this. The more reliable our estimate is of what we purport to measure, the less
certain we are that we are actually measuring that aspect of attainment. It is also
important to note that there are at least thirteen sources of invalidity which
could, in principle, be estimated for individual students in test situations. In
practice, they never are, perhaps because the social purpose of such tests
demands the absence of any error, and validity errors are usually so high that
acknowledging them would destabilize the whole assessment industry.
It is well to distinguish between "subject-matter" validity and "predictive" validity.
The former, used widely in education, predicts the score a student would get on a
similar test but with different questions. The latter, used widely in the workplace,
predicts performance. Thus, a subject-matter-valid test of knowledge of driving
rules is appropriate while a predictively valid test would assess whether the
potential driver could follow those rules.
3.Progress testing
Progress tests are longitudinal, feedback oriented educational assessment tools
for the evaluation of development and sustainability of cognitive knowledge
during a learning process. A progress test is a written knowledge exam (usually
multiple-choice questions) administered to all students in a program at the same
time, at regular intervals (typically two to four times yearly), throughout the
entire academic program. The test samples the
complete knowledge domain expected of new graduates on completion of their
course, regardless of the year level of the student. The differences between
students’ knowledge levels show in the test scores: the further a student has
progressed in the curriculum, the higher the score. The resulting scores provide a
longitudinal, repeated-measures, curriculum-independent assessment of the
knowledge objectives of the entire programme.
The progress test is currently used by national progress test consortia in the
United Kingdom [3], the Netherlands [4], and Germany (including Austria) [5], and
in individual schools in Africa [6], Saudi Arabia [7], South East Asia [8], the
Caribbean, Australia, New Zealand, Sweden, Finland, the UK, and the USA [9]. The
National Board of Medical Examiners in the USA also provides progress testing in
various countries [10][11]. The feasibility of an international approach to progress
testing has recently been acknowledged [12] and was first demonstrated by
Albano et al. [13] in 1996, who compared test scores across German, Dutch and
Italian medical schools. An international consortium has been established in
Canada [12][14] involving faculties in Ireland, Australia, Canada, Portugal and the
West Indies.
The progress test serves several important functions in academic programs.
Considerable empirical evidence from medical schools in the Netherlands,
Canada, the United Kingdom and Ireland, as well as from postgraduate medical
studies and schools of dentistry and psychology, has shown that the longitudinal
feature of the progress test provides a unique and demonstrable measurement of
the growth and effectiveness of students’ knowledge acquisition throughout their
course of study [15][16][17][18][12][19][20][21][1][22][23].
As a result, this information can be consistently used for diagnostic, remedial and
prognostic teaching and learning interventions. In the Netherlands, these
interventions have been aided by the provision of a web-based results feedback
system known as ProF 24 in which students can compare their results with their
peers across different total and subtotal score perspectives, both across and
within universities.
Additionally, the longitudinal data can serve as a transparent quality assurance
measure for program reviews by providing an evaluation of the extent to which a
school is meeting its curriculum objectives [10][1][25]. The test also provides
more reliable data for high-stakes assessment decisions by using measures of
continuous learning rather than a one-shot method (Schuwirth, 2007). Inter-
university progress testing collaborations provide a means of improving the
cost-effectiveness of assessments by sharing a larger pool of items, item writers,
reviewers, and administrators. The collaborative approach adopted by the Dutch
and other consortia has enabled the progress test to become a benchmarking
instrument by which to measure the quality of educational outcomes in
knowledge. The success of the progress test in these ways has led to
consideration of developing an international progress test [26][25].
The benefits for all main stakeholders in a medical or health sciences programme
make the progress test an appealing tool to invest resources and time for
inclusion in an assessment regime. This attractiveness is demonstrated by its
increasingly widespread use in individual medical education institutions and interfaculty consortia around the world, and by its use for national and international
benchmarking practices.
20 mark question
1.Objective structured clinical examination
An Objective Structured Clinical Examination (OSCE) is a modern1 type of
examination often used in health sciences (e.g. Midwifery, orthoptics, optometry,
medicine, chiropractic, physical therapy, radiography, nursing, pharmacy2,
dentistry, paramedicine, veterinary medicine). It is designed to test clinical skill
performance and competence in skills such as communication, clinical
examination, medical procedures / prescription, exercise prescription, joint
mobilisation / manipulation techniques, radiographic positioning, radiographic
image evaluation and interpretation of results.
An OSCE usually comprises a circuit of short stations (usually 5–10 minutes each,
although some use up to 15 minutes), in which each candidate is examined on a
one-to-one basis with one or two impartial examiner(s) and either real or
simulated patients (actors). Each station has a different examiner, as opposed to
the traditional method of clinical examinations where a candidate would be
assigned to an examiner for the entire examination. Candidates rotate through
the stations, completing all the stations on their circuit. In this way, all candidates
take the same stations. It is considered to be an improvement over traditional
examination methods because the stations can be standardised enabling fairer
peer comparison, and complex procedures can be assessed without endangering
patients' health.
As the name suggests, an OSCE is designed to be:
• objective - all candidates are assessed using exactly the same stations
(although if real patients are used, their signs may vary slightly) with the same
marking scheme. In an OSCE, candidates get marks for each step on the mark
scheme that they perform correctly; this makes the assessment of clinical skills
more objective than traditional examinations, in which examiners judge
candidates against their own subjective standards.
• structured - stations in OSCEs have a very specific task. Where simulated
patients are used, detailed scripts are provided to ensure that the information
that they give is the same for all candidates, including the emotions that the
patient should display during the consultation. Instructions are carefully written
to ensure that the candidate is given a very specific task to complete. The OSCE
is carefully structured to include parts from all elements of the curriculum as
well as a wide range of skills.
• a clinical examination - the OSCE is designed to apply clinical and theoretical
knowledge. Where theoretical knowledge is required, for example answering
questions from the examiner at the end of the station, the questions are
standardised and the candidate is asked only questions that are on the mark
sheet; any other answers earn no marks.
OSCE marking
Marking in OSCEs is done by the examiner. Occasionally written stations, for
example, writing a prescription chart, are used and these are marked like written
examinations, again usually using a standardised mark sheet. One of the ways an
OSCE is made objective is by having a detailed mark scheme and standard set of
questions. For example, a station concerning the demonstration to a simulated
patient of how to use a metered-dose inhaler (MDI) would award points for
specific actions which are performed safely and accurately. The examiner can
often vary the marks depending on how well the candidate performed the step.
At the end of the mark sheet, the examiner often has a small number of marks
that they can use to weight the station depending on performance, and if a
simulated patient is used, they are often asked to add marks depending on the
candidate's approach. At the end, the examiner is often asked to give a "global
score". This is a subjective rating of the candidate's overall performance, made
without reference to how many marks the candidate scored. The examiner is
usually asked to rate the candidate as pass/borderline/fail or sometimes as
excellent/good/pass/borderline/fail. This rating is then used to determine the
individual pass mark for the station.
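The text does not name the standard-setting method, but one widely used approach consistent with this description is the borderline group method, in which a station's pass mark is taken as the mean checklist score of the candidates who received a "borderline" global rating. A minimal sketch, with hypothetical scores and function names:

```python
# Borderline group method (illustrative): the station pass mark is the mean
# checklist score of candidates whose global rating was "borderline".
def borderline_group_pass_mark(results):
    """results: list of (checklist_score, global_rating) tuples."""
    borderline = [score for score, rating in results if rating == "borderline"]
    return sum(borderline) / len(borderline)

# Four candidates on one station; two were rated borderline (12 and 10 marks),
# so the pass mark for this station is their mean, 11.0:
station_results = [(18, "pass"), (12, "borderline"), (10, "borderline"), (6, "fail")]
print(borderline_group_pass_mark(station_results))  # 11.0
```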
Many centers allocate each station an individual pass mark. The sum of the pass
marks of all the stations determines the overall pass mark for the OSCE. Many
centers also impose a minimum number of stations required to pass which
ensures that a consistently poor performance is not compensated by a good
performance on a small number of stations.
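The two pass rules described above can be sketched as follows; the scores, pass marks, and minimum-stations threshold are hypothetical:

```python
# A candidate passes the OSCE if (a) their total score meets the sum of the
# individual station pass marks, and (b) they pass at least a minimum number
# of stations, so consistently poor performance cannot be offset by strong
# performance on a few stations.
def passes_osce(scores, pass_marks, min_stations_passed):
    overall_pass_mark = sum(pass_marks)
    stations_passed = sum(1 for s, p in zip(scores, pass_marks) if s >= p)
    return sum(scores) >= overall_pass_mark and stations_passed >= min_stations_passed

# High total but only two stations passed: fails on the minimum-stations rule.
print(passes_osce([15, 15, 2, 2], [5, 5, 5, 5], min_stations_passed=3))  # False
# Adequate total and all four stations passed:
print(passes_osce([6, 5, 5, 6], [5, 5, 5, 5], min_stations_passed=3))    # True
```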
There are, however, criticisms that the OSCE stations can never be truly
standardised and objective in the same way as a written exam. It has been known
for different patients / actors to afford more assistance, and for different marking
criteria to be applied. Finally, it is not uncommon at certain institutions for
members of teaching staff to be known to students (and vice versa) as the
examiner. This familiarity does not necessarily affect the integrity of the
examination process, although it is a deviation from anonymous marking.
However, in OSCEs that use several circuits of the same stations, the marking has
repeatedly been shown to be very consistent, which supports the claim that the
OSCE is a fair clinical examination.
Preparation
Preparing for OSCEs is very different from preparing for an examination on
theory. In an OSCE, clinical skills are tested rather than pure theoretical
knowledge. It is essential to learn correct clinical methods, and then practice
repeatedly until one perfects the methods whilst simultaneously developing an
understanding of the underlying theory behind the methods used. Marks are
awarded for each step in the method; hence, it is essential to dissect the method
into its individual steps, learn the steps, and then learn to perform the steps in a
sequence. For example, when performing an abdominal examination, a student is
instructed to first palpate for the liver, and then to palpate for the spleen. This
seemingly meaningless order becomes relevant when it is considered that those
with enlarged livers often also have enlarged spleens.3
Most universities have clinical skills labs where students have the opportunity to
practice clinical skills such as taking blood or mobilizing patients in a safe and
controlled environment. It is often very helpful to practise in small groups with
colleagues, setting a typical OSCE scenario and timing it with one person role
playing a patient, one person doing the task and if possible, one person either
observing and commenting on technique or even role playing the examiner using
a sample mark sheet. Many OSCE textbooks have sample OSCE stations and mark
sheets that can be helpful when studying in this manner. In doing this, the
candidate is able to get a feel for running to time and working under pressure.
In many OSCEs the stations are extended using data interpretation. For example,
the candidate may have to take a brief history of chest pain and then interpret an
electrocardiogram. It is also common to be asked for a differential diagnosis, to
suggest which medical investigations the candidate would like to do or to suggest
a management plan for the patient.
Nursing history
Taking a nursing history prior to the physical examination allows a nurse to
establish a rapport with the patient and family. Elements of the history include:
the client's overall health status, the course of the present illness including
symptoms, the current management of illness, the client's medical history
(including familial medical history), social history and how the client perceives his
illness.1
Psychological and social examination
The main areas considered in a psychological examination are intellectual health
and emotional health. Assessment of cognitive function, checking for
hallucinations and delusions, measuring concentration levels, and inquiring into
the client's hobbies and interests constitute an intellectual health assessment.
Emotional health is assessed by observing and inquiring about how the client feels
and what he does in response to these feelings. The psychological examination
may also include the client's perceptions (why they think they are being assessed
or have been referred, what they hope to gain from the meeting). Religion and
beliefs are also important areas to consider. The need for a physical health
assessment is always included in any psychological examination to rule out
structural damage or anomalies.
Physical examination
A nursing assessment includes a physical examination: the observation or
measurement of signs, which can be observed or measured, or symptoms such as
nausea or vertigo, which can be felt by the patient.2
The techniques used may include Inspection, Palpation, Auscultation and
Percussion in addition to the "vital signs" of temperature, blood pressure, pulse
and respiratory rate, and further examination of the body systems such as the
cardiovascular or musculoskeletal systems.3
2.Types of Educational assessment
The term assessment is generally used to refer to all activities teachers use to help
students learn and to gauge student progress.3 Though the notion of assessment
is generally more complicated than the following categories suggest, assessment
is often divided for the sake of convenience using the following distinctions:
1. formative and summative
2. objective and subjective
3. referencing (criterion-referenced, norm-referenced, and ipsative)
4. informal and formal.
Formative and summative
Assessment is often divided into formative and summative categories for the
purpose of considering different objectives for assessment practices.
• Summative assessment - Summative assessment is generally carried out at the
end of a course or project. In an educational setting, summative assessments are
typically used to assign students a course grade. Summative assessments are
evaluative.
• Formative assessment - Formative assessment is generally carried out
throughout a course or project. Formative assessment, also referred to as
"educative assessment," is used to aid learning. In an educational setting,
formative assessment might be a teacher (or peer) or the learner providing
feedback on a student's work, and would not necessarily be used for grading
purposes. Formative assessments can take the form of diagnostic, standardized
tests.
Educational researcher Robert Stake explains the difference between formative
and summative assessment with the following analogy:
When the cook tastes the soup, that's formative. When the guests taste the soup,
that's summative.4
Summative and formative assessment are often referred to in a learning context
as assessment of learning and assessment for learning respectively. Assessment of
learning is generally summative in nature and intended to measure learning
outcomes and report those outcomes to students, parents, and administrators.
Assessment of learning generally occurs at the conclusion of a class, course,
semester, or academic year. Assessment for learning is generally formative in
nature and is used by teachers to consider approaches to teaching and next steps
for individual learners and the class.5
A common form of formative assessment is diagnostic assessment. Diagnostic
assessment measures a student's current knowledge and skills for the purpose of
identifying a suitable program of learning. Self-assessment is a form of diagnostic
assessment which involves students assessing themselves. Forward-looking
assessment asks those being assessed to consider themselves in hypothetical
future situations.6
Performance-based assessment is similar to summative assessment, as it focuses
on achievement. It is often aligned with the standards-based education reform
and outcomes-based education movement. Though ideally performance-based
assessments are significantly different from traditional multiple-choice tests,
they are most commonly associated with standards-based assessments, which
use free-form responses to standard questions scored by human scorers on a
standards-based scale (meeting, falling below, or exceeding a performance
standard) rather than ranking on a curve. A well-defined task is identified and
students are asked to create, produce,
or do something, often in settings that involve real-world application of
knowledge and skills. Proficiency is demonstrated by providing an extended
response. Performance formats are further differentiated into products and
performances. The performance may result in a product, such as a painting,
portfolio, paper, or exhibition, or it may consist of a performance, such as a
speech, athletic skill, musical recital, or reading.
Objective and subjective
Assessment (either summative or formative) is often categorized as either
objective or subjective. Objective assessment is a form of questioning which has a
single correct answer. Subjective assessment is a form of questioning which may
have more than one correct answer (or more than one way of expressing the
correct answer). There are various types of objective and subjective questions.
Objective question types include true/false answers, multiple choice,
multiple-response and matching questions. Subjective questions include
extended-response questions and essays. Objective assessment is well suited to
the increasingly popular computerized or online assessment format.
Some have argued that the distinction between objective and subjective
assessments is neither useful nor accurate because, in reality, there is no such
thing as "objective" assessment. In fact, all assessments are created with inherent
biases built into decisions about relevant subject matter and content, as well as
cultural (class, ethnic, and gender) biases.7
Basis of comparison
Test results can be compared against an established criterion, or against the
performance of other students, or against previous performance:
Criterion-referenced assessment, typically using a criterion-referenced test, as the
name implies, occurs when candidates are measured against defined (and
objective) criteria. Criterion-referenced assessment is often, but not always, used
to establish a person's competence (whether s/he can do something). The best
known example of criterion-referenced assessment is the driving test, when
learner drivers are measured against a range of explicit criteria (such as "Not
endangering other road users").
Norm-referenced assessment (colloquially known as "grading on the curve"),
typically using a norm-referenced test, is not measured against defined criteria.
This type of assessment is relative to the student body undertaking the
assessment. It is effectively a way of comparing students. The IQ test is the best
known example of norm-referenced assessment. Many entrance tests (to
prestigious schools or universities) are norm-referenced, permitting a fixed
proportion of students to pass ("passing" in this context means being accepted
into the school or university rather than an explicit level of ability). This means
that standards may vary from year to year, depending on the quality of the
cohort; criterion-referenced assessment does not vary from year to year (unless
the criteria change).8
Ipsative assessment is self-comparison: either against one's own earlier
performance in the same domain over time, or across different domains within
the same student.
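The contrast between criterion- and norm-referencing can be illustrated with a short sketch; the scores, cut score, and pass proportion are hypothetical:

```python
# Criterion-referencing applies a fixed cut score; norm-referencing passes a
# fixed proportion of the cohort, so the effective cut-off moves with the cohort.
def criterion_referenced(scores, cut_score):
    return [s >= cut_score for s in scores]

def norm_referenced(scores, pass_proportion):
    n_pass = max(1, int(len(scores) * pass_proportion))
    cutoff = sorted(scores, reverse=True)[n_pass - 1]  # score of the last candidate admitted
    return [s >= cutoff for s in scores]

scores = [55, 62, 70, 81, 90]
print(criterion_referenced(scores, cut_score=65))    # [False, False, True, True, True]
print(norm_referenced(scores, pass_proportion=0.4))  # only the top two pass
```

With a stronger cohort the criterion-referenced cut would pass more candidates, while the norm-referenced scheme would still pass exactly the top 40%, which is why norm-referenced standards vary from year to year.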
Informal and formal
Assessment can be either formal or informal. Formal assessment usually implies a
written document, such as a test, quiz, or paper. A formal assessment is given a
numerical score or grade based on student performance, whereas an informal
assessment does not contribute to a student's final grade. An informal
assessment usually occurs in a more
casual manner and may include observation, inventories, checklists, rating scales,
rubrics, performance and portfolio assessments, participation, peer and self
evaluation, and discussion.9
Internal and external
Internal assessment is set and marked by the school (i.e. teachers). Students get
the mark and feedback regarding the assessment. External assessment is set by
the governing body, and is marked by non-biased personnel. Some external
assessments give much more limited feedback in their marking. However, in tests
such as Australia's NAPLAN, detailed feedback is given on the criteria students
were assessed against, so that teachers can address and compare students'
learning achievements and plan for the future.
3.Controversy of Educational assessment
Concerns over how best to apply assessment practices across public school
systems have largely focused on questions about the use of high stakes testing
and standardized tests, often used to gauge student progress, teacher quality,
and school-, district-, or state-wide educational success.
No Child Left Behind
For most researchers and practitioners, the question is not whether tests should
be administered at all—there is a general consensus that, when administered in
useful ways, tests can offer useful information about student progress and
curriculum implementation, as well as offering formative uses for learners.19 The
real issue, then, is whether testing practices as currently implemented can
provide these services for educators and students.
In the U.S., the No Child Left Behind Act mandates standardized testing
nationwide. These tests align with state curriculum and link teacher, student,
district, and state accountability to the results of these tests. Proponents of NCLB
argue that it offers a tangible method of gauging educational success, holding
teachers and schools accountable for failing scores, and closing the achievement
gap across class and ethnicity.20
Opponents of standardized testing dispute these claims, arguing that holding
educators accountable for test results leads to the practice of "teaching to the
test." Additionally, many argue that the focus on standardized testing encourages
teachers to equip students with a narrow set of skills that enhance test
performance without actually fostering a deeper understanding of subject matter
or key principles within a knowledge domain.21
High-stakes testing
The assessments which have caused the most controversy in the U.S. are the use
of high school graduation examinations, which are used to deny diplomas to
students who have attended high school for four years, but cannot demonstrate
that they have learned the required material. Opponents say that no student who
has put in four years of seat time should be denied a high school diploma merely
for repeatedly failing a test, or even for not knowing the required
material.[22][23][24]
High-stakes tests have been blamed for causing sickness and test anxiety in
students and teachers, and for teachers choosing to narrow the curriculum
towards what the teacher believes will be tested. In an exercise designed to make
children comfortable about testing, a Spokane, Washington newspaper published
a picture of a monster that feeds on fear.25 The published image is purportedly
the response of a student who was asked to draw a picture of what she thought
of the state assessment.
Other critics, such as Washington State University's Don Orlich, question the use
of test items far beyond standard cognitive levels for students' age.26
Compared to portfolio assessments, simple multiple-choice tests are much less
expensive, less prone to disagreement between scorers, and can be scored
quickly enough to be returned before the end of the school year. Standardized
tests (all students take the same test under the same conditions) often use
multiple-choice tests for these reasons. Orlich criticizes the use of expensive,
holistically graded tests, rather than inexpensive multiple-choice "bubble tests",
to measure the quality of both the system and individuals for very large numbers
of students.26 Other prominent critics of high-stakes testing include Fairtest and
Alfie Kohn.
The use of IQ tests has been banned in some states for educational decisions, and
norm-referenced tests, which rank students from "best" to "worst", have been
criticized for bias against minorities. Most education officials support
criterion-referenced tests (each individual student's score depends solely on whether he
answered the questions correctly, regardless of whether his neighbors did better
or worse) for making high-stakes decisions.
21st century assessment
It has been widely noted that with the emergence of social media and Web 2.0
technologies and mindsets, learning is increasingly collaborative and knowledge
increasingly distributed across many members of a learning community.
Traditional assessment practices, however, focus in large part on the individual
and fail to account for knowledge-building and learning in context. As researchers
in the field of assessment consider the cultural shifts that arise from the
emergence of a more participatory culture, they will need to find new methods of
applying assessments to learners.27
Assessment in a democratic school
Schools following the Sudbury model of democratic education do not perform
and do not offer assessments, evaluations, transcripts, or recommendations.
They assert that they do not rate people, and that school is not a judge;
comparing students to each other, or to some standard that has been set, is for
them a violation of the student's right to privacy and to self-determination.
Students decide for themselves how to measure their progress as self-starting
learners, in a process of self-evaluation: real lifelong learning and, they adduce,
the proper educational assessment for the 21st century.28
According to Sudbury schools, this policy does not harm their students as they
move on to life outside the school. They admit that it makes the process more
difficult, but hold that such hardship is part of the students' learning to make
their own way, set their own standards and meet their own goals.
The no-grading and no-rating policy helps to create an atmosphere free of
competition among students or battles for adult approval, and encourages a
positive cooperative environment amongst the student body.29
The final stage of a Sudbury education, should the student choose to take it, is the
graduation thesis. Each student writes on the topic of how they have prepared
themselves for adulthood and entering the community at large. This thesis is
submitted to the Assembly, which reviews it. The final stage of the thesis process is
an oral defense given by the student, in which they open the floor for questions,
challenges and comments from all Assembly members. At the end, the Assembly
votes by secret ballot on whether or not to award a diploma.30
Assessing ELL students
A major concern with the use of educational assessments is the overall validity,
accuracy, and fairness when it comes to assessing English language learners (ELL).
The majority of assessments within the United States have normative standards
based on the English-speaking culture, which does not adequately represent ELL
populations.31 Consequently, it would in many cases be inaccurate and
inappropriate to draw conclusions from ELL students’ normative scores. Research
shows that the majority of schools do not appropriately modify assessments in
order to accommodate students from unique cultural backgrounds.31 This has
resulted in the over-referral of ELL students to special education, causing them to
be disproportionately represented in special education programs. Although some
may see this inappropriate placement in special education as supportive and
helpful, research has shown that inappropriately placed students actually
regressed in progress.31
It is often necessary to utilize the services of a translator in order to administer
the assessment in an ELL student’s native language; however, there are several
issues when translating assessment items. One issue is that translations can
frequently suggest a correct or expected response, changing the difficulty of the
assessment item.32 Additionally, the translation of assessment items can
sometimes distort the original meaning of the item.32 Finally, many translators are
not qualified or properly trained to work with ELL students in an assessment
situation.31 All of these factors compromise the validity and fairness of
assessments, making the results unreliable. Nonverbal assessments have been shown
to be less discriminatory for ELL students; however, some still present cultural
biases within the assessment items.32
When considering an ELL student for special education, the assessment team
should integrate and interpret all of the information collected in order to ensure an
unbiased conclusion.32 The decision should be based on multidimensional
sources of data, including teacher and parent interviews as well as classroom
observations.32 Decisions should take the student's unique cultural, linguistic, and
experiential background into consideration, and should not be based strictly on
assessment results.
4.Health promotion
Health promotion has been defined by the World Health Organization's 2005
Bangkok Charter for Health Promotion in a Globalized World as "the process of
enabling people to increase control over their health and its determinants, and
thereby improve their health".1 The primary means of health promotion occur
through developing healthy public policy that addresses the prerequisites of
health such as income, housing, food security, employment, and quality working
conditions. There is a tendency among public health officials and governments—
and this is especially the case in liberal nations such as Canada and the USA—to
reduce health promotion to health education and social marketing focused on
changing behavioral risk factors.2
Recent work in the UK (a Delphi consultation exercise due to be published in late 2009
by the Royal Society of Public Health and the National Social Marketing Centre) on the
relationship between health promotion and social marketing has highlighted and
reinforced the potentially integrative nature of the two approaches. While an
independent review (NCC, 'It's Our Health!', 2006) identified that some social
marketing has in the past adopted a narrow or limited approach, the UK has
increasingly taken a lead in the discussion and developed a much more integrative
and strategic approach (see Strategic Social Marketing in 'Social Marketing and
Public Health', 2009, Oxford Press), which adopts a whole-system, holistic
view, integrating the learning from effective health promotion approaches
with relevant learning from social marketing and other disciplines. A key finding
from the Delphi consultation was the need to avoid unnecessary and arbitrary
'methods wars' and instead focus on the issue of 'utility', harnessing the
potential of learning from multiple disciplines and sources. Such an approach is
arguably how health promotion has developed over the years, pulling in learning
from different sectors and disciplines to enhance and develop itself.
The "first and best known" definition of health promotion, promulgated by the
American Journal of Health Promotion since at least 1986, is "the science and art
of helping people change their lifestyle to move toward a state of optimal
health".34 This definition was derived from the 1974 Lalonde report from the
Government of Canada,3 which contained a health promotion strategy "aimed at
informing, influencing and assisting both individuals and organizations so that
they will accept more responsibility and be more active in matters affecting
mental and physical health".5 Another predecessor of the definition was the 1979
Healthy People report of the Surgeon General of the United States,3 which noted
that health promotion "seeks the development of community and individual
measures which can help... people to develop lifestyles that can maintain and
enhance the state of well-being".6
At least two publications led to a "broad empowerment/environmental"
definition of health promotion in the mid-1980s3:

In 1984 the World Health Organization (WHO) Regional Office for Europe
defined health promotion as "the process of enabling people to increase
control over, and to improve, their health".7 In addition to methods to
change lifestyles, the WHO Regional Office advocated "legislation, fiscal
measures, organisational change, community development and
spontaneous local activities against health hazards" as health promotion
methods.7

In 1986, Jake Epp, Canadian Minister of National Health and Welfare,
released Achieving health for all: a framework for health promotion, which
also came to be known as the "Epp report".38 This report defined the three
"mechanisms" of health promotion as "self-care"; "mutual aid, or the
actions people take to help each other cope"; and "healthy environments".8
The WHO, in collaboration with other organizations, has subsequently co-sponsored international conferences on health promotion as follows:

1st International Conference on Health Promotion, Ottawa, 1986, which
resulted in the "Ottawa Charter for Health Promotion".9 According to the
Ottawa Charter, health promotion9:
o "is not just the responsibility of the health sector, but goes beyond
healthy life-styles to well-being"
o "aims at making... political, economic, social, cultural, environmental,
behavioural and biological factors favourable through advocacy for
health"
o "focuses on achieving equity in health"
o "demands coordinated action by all concerned: by governments, by
health and other social and economic sectors, by nongovernmental and
voluntary organisations, by local authorities, by industry and by the media"
Worksite health promotion
Worksite health promotion focuses on prevention and interventions that reduce
employees' health risks. The U.S. Public Health Service issued a report titled
"Physical Activity and Health: A Report of the Surgeon General", which provides a
comprehensive review of the available scientific evidence about the relationship
between physical activity and an individual's health status. The report shows that
over 60% of Americans are not regularly active and that 25% are not active at all.
There is very strong evidence linking physical activity to numerous health
improvements. Health promotion can be performed in various locations. Among
the settings that have received special attention are the community, health care
facilities, schools, and worksites.10 Worksite health promotion, also known by
terms such as "workplace health promotion," has been defined as "the combined
efforts of employers, employees and society to improve the health and well-being
of people at work".1112 WHO states that the workplace "has been established as
one of the priority settings for health promotion into the 21st century" because it
influences "physical, mental, economic and social well-being" and "offers an ideal
setting and infrastructure to support the promotion of health of a large
audience".13
Worksite health promotion programs (also called "workplace health promotion
programs," "worksite wellness programs," or "workplace wellness programs")
include exercise, nutrition, smoking cessation and stress management. Reviews
and meta-analyses published between 2005 and 2008 that examined the scientific
literature on worksite health promotion programs include the following:
A review of 13 studies published through January 2004 showed "strong
evidence... for an effect on dietary intake, inconclusive evidence for an
effect on physical activity, and no evidence for an effect on health risk
indicators".14
In the most recent of a series of updates to a review of "comprehensive
health promotion and disease management programs at the worksite,"
Pelletier (2005) noted "positive clinical and cost outcomes" but also found
declines in the number of relevant studies and their quality.15
A "meta-evaluation" of 56 studies published 1982-2005 found that worksite
health promotion produced on average a decrease of 26.8% in sick leave
absenteeism, a decrease of 26.1% in health costs, a decrease of 32% in
workers’ compensation costs and disability management claims costs, and
a cost-benefit ratio of 5.81.16
A meta-analysis of 46 studies published 1970-2005 found moderate,
statistically significant effects of work health promotion, especially exercise,
on "work ability" and "overall well-being"; furthermore, "sickness absences
seem to be reduced by activities promoting healthy lifestyle".17
A meta-analysis of 22 studies published 1997-2007 determined that
workplace health promotion interventions led to "small" reductions in
depression and anxiety.18
A review of 119 studies suggested that successful worksite health-promotion
programs have attributes such as: assessing employees' health
needs and tailoring programs to meet those needs; attaining high
participation rates; promoting self-care; targeting several health issues
simultaneously; and offering different types of activities (e.g., group
sessions as well as print materials).
Health promotion in Sri Lanka has been very successful in recent
decades, as shown by the country's health indicators. Despite the numerous
successes over the years, the integrity of the health system has been
subjected to many challenges. Sri Lanka is already facing emerging
challenges due to demographic, epidemiological, technological and
socio-economic transitions. The disease burden has started to shift rapidly
towards lifestyle- and environment-related non-communicable diseases.
These diseases are chronic and costly, and will place an increasing and perhaps
unaffordable burden on the country's health care expenditure under its
free-of-charge health services policy. Previous health development successes
increased the life expectancy of Sri Lankans to 72 years for men and 76 for
women, but the estimated "healthy life expectancy" at birth for the whole
Sri Lankan population is only 61.6 years.
Health is affected by biological, psychological, chemical, physical, social,
cultural and economic factors in people's normal living environments and
lifestyles. In the current rapidly changing demographic, social and
economic context, and with the changing epidemiological pattern of diseases,
health promotion interventions that were found to be effective in the past may
no longer be effective enough, now or in the future, to address all the important
determinants of health. Promoting people's health must be the
joint responsibility of all social actors. These challenges require
significant changes in the national health system towards new, effective
health promotion, which has been accepted worldwide as the most cost-effective
measure to reduce both the disease burden on the people and the
nation's rising cost of treating disease.
The development of this National Health Promotion Policy is based on: (a)
evidence from the Sri Lanka health promotion situation analysis; (b)
internationally accepted concepts, the WHO guiding principles for health
promotion, the World Health Assembly resolutions and the WHO South
East Asia Regional Committee Resolution; and (c) the State Policy and
Strategy for Health and the Health Master Plan 2007-2016.
The key strategies for health promotion are: advocating for and mediating
between different interests in society in the pursuit of health; empowering
and enabling individuals and communities to take control over their own
health and all determinants of health; improving health promotion
management, interventions, programs, plans and
implementation; and building partnerships, networks and alliances to
integrate health promotion activities across sectors.

In Sri Lanka, non-health government sectors and NGOs are currently
active in implementing community development projects built on a
community empowerment concept that resembles the healthy settings
approach to health promotion. These projects are high-potential entry
points and a good opportunity for the formal commencement of the new,
effective settings-approach health promotion and holistic life-course
health promotion. They are also an opportunity for partnership and alliance
building for concerted action to promote the health of the nation. This policy is
formulated to promote the health and well-being of the people by enabling all
people to be responsible for their own health, and to address the broad
determinants of health through the concerted actions of the health and all
other sectors, so as to make Sri Lanka a Health Promoting Nation in which all
citizens continuously and actively participate in health promotion activities for a
healthy life expectancy.