Oxford Medical School Gazette 60(1)
Below is the full text of OMSG 60(1), with references.
For the full pdf file please visit: http://www.omsg-online.com/?p=105
For other archive editions, further information about the Gazette, to download
subscription form or to contact us, please visit: http://www.omsg-online.com/
Editorial
The theme for this issue of the Oxford Medical School Gazette is ‘Creation vs.
Destruction’. It was conceived during a discussion about ‘self-inflicted’ diseases and the
many ways in which we create health problems for ourselves. You can read about some
of them here. On page 18, Helen Mather tackles the philosophy and ethics of treating
such self-inflicted diseases on the NHS. On page 20, Jack Pottle and Felicity Rowley
discuss extremes of weight and explore ways in which we can return patients to ‘the
happy medium’. Closer to home, on page 52, Charlotte Skinner examines the high rates
of alcoholism and suicide.
However, concerned about publishing a Gazette filled with doom and gloom, we
turned to the brighter side of medicine, looking at new developments and ways in which
the field is moving forwards. Before we knew it, the theme ‘Creation vs. Destruction’
was born. Several of the ‘Creation’ articles relate to surgery. On page 23, Annabel
Christian looks at ways in which surgery is changing to accommodate people with
religious beliefs that rule them out of conventional surgery. On page 50, Lucy Farrimond
explores the latest developments in robotic surgery, and how this will change roles within
the surgical team. With a different take on creation, on page 36, Lisanne Stock takes a
look at the pros and cons of IVF.
Moving away from the theme, there are a variety of other articles to peruse and
ponder. The focus goes overseas as Omar Abdel-Mannan and Imran Mahmud examine
the health and social ramifications of the Israeli bombardment of the Gaza Strip on page
12. On page 6, Ben Stewart looks at the differing healthcare systems across the world at a
time when reform is ever-present. On a more amorous note, Matt Harris and Tanya Debb
get it together in their articles on love on pages 40 and 44, whilst on page 46, Lance Rane
gives advice on what to do if you fall out of love with medicine.
Behind the scenes, there have been various developments. The OMSG team has
moved out of the old boiler room next to Lecture Theatre 1 and into a new office in Osler
House. Here there is the space and technology for both editors and designers to work
side-by-side to concoct the latest issue. The extensive collection of OMSG back issues
has also been streamlined and catalogued for reference. Finally, we have a brand
spanking new website, where we can publish previous editions, online exclusives and
advertisements. Go to www.omsg-online.com to see for yourself.
Looking towards future editions, we are hoping to include a peer review section,
where students can publish findings from their research projects. We would like to steer
the Gazette towards the realms of medical journalism and encourage more opinion pieces
from writers. We would also like to encourage more articles from Oxford Medical
Alumni, so if you have something to say, please feel free to contact us. Above all, we
hope you enjoy the latest edition.
The OMSG editors,
Jack Pottle
Matt Harris
Namrata Turaga
June 2010
Helping Haiti
Professor Chris Bulstrode reports on his experiences
The news of an earthquake in Haiti didn’t flash up on my radar for a couple of days, until
it became clear that this was really, really big. The epicentre was close to Port-au-Prince,
a city of over three million people. That is the size of Birmingham. The number of people
killed when the atom bomb was dropped on Hiroshima was in the region of 40,000–50,000.
This tragedy was four or five times bigger. So when Doctors of the World
(Médecins du Monde) asked if I could help, my response was to start packing. You don’t
really need very much for a field mission, but you always need a really good head torch
with plenty of spare batteries, both for doing surgery and for reading in the evenings if
there is no light. I never go now without a laptop either, because you can always find
electrical power some time in the day, and if you can just get an internet connection then
you have Skype and the world is yours. You should be able to do three weeks in the field
on hand-luggage only, but I never manage it.
MdM is one of the top NGOs so we were really well looked after. There were three-man
tents for each of us and regular meals. Our anaesthetist was extraordinary. He arranged
the two operating tables (examination couches) an arm’s distance apart, so that he could
hold a hand over both patients’ faces all the time. That way he could check that they were
both breathing. He poached a hand-held ultrasound machine from the Americans, and had
his own nerve stimulator. Using these, he could block any nerve in the body in seconds.
Since we had no anaesthetic machines, it was either that, or his rich mix of intravenous
morphine, ketamine and midazolam. Surgically most of our work was just trying to bring
infection under control in all the amputation stumps, and open fasciotomies. Then we
were trying to close the wounds with skin grafts. Sometimes the maggots had got there
first and were cleaning the wounds beautifully for us. Controlling bleeding was really
difficult because we had no diathermy but apart from that it was standard surgery, just
plenty of it! As fast as we cleared cases, stretcher bearers would come in with more
patients from outlying areas.
But now the next stage must begin. There are estimated to be over 8,000 survivors
with lower limb amputations. They need prosthetics in a country with no artificial limb
service. I fear that Haiti will now sink back into poverty and oblivion. It needs more than
a pop record to lift it out of the downward spiral of drugs, violence, corruption and
political instability, so well explained in Paul Collier’s book ‘The Bottom Billion’. Two
Haitian orthopaedic trainees, who helped us hugely, want to come to Oxford for a few
months to do some extra training. Anyone got any ideas how we could organise and fund
that?
Chris Bulstrode is Professor of Trauma and Orthopaedics and Fellow at Green
Templeton College
Golden rice: high hopes unfulfilled?
“This rice could save a million kids a year” [1]. Ten years ago, Time magazine had high
hopes for Golden Rice. Created by Ingo Potrykus, of the Institute of Plant Sciences at the
Swiss Federal Institute of Technology, golden rice is a form of Oryza sativa genetically
modified (GM) to contain β-carotene, the precursor to Vitamin A, in its edible parts; this
pigment gives the rice its characteristic golden colour.
Vitamin A, the precursor of the light-absorbing molecule retinal, is essential for retina
formation. Its deficiency leads to night blindness, xerophthalmia, and ultimately total
blindness. Since rice is a major staple food source in at least 26 countries, including
highly populated areas of Asia, Africa and Central America, it was hoped that golden rice
would be used as a humanitarian tool to prevent Vitamin A-related blindness in the
developing world.
However, golden rice has encountered several obstacles and is yet to save the sight of
millions. As with all forms of GM crops, there has been significant opposition to golden
rice, since it is argued that it will reduce crop biodiversity and lead to a surge in weedy
rice populations, as has been exhibited by other strains of GM rice [2], [3]. Weedy rice is
a strain in which the seeds simply drop from the crops and cannot be harvested. It has
been shown that the plants that grow from crossing GM rice and weedy rice varieties
have higher reproductive rates, which exacerbates the problem.
Another problem with golden rice is its β-carotene content, which was suggested to be
too low to make a substantial difference in alleviating Vitamin A deficiency [4]. When
rice grains were tested for their β-carotene content it was found to be less than 1% of the
expected content [5], with the content dropping further on cooking. The limiting step was
hypothesised to be the daffodil gene encoding phytoene synthase (psy); upon its
replacement with a maize psy gene, the β-carotene content increased 23-fold.
However there is still a lack of evidence to suggest that these improvements are enough
to enable golden rice to alleviate Vitamin A deficiency and achieve its humanitarian goal.
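As a rough, back-of-the-envelope illustration that is not part of the original article (it simply takes the quoted figures at face value and normalises the 'expected' β-carotene level to 1), even the 23-fold improvement from the maize psy gene would leave the content well below the level originally anticipated:

    # Back-of-the-envelope check using the figures quoted above; illustrative only.
    expected = 1.0               # expected beta-carotene content (normalised to 1)
    measured = 0.01 * expected   # "less than 1% of the expected content" [5]
    improved = 23 * measured     # 23-fold increase after swapping in the maize psy gene
    print(f"Improved content is still at most {improved:.0%} of the expected level")
    # prints: Improved content is still at most 23% of the expected level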
Although golden rice has been criticised for failing to live up to expectations, it cannot be
criticised from an altruistic perspective. There is no fee associated with the humanitarian
use of golden rice. Farmers are not required to pay royalties if they make less than
$10,000 a year, and are permitted to keep and replant seeds. Other projects aimed at
preventing ‘hidden hunger’, a term used to describe micronutrient malnutrition, have
been fully taken on board. For example, iodine fortification of salt prevents iodine-related
mental retardation [6], and its distribution throughout China has been compulsory since
1995. It is therefore hoped that in the future, golden rice will be seen as a similarly
revolutionary scheme.
Isabel Instone is a second year biochemistry student at Oriel College
References
[1] TIME Magazine, New Jersey, July 2000.
[2] Chen LJ. Gene Flow from Cultivated Rice (Oryza sativa) to its Weedy and Wild
Relatives. Annals of Botany 2004; 93: 67-73.
[3] Warwick SI. Gene Flow, Invasiveness, and Ecological Impact of Genetically Modified
Crops. Ann N Y Acad Sci 2009; 1168: 72-99.
[4] Improving the nutritional value of Golden Rice through increased pro-vitamin A
content. Nature Biotechnology 2005; 23(4).
[5] Then C. The campaign for genetically modified rice is at the crossroads. 2009. Available from:
http://www.foodwatch.de/foodwatch/content/e6380/e23456/e23458/GoldenRice_english_final_ger.pdf
[Accessed February 2010]
[6] Andersson M. Epidemiology of iodine deficiency: salt iodisation and iodine status.
Best Pract Res Clin Endocrinol Metab 2010; 24(1): 1-11.
Healthcare models across the world
Introduction
When we think of healthcare models, immediately our own unique system, the NHS,
springs to mind, or perhaps we consider the model in the United States and the
controversial changes it is currently undergoing. However, these are just two systems
amongst a wide variety used across the developed world. With criticisms levelled at the
NHS at home and abroad, what can we learn from the methods employed elsewhere?
Australia
Australia operates a mixed system incorporating both public and private services with a
strong relationship between the two. This is a decentralized system in which the six states
and two territories are each responsible for the public hospitals and primary healthcare.
Since 1984, Australia has operated a universal healthcare system called Medicare, which
pays for the vast majority of treatments. Out-of-pocket payments are made for a small
number of treatments, but these are typically offset by tax rebates. Medicare funds are
channelled down from the Australian Department of Health and Ageing and the
Department of Veterans’ Affairs [1].
The private system comprises private hospitals and private medical practitioners. Its
major financial inputs are from the private insurance infrastructure augmented by out-of-pocket payments. In addition, to relieve pressure on the public system, the private arm of
medical care receives commissions for work from the same government departments that
fund public care and citizens are offered incentives to take out private insurance [1-3].
Canada
Canada employs a publically funded ‘single payer’ healthcare system with care free at the
point of use. As in Australia, this system (also called Medicare) was conceived in 1984
with the Canada Health Act, which set out the guiding principles of universality,
accessibility, and public administration. Canada operates a strongly decentralized system
with the ten provincial governments charged with administration and distribution of
funds (gathered by compulsory enrolment for citizens in Medicare) for hospital care [5].
The Canadian system is unique in that the 1984 Canada Health Act forbids any citizen
from buying a service from the private sector that is already provided by Medicare. This
prevents a parallel private system from emerging. Nevertheless, a private sector has
developed, providing services that Medicare is unable to, for example laser eye treatment
and IVF. Many in Canada are worried about the development of “privatization by
stealth”, in which private for-profit clinics specialize in high volume, low risk, expensive
procedures such as MRI scans and hernia operations [5, 6]. However, critics of Canadian
Medicare attack a chronically stretched system, long waiting times, and discrepancies in
provision between different provinces [7].
Sweden
Sweden is perplexing to many, as it regularly appears very high on the World Health
Organization rankings of healthcare whilst operating a decentralized public model [8].
The model in Sweden is an aspect of a larger social insurance system based on three
fundamental principles: equal access, care based on need, and cost effectiveness [9].
Like Canada, Sweden employs a ‘single payer’ system in which the government is the
sole contributor to healthcare via funds generated from taxation. Provision of the
decentralized resources comes from the eighteen county councils, and some of the larger
municipal governments. The infrastructure for the delivery of healthcare is provided
overwhelmingly by public hospitals; however there are a small number of private
providers commissioned to undertake work by the councils. Sweden has a number of
government departments which oversee the process of rationing healthcare, the two most
important being the Swedish Council on Technology Assessment in Healthcare, which
analyses which interventions provide the most benefit to patients, and the Dental and
Pharmaceutical Benefits Agency, which determines whether pharmaceuticals or certain
dental treatments should be subsidized [9-11].
United States of America
Healthcare in the USA is provided by private hospitals, and the costs are met through a
complex system of health insurance providers. It is a multi-payer system in which many
providers compete for the investment of consumers. Insurance provided by employers
covers half of the population, government programs cover a third, and individual
insurance policies cover 5% of Americans. Around one sixth of the population is
uninsured [12]. Individuals over 65 years receive insurance through the government-run
insurance program Medicare. Health Maintenance Organizations (HMOs) manage care
through a network of hospitals and doctors which treat patients in accordance with HMO
guidelines. Whilst these are cheaper than conventional health insurance, there are greater
limitations on the hospitals and physicians that can be used and the treatments available.
The US healthcare system has a very large spend (it is the largest sector of the US
economy, four times the size of national defence), but it fails to provide the high
standards of care seen in other countries [12, 13]. There are several important problems
with this system, including the large administrative costs and high proportion of the
population that remain uninsured. In addition, the lack of a solid base of primary
healthcare and the way in which the system promotes defensive medicine due to heavy
malpractice litigation costs, are thought to decrease the quality of healthcare [13, 14].
On 23 March 2010, Barack Obama signed into law the historic Patient Protection and
Affordable Care Act to extend coverage to the 32 million Americans currently uninsured.
This act, when implemented, will require all Americans to have health insurance
coverage, and will provide subsidies to make it more affordable. In addition it will stop
discrimination by insurance providers on the basis of pre-existing conditions, and will
prevent rescission (the practice of halting coverage when care becomes too expensive).
Although some changes will take effect immediately, for example prohibition of insurers
denying coverage to children with pre-existing conditions, regrettably most of the
changes will not be seen until 2014 [15, 16].
Japan
The Japanese system is remarkably egalitarian, and remarkably successful. It is based on
a system of universal health insurance, and provides strong outcomes for a modest per
capita expenditure. Japan operates a social insurance scheme in which all citizens are
enrolled with the payment of premiums in proportion to income. The population is
divided into three groups: members of large companies and those in public service have
insurance provided in association with these organizations, members of small companies
have insurance provided by the Ministry of Health and Welfare, and the self employed
and pensioners receive insurance from the municipal governments. Despite insurers being
disparate, all fees flow through the National Fee Schedule, which lists all the healthcare
expenses and sets prices (reviewed once every two years), so that all insurers pay the
same for each service. Most physicians operate in small hospitals as independent
practitioners. However, there are numerous larger hospitals operated by the local
governments and universities. Hospitals established for profit by investors are prohibited
[17].
An intriguing feature of the Japanese system is the retrospective review of physicians’
claims: following treatment, a committee of physicians reviews the claim to approve it.
This is thought to significantly reduce administration costs and provides physicians
with a good deal of autonomy. However, some argue that this has led to unsustainable
overuse of tests, drugs, and facilities [18]. Further criticisms levelled at the Japanese
system include the way in which the Fee Schedule curtails the implementation of high
technology services in favour of low cost, short-term treatments. In 2002, Japan cut
spending on healthcare for the first time due to economic decline which, combined with
the burden of a top-heavy population, has led to the desire for reform [19].
Lessons Learned
Whilst all the countries surveyed here subscribe to a philosophy of universal healthcare,
there are varying degrees of state and private involvement, for example, Japan and
Sweden profit from an almost entirely public system, whilst Canada and Australia
incorporate growing private elements, much like the UK. The move of the USA towards
universal coverage through healthcare reform demonstrates that among developed
countries, it is now unacceptable to fail to provide universal coverage. The UK does not
prohibit a parallel private system; indeed private practice is growing in the UK, and this
allows individuals to bypass the limited financial resources available in the NHS,
particularly for complex treatments, non-essential treatments, or those involving very
new technologies [20]. It has been argued that the NHS struggles to provide elective
surgeries alongside acute and emergency care, and that the private sector can support the
NHS by taking on the elective workload [21]. In this sense a parallel private system is a
valuable addition to the repertoire of healthcare provision. Canada is increasingly
struggling with the prohibition of private healthcare and it is likely that in the future,
there will be greater private sector involvement based on a need to provide treatments
that are difficult to justify in a public system [5]. However, evidence from the USA
suggests that for-profit, private healthcare results in inferior care and inflated prices,
largely due to diversion of money to profits, greater bureaucracy and malpractice
litigation costs [12, 14, 22, 23]. Can public-private competition drive up standards? If this
were so, it would be a useful method of increasing efficiency in the NHS. Unfortunately,
evidence from the USA and Canada suggests that this is not the case; private firms grab
the high profit services, and the public sector (for example Medicare in the USA) is
forced to accept the unprofitable, and often more seriously ill patients [5, 6, 22].
Privatization must be implemented cautiously in the UK; it cannot be a replacement for a
public system, but it must be complementary, providing an equivalent standard of care,
for a fair price. Canada is prudent in prohibiting private healthcare as it preserves a
patient-centric universal system; however, if private healthcare is used as an effective
adjunct to a largely public system, as is more the case in Australia, an effective
compromise may be reached.
A crucial question for the NHS is whether to reform towards a system of social insurance
rather than the cumbersome nationalized system that is currently employed. At present, the
NHS relies on the distribution of funds from taxes to primary care trusts (PCTs) via the
Department of Health. The PCT’s share of total resources is calculated in advance, and
with this money, the PCT commissions healthcare services from the NHS resources in the
region. This is similar to the tax funded Swedish model which, whilst clearly an effective
model, is not directly comparable, mainly because it covers a much smaller population
of around 9 million compared to over 60 million in the UK [4]. Indeed getting the NHS to
break even has been described as equivalent to “landing a jumbo jet on a postage stamp
wearing a blindfold” [24]. A social insurance scheme such as Medicare employed in
Canada, Australia, or Japan provides an attractive alternative. Instead of paying for
healthcare via taxes and the money being distributed based on projected use, a universal
social insurance scheme would purchase care for individuals who make a compulsory
income-related contribution to the insurance scheme. The major benefit is that as the
consumer chooses which services to take up, the system gravitates towards providing
services which patients want or need. Furthermore, as evidenced by the Australian
system, the insurance scheme can turn to private healthcare, commissioning additional
treatments, whilst still retaining not-for-profit, public hospitals. Could this work in the
UK? It is not too great a leap of imagination to conceive of the NHS acting as insurer
rather than treasurer, particularly as a greater concentration on the consumer is in line
with current movements in the NHS, such as the “choose and book” scheme [25].
The greatest lessons we can learn from health models elsewhere are to be cautiously
accepting of private healthcare and to embrace a social insurance scheme for the
provision of universal healthcare. In doing so we may be able to retain the traditional
strengths of the NHS whilst incorporating the best of the rest.
Benjamin Stewart is a second year medical student at St. Hilda’s College
Key statistics [4]

                                                   Canada  Australia  Sweden    USA  Japan     UK
Per capita expenditure on healthcare (US $)          3912       3316    3870   6714   2690   3361
Physician density per 10,000 population                19         25      33     26     21     23
Private expenditure on health as a
percentage of total expenditure                      29.6       32.8    18.8   54.2   17.8   12.6
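As an illustrative sketch that is not part of the original article, the two money rows above can be combined to estimate absolute private spending per head; the short Python snippet below simply multiplies the WHO per-capita figures by the private-share percentages quoted in the table.

    # Illustrative only: combines the two rows of the table above.
    # Values are per-capita health spend (US$) and private share (% of total).
    spend = {"Canada": 3912, "Australia": 3316, "Sweden": 3870,
             "USA": 6714, "Japan": 2690, "UK": 3361}
    private_share = {"Canada": 29.6, "Australia": 32.8, "Sweden": 18.8,
                     "USA": 54.2, "Japan": 17.8, "UK": 12.6}

    for country, total in spend.items():
        private_per_head = total * private_share[country] / 100
        print(f"{country}: roughly US${private_per_head:,.0f} of private spend per head")
    # e.g. USA: roughly US$3,639 per head versus UK: roughly US$423 per head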
References
[1] Goss J, et al. Health Expenditure Australia 2007 – 2008. Australian Institute of Health
and Welfare. Health and Welfare Expenditure Series, No. 37.
[2] Australian Government. Medicare Australia [Online]. Available from
http://www.medicareaustralia.gov.au/ [Accessed: 24th February 2010]
[3] The Australian Health Care System: An outline. Financing and Analysis Branch,
Commonwealth Department of Health and Aged Care. September 2000.
[4] WHO Statistical Information System WHOSIS [Online]. Available from
http://apps.who.int/whosis/data/Search.jsp?countries=%5bLocation%5d.Members
[Accessed: 24th February 2010]
[5] Steinbrook R. Private Health Care in Canada. The New England Journal of Medicine
2006 354; 16:1661-1664
[6] Lewis S, Donaldson C, Mitton C, Currie G. The future of health care in Canada. BMJ
2001 323:926–9
[7] Eisenberg MJ, An American Physician in the Canadian Health Care System. Arch
Intern Med 2006 166:281-282
[8] Triggle N. How the NHS could learn from Sweden [Online]. Available from
http://news.bbc.co.uk/1/hi/health/4460098.stm [Accessed: 24th February 2010]
[9] Swedish Institute. Healthcare in Sweden [Online]. Available from
http://www.sweden.se/eng/Home/Work/Swedish_model/Residence_based_benefits/Healt
h-care/Facts/Health-care-in-Sweden/ [Accessed: 24th February 2010]
[10] The Swedish Council on Technology Assessment in Health Care. SBU evaluates
Healthcare technologies [Online]. Available from http://www.sbu.se/en/ [Accessed: 24th
February 2010]
[11] Dental and Pharmaceutical Benefits Agency. Dental and Pharmaceutical Benefits
Agency in Sweden [Online]. Available from http://www.tlv.se/Upload/English/ENGTLV-presentation.pdf [Accessed: 24th February 2010]
[12] Nuwer MR, Esper PD, Donofrio JP, et al. Invited Article: The US health care
system: Part 1: Our Current System. Neurology 2008 71;1907-1913
[13] Sarpel U, Vladeck BC, Divino CM, Klotman PE. Fact and Fiction: Debunking
Myths in the US Healthcare System. Annals of surgery 2008 247;563-569
[14] Starfield B. Is US Health Really the Best in the World. JAMA 2000 284;483-485
[15] Iglehart JK. Historic Passage – Reform at Last. NEJM 2010, March 24 [Epub ahead
of Print]
[16] Oberlander J. A Vote for Health Care Reform. NEJM 2010 362:e44
[17] Campbell J, Ikegami N. Medical Care in Japan. The New England Journal of
Medicine 1995 333; 19:1295-1299
[18] Nakayama T, Nomura H. The Japanese Healthcare system: The issue is to solve the
“tragedy of the commons” without making another. BMJ 2005 331:648-649
[19] Campbell J, Ikegami N. Japan’s Heath Care system: Containing Costs and
Attempting Reform. Health Affairs 2004 23; 3:26-36
[20] Propper C. The Demand for Private Health Care in the UK. Journal of Health
Economics 2000 19:855-876
[21] Bull A, Doyle Y. Role of private sector in United Kingdom healthcare system. BMJ
2000 321:563-565
[22] Woolhandler S, Himmelstein D. Competition in a publicly funded healthcare system.
BMJ 2007 335:1126-1129
[23] Angell M. Privatizing health care is not the answer: lessons from the United States.
CMAJ 2008 179; 9:916-919
[24] £63m black hole for health trusts [Online]. Available from
http://news.bbc.co.uk/1/hi/england/4628130.stm [Accessed: 27th February 2010]
[25] Lipsey D, Neuberger J, Robinson R, Patel C, et al. Introducing Social Insurance to
the UK: The Social Market Foundation Health Commission – Report 2B [Online]. Available from
http://www.smf.co.uk/assets/files/publications/IntroducingSocialInsurancetotheUK.pdf
[Accessed: 27th February 2010]
Dedicated followers of fashion: surveillance of emerging disease
Every weekday morning, a group of men and women in Geneva sit down to discuss
information gathered from news wires, discussion groups, government bodies, and
academic institutions around the world. Their goal: to identify emerging infectious
diseases.
Each day this Global Alert and Response team (GAR), part of the World Health
Organisation (WHO), will assess around twenty incoming reports of outbreaks at various
stages, with around seventy percent of the initial reports coming from unofficial sources
[1]. These reports are scrutinised to verify their reliability before performing a risk
assessment of global impact. Depending on various factors, such as whether the disease is
known, morbidity and mortality, and potential for international spread, the WHO
disseminates the information and launches an outbreak response. The majority of these
responses deal with agents that are already known: over 30% concerned cholera, and
nearly 10% were diarrhoea of other causes [1]. However, some represent unknown or
novel agents. A recent example of this was the outbreak of Severe Acute Respiratory
Syndrome (SARS) in 2003.
In late 2002, there were reports in Guangdong Province, China of a strange contagious
disease causing an atypical pneumonia with a high mortality. These reports were picked
up at the end of November by GAR’s monitoring of electronic media, which only months
earlier had been upgraded to include Chinese newswires. Despite local panic and
unofficial news of over one hundred deaths, the Chinese Ministry of Health delayed their
official report until January, when they informed the WHO of a minor outbreak that was
coming under control [2]. In fact, it rapidly became apparent that China was underreporting SARS cases, and on 12 March 2003 the WHO issued its first global alert
announcing SARS [3]. This was the first of many updates which are summarized in
figure 1.
Following the revelations regarding its suppression of vital evidence, the image of China
took a further blow as the majority of Western reports painted it as a backward
unsanitised nation, whose hygiene and culinary habits surely explained the outbreak [4].
In spite of their unnecessary hyperbole, these claims were somewhat vindicated when
evidence emerged later that the SARS-coronavirus could be isolated from the palm civet
[5], an animal eaten as a delicacy in China, and from which the virus is presumed to have
jumped to humans. At the height of the epidemic, amid school closures, travel
restrictions, and threatened border closures, the frightened public took to wearing surgical
face masks in the belief that they would protect them [6], a fashion which became the iconic
image of the SARS epidemic. Despite the media and public frenzy, just a few months
into the outbreak, and with a death toll of only around nine hundred [7], the WHO
declared the epidemic under control.
The experience with SARS illustrates several points. Firstly, it emphasizes the
importance of accurate and open reporting of notifiable disease. Although the epidemic
was contained relatively quickly and at minimal cost to life, it would undoubtedly have
been identified earlier had China been more transparent in their communications with the
WHO. Secondly, it highlights the benefits of local reporting to decentralized surveillance
systems, such as the GAR network. International political and scientific collaboration is
critical to monitor, identify, and research such agents as the outbreak develops. In the
case of SARS, the timeframe for characterizing the virus was impressively short. From an
epidemiological perspective, it exemplifies the role of zoonosis in the emergence of
disease, and the ease with which modern travel allows disease to spread. Although this
risk could be reduced by restriction of air travel, modelling has shown that even a ninety-nine percent successful ban would only slow a pandemic by a few weeks [8]. Finally,
SARS showcases the role of the media in the social construction of disease outbreaks,
with early reports of dire plagues fuelling a widespread panic which rapidly gave way to
embarrassed silence as the epidemic fizzled out within months [4]. This media coverage
was quite different to that of HIV or Ebola, where the Western press initially ignored the
story as something that concerned ‘other’ people (Africans and homosexual men) and it
only became newsworthy once the Western majority was at risk. By comparison, for
SARS, this risk arrived all too rapidly by air travel and droplet transmission, resulting in
immediate news reports.
In the wake of SARS, the WHO commissioned a consultation on best practice for
communication with the public during outbreaks. It outlines the factors that may help to
mitigate health and social impact of future pandemics, such as early announcement,
transparency, building trust, respecting public concern, and advanced planning [9].
Individual ministries of health also made plans to protect their countries against the next
pandemic, and in Britain this included negotiation of ‘sleeping contracts’ with
pharmaceutical companies, who could be drafted in to rapidly manufacture millions of
doses of vaccine [8].
These global and national strategies were tested only a few years later, with the
emergence of ‘Swine Flu’. This was declared a pandemic in June 2009, with international
governments instituting their pandemic emergency plans. The UK vaccination campaign
was rolled out in October, an impressively rapid response. However, the scale of the
response now seems disproportionate, as figures emerge suggesting that the death rate is as
low as 0.026% in the UK [10]. With this in mind, the WHO has since announced a
review of its handling of the pandemic, and some are even questioning whether it
warrants being called a pandemic at all. Perhaps the future will see a tiered system that
can deal with varying levels of severity, with the potential to upscale the response if it
proves necessary. In the meantime, the GAR network continues to monitor its sources for
evidence of emerging disease (figure 2), with the experiences of SARS and swine flu
epidemics having prepared the ground for appropriate and successful future responses.
Nicola Martin is a fourth year medical student at Green Templeton College
References
[1] Grein, T W et al. Rumors of Disease in the Global Village: Outbreak Verification.
Emerging Infectious Diseases, 2000; 6(2): 97-102.
[2] World Health Organisation. Update 95 - SARS: Chronology of a serial killer. Disease
Outbreak News [Online] July 4, 2003. Available at:
http://www.who.int/csr/don/2003_07_04/en/index.html, Accessed on 04/02/10
[3] World Health Organisation. Severe Acute Respiratory Syndrome (SARS) - multi-country
outbreak - Update 1. Disease Outbreak News [Online] March 16, 2003. Available at:
http://www.who.int/csr/sars/archive/2003_03_16/en/index.html, Accessed on 04/02/10
[4] Washer, P et al. Representations of SARS in the British newspapers. Social Science &
Medicine, 2004; 59: 2561–2571.
[5] Guan Y, et al. Isolation and characterization of viruses related to the SARS coronavirus from animals in southern China. Science, 2003; 302: 276-278.
[6] Seto, W H et al. Effectiveness of precautions against droplets and contact in
prevention of nosocomial transmission of severe acute respiratory syndrome (SARS).
Lancet, 2003; 361: 1519 - 1520.
[7] World Health Organisation. Summary table of SARS cases by country, 1 November
2002 - 7 August 2003 [Online]. Available at:
http://www.who.int/csr/sars/country/2003_08_15/en/index.html. Accessed on 04/02/10
[8] Donaldson, L. A pandemic on the horizon. J R Soc Med, 2006; 99: 222-5.
[9] World Health Organisation. Outbreak Communication. [Online] September 2004.
Available at:
http://www.who.int/csr/resources/publications/WHO_CDS_2005_32/en/index.html.
Accessed on 04/02/10
[10] Donaldson LJ, et al. Mortality from pandemic A/H1N1 2009 influenza in England:
public health surveillance study. BMJ, 2009; 339: 5213.
[11] World Health Organisation. Severe Acute Respiratory Syndrome (SARS) - multi-country
outbreak - Update 12. Disease Outbreak News [Online] March 2003. Available at:
http://www.who.int/csr/sars/archive/2003_03_27b/en/index.html, Accessed on 04/02/10
a. Content Provider(s): CDC/C. S. Goldsmith, D. Auperin. Photo Credit: C. S.
Goldsmith. Copyright Restrictions: None - This image is in the public domain.
http://phil.cdc.gov/PHIL_Images/8699/8699_lores.jpg
b. Reproduced with kind permission from Higashi N et al. Electron microscope study of
development of Chikungunya virus in green monkey kidney stable (VERO) cells.
Virology, 1967; 33(1): 55-69
c. Reproduced with kind permission from Hyatt AD et al, Ultrastructure of Hendra virus
and Nipah virus within cultured cells and host animals, Microbes and Infection, 2001;
3(4): 297-306
d. No author specified. Copyright Restrictions: None - This image is in the public domain
http://www.uwlax.edu/clinmicro/clinicalmicro.htm
Once bitten, might die
Poverty stricken Ed Blacker examines the ethics and reality of participating in clinical
trials. Then he eats soup.
Six years is a long time to be a student. There comes a point when all the grants, loans
and bursaries dry up and one must seek additional funds. For those too shy or cautious for
the traditional avenues of busking or online gambling, there is a third path: participation
in a clinical trial. This can be very lucrative, ranging from £30 for a couple of hours in a
scanner into the thousands for more long-term, invasive studies. So it was for the promise
of around £2,200 that I found myself in the Churchill’s tropical medicine unit, discussing
with the trial doctors the possibility of becoming infected with falciparum malaria.
“So you guys are going to give me a new anti-malaria vaccine?”
“That is correct, yes.”
“That’s very Jenner-ous of you”, I quip.
They look at me blankly. This joke only works on paper.
The deal is this. I get two doses of a vaccine mix (see box) 8 weeks apart and am then
‘challenged’ with malaria 3 weeks later. The compensation depends on the number of
visits (£6 each), time taken (£15/hour) and blood tests (£6/donation). That this is
compensation and not payment is an important distinction. Deciding whether a system of
rewarding test subjects is acceptable is up to the ethics committee [1]. Informed consent
is a fundamental legal requirement, and offering potential subjects considerable sums of
money may adversely affect their ability to properly weigh the evidence, or at least cause
them to take risks they otherwise would not consider [2, 3]. This view is perhaps overly
paternalistic, but two deaths from studies in the 1980s, wherein subjects concealed health
issues that would otherwise have prevented their inclusion, highlight the dangers of
offering large cash incentives [4, 5].
The Office for Protection of Research Subjects states that “in no case should
remuneration be viewed as a way of offsetting risks…to cause a prospective subject to
accept risks that he or she would not accept in the absence of the remuneration”[6]. Of
course, no trial should be so risky as to put subjects at risk of serious harm. There is
clearly a distinction between weighting payment for risk, and for unpleasantness.
Contracting malaria isn’t a whole lot of fun, but if closely monitored the chance of
serious consequences is low. In this case it seems reasonable to reward subjects for their
contribution to science.
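As a minimal sketch of how that compensation structure adds up (the visit, hour and blood-test counts below are invented purely for illustration and are not the trial's actual schedule), the quoted rates of £6 per visit, £15 per hour and £6 per blood donation can be combined as follows:

    # Hypothetical worked example of the compensation rates quoted above.
    # The counts passed in are invented, chosen only to show how ~GBP 2,200 could accrue.
    RATE_VISIT, RATE_HOUR, RATE_BLOOD = 6, 15, 6

    def compensation(visits: int, hours: int, blood_donations: int) -> int:
        return visits * RATE_VISIT + hours * RATE_HOUR + blood_donations * RATE_BLOOD

    print(compensation(visits=45, hours=110, blood_donations=50))  # -> 2220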
More importantly, saying you’re going to get malaria is also a great ice-breaker at parties.
“You’re getting malaria?! Isn’t that really super-dangerous?” coos one busty admirer.
“It’s pretty deadly, yeah. I’m doing it for the children. They’re our future.” I feel like Bob
Geldof.
However, there are a few problems. My mother does not know that I did this trial. This
also means she doesn’t know that I’m writing this article. She must wonder what I’ve
been doing with myself. Her perception of trials, like that of much of the public, is strongly
coloured by the monoclonal antibody TGN1412 trial at Northwick Park, which resulted
in the hospitalisation of its six participants. However, in this case the agent being tested
was novel in humans, and had only previously been tested in laboratory-raised animals
which would not have the same range of memory lymphocytes as the subjects [7]. This
may have been the difference between a pro-regulatory effect in animal studies and
catastrophic multi-organ failure in humans. I’m almost certain my trial will be just fine…
After being given the first vaccination I am given a thermometer and a form on which to
record various daily observations. I hit a pretty high fever the first evening but the only
real incident comes the next day when I am forced to leave my sick bed to pay my rent in
at the bank. After shivering my way up the hill I find a lengthy queue in Natwest. I am
only two people away from being served when I pass out. I come round and pay my rent
with four minutes to go. Yes I’m hardcore.
For the malaria challenge we are all taken to Imperial College by train. We go in small
groups to have a paper cup containing five mosquitoes and covered in gauze placed on
our forearm. Then we have sandwiches. The sensation of being bitten is an odd one, a
vague pricking sensation and not at all painful. The sandwiches are excellent (M&S).
When the mosquitoes have had their fill they sink languorously to the bottom of the cup,
visibly swollen. They are killed and their salivary glands examined for sporozoites to
prove we are infected. All in all, a nice day out.
Six days after the challenge we are required at the Churchill twice daily for blood films.
While the trek up Old Road is rather arduous, any parasitaemia is immediately treated.
This means that symptom duration is as short as possible. If the vaccine fails to protect us
from developing malaria we are expected to experience symptoms around 10-11 days
after being infected. By day nine other participants are noticeably feverish as one by one
they succumb. Nobody wants to be next. It’s like ‘It’s a Knockout’ with parasitic
infection. I feel a bit off on day 13 and, sure enough, my blood film is positive. I have
bona fide malaria. Later that day I shiver harder than a Brookes student in withdrawal and
lie steaming on my bed. I try frying an egg on my abdomen just to see but it doesn’t
work. Luckily I have an anxious girlfriend to hand and any exaggerated whimperings
result in soup. Thanks to the Riamet (anti-malarial) I’m put on, and healthy dosings of
paracetamol, my symptoms only last two days. I no longer get brought soup, but the
money I get from the trial means I can pay others to make me soup. And isn’t that really
what we all want?
The author would like to thank Dr Susanne Sheehy, Laura Dinsmore and Dr Chris
Duncan.
Ed Blacker is a fifth year medical student at Brasenose College
BOX: Ed received a mix of an attenuated chimpanzee adenovirus and a modified vaccinia
Ankara virus, both containing ME-TRAP. ME-TRAP encodes malaria surface
proteins. A regimen would ultimately aim to mimic natural immunity - allowing
parasitaemia (and therefore protective natural immunity) to develop but preventing severe
disease. In this scenario a liver stage vaccine would target and destroy infected
hepatocytes and then a blood stage vaccine would clear any parasites that make it through
to the blood stage. No malaria vaccine is currently licensed, but the results from this trial
are currently undergoing analysis. If found to be effective, the vaccine will be used as
prophylaxis in children in sub-Saharan Africa and possibly South-East Asia.
References
1. Medicines for human use (clinical trials) regulations 2004.
www.englandlegislation.hmso.gov.uk/si/si2004/20041031.htm
2. Saunders J. Should healthy volunteers be paid according to risk? No. BMJ 2009; 339: b4145.
3. Grady C. Money for research participation: does it jeopardize informed consent? Am J
Bioethics 2001; 1: 40-4.
4. Darragh A, Kenny M, Lambe R, Brick I. Sudden death of a volunteer. Lancet 1985; i: 93-4.
5. Kolata GB. The death of a research subject. Hastings Cent Rep 1980; 10: 5-6.
6. US Department of Health and Human Services Office for Human Research Protection.
Informed consent: frequently asked questions. www.hhs.gov/ohrp/informconsfaq.html#q6
7. Study claims to solve drug trial mystery. The Telegraph [Online].
http://www.telegraph.co.uk/news/uknews/1540591/Study-claims-to-solve-drugtrial-mystery.html
Inside Gaza: health and security under siege
Accounts from doctors and surgeons working in Al Shifa hospital in Gaza during the
January 2009 Israeli military operation [1].
“Four year old Salmah Abed Rabu, who has suffered a large shrapnel injury to her back
by fragments from Israeli rocket bombs, lies awake but paraplegic after the attack on her
family home in Beit Lahia. Abandoned in the hospital corridor, she quietly whispers
“mama, mama, mama”. Her family members remain at home stranded due to the heavy
military bombardment of the area.”
“A 16-ambulance convoy evacuating war casualties to Egypt for essential follow-up
treatment carries a barely stable 15-year-old boy with severe maxillo-orbital blast
injuries amongst its patients. He has lost both his eyeballs. As the convoy approaches the
outskirts of Gaza city, machine gun fire erupts across the road warning the convoy not to
proceed. The convoy has no choice but to head back to the nightmarish havoc of Al Shifa
Hospital.”
Inside the hospitals of Gaza
December 27 2008: the Israeli Defence Force (IDF) launched Operation Cast Lead, a
three week military assault on Gaza. One thousand, three hundred and sixty-six
Palestinians were killed, including 313 children. Doctors Mads Gilbert and Erik Fosse
report in The Lancet [1] that they witnessed “the most horrific war injuries in men, women
and children in all ages in numbers almost too large to comprehend”. The number of
wounded coming into Al-Shifa hospital in Gaza (where they worked) during the Israeli
attacks would have overcrowded a well-functioning 500-bed western hospital, which Al-Shifa most certainly is not. Clinical work was hampered by a shortage of basic supplies
and equipment. Frequent blackouts in the absence of functioning headlights or torches
resulted in staff using mobile phone lights in operating theatres, where up to three
patients were operated on simultaneously. This snapshot portrays an overwhelmed
healthcare system serving a people in crisis. In this article, we focus on health status,
healthcare services and human security in Gaza, both leading up to and during the recent
Israeli assault.
Health in Gaza before Operation Cast Lead
An 18 month blockade prior to the IDF assault precipitated a virtual collapse of Gaza’s
healthcare infrastructure. The Gaza strip is 45 km long and 5-12 km wide (figure 2), with
1.5 million inhabitants, 1 million of whom are registered UN refugees. 80% live below
the poverty line and 80% are unemployed, with 50% dependent on UN food aid and basic
supplies. This is highlighted in hospitals like Al-Shifa, which lack basic medical
equipment such as ventilators, patient trolleys and electronic monitors for vital signs.
These issues are couched within a larger context of disjointed and inadequate public
health provision and healthcare infrastructure.
Infant mortality and growth stunting in children are often used to paint health status in
broad brush-strokes. Whilst infant mortality declined between 1967 and 1987, it has since
stalled at around 27 per 1000 between 2000 and 2006, indicating stagnation in health
improvements and deterioration in conditions [2] (see figure 3). The rate of stunting in
children below five has risen from 7·2% in 1999 to 10·2% in 2006. This not only
indicates chronic malnutrition, but is also associated with increased morbidity and
mortality. Both TB and meningitis continue to rise, and a WHO survey on quality of life
in 2005 found it to be lower in the Palestinian territories than all other countries studied.
Failures of the healthcare system over the years
The current Palestinian health system is a disjointed service that developed over
generations of different regimes. Christian missionaries in the 19th Century established
some hospitals that are still operating in East Jerusalem. After the 1948 nakba (an Arabic
word referring to the ‘catastrophe’ of defeat and subsequent expulsion of Palestinians after
the 1948 war), the UN
General Assembly established the UN Relief and Works Agency, which delivered food
aid, housing, education, and health services to refugees in the occupied Palestinian
territories, Jordan, Lebanon and Syria. The Israeli military administration starved health
services of funds between 1967 and 1993 causing shortages of staff, hospital beds,
medications, and essential services, and in turn forced Palestinians to depend on Israeli
services. The Palestinian response was to create independent services to meet the needs
of the population during emergencies.
Meanwhile, the Palestinian Ministry of Health (MoH) was established after the Oslo
accords in 1994 (along with the Palestinian National Authority), inheriting a neglected
healthcare infrastructure from the Israeli military. The number of hospital beds managed
by the MoH increased by 53% from 1994 to 2006, a trend also followed in Non
Governmental Organisations (NGOs) and the private sector. Despite these apparent
improvements in capacity, current services have proved inadequate for the health needs
of the people, due to the tripartite attrition of decades of neglect, poor management and
corruption. Patients are still often referred elsewhere (Israel, Egypt, and Jordan) due to a
lack of resources. Indicators of health-system function (such as number of hospitals,
primary healthcare facilities and healthcare workers) mask an underlying problem of low
quality care and a continued failure amongst health services to meet the required
standards for training.
Three important factors account for the inability of the MoH to develop an effective
health system. Firstly, Israeli restrictions since 1993 on the free movement of Palestinian
goods and labour across borders between the West Bank and Gaza have damaged
attempts at system-building. Secondly, the absence of any control by the Palestinian
National Authority over water, land, and the environment within the occupied Palestinian
territory has made a public-health approach to health-system development impossible.
Thirdly, a multiplicity of donors, complete with their different agendas, and the
dependence on financial assistance from these donors, has also resulted in programme
fragmentation.
So what is the solution? Building an effective healthcare system requires command over
resources, self-determination, sovereignty and control over land, water and free
movement of people. All of these prerequisites are absent in the occupied territories,
particularly in Gaza, and should form the basis for the protection and promotion of health
in the region.
Threats to holistic health and survival
The framework of human security provides a useful lens through which we may consider
health holistically (defined by the WHO as physical, mental and social well-being) and
the daily threats to civilian health. The preservation of human security and safety is, for
obvious reasons, a prerequisite for physical health, yet when we consider the social and
psychological implications of insecurity, we can begin to understand its pervasive and
long-lasting effect on the holistic health of a population.
Direct threats to human security in Gaza include air strikes and gunfire within civilian
areas by the IDF, and fighting between Palestinian factions. Four thousand, seven
hundred Palestinian civilians, including 900 children, have died as a direct result of
Israeli military operations between 2000 and 2008. Inter-Palestinian fighting, largely
since the beginning of 2006, has resulted in 600 Palestinian deaths (10% of all deaths
since 2000). In the same period, 35,000 Palestinians have been injured during clashes with
Israeli forces. Furthermore, during Operation Cast Lead, white phosphorus munitions
were fired upon civilian areas in Gaza, leading to widespread severe chemical burns [3].
The insecurity of physical displacement has featured prominently in the collective
consciousness of Gazans since the wars and expulsions of 1948 and 1967. These feelings
have only been perpetuated by the ongoing siege and de-facto occupation by Israel; as
people are displaced, severed connections between homes and communities continue to
shatter hopes of stability and long-term security, causing great stress. Housing
destruction, another cause of great economic and emotional difficulty, is used as a form
of collective punishment and is illegal under international law.
Sewage continues to pose problems for Gazans; many facilities frequently shut down due
to electricity cuts, resulting in a public health catastrophe. The UN Office for the
Coordination of Humanitarian Affairs reports that 50-60 million litres of untreated or
partly treated sewage are dumped into the Mediterranean daily. Israeli water control
policies have restricted Palestinian water usage to 320 m³ per person per annum (the
threshold for ‘shortage’ is defined as 1,700 m³, and a ‘critical minimum’ as 500 m³). In
contrast, Israeli settlers in the West Bank have access to nine-fold more water [4].
Indirect threats to health include those insidious and subtle sources of trauma to the
collective and individual physical and psychosocial resilience. Repeated exposure to
sonic booms (powerful shocks from Israeli military aircraft) has been documented by the
Gaza Community Mental Health Programme as producing feelings of intense fear in
young children, manifesting somatically as headaches, stomach aches, bedwetting and
poor appetite. UN investigations in 2008 reported high levels of fear in children, with
worrying levels of exposure to relatives being killed, mutilated and dismembered bodies,
and destruction of homes [5].
Further, malnutrition, unemployment, public curfews and restrictions on movement are
daily realities. The separation wall, constructed between Israel and the West Bank and
declared illegal by the International Courts of Justice, continues to impede movement of
Palestinians during everyday activities, and divides neighbourhoods and households [6].
Reports of patients needing life-saving operations and critical care being denied access
are commonplace. The need for travel permits delays access to hospitals for patients,
medical students and health workers, with commuting times increasing from 30 minutes
to more than 2.5 hours on a regular basis.
Chronic exposure to violence, humiliation and insecurity has bred pervasive
demoralization and despair amongst Gazans. Within this context, Gazans have cultivated
a collective social resilience to occupation in the face of daily struggles [7]. This meta-narrative is characterized by the struggle for normality amid insecurity, and a deep-rooted
sense of connection to the land, enabling the struggle for self-determination to continue.
Networks of family, friends, neighbours and wider communities fill the holes within
healthcare delivery, with reports of neighbours and passers-by transporting casualties to
hospitals when ambulances are unavailable, or extended families financially contributing
to cover healthcare costs.
Gaza’s only hope…
Many of the difficulties in ensuring day-to-day security and survival in Gaza stem from
political instability. The social resilience portrayed operates within a vacuum of power
and authority due to the lack of a stable and autonomous government. In order to bolster
the physical, social and mental health of Gazans, all efforts should be directed towards
supporting and strengthening the efforts that nurture social cohesion and resilience, whilst
simultaneously pressing for a political solution.
International norms and rulings regarding the security and status of the Gaza strip must
be emphasized; perhaps a useful starting position would be the UN Human Rights
Council, Human Rights Watch and Amnesty International consensus on the illegality of
the collective punishment of Gazans. Although nine billion dollars of financial aid has
been distributed in the occupied Palestinian territory since 1994, there is little evidence of
tangible sustained growth in security or infrastructure [5]. In contrast, a political solution
will not only reduce direct and indirect threats to the holistic health and security of
Gazans, but also facilitate an environment in which Gaza’s health infrastructure can
operate effectively.
While a political solution remains distant, Gazans continue to suffer. It is difficult to
imagine how a solution to the healthcare situation described can succeed without political
stability. In the meantime, the physical, psychological and social well-being of Gazans
will remain poor, whilst the structural impediments and barriers to development remain
in place. As the WHO's Commission on Social Determinants of Health states:
“The conditions in which people live and work can help to create or destroy their
health”.
Omar Abdel-Mannan is a fifth year medical student at St John’s College. Imran Mahmud
is a fifth year medical student at St Catherine’s College.
References
1. Gilbert M, Fosse E. Inside Gaza's Al-Shifa hospital. The Lancet 2009; 373: 200-202.
2. Giacaman R, Khatib R, Shabaneh L, Ramlawi A, Sabri B, Sabatinelli G, Khawaja M,
Laurance T. Health status and health services in the occupied Palestinian territory. The
Lancet 2009; 373: 837-849.
3. Hider J, Frenkel S. Israel admits using white phosphorous in attacks on Gaza. The
Times, 24th Jan 2009.
4. Batniji R, Rabaia Y, Nguyen–Gillham V, Giacaman R, Sarraj E, Punamaki R-L, Saab H,
Boyce W. Health as human security in the occupied Palestinian territory. The Lancet
2009; 373: 1133-1143.
5. United Nations. Gaza Strip inter-agency humanitarian fact sheet. March 2008.
http://domino.un.org/pdfs/GSHFSMar08.pdf (accessed Aug 2, 2008).
6. Rutter M. Resilience in the face of adversity: protective factors and resistance to
psychiatric disorder. Br J Psychiatry 1985; 147: 598-611.
7. Sayigh Y. Inducing a failed state in Palestine. Survival 2007; 49: 7-39.
Factitious illness
“Dear Doctor,
Thank you for referring this middle-aged gentleman, who has suffered from diarrhoea,
bloating and vague abdominal discomfort since returning from a long trip abroad. He has
become rather preoccupied by his symptoms, and I understand he has recently installed a
toilet next to his study, to facilitate his increasingly frequent visits. Despite his florid
symptoms, investigations have been unrewarding and examination in clinic today was
entirely unremarkable. He is also under the care of the neurologists for an unusual
tremor...”
The heartsink patient will be familiar to anyone with experience of general practice.
Usually suffering from multiple ill-defined symptoms, they are regular visitors to both
their GP and hospital outpatient clinics, particularly cardiology, neurology and
gastroenterology, in which they make up over one-third of consultations [1]. Their
defining feature is the medically-unexplained symptom: despite extensive investigation,
no underlying pathology can be identified as a cause of their illness. In this situation, the
relationship between patient and doctor can become difficult. However, such patients can
be helped, particularly if identified early. This article provides a brief guide to doing this.
Patients with medically-unexplained symptoms can be divided into two groups. The first
experiences ‘real’ symptoms, either as a manifestation of psychological distress
(somatisation) or through some theoretical disturbance in physiological processes or
sensation (functional illness). In contrast, sufferers of factitious disorder and malingerers
deliberately feign their symptoms to obtain medical attention or material gain,
respectively. Making this distinction is important, as the two groups are managed very
differently. A doctor consulted by a patient with possible non-organic illness must
therefore decide both whether non-organic symptoms are present, and why the patient is
experiencing them.
Non-organic illness is diagnosed primarily on clinical grounds. Investigations are only
used to exclude serious underlying causes which cannot be discounted based on clinical
findings alone, with a few exceptions, such as video telemetry and EEG monitoring in
pseudoseizures. Symptoms for which a non-organic explanation should be considered are
shown in Box 1, together with some important areas to evaluate in the history and
examination. The key features to identify are a biologically implausible symptom
complex and inconsistent or fluctuating symptoms and signs. The history and
examination may contradict each other: for instance, a patient complaining of weakness
may claim to be unable to lift their legs to get out of bed, but be observed to stand up
from a chair without difficulty. Clues may also be obtained from the patient’s
background, an excessively thick set of notes being the most obvious. Depression,
anxiety and traumatic life events are possible risk factors. Unusual health beliefs, such as
obscure dietary intolerances, excessive internet use and membership of the educated
upper middle classes have also been proposed, somewhat unkindly, to contribute.
Although entering into overt competition with the patient is probably unhelpful, special
tests have been devised to demonstrate the presence of functional or feigned symptoms
by eliciting subtle signs uncharacteristic of organic illness or by directly forcing a
malingerer into revealing their deception. The latter category includes a pair of tests for
non-organic coma: in the arm-drop test, the patient’s arm is held above their face and
released (only a genuinely unconscious person will allow themselves to be hit in the face)
whilst the vibrating-tuning fork-up-the-nose test is self-explanatory, and tends to elicit a
stronger response. More subtly, Waddell’s signs of non-organic back pain (pain on
downwards head pressure, co-ordinated hip and spine rotation and on straight leg raise
when lying down, but not when sat up as the surgeon ‘checks their plantar response’)
exploit the gap between the commonly-presumed and actual characteristics of low back
pain. In the former category, Hoover’s test for leg weakness compares directly-requested
hip extension power to that elicited synergistically through contralateral hip flexion.
The abductor [2] and finger abductor [3] signs work on a similar synergic principle.
Lastly, a functional tremor can often be demonstrated by its tendency to synchronise with
contralateral hand movement (the same principle that makes rubbing your stomach whilst
patting your head difficult).
Once sufficient evidence has been obtained to support the diagnosis of non-organic
illness, the next step is to determine whether the patient is deliberately producing their
symptoms. The wider context of the patient’s presentation is useful: does the patient
stand to gain materially from their ‘illness’? The involvement of lawyers is an ominous
sign. Faked illness also tends to have a particularly thin, vague history and very florid
symptoms and signs. The patient may seem excessively concerned by their illness, or
may show la belle indifférence². A patient with factitious disorder often has a medical
background, and in so-called dermatitis artefacta, will show a distinctive distribution of
skin lesions, with sparing of the small of the back and preferential involvement of the
side of the body opposite the dominant hand.
The first step in managing a patient with functional symptoms is giving and explaining
the diagnosis. This is not easy to do without causing offence. However, one study has
calculated the offensiveness of various descriptions for non-organic symptoms: the
‘number needed to offend’ [5]. Unsurprisingly, describing symptoms as ‘all in the mind’
or ‘hysterical’ offended a large proportion of patients, although ‘medically unexplained’
and ‘depression associated’ were almost as offensive. ‘Functional’ was found to be the
most acceptable term, rating similarly to suggesting a minor stroke as the cause of the
patient’s symptoms. However, explaining what is meant by ‘functional’ symptoms is
awkward. It is helpful to try to ‘reattribute’ the symptoms, explaining that while they may
have an (unidentified) organic origin, they are unlikely to be dangerous, and that
treatment of exacerbating psychological factors may be the most effective way to
improve the physical symptoms. Further management may then involve counselling,
anxiolytic or antidepressant medication and symptomatic relief, and spontaneous
improvement is likely.
² A dermatologist describes a patient with extensive ulceration of one arm from self-inoculation with
faeces: “Yes, it is rather unpleasant, isn’t it, doctor? I wonder if you could arrange for someone to take it
off?” [4]
Patients who are actively involved in deception are more difficult to treat, involving
complex medicolegal issues. A supportive confrontation, in which the patient’s
behaviour, the reasons for it and possible ways to help are discussed as non-judgementally as possible, has been suggested [6], but, unsurprisingly, reluctance to co-operate is common. Close documentation for self-preservation is advisable.
Although patients with medically-unexplained symptoms are demanding and frustrating,
attentive treatment can be rewarding, even if only to reduce the chance of seeing them
again. Making a diagnosis offers an opportunity for a different kind of consultation, and
excellent intellectual exercise. Finally, it should be remembered that not all heartsinks are
entirely useless: the case history presented at the start of this article is of one CD, a
prominent evolutionary biologist...
[1] Hamilton et al. Anxiety, depression and management of medically unexplained
symptoms in medical clinics. Journal of the Royal College of Physicians 1996; 30: 18-20
[2] Sonoo. Abductor sign: a reliable new sign to detect unilateral non-organic paresis of
the lower limb. Journal of Neurology, Neurosurgery and Psychiatry 2004; 75: 121-125
[3] Tinazzi et al. Abduction finger sign: A new sign to detect unilateral functional
paralysis of the upper limb. Movement Disorders 2008; 23: 2415-2419
[4] Graham-Brown and Burns. Skin and the psyche. In: Graham-Brown and Burns.
Lecture Notes: Dermatology (2nd edition). Blackwell Publishing; 2007. p. 169-174
[5] Stone et al. What should we say to patients with symptoms unexplained by disease?
The “number needed to offend”. British Medical Journal 2002; 325: 1449-1450
[6] Bass and May. Chronic multiple functional somatic symptoms. British Medical
Journal 2002; 325: 323-326
Box 1: Presenting Complaints
General: tiredness, malaise
Neurology: headache, weakness, sensory loss, seizure, tremor
Cardiology: palpitations
Gastroenterology: chronic abdominal pain, bloating
Gynaecology: pelvic pain
Box 2: History
HPC: inconsistency, implausibility, fluctuating severity, relationship to stressors,
medicolegal context
PMH: other chronic symptoms, psychological comorbidity
SH: educational level, medical background, recent life events
Other: patient’s attitude towards symptoms, evidence of depression/anxiety
Box 3: Examination
Full examination to rule out serious pathology
Distribution of signs: anatomical plausibility, possible mechanism
Specific tests for non-organic disease
A&E at only £532 per night
Alcohol-related hospital admissions have doubled in a decade, costing the NHS £2.7
billion [1] and hitting the wider economy to the tune of a staggering £25 billion every year. It’s clear that
neither the nation’s health nor the NHS budget can sustain this trend. In
order to discourage extreme alcohol binges and reduce the burden on the NHS, the
leading centre-right think tank Policy Exchange have recommended that patients
admitted to A&E for less than 24 hours with acute alcohol intoxication should be charged
the NHS tariff cost of £532. This could be reduced for those paying for the cost of their
own ‘brief interventions’ alcohol awareness course, which has been shown to reduce alcohol
consumption and future health costs. Evidence suggests that this would help recover the
£15 million p.a. cost of admitting patients simply to sleep off the effects of their binge
[2]. Could this policy be one step in the right direction to tear us away from the bottle, or
open the floodgates to patient charging for other self-inflicted conditions, destined to
harm the founding principles of the NHS?
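For a rough sense of scale, the figures quoted above imply the following back-of-envelope check (an illustration only, assuming every such overnight admission were charged the full tariff, which the report itself does not claim):

    # Rough scale check on the figures quoted above (illustration only).
    tariff_per_admission = 532        # pounds: NHS tariff quoted above
    annual_cost = 15_000_000          # pounds per year on 'sleep it off' admissions
    implied_admissions = annual_cost / tariff_per_admission
    print(f"Roughly {implied_admissions:,.0f} such admissions per year")
    # -> Roughly 28,195 such admissions per year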
Caroline Pendleton is a second year medical student at St John’s College
References
[1] NHS Information Centre. Statistics on Alcohol: England 2008.
[2] Featherstone, H and Storey, C. Hitting the bottle. Policy Exchange, May 2009.
Do we have free will?
Helen Mather explores the implications of choice and fate in the context of ‘self-inflicted’ illness.
Cause and effect
In a fit of frustration I throw my stethoscope at a patient. If you knew the mass and initial
position of the stethoscope, and the force with which I had thrown it, you might be able
to calculate a rough estimate of where it would land using simple mathematics. Of
course, a stethoscope is more complicated than, for example, a ball: its mass is not
equally distributed. However, with more advanced mathematics a close estimate could be
made; you could therefore tell whether the stethoscope would strike in just the right way to
terminate the patient’s ventricular arrhythmia, or if it would all end in a messy nosebleed.
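To make the point concrete, here is a minimal sketch of that calculation, treating the stethoscope as a point mass thrown with a known speed and angle and ignoring air resistance and spin; all of the numbers are invented for illustration.

    import math

    def landing_distance(speed_ms: float, angle_deg: float, height_m: float, g: float = 9.81) -> float:
        """Horizontal distance travelled by a point mass launched at speed_ms (m/s)
        at angle_deg above the horizontal from height_m above the floor,
        ignoring air resistance and spin."""
        angle = math.radians(angle_deg)
        vx = speed_ms * math.cos(angle)
        vy = speed_ms * math.sin(angle)
        # Time of flight: positive root of height + vy*t - g*t^2/2 = 0.
        t = (vy + math.sqrt(vy ** 2 + 2 * g * height_m)) / g
        return vx * t

    # Example: thrown at 8 m/s, 20 degrees above horizontal, from 1.5 m up.
    print(f"Lands about {landing_distance(8, 20, 1.5):.1f} m away")  # ~6.7 m

Given the same inputs, the same answer comes out every time: that is the determinist's point.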
Now you might wonder what on earth caused me to throw my stethoscope in the first
place. Arguably, this outcome could also be calculated if we had enough information. If
you knew everything about the physiology of all the neurons and structures involved in
that decision at that moment, and all the inputs to my brain immediately preceding it,
then perhaps you could know before me that I would commit this assault.
Whether or not we could ever have all this information or the means to interpret it is
beside the point. What is interesting is the idea that the laws of nature, together with the
initial conditions (the Big Bang), could have fixed all future events.
Making a choice
So if everything in the future is potentially predictable, i.e. pre-determined, how can we
ever change anything in the course of events? Are we just pawns going through the
motions set out for us under the illusion that we can make choices about the way we live
our lives? If not, how can it be so?
Perhaps the laws of nature are not deterministic after all. Quantum theory suggests that
events may be random: that in one system an event may happen, whilst in an identical
system it may not. This theory may rule out determinism (though not necessarily if you
accept the Bohmian interpretation which proposes that there are ‘hidden variables’; or if
you consider that at a macroscopic level deterministic properties hold [1]), but it does
nothing to illuminate how free will might be possible. A truly random event cannot be
chosen any more than a pre-determined one.
Could there be more to our minds than complex networks of neurons? Maybe whilst we
have no control over our ‘knee-jerk’ reflex, we could exercise control over decisions that
we are conscious of making? Consider the process of making a choice. Put simply, our
brain detects information, determines the options that are available to us in light of this
information, ranks the options, and executes the one that comes out on top [2]. The way
in which the options are ranked depends on the way our brain has developed due to
genetic and environmental factors and our cognitive state at the time. Is there any way
that our ‘conscious self’ could intervene in this process? Won’t this ‘consciousness’ also
be subject to deterministic or probabilistic laws? So where is the opportunity for free will
[3]?
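The ranking process described above can be caricatured in a few lines. The options, weights and 'feature' scores below are entirely invented; the point is simply that, given the same inputs, the same option always comes out on top, which is exactly the determinist's worry.

    def choose(options, weights, features):
        """Score each option as a weighted sum of its features and return the
        highest-ranked one. Given the same inputs, the same option always wins:
        the 'decision' is fixed by its causes."""
        def score(name):
            return sum(weights[k] * features[name].get(k, 0) for k in weights)
        return max(options, key=score)

    # Invented example: deciding whether to throw the stethoscope.
    options = ["throw stethoscope", "count to ten"]
    weights = {"immediate relief": 0.3, "professional survival": 0.7}
    features = {
        "throw stethoscope": {"immediate relief": 1.0, "professional survival": -1.0},
        "count to ten": {"immediate relief": 0.2, "professional survival": 1.0},
    }
    print(choose(options, weights, features))  # -> 'count to ten'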
Recent curious experiments have suggested that evidence of a decision having been made
can be seen by functional magnetic resonance imaging up to 10 seconds before the
subject is consciously aware of having made it [4]. By looking at the subject’s brain
activity, experimenters could tell whether subjects were about to press a button on their
left or their right before the subjects themselves knew which one they were about to
‘choose’. There is controversy over the interpretation of such experiments, and the
question of free will may never be answered conclusively, by neuroscience or otherwise,
but there is exciting potential for neuroscience to deepen our understanding.
Responsibility and the function of judgment
Supposing we do not have free will, can we fairly be held responsible for our actions?
Surely the answer is no. But imagine if no-one were held responsible for anything. If
there were no consequences for ‘bad’ behaviour, a chaotic picture comes to my mind!
Whether or not judgment can be considered fair, it does have a useful deterrent function.
Prior experience of judgment, from being subject to catty comments to having spent time
in prison, would affect the way in which a person’s brain ranks their options when
making a decision. Witnessing such consequences would likely affect the decision-making process in would-be wrong-doers too.
So what?
One could say then that it might be helpful to deny patients treatment for self-inflicted
illness to deter people from harming themselves through unhealthy behaviour. While this
may seem appropriate when you consider people who are not yet at the point of needing
care, what about patients who are so ill that changes in behaviour can no longer help
them? If we are taking the point of view that they could not have done anything
differently, the patient with alcoholic cirrhosis is no less deserving of treatment than a car
crash victim. Surely then other ways of preventing unhealthy behaviour should be
sought?
Of course resources are finite, and to allocate them to generate the greatest benefit might
mean denying the active alcoholic a transplant if another suitable recipient is in equal
need of it and would likely benefit more. Similarly, if NHS funding could be used either to
provide oxygen for patients with emphysema or to care for patients with Alzheimer’s
disease, with the same perceived benefit to patients, the decision might be made in
favour of Alzheimer’s treatment on the basis of the added benefit of deterring people
from smoking. Neither of these decisions involves a judgment over how deserving the
patients are.
But would a lack of NHS funding for oxygen for patients with emphysema really provide
any meaningful deterrent? Would it be any greater than having “SMOKING KILLS”
plastered over every cigarette packet sold? Knowledge of the consequences of unhealthy
behaviour is clearly not enough to deter many people. Taking a compassionate approach
could be more effective. People might respond better when the difficulty of their situation
is acknowledged and non-judgmental help and support is offered. Smoking cessation
clinics, which operate on these principles, have enjoyed good success [5, 6]. More
investment in this kind of preventative care could improve the quality and duration of
many lives as well as reduce costs when you consider that fewer people would need
expensive treatment further down the line.
Research funding is affected by perception of which causes are most worthy. Cancer
receives a proportion of funding significantly greater than the morbidity it causes would
warrant [7]. Perhaps one reason for this is that many cancers are considered to
be something over which the sufferer has no control. In contrast, afflictions such as
emphysema, type 2 diabetes, hepatitis, and even conditions like depression and anxiety
are often viewed less sympathetically. Sufferers are assigned blame for causing their
condition; for being weak. I wonder if the distribution of funding might be different if
people regarded conditions deemed to be self-inflicted as something with which sufferers
are ‘struck down’ through no fault of their own.
There is an important implication at an individual level too: might the idea that we could
lack free will affect your attitude towards your obese emphysemic cirrhotic patient, friend
or relation? However subtle the adjustment, it could make a valuable difference to them:
the impact of stigma on quality of life should not be underestimated. And when you take
into account the suggested effects on resource allocation and research funding, holding
this point of view could even mean a difference between life and death.
Helen Mather is a fourth year medical student at St Anne’s College
References
1. Polkinghorne J. Quantum Theory. New York: Oxford University Press; 2002. p. 40-57
2. Blackburn S. Think. New York: Oxford University Press; 1999. p. 91-97
3. Van Inwagen P. An Essay on Free Will. New York: Oxford University Press; 1983.
4. Soon CS, Brass M, Heinze HJ, Haynes JD. Unconscious determinants of free
decisions in the human brain. Nature Neuroscience 2008; 11: 543-545
5. Oztuna F, Can G, Ozlu T. Five-year outcomes for a smoking cessation clinic.
Respirology 2007; 12(6): 911-915
6. Rooney B, Sakis K, Havens S, Miller C. A smoking cessation clinic with a 57%
success rate: what makes it work? WMJ 2002; 101(5): 34-38
7. Alzheimer’s Association. Alzheimer’s Disease Facts and Figures 2008 [Online]. Available from: http://www.alz.org/documents_custom/report_alzfactsfigures2008.pdf [Accessed: December 2008]
Hangover homeopathy
“My first return of sense or recollection was upon waking in a small, dismal-looking
room, my head aching horridly, pains of a violent nature in every limb and deadly
sickness at the stomach.” (Hickey, 1768)[1].
Mankind has always imbibed and so it follows that mankind has always had to survive
hangover. For those who are fond of etymology, try this on for size: veisalgia,
synonymous with hangover. The suffix -algia stems from the Greek ‘algos’, referring to pain or grief
(or both as the case may be). Slightly less obviously, veis is derived from the
Norwegian ‘kveis’, denoting literally ‘uneasiness following debauchery’. There are
several key players in the physiology of veisalgia.
Alcohol itself has a multitude of effects. Suppression of vasopressin release causes
dehydration and a vasodilatory headache. Inhibition of glutamine release triggers a
rebound excitation, which accounts for the symptoms of sympathetic overdrive, such as
palpitations and lack of sufficient deep sleep. Glutathione, which together with
acetaldehyde dehydrogenase converts acetaldehyde to acetate, is present only in limited
stores; once it is exhausted, acetaldehyde accumulates and toxicity ensues [1-4].
Most of us are not genetically blessed enough to code for sufficient acetaldehyde
dehydrogenase to negate the effects of acetaldehyde toxicity. Thus the pursuit of
happiness continues. Unfortunately there is no one remedy that has proven itself above all
others and much of the evidence is anecdotal [4]. If you are not one of my housemates
and do not wish to avail yourself of ‘the hair of the dog that bit you’, then the advice relates to broad
principles:
1) Pre-consumption: take in a carbohydrate and fat-rich meal. This will help to even
out the alcohol metabolism and protect the lining of the stomach, reducing
potential emesis and symptoms of gastritis.
2) Peri-consumption: attempt to steer clear of drinks with congeners (fermentation
by-products) such as whiskey, brandy, tequila and red wine. They can cause
particularly severe symptoms of hangover.
3) Pre-sleep: as much water as you can get in you and a prophylactic 1g paracetamol
PO.
4) Post-sleep: fruit juice and eggs, containing fructose and cysteine respectively. The
fructose increases the rate at which toxins are ‘mopped up’ and cysteine is
present in high amounts in the aforementioned glutathione.
5) Second sleep: the combined dehydration, hypoglycaemia and acetaldehyde toxicity
will most likely have precluded adequate deep sleep; after waking,
rehydrating and refuelling as above, a ninety-minute nap can be just the ticket.
This though is not always feasible when you have a 9 am or indeed an 8 am.
More recently there has been talk of hangover ‘magic bullets’. Regrettably I don’t think
these are anything to set your hair on fire about. Such tablets as ‘RU-21’ (antipokhmelin)
are said to have been developed originally for KGB personnel in the hope of preventing
morning-after symptoms. They contain mainly succinate and are theorised to work by
increasing cell metabolism and chelating congeners [5]. There have, however, been no peer-reviewed publications on their efficacy, so we are, at present, left to self-medicate.
Remember: the consumption of alcohol can create the illusion that you are tougher,
smarter, faster and better looking than most people.
References:
1. Swift, R. & Davidson, D. Alcohol Hangover: mechanisms and mediators. Alcohol
Health Res World 1998; 22(1);54-60
2. http://health.howstuffworks.com/hangover.htm
3. http://www.telegraph.co.uk/science/science-news/5118283/Bacon-sandwich-really-does-cure-a-hangover.html
4. Pittler, M.H., Verster, J.C., Ernst, E. Interventions for preventing or treating
alcohol hangover: systematic review of randomised controlled trials. BMJ 2005
Dec 24;331(7531):1515-18
5. http://www.ru21.co.uk/
The happy medium
How to rescue those at the edge of the BMI bell curve
Introduction
Walk through any city centre and you’ll see all sorts. Some will be thin, some will be fat,
most will be in the middle. This is the BMI bell curve. This article looks at those people
who have slipped off the peak of the bell curve and find themselves firmly wedged at
either end: the obese and underweight. It will then focus on how to get them back to the
happy medium.
The Obese
How fat is too fat?
How large do you have to be to affect your health? Well, not very according to the New
England Journal of Medicine [1]. In fact, even moderate elevations into the “overweight”
category (see Table 1) increase mortality and, once you get past a BMI of 27, your rate
of mortality increases linearly. Bearing in mind that 42% of British men were overweight
in 2008 [2], this is a worrying statistic. From a medical point of view, therefore, we
should not ignore patients who are simply a little podgy: they’re at risk too.
What will this cost?
Damage to the individual aside, what is the cost of obesity to society? Those with a BMI
over 35 cost, on average, 44% more than those of normal weight and, according to Klim
McPherson, the Oxford Professor of Health Epidemiology, over 50% of adults will be
obese by 2030 [3]. This will translate to a cost of £45.5 billion per year by 2050. The
limited resources of the NHS can little afford the heavy burden of the obese.
Awareness of obesity
The statistics show we live in an increasingly obese world, but are the public aware of it?
They should be, if media coverage is anything to go by. For example, using the Daily
Mail as the nation’s barometer for medical awareness, we searched their website for
‘obesity’. From the 994 articles found since January 2009, the public have learned a lot.
They now know that “toxic fat” will strangle their organs, that Carol Vorderman’s “fat
age” is fifty (higher than they would have expected, apparently), that fat camp saved a
five year-old girl from the horror of being “too fat to toddle”, and that they can become
“too fat to lock up” (subtitle: “43 stone man avoids jail for food scam”). Although these
high-calibre headlines rather undermine the seriousness of the obesity issue, they do raise
awareness of it.
Treatments
Obesity is therefore bad news, on the increase, and we’re aware of its dangers. So what
can we do about it? Since obesity is a population-wide epidemic, the most effective
interventions to reduce its prevalence must be population-wide. This is an area where
public health and government policy will have to make real strides in the next few years.
Nevertheless, much can be done at the individual level, from primary care to the
operating table.
1. Lifestyle modification
This is paramount. In primary care the key is to use a collaborative approach, get the
patient to understand the problem and be realistic. SMART goals provide a useful
framework for presenting patients with the reality of their situation (see Table 2). Those
of a more brutal nature could remind the patient that, in those over 40, obesity can
shorten life expectancy by seven years [4]. Morbidities may also be persuasive. Joint
pains? Losing weight will help. Snoring keeping the missus awake? Losing weight will
help. Sexual problems? That’s right, losing weight will help.
However, appreciate that getting the motivation to do this isn’t easy and it may take
several consultations until the patient’s agenda approaches your own. Even in patients
who seem totally uninterested in such advice, studies with smokers show that the
occasional reminder about an intervention may work as patients may suddenly become
receptive [5].
A sensible, balanced diet is evidently the first step and the NHS recommends at least 30
minutes of moderate exercise five times a week. However, suggest this to some patients
and they’ll look at you as though you’ve just asked them to hack off a limb. Judge your
audience, but it may be more sensible to start with “get off the bus a few stops early and
walk” and build on this.
Family support is vital and groups such as Weight Watchers can be a great help. In one
large RCT, Weight Watchers attendees lost 4.5 kg after one year and remained 3.0 kg
lower than baseline weight after two years. The self-help control group only lost 1.3 kg in
the first year and weight tended to return to baseline at two years [6].
2. Medication
If the above methods fail, medication may be used under strict guidance from NICE [7].
Orlistat is the only dedicated anti-obesity drug available on the NHS and has recently
become available over the counter. It works by inhibiting the action of lipase. Not only
does this prevent the absorption of dietary fats, but it also leads to unpleasant bathroom
experiences if the patient does have a fatty meal. It therefore directly induces weight loss
and discourages unhealthy eating. Worries remain about the absorption of fat-soluble
vitamins and the fact that patients may use it as a quick fix and bypass the vital lifestyle
modifications. This is an important point because, although medication does aid weight
loss, orlistat alone only leads to a reduction in weight of 5% more than placebo (not even
enough to meet U.S. pharmacological effectiveness targets). Orlistat plus lifestyle
change, however, leads to significantly greater, clinically effective, weight loss [8]. The
message is therefore to combine pharmacological treatment with lifestyle modification;
the intervention may help, but it is the patient’s own motivation to change their habits
which makes the difference.
Sibutramine, a centrally acting appetite suppressant which is actually more effective than
orlistat [8], was withdrawn from Europe in January 2010 due to cardiovascular concerns
[9]. According to a Food and Drug Administration whistleblower, it may be “more
dangerous than the conditions it is prescribed for”. It is still widely used in the U.S.
3. Surgery
The field of bariatrics (dedicated weight loss surgery) has grown enormously in recent
years and will continue to do so. Essentially, there are two types. One bypasses an area of
the digestive tract and one physically restricts (bands) the stomach to limit meal size.
Both have striking results, with weight loss of about 20kg, far exceeding medical
management [10]. However, with complication rates of around 40% and little known
about long-term outcomes, surgery is not an easy way out.
So that’s obesity. It’s a huge issue and we have many angles from which to attack it.
Worryingly, however, although obesity is on the rise, so is extreme thinness, and
increasing numbers of both women and men are seeking treatment for eating disorders
[11].
Extreme Thinness
How thin is too thin?
Each society and individual has their own, often evolving, interpretation. As an
illustration of western trends, 25 years ago fashion models were 8% thinner than the
average woman. Today that difference is almost 25% [12]. Currently, the NHS states that
‘underweight’ is below a BMI of 18.5, but ‘too thin’ is essentially whenever weight
impacts on health.
Is thin always anorexia nervosa?
Anorexia nervosa (AN) is characterised by deliberate weight loss that impairs physical or
psychosocial functioning. Bulimia nervosa (BN), on the other hand, is characterised by
repeated bouts of overeating and excessive preoccupation with the control of body
weight. Atypical eating disorders (AEDs) are a catch-all between the two. Importantly,
BN and AED sufferers are frequently not underweight, dispelling the myth that everyone
with an eating disorder is thin. In addition, remember that disorders such as
malabsorption, malignancy, illicit drug use, endocrine disturbances, chronic infection,
depression and schizophrenia can also lead to extreme weight loss. Therefore, while
thinness does not equal anorexia nervosa, it has to be on your differential.
Can we see it coming?
The SCOFF questionnaire (Table 3) is 100% sensitive for detecting eating disorders in
adult women [13]. However, while a memorable and useful tool, it can only be used to
raise suspicion rather than to make a diagnosis. Unfortunately, it does not help to predict
who will develop an eating disorder.
Why worry about thin people?
Although our physiology is labile enough to cope with mild fluctuations in calorie intake,
there are numerous complications of extreme thinness and these may be the presenting
feature. Examples include the young girl with poor dentition and lanugo hair on the
forearms who comes to you thinking she is pregnant after missing her period; the male
athlete brought to A&E after an episode of syncope who is found to have an electrolyte
imbalance and ECG abnormalities; an unexplained normochromic, normocytic
anaemia… the list goes on. Reassuringly, most medical complications of low BMI seem
at least partially reversible on recovery, although the risk of osteoporosis and osteopenia
often remains.
The Media
Media information is confusing: stories about the obesity epidemic contrast with rail-thin
images of celebrities in magazines. The arrival of the phrases ‘thinspiration’ and ‘pro-anorexia’ (pro-ana, for those in the know) compounds this. Putting the first into Google
brings up over 200,000 websites, which include pictures of both famous and ‘normal’
girls acting as ‘thinspiration’ to others. One site features Kate Moss apparently
announcing that “nothing tastes as good as skinny feels”. Another has one hundred top
tips giving advice on weight loss, including drinking ice cold water (the body burns
calories to get it to body temperature), cutting an apple into sixths and eating as three
meals, and taking caffeine tablets. Worryingly, I found my way to this information more
easily than I can find my way to parts of the West Wing.
Treatment
These influences aside, if an eating disorder is diagnosed then what can we do about it?
The majority of eating disorders are managed in the community, and this outline will
discuss the management of anorexia nervosa from this perspective.
Risks to physical and mental health should be assessed, and precipitating factors, such as
drug or alcohol misuse, should be removed before treatment. NICE suggests an average
weekly gain of 0.5-1.0 kg as an inpatient and 0.5 kg as an outpatient, plus vitamin
supplementation. If vomiting is prominent, advice about dental care is necessary as
stomach acid rapidly destroys enamel. While antidepressants such as fluoxetine have
been shown to reduce the frequency of binge eating and purging in BN, they are not
effective in AN.
A central issue in eating disorders is the over-evaluation of eating, shape and weight.
Cognitive analytic therapy (CAT), Cognitive behaviour therapy (CBT), interpersonal
psychotherapy (IPT) and family therapy may all reduce risk and facilitate recovery.
Cognitive behavioural approaches are useful to address not only the restrictive habits of
individuals but also bingeing where this is a feature [14].
Novel treatment approaches are always being considered. One small pilot study found that
treatment with ghrelin (a hunger-stimulating hormone) improved energy intake by up to
36% in AN, without any serious adverse effects [15]. Another interesting idea is the
potential role of afferent vagal hyperactivity in the pathophysiology of BN. Ondansetron
(a 5-HT3 antagonist) has been found to reduce binge/vomit frequency and increase
normal meal and snack intake [16]. This is thought to be due to normalisation of the
mechanism controlling meal termination and satiation, though further work is needed.
There are various new directions becoming available in the treatment of eating disorders.
However, as with obesity interventions, changing the way the patient thinks and
motivating them to change their habits is central.
Conclusion
Increases in both extreme thinness and obesity are distorting the BMI bell curve. There
are plenty of treatments we can offer to help our child who was “too fat to toddle” or to
save our ‘thinspired’ girls and get them back to the top of the bell curve. However, these
treatments are not going to solve the problem alone; they are simply a walking stick to
help the patient climb that curve themselves.
Jack Pottle and Felicity Rowley are third year graduate-entry medical students at
Magdalen College and Pembroke College, respectively
BMI (kg/m2)         Category
<18.5               Underweight
≥18.5 to <25.0      Normal
≥25.0               Overweight
≥30.0               Obese
Table 1. WHO BMI boundaries.
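Table 1 translates directly into a small classifier. This sketch computes BMI as weight in kilograms divided by the square of height in metres and applies the boundaries as printed, treating the bands as mutually exclusive (so the overweight band runs up to a BMI of 30).

    def bmi(weight_kg: float, height_m: float) -> float:
        """Body mass index: weight in kilograms divided by height in metres squared."""
        return weight_kg / height_m ** 2

    def who_category(bmi_value: float) -> str:
        """Classify a BMI using the boundaries in Table 1."""
        if bmi_value < 18.5:
            return "Underweight"
        if bmi_value < 25.0:
            return "Normal"
        if bmi_value < 30.0:
            return "Overweight"
        return "Obese"

    print(who_category(bmi(85, 1.75)))  # BMI ~27.8 -> 'Overweight'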
Goals should be:
S pecific
M easurable
A chievable
R ealistic
T imed
Table 2. SMART objectives in primary care.
The SCOFF questionnaire*
Do you make yourself Sick because you feel uncomfortably full?
Do you worry you have lost Control over how much you eat?
Have you recently lost more than One stone in a 3 month period?
Do you believe yourself to be Fat when others say you are too thin?
Would you say that Food dominates your life?
*One point for every "yes"; a score of 2 or more indicates a likely case of anorexia nervosa or
bulimia
Table 3. The SCOFF questionnaire.
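Since the scoring rule in Table 3 is simply one point per "yes", with two or more raising suspicion, it can be written down directly; as stressed above, this is a screening aid only, not a diagnosis.

    SCOFF_QUESTIONS = [
        "Do you make yourself Sick because you feel uncomfortably full?",
        "Do you worry you have lost Control over how much you eat?",
        "Have you recently lost more than One stone in a 3 month period?",
        "Do you believe yourself to be Fat when others say you are too thin?",
        "Would you say that Food dominates your life?",
    ]

    def scoff_score(answers: list) -> int:
        """One point for every 'yes' (True) answer."""
        return sum(bool(a) for a in answers)

    def scoff_flag(answers: list) -> bool:
        """A score of 2 or more suggests a likely eating disorder (screening only)."""
        return scoff_score(answers) >= 2

    print(scoff_flag([True, True, False, False, False]))  # -> True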
References
1. Adams, K.F., et al., Overweight, obesity, and mortality in a large prospective cohort
of persons 50 to 71 years old. N Engl J Med, 2006. 355(8): p. 763-78.
2. NHS Choices. Obesity: How common is obesity? (2010) [Online] Available from:
http://www.nhs.uk/conditions/obesity/pages/introduction.aspx [Accessed 05.04.10]
3. Department of Health. General Obesity Information (2009) [Online] Available from:
http://www.dh.gov.uk/en/Publichealth/Healthimprovement/Obesity/DH_078098
[Accessed 05.04.10]
4. NHS Choices. Obesity: Outlook? (2010) [Online] Available from:
http://www.nhs.uk/conditions/obesity/pages/introduction.aspx [Accessed 05.04.10]
5. Russell, M.A., et al., Effect of general practitioners' advice against smoking. Br Med
J, 1979. 2(6184): p. 231-5.
6. Heshka, S., et al., Weight loss with self-help compared with a structured commercial
program: a randomized trial. JAMA, 2003. 289(14): p. 1792-8.
7. NICE Guidelines. CG43 Obesity: quick reference guide (2006) Available from:
http://guidance.nice.org.uk/CG43/QuickRefGuide (Accessed 05.04.10)
8. Yaskin, J., R.W. Toner, and N. Goldfarb, Obesity management interventions: a review
of the evidence. Popul Health Manag, 2009. 12(6): p. 305-16.
9. European Medicines Agency. European Medicines Agency recommends suspension of
marketing authorisations for Sibutramine (2010) [Online] Available from:
http://www.ema.europa.eu/pdfs/human/referral/sibutramine/3940810en.pdf [Accessed
05.04.10]
10. Buchwald, H., et al., Bariatric surgery: a systematic review and meta-analysis. JAMA, 2004. 292(14): p. 1724-37.
11. Braun, D.L., et al., More males seek treatment for eating disorders. Int J Eat Disord, 1999. 25(4): p. 415-24.
12. Derenne, J.L. and E.V. Beresin, Body image, media, and eating disorders. Acad Psychiatry, 2006. 30(3): p. 257-61.
13. Morgan, J.F., F. Reid, and J.H. Lacey, The SCOFF questionnaire: assessment of a new screening tool for eating disorders. BMJ, 1999. 319(7223): p. 1467-8.
14. Fairburn, C.G., Z. Cooper, and R. Shafran, Cognitive behaviour therapy for eating disorders: a "transdiagnostic" theory and treatment. Behav Res Ther, 2003. 41(5): p. 509-28.
15. Hotta, M., et al., Ghrelin increases hunger and food intake in patients with restricting-type anorexia nervosa: a pilot study. Endocr J, 2009. 56(9): p. 1119-28.
16. Faris, P.L., et al., Effect of decreasing afferent vagal activity with ondansetron on symptoms of bulimia nervosa: a randomised, double-blind trial. Lancet, 2000. 355(9206): p. 792-7.
Bloodless surgery: treating the body and respecting the soul
The immense variety of different races, religious practices and beliefs that make up
Britain today is accommodated in various ways, from prayer rooms within the
workplace, to Halal alternatives in the school cafeteria. Should medicine follow suit and
adapt its practice to take into account a patient’s religious views? Since both religion and
medicine are concerned with the person as a whole (the words ‘healing’ and ‘holiness’
are both derived from the concept of wholeness), one could say that the medical
community is obligated to serve the needs of patients in a manner consistent with their
beliefs.
“This is the mistake some doctors make with their patients. They try to produce health of
body apart from health of soul.” Plato
Most medical professionals would not think twice about giving a patient a blood
transfusion to save their life, and equally most would accept blood if necessary. However,
there are over seven million people who would refuse a blood transfusion even if the
alternative was certain death. Jehovah’s Witnesses are a fundamentalist Christian
religious group who interpret the Holy Scriptures literally, believing that the bible forbids
the ‘eating’ of blood.
“Every moving thing that liveth shall be meat for you; even as the green herb have I
given you all things. But flesh with the life thereof, which is the blood thereof, shall ye
not eat” Genesis 9:3–5
The interpretation that the ban extends to blood transfusion is unique to Jehovah’s
Witnesses who carry cards stating ‘I direct that no blood transfusions be given to me ...
even if ... necessary to preserve my life or health’. Conscious violation of the doctrine is a
serious offense leading to organised shunning.
So what options does a Jehovah’s Witness have if they need a hip replacement or even
brain surgery? The answer is a technique developed in the 1960s known as bloodless
surgery, which encompasses a spectrum of strategies intended to minimize blood loss and
hence avoid transfusion. Pre-operatively patients are given erythropoietin to increase the
production of red blood cells, whilst intra-operatively, fluids such as Ringer’s lactate
solution maintain blood pressure, and anti-fibrinolytics reduce acute bleeding. In extreme
cases, litres of blood can be recovered from the patient by performing ‘autologous blood
salvage’, a procedure that recovers and cleans blood lost during surgery or trauma before
returning it to the patient.
Novel medical instruments that reduce blood loss during surgery have been crucial in the
development of bloodless medicine. The ‘harmonic scalpel’, an ultrasonically vibrating blade, can
simultaneously cut tissue and coagulate blood, thereby preventing bleeding, whilst in brain
surgery, a ‘gamma knife’ directs high-dose radiation to precise points within the head
through a specialised helmet.
There has been a recent surge of interest in bloodless surgery, not just within the
Jehovah’s Witness community, but amongst patients who fear contracting blood-borne
infections, or want to reduce their healthcare costs. With major operations such as liver
resections, hip replacements and brain surgery all being performed in a bloodless manner,
it appears that there are few limits to what bloodless surgery can achieve.
For a Jehovah’s Witness, bloodless surgery could determine the difference between life
and death. Most British hospitals today have chapels and on-call priests, indicating that
medicine is already conscious of religion and the impact that it can have on healthcare.
As society becomes ever more culturally and religiously diverse, medicine must evolve to
meet the needs of the patient’s body, whilst respecting their soul.
Annabel Christian is a fourth year medical student at Hertford College
Being superhuman
Rosie Ievins uncovers the world of prosthetics and the approach of the bionic man
The human body is the most technically advanced piece of machinery on the planet. It
has the ability to sense, move, process information, react, recover itself, learn, think,
reason and influence its surroundings. Furthermore, it may do this in conjunction with
other bodies, or articles external to itself, such as tools or other animals. Despite this, it is
still not perfect, and medicine is the art and science of travailing against the process of
death, disease and slow decay of the human body.
Technology has none of the above abilities. It remains rigid, does not respond except in a
pre-programmed fashion, does not seek out new information and has no impact on its
environment other than what is decided for it. However, technology has been
incorporated into the human body for as long as both have existed. Initially, this was in
the form of crude splints and sticks to aid healing and function. But with advances made
in physiology, neuroscience and nanotechnology, the possibilities of bionics are greater
than ever. And whilst restoring lost function is a noble aim, the potential to stretch our
capacity beyond known limits is enticing and ever closer.
Perhaps the most basic function of technology is to aid the natural healing process of the
human body. The body has great capacity to regenerate itself wholly appropriately after
injury. However, this process can prove painful, inconvenient, or incomplete, at least
without aid. The simple use of a splint can guide a healing leg as well as provide support
for mobilisation. The speciality of orthopaedics is dedicated to working in conjunction with
the body’s regrowth, using an ever-increasing set of tools to give a structure around
which bones may heal.
The regeneration game
Although this is the most common use, technology can do more than splint broken
limbs. In 2006, a prosthetic heart was implanted into a 14 year old girl who suffered an
acute viral myocarditis, in an attempt to carry her failing heart over while she waited for a
suitable organ for transplant. However, five months with the prosthetic heart gave her
own the chance to rest and recover. The prosthetic was then removed and she has not
needed a donor heart since [1]. It would take brave surgeons to pioneer the use
of prostheses to take the strain of organs while recovery occurred, but development of
further prosthetic organs may make this a realistic therapeutic option.
There are occasions when the body is not able to heal itself after injury or illness and
doesn’t return to optimal function. While this may be a normal part of ageing, if it comes
especially early and suddenly, the loss of function (or body parts) is debilitating and
changes the patient’s life for the worse. In these situations, technological advances are
invaluable. The applications of technology in this area are many and varied. They range
from the bar in an elderly patient’s bathroom to the electronic prosthetic limb of a soldier.
However, despite the wide range of complexities, they all have the function of helping
the patient to cope with their disability, either through giving them a new way of doing an
old task or through making a task easier.
The history of prosthetics for this purpose dates back to ancient times, with gold wire for
keeping artificial teeth in place found in Roman tombs [2]. Lower limb prostheses were
in use in Egyptian times [3], and in the 16th century, Gotz von Berlichingen of Germany,
among others, was reported to have had an iron hand [4]. Limbs and joints were the first
body parts to be imitated by prosthetics. However, science has recently been able to
mimic the role of other parts of the human body. It has already been noted that heart
function may be replaced, usually temporarily, although complete prosthetic hearts have
been on the market for six years [5]. Pacemakers and renal dialysis are more common
uses of technology in replacing lost function in the human body. While other artificial
organs are not currently available, research is ongoing into them. Artificial lungs,
pancreas and skin look particularly promising and may be available within the next ten
years.
As the most common use of bionics is for prosthetic limbs, research and development has
yielded many advances. But upper limb prosthetics in particular remain profoundly poor
and many users stop wearing their prosthetics after a few years as they become
accustomed to their disability, because they give little functional benefit and are difficult
to use. The complexity of the arm is the limiting factor in designing effective prostheses;
there are 22 degrees of freedom on the arm and most modern prosthetics have just three,
reducing the range of movement and tasks possible. However, a new arm, with 18
degrees of freedom, has been made [6], giving users greater control of fine movements,
such as picking up and using keys. Advances in electronics allowed the processing
required for the arm, both complex and fast, to be fitted into the space and weight of a
real arm. The “Luke Arm” (named after Luke Skywalker’s realistic prosthetic in the Star
Wars saga) gives users great control via a joystick-like interface.
Mind control
While this advance is significant, it does require the user to learn how to control the arm
in a new way. The next stage would be the development of an arm that is controlled in
the same way as a normal arm. There has been one case of reinnervation to control a
prosthetic arm [7]. In this case, intact median, ulnar, musculocutaneous and radial nerves
were transferred to a woman’s serratus and pectoral muscles along with two sensory
nerves. After recovery, a prosthetic arm was fitted using the targeted muscle sites. She
was able to control the arm ‘intuitively’ by thinking about the movements she wanted to
make. Touching the chest wall felt like her prosthesis was being touched. And most
importantly, she had much better control and function with the new arm than with a
conventional prosthesis. However, this is the only case where nerves have been
incorporated into controlling a prosthetic. There are sure to be further advances in this
area, but it is unlikely that these will become viable treatment options in the near future.
A less invasive approach to using the mind to control actions is using ‘brainwaves’.
Electroencephalography (EEG) traces have been ‘translated’ and used to control both
computer mice and wheelchairs, usually by the user imagining the direction in which they
want the pointer or themselves to go. EEG traces are recorded from the scalp, and the
components that most reliably correspond with the intended movement are extracted
and used to control the direction. Furthermore, as each user has different patterns, the
machines are able to learn these differences and adapt the movements accordingly. These
are major advances for patients with poor mobility, paralysis or quadriplegia. The gaming
industry is likely to drive the market, making mind-controlled computer mice more
affordable and achievable in our lifetime. Wheelchair controls have advanced to the point
where they can now be controlled in ‘real-time’, making them realistic for use in
everyday life.
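As a rough illustration of what ‘translating’ an EEG trace might involve, the sketch below compares mu-band (8-12 Hz) power over the two hemispheres and maps the result onto a left/right command. The channels, sampling rate and synthetic signals are all invented for illustration and do not describe any particular device.

    import numpy as np

    FS = 256  # assumed sampling rate in Hz

    def band_power(signal: np.ndarray, low: float, high: float, fs: int = FS) -> float:
        """Total spectral power of the signal between low and high Hz."""
        freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
        spectrum = np.abs(np.fft.rfft(signal)) ** 2
        return spectrum[(freqs >= low) & (freqs <= high)].sum()

    def decode_direction(left_channel: np.ndarray, right_channel: np.ndarray) -> str:
        """Toy decoder: imagining left- vs right-hand movement suppresses the mu
        rhythm (8-12 Hz) over the opposite hemisphere, so compare the two sides."""
        left_mu = band_power(left_channel, 8, 12)
        right_mu = band_power(right_channel, 8, 12)
        return "right" if left_mu < right_mu else "left"

    # Synthetic one-second traces standing in for two electrodes.
    t = np.arange(FS) / FS
    left = 0.2 * np.sin(2 * np.pi * 10 * t) + np.random.randn(FS) * 0.1
    right = 1.0 * np.sin(2 * np.pi * 10 * t) + np.random.randn(FS) * 0.1
    print(decode_direction(left, right))  # -> 'right' (left-hemisphere mu suppressed)

Real systems add the per-user calibration described above, but the overall shape of the pipeline, feature extraction followed by a mapping onto a command, is the same.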
Brain power
While restricted movement is one area where neuroprosthesis can help overcome
disability, there are other areas for application. Restriction of mental capacities is
arguably more debilitating for the sufferer than restriction of movement and currently
there is little medicine can offer these patients. Dementia is increasingly common,
especially with an ageing global population, and the burden grows each year, practically,
financially and emotionally for patients and carers alike. Although there are drugs
available, these alleviate symptoms and do not stop the underlying process of the disease.
More must be done to change this.
The application of bionic science to enhance cognitive function is a frontier that remains
to be crossed. However, the basic building blocks are already in use. Deep brain
stimulation (DBS), where electrodes are implanted into the brain to stimulate activity in a
certain region, is already in use for the treatment of movement disorders, such as
Parkinson’s disease and tremors. The logical progression is for electrodes to be targeted
towards areas associated with cognitive function; however, the ethical implications would
be immense. Other cognitive enhancements include using technology to stimulate
plasticity, the mechanism thought to be behind forming memories, and prosthetic
hippocampi are already in development in one lab [8]. However, we need to know more
about how memories are formed and stored for this to become a reality.
With these powers I could become a superhero…..
As well as restoring function, it is also becoming possible to enhance it; perhaps even to
superhuman levels. In 2008, Oscar Pistorius of South Africa, a bilateral below-knee
amputee, was barred from competing in the Olympic Games as it was ruled that his prosthetic legs
gave him an advantage over other competitors. This ruling was later overturned. He went
on to claim gold in the 100m, 200m and 400m sprints in the Paralympic Games. Other
organs that have been developed could be improved such that they extend the current
level of human function: joints with a greater range of movement, hearts that pump
greater quantities, haemoglobin-like substances that carry more oxygen than human red
blood cells. In the nervous system, applying bionic methods to normal, undiseased tissue
is conceivable. This would allow us to become superhuman, either through making us
faster or quicker at thinking or able to store more memories more easily. And advances
that these methods could bring will only open more doors into the unknown world of the
cyborg (a creature that is part human and part technology).
Currently, technology is only applied to the human body such that lost function is
replaced. Further advances, especially in the area of neuroprostheses, could provide hope
and the chance of a normal life for many patients where there are few current options for
treatment. This is mainly in the area of movement disorders, but might conceivably be
applied to cognitive problems in the future. Applying technology to normal tissue,
enhancing its function to become superhuman may also be possible with future
developments in different areas. We may soon find ourselves pushing the boundaries and
being forced to re-think just what it means to be ‘human’.
Rosie Ievins is a fifth year medical student at Corpus Christi College
References
[1] Holly Budd. First paediatric patient in Canada weaned from Berlin Heart after own heart recovers. (2007). [Online]. Available from: http://www.hospitalnews.com/modules/magazines/mag.asp?ID=3&IID=101&AID=1324 [Accessed: 2nd March 2010]
[2] Gold braces found on ancient Roman teeth. (2007). [Online]. Available from: http://www.mediacentre.gold.org/news/2007/05/15/story/6663/gold_braces_found_on_ancient_roman_teeth [Accessed: 2nd March 2010]
[3] Mary Bellis. The history of prosthetics. (2010). [Online]. Available from: http://inventors.about.com/library/inventors/blprosthetic.htm [Accessed: 2nd March 2010]
[4] The Iron Hand of the Goetz von Berlichingen. (2009). [Online]. Available from: http://www.karlofgermany.com/Goetz.htm [Accessed: 2nd March 2010]
[5] SynCardia Systems, Inc. Total Artificial Heart Timeline. (2010). [Online]. Available from: http://www.syncardia.com/component/option,com_arttimeline/Itemid,707/timelineid,1/ [Accessed: 2nd March 2010]
[6] Sarah Adee. Dean Kamen's "Luke Arm" Prosthesis Readies for Clinical Trials. (2008). [Online]. Available from: http://spectrum.ieee.org/biomedical/bionics/dean-kamens-luke-arm-prosthesis-readies-for-clinical-trials [Accessed: 7th March 2010]
[7] Kuiken TA, Miller LA, Lipshutz RD, Lock BA, Stubblefield K, Marasco PD. Targeted reinnervation for enhanced prosthetic arm function in a woman with a proximal amputation: a case study. Lancet 2007; 369: 371-80
[8] Theodore W Berger. Immortalized by science fiction writers in the past, the concept of using micro computer chips to repair damaged brain functions is no longer just intriguing reading. (2009). [Online]. Available from: http://www.neural-prosthesis.com/index-1.html [Accessed: 18th March 2010]
Finding a way out
Jack Carruthers examines potential advances in neurodegenerative disease
Millions of people suffer from diseases caused by one small, mutated amyloidogenic
protein. For most, this protein aggregation amounts to a death sentence. But
at last there is a glimmer of hope that effective new treatments for such diseases can be
found. The deposition of this insoluble fibrous protein has been implicated in some of the
most headline-grabbing diseases including Alzheimer’s, Parkinson’s and Huntington’s. In
the West, the prevalence of such diseases is predicted to increase dramatically over the
next few years, reflecting an ageing population. For example, there were 4.5 million
cases of Alzheimer’s disease (AD) in the United States in 2003. By 2053, AD is projected to affect
13.7 million people in that country [1]. There is good news though. Recent advances in
our understanding of amyloidogenic diseases have shed some light into the mechanisms
by which amyloid causes pathology. Such research will hopefully lead to better
treatments for diseases like AD.
Groping in the dark
The basic model for amyloid diseases is best demonstrated by the pathogenesis of AD. In
AD, a mutation in the gene encoding amyloid precursor protein, which normally forms a
ubiquitous membrane glycoprotein, leads to the production of the misfolded amyloid-β
protein (Aβ). This aberrant protein accumulates to form fibrils of twisted-paired ribbons,
and is the major component of plaques in the brains of Alzheimer’s patients [2]. Similar
mutations leading to the aggregation of misfolded proteins are implicated in other
amyloid diseases, including Parkinson’s and Huntington’s.
The mechanisms by which Aβ causes severe neurodegenerative memory loss and
dementia in the late stages of AD remain to be elucidated. Indeed, we seem to be groping
in the dark when we prescribe some of the current treatment options. Donepezil, an
acetylcholinesterase inhibitor, is prescribed worldwide to combat AD. However, there
is little evidence to suggest that donepezil alters the progression of the disease. Some
studies have shown that donepezil may slow the loss of hippocampal volume (hippocampal
atrophy is a marker of AD), but NICE withdrew its recommendation for the use of donepezil for
mild-to-moderate AD because there was no evidence of an improvement in functional outcome [3].
A light in the dark
Unsurprisingly, millions are spent each year on research into amyloid diseases, not least
because current treatment is so unsatisfactory. In order to develop an effective treatment,
we must understand how Aβ causes pathology. A recent break-through in our
understanding came with the utilisation of sensitive enzyme-linked immunosorbent
assays (ELISA), which showed that it was not the number of amyloid plaques that was
linked to neurodegeneration, but rather the amount of diffuse amyloid oligomers, an
intermediate in the formation of the fibril. This evidence is supported by the fact that
many elderly individuals show amyloid plaque formation in their brains without
developing symptoms of AD [4]. There also seems to be a relationship between the surface
area presented by amyloid deposits and their toxicity: amyloid plaques present far less
surface area than do oligomers. In cell-culture experiments involving
Huntington’s brain extracts, large plaques of huntingtin protein induced less cell death
than soluble oligomers [5].
But how does the accumulation of Aβ lead to the characteristic memory loss shown in
AD? Recent studies suggest that amyloid oligomers interfere with hippocampal long-term
potentiation (LTP), the means by which we form memory. In vivo micro-injections of
oligomers in rats’ brains disrupted learnt behaviour, inhibited LTP, and reduced the
density of dendritic spines of hippocampal pyramidal cells. These experiments also
highlighted the fact that amyloid plaques, whilst inert, can act as reservoirs of oligomers.
Injection of plaque material did not depress LTP, but did so once the material had first been solubilised to form
amyloid oligomers [6].
Focussing treatment options
Current advances have also shown how we can detect amyloidogenic-susceptible
individuals before they develop the symptoms of AD. Research conducted by Roney and
colleagues showed that polymeric nanoparticles, when associated with radiolabelled
amyloid-affinity quinolones, could act as molecular probes in vivo. Brain uptake of these
markers was significantly increased in mouse models of AD [7]. Such mouse models have been
invaluable in furthering research. Promising mouse models for AD have been developed
by injecting amyloid plaque-containing brain extract into the brains of mice [8]. These
models may allow researchers to elucidate exactly how amyloid oligomers cause neurone
death, i.e. whether it is through membrane permeabilisation or through binding to specific
receptors.
Such research has allowed for treatment options that target amyloid oligomers
specifically, and this may be the future of treatment. This includes immune manipulation
through vaccination with inert amyloid oligomers, or through use of anti-amyloid
monoclonal antibodies [9]. Phase II clinical trials have shown some promise though
nothing is conclusive as yet [10]. Destabilising amyloid oligomers is another potential
treatment route. Promising studies involve the drug Alzhemed, which
inhibits the interaction of Aβ with glycosaminoglycans (GAGs), a process believed to be
integral to the formation of amyloid oligomers. As a result of these findings, this drug is
in phase III clinical trials at the time of writing [11].
Walking into the light
It is important to consider the human face of amyloid diseases, especially when scientific
studies can seem remote and esoteric to the victims of these conditions. Alzheimer’s
patients have a mean life expectancy of just seven years from the time of diagnosis. This
period is characterised by insidious impairment of higher cognitive function, usually
accompanied by alterations in mood and behaviour. The prognosis of the disease is bleak:
progressive disorientation, memory loss, aphasia, profound disability, inability to speak,
and eventually death [12].
Other amyloid diseases like Parkinson’s are no less grim. In Parkinson’s, protein
accumulations lead to the formation of Lewy bodies and the death of neurones in the substantia
nigra. As a result, patients have difficulty initiating movement. Dementia, hallucinations
and, ultimately, death commonly follow [12]. Suicide is not uncommon among those affected, and the sacrifice
and pain faced by the relatives and carers of amyloid patients can never be overemphasised. Let us hope that scientific research hurries to develop effective treatments to
save the minds and lives of millions.
Jack Carruthers is a second year medical student at St. Hilda’s College
References
[1] Alzheimer’s Society. Home Page. Available from: http://alzheimers.org.uk
[Accessed: 14th February 2010].
[2] Irvine, GB, El-Agnaf, OM, Shankar, GM, Walsh, DM. Protein aggregation in the
brain: the molecular basis for Alzheimer's and Parkinson's diseases. Molecular Medicine
2008.
[3] Xiong, G, Doraiswamy PM. Combination drug therapy for Alzheimer's disease: what
is evidence-based, and what is not? Geriatrics 60 (6): 22–6.
[4] Naslund, J. et al. Correlation between elevated levels of amyloid beta-peptide in the
brain and cognitive decline. JAMA 283, 1571–1577.
[5] Schaffar, G. et al. Cellular toxicity of polyglutamine expansion proteins: mechanism
of transcription factor deactivation. Mol. Cell 15, 95–105.
[6] Shankar GM et al. Amyloid-β protein dimers isolated directly from Alzheimer's
brains impair synaptic plasticity and memory. Nature Medicine 2008; 14: 837–842.
[7] Roney, CA et al. Nanoparticulate Radiolabelled Quinolines Detect Amyloid Plaques
in Mouse Models of Alzheimer's Disease. International Journal of Alzheimer’s Disease
2009.
[8] Meyer-Luehmann, M. et al. Exogenous induction of cerebral β-amyloidogenesis is
governed by agent and host. Science 313, 1781–1784.
[9] Bard, F. et al. Peripherally administered antibodies against amyloid β-peptide enter
the central nervous system and reduce pathology in a mouse model of Alzheimer disease.
Nature Medicine 6, 916–919.
[10] Gilman, S. et al. Clinical effects of Aβ immunization (AN1792) in patients with AD
in an interrupted trial. Neurology 64, 1553–1562.
[11] McLaurin, J. et al. Interactions of Alzheimer amyloid-β peptides with
glycosaminoglycans effects on fibril nucleation and growth. Eur. J. Biochem. 266, 1101–
1110.
[12] Kumar, V. Amyloid diseases. In: Kumar, V., Abbas, AK., Fausto, N. editors.
Robbins’ Pathologic Basis of Disease (7th edition). London: Elsevier, 2004. p.p. 723 –
740.
Buzzing brains and spotless minds
Tom Campion ponders the mind, memory and whether we can just erase and rewind.
“Every man’s memory is his private literature” – Aldous Huxley
Ask the majority of people what defines them and what you eventually hit upon is their
memories, the blocks that build who they are. Our actions and thoughts define us, and the
record of our actions and thoughts is our memory. So it is not surprising that what
terrifies many of us is the confusion and black hole that follows when our memory is
impaired by enemies such as Alzheimer’s, traumatic injury, and the many other
pathological processes that affect our brains, especially as they grow older. This is picked
up in popular culture over and over again – the impact of electroconvulsive therapy
(ECT) in One Flew Over the Cuckoo’s Nest, for example, or the cyclical world inhabited
by Leonard Shelby in Memento. And yet, despite our fears, or perhaps because of them,
we are learning how to destroy our memories. The clinical reasoning behind this seems
clear. Conditions such as post-traumatic stress disorder (PTSD) and abnormal grief
reactions may benefit from selective destruction of memory, but if the possibility does
become a reality, how will it be used? Will the imagined paradigm of Eternal Sunshine of
the Spotless Mind, where people can be written out of your ‘private literature’, become
commonplace? And should we even be worried about this? Do you really need to
remember that time in primary school you wet yourself in the middle of assembly?
Nuts and bolts
First of all, how does it work? This is an enormous question and it would be impossible
to encompass all of it here. Instead I will focus on the areas where techniques to erase
memory may be targeted. The general consensus is that at the basic level memory is a
product of long-term potentiation (when two neurons are stimulated together, there is a
long lasting enhancement in signal transmission between them) and synaptic plasticity
[1]. This connection must then mature; after the initial ‘acquisition’ phase, there is a
‘consolidation’ phase of hours to days in which the association must be reinforced to
survive. Early induction is mediated by the expression of more excitatory receptors on the
postsynaptic membrane, and late by the synthesis of new proteins brought about by the
actions of several protein kinases, including MEK/MAPKK [1] (remember this, it’ll
come in handy later). There follows ‘reconsolidation’, when the memory is retrieved
much later and by being retrieved can be stabilized [2]. These processes appear to happen
primarily in the limbic system, notably in the hippocampal formation, although the extent
to which long term memory is farmed out to the neocortex is hotly debated.
Straightforward enough. How hard could it be to interfere with? A quick look at the
website for Eternal Sunshine’s fictional clinic, Lacuna Inc, is worryingly prescient [3].
Let’s have a look at what Tom Wilkinson might actually be peddling.
“The existence of forgetting has never been proved. We only know that some things
don’t come to mind when we want them” – Friedrich Nietzsche
Remember to forget
This is the equivalent of your doctor telling you to eat your greens and go running; it
probably works but you feel somehow cheated that it’s all you got. Research as long ago
as 2001 appears to show that if you concentrate on forgetting something, it does actually
go away [4]. The experiment was even done on students so we can be confident that it’ll
work on us. They were told to learn pairs of words that were loosely related, so they
could remember one when shown the other. After this, one group was told to ‘remember’
the words and the other was told to try the charmingly Orwellian-sounding practice of
‘no-think’, where they actively suppressed the memory. When the stimuli were given
again, and even with a financial incentive to get the answer right, the no-thinkers lost. So
thinking about not remembering helps you forget.
Taking the blue pill
Remember that bit about protein kinases? Joseph LeDoux (who apparently has a thing
about terrorizing rodents) and his lab were able to selectively remove the fear response
from one of two preconditioned stimuli using the MEK (MAPKK) inhibitor U0126 [5]. Having
conditioned a fear response in the rats from two different tones (I guess it’s better not to
ask), they injected the U0126 into the lateral amygdala (the part of the brain thought to be
responsible for fear conditioning) and then played the rats one of the tones. After 24
hours, this tone no longer evoked a fear response whilst the other did. The same was
found by another group using ZIP, an inhibitor of PKMζ (an isoform of protein kinase C),
only this time the beleaguered rodents (hopefully not the same ones) were
conditioned against a taste and the inhibitor was infused into the insular cortex, showing
that these effects are possible in the neocortex as well as the amygdala (and therefore
having more far-reaching implications for human memory) [6]. This was after three days
of training, so whether it would work in the longer term is uncertain as yet. However, as
with the method above, it does show the ability to be specific to one response, perhaps
giving the potential for selecting specific memories in the future. But then, it’s a long
way from rats to humans.
Be still, my heart
Or is it? This is perhaps the most interesting of the possible therapies because it actually
appears to work in humans. Propranolol, a beta-blocker, is hypothesized to work by
dampening down the emotional impact associated with a memory by blocking β-adrenergic
receptors in the amygdala [7]. In fairness, this in itself does not ‘delete’ the memory, but
without the emotional impact it is much less likely to be retrieved and in the context of
specific stimuli (for example arachnophobia) dulling the response would solve the
problem. It has been tested prophylactically: in one trial at Harvard in 2002, people from
the emergency department who had just been in car accidents were given either
propranolol or a placebo for 10 days after the event [8]. Three months on, none of the 11
given propranolol had a physical reaction to a script-driven recreation of the event,
whereas six of the 14 who had placebo did have a reaction. Sorted. This is currently
undergoing a major trial in veterans of the Iraq and Afghanistan wars suffering from
chronic PTSD, and if successful will go on to be tested against current best therapies for
PTSD [9].
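For readers curious how such a small pilot study can be weighed up, the minimal sketch below simply applies Fisher's exact test to the counts quoted above (0 of 11 on propranolol versus 6 of 14 on placebo). It assumes Python with the SciPy library is available, and it is an illustration of the arithmetic only, not a re-analysis of the trial.

```python
# Illustrative sketch only: Fisher's exact test on the pilot-trial counts quoted above
# (0 of 11 propranolol patients vs 6 of 14 placebo patients showed a physical reaction).
from scipy.stats import fisher_exact

#                  reacted  no reaction
propranolol_row = [0,       11]
placebo_row     = [6,        8]

odds_ratio, p_value = fisher_exact([propranolol_row, placebo_row])
print(f"two-sided p = {p_value:.3f}")  # roughly 0.02 for these counts
```

Even with only 25 participants, a difference this lopsided is unlikely to be chance alone, which is one reason the much larger trial in veterans mentioned above is worth watching.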
Imagine the scene: walk into a clinic room like any other, talk to the friendly physician
who, after consenting you, explains that you will be given a tablet and then taken through
the event in question. In practice, this involves writing out the event, forcing your
memory retrieval processes into action. But this time, oddly, your heart does not race.
Your palms do not sweat. You feel calm and detached. This is the process of
‘devaluation’, as your memory begins to lose its power over you. Repeated doses, and the
memory means less to you each time, until stimulating with a script (your experience
being read out to you) or recreation with sound have no effect. In Eternal Sunshine of the
Spotless Mind, the client is asked to bring in any possessions that may trigger a memory
to avoid reactivation. With this process you would simply lose the traumatic feelings
associated with any such objects, rendering them harmless again.
Through rose-tinted glasses
So that is it; an end to unwanted emotional memory. A brave new world in which you
remember just what you want to. Is it really that simple? In theory, there is no reason why
this shouldn’t be possible for everyone now; propranolol is a common, safe and easily
available drug, and I’m sure many doctors would be happy to prescribe it to protect
against PTSD if they don’t already. The only real issues are ethical: do we have a
responsibility to remember? Consider this famous passage from one of our favourite
femme fatales:
"Canst thou not minister to a mind diseas'd,
Pluck from the memory a rooted sorrow,
Raze out the written troubles of the brain,
And with some sweet oblivious antidote
Cleanse the stuff'd bosom of that perilous stuff
Which weighs upon the heart?" (Lady Macbeth)
Well, yes, we can. Mrs Macbeth was failed by her doctor, but should she be treated
today? In an age where a person is defined by their past, for better or for worse, the
potential of this kind of drug could be huge; swathes of guilt erased with a pill,
significant events snuffed out of existence. There is an argument commonly made that the
emotional after-effects shape who we are, and that to be able to get rid of them starts us
down a slippery slope which results in a global deadening of feeling. It just depends
whether or not you think this is a bad thing. And, let us be honest, there is too much
money to be made in ‘cosmetic memory’ to let ethics get in the way. Remember that time
you wet yourself in assembly? Not for long…
Tom Campion is a fifth year medical student at Wadham College.
References
[1] Cooke SF, Bliss TVP. Plasticity in the human central nervous system. Brain 2006;
129(7): 1659-1673
[2] Tronson NC, Taylor JR. Molecular mechanisms of memory reconsolidation. Nat Rev
Neurosci 2007; 8: 262-275
[3] Lacuna Inc. [Online] Available from: http://www.lacunainc.com/home.html.
[Accessed: 17 February 2010]
[4] Anderson MC, Green C. Suppressing unwanted memories by executive control.
Nature 2001; 410: 366-369
[5] Doyere V, LeDoux JE. Synapse-specific reconsolidation of fear memories in the
lateral amygdala. Nat Neurosci 2007; 10(4): 414-6
[6] Shema R, Dudai Y. Rapid erasure of long-term memory associations in the Cortex by
an inhibitor of PKM-zeta. Science 2007; 317: 951-3
[7] Kindt M, Vervliet B. Beyond extinction: erasing human fear responses and preventing
the return of fear. Nat Neurosci 2009; 12: 256-8
[8] Pitman RK, Orr SP. Pilot study of secondary prevention of post-traumatic stress
disorder with propranolol. Biol Psychiatry 2002; 51: 189-192
[9] Clinical Trials: PTSD symptom reduction by propranolol given after trauma memory
activation (2009). [Online] Available from:
http://clinicaltrials.gov/ct2/show/NCT00645450 [Accessed: 20 February 2010]
In vitro fertilisation: the next generation
In vitro fertilisation (IVF) is a revolutionary concept that has allowed many couples who
are infertile to conceive a child. As technical advances have developed, the procedure has
become more widely employed, and greater numbers of children are conceived using
such artificial techniques. Each individual IVF cycle is estimated to cost the NHS
between £4,000 and £8,000 [1]. Many are forced to have IVF privately, due to guidelines
governing eligibility on the NHS, which can make it even more expensive. In addition to
the economic cost, IVF could have biological, psychological and social ramifications for
future generations.
Drawbacks
Since the birth of the first IVF-conceived baby, Louise Brown, in July 1978, over three
million individuals have been conceived worldwide by IVF [2]. The number of IVF
individuals is set to increase and this could have considerable implications on Western
society and its healthcare systems. Individuals who have conceived using IVF may
pass on genetic conditions that prevent their offspring from conceiving naturally.
This could potentially lead to an altered gene pool, which could have wide ranging
ramifications. With more than 12,500 babies born in the UK each year through IVF, this
altered gene pool could have a deleterious effect on future generations, whilst incurring
significant costs to the NHS in medical care [3].
Studies have suggested that IVF children can be prone to developing certain cancers. IVF
children have been found to have up to a seven times higher chance of developing
retinoblastoma, a rare eye cancer, than those conceived by natural pregnancies [4]. Additionally,
children conceived with IVF have an increased probability of developing Beckwith-Wiedemann
syndrome, which causes excessive growth, kidney abnormalities and an
increased risk of tumours [4]. Although the absolute rise in such diseases is small, as the number
of IVF births grows, the burden, and the expense for healthcare systems, could become considerable.
From an evolutionary perspective, individuals unable to conceive would not reproduce.
However, IVF has sidestepped this obstacle, allowing otherwise infertile individuals to
reproduce and pass on their genes. There are many environmental reasons for infertility
but in some circumstances the source of the infertility can be genetic. Consequently, there
could be an increased likelihood that IVF children will be infertile themselves and
therefore have to consider fertility treatment. With each subsequent generation, more IVF
children will be born who may also need help conceiving. Providing fertility treatment to
an ever-growing number of IVF individuals could cost healthcare systems more and more
money.
IVF often leads to multiple births because multiple embryos are used to increase the
chances of conception. Multiple births can involve more complications. They can have
detrimental health effects on the mother, including depressive illness and gestational
diabetes. For the embryos, sharing the uterine environment can lead to low birth weights,
increased risk of infant mortality, and intracranial bleeding. In addition to these health
risks, multiple births have socio-economic costs. The cost to the NHS for the delivery of
a singleton is £3,313, whilst for a triplet delivery it is an astonishing £32,354. Multiple pregnancies
account for 56% of the direct cost of IVF pregnancies despite representing less
than a third of IVF births per annum in the UK [5].
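As a rough, back-of-the-envelope illustration of why multiple births dominate these figures, the sketch below uses only the two delivery costs quoted above and nothing else; it deliberately ignores antenatal care and neonatal intensive care, which in reality add further to the bill.

```python
# Back-of-the-envelope sketch using only the delivery costs quoted in the article;
# antenatal and neonatal intensive-care costs are deliberately ignored.
singleton_delivery = 3_313   # cost (£) of delivering a singleton
triplet_delivery = 32_354    # cost (£) of a triplet delivery

ratio = triplet_delivery / singleton_delivery
per_baby = triplet_delivery / 3

print(f"A triplet delivery costs {ratio:.1f} times as much as a singleton delivery")
print(f"Cost per baby: singleton £{singleton_delivery:,}, triplet £{per_baby:,.0f}")
```

On these figures alone, each baby in a triplet delivery costs roughly three times as much to deliver as a singleton, and the delivery as a whole nearly ten times as much.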
The unusual nature of their conception can lead to psychological difficulties for an IVF
child. Such a procedure can be difficult for a child to comprehend, particularly if there
are added unconventionalities such as surrogacy or sperm donation. It can be complicated
for a child to learn that they were carried by a different mother to the one that raised
them. Pregnancy creates an intimate bond between mother and child, and situations
where surrogacy is paid for can be seen as degrading this special relationship. Separation
for the surrogate mother is often hard and can result in legal difficulties and disputes.
Donation of sperm and eggs, when anonymous (if the child was born before 2005 in the
UK), can prove problematic when a family history is needed for medical purposes.
The issues of saviour siblings and designer babies are ethically contentious. Deaf couple
Sharon Duchesneau and Candy McCullough wanted to create a family where the
children were also deaf. They contacted potential donors themselves and chose a
candidate whose family was affected by deafness over several generations. As a result,
they conceived two deaf children [6]. Many would argue that it is unfair and cruel to
intentionally create a deaf child, although the parents argued that they were only mimicking
the offspring they would realistically have had were they able to conceive naturally.
The mothers of IVF children can also suffer side effects of such treatment as they can be
detrimentally affected by the necessary hormones taken prior to the procedure. One piece
of research found that infertile women who had taken fertility drugs had 2.7 times the risk
of developing ovarian cancer in comparison to those who had never taken these drugs [7].
The risks of developing other conditions such as placenta praevia, ovarian
hyperstimulation syndrome (OHSS) and ectopic pregnancies are also raised.
IVF gives older mothers the possibility of conceiving. IVF is only offered on the NHS to
women aged 23-39 years. Privately the maximum age limit is 50 years. In 1997, a 60
year-old woman, Liz Buttle, lied about her age, claiming she was 49 to receive IVF
treatment privately, and consequently became the UK’s oldest mother at the time [9]. The
oldest woman from the UK to conceive using IVF is Elizabeth Munro who received
treatment in the Ukraine and was 66 when she gave birth [8]. The older the mother, the
greater the risk of foetal complications. Some would argue that allowing ageing
mothers such as Elizabeth Munro to conceive is irresponsible as there is the chance that
older mothers will be less able to perform the demanding physical activities required to
raise a child.
Benefits
Recent advances in techniques such as pre-implantation genetic haplotyping (PGH)
facilitate screening for genetic disorders; if a deleterious genetic disorder is found, the
affected embryo can be discarded rather than implanted, reducing the disorder's frequency. However, there are substantial ethical
issues surrounding the selection and destruction of embryos.
IVF allows women to conceive later in life, which enables them to concentrate on their
careers whilst retaining the option to conceive. Proponents of IVF have argued that this
freedom to choose is beneficial for the economy [10]. It is worth emphasising that IVF
treatment does not have a guaranteed outcome, and the chances of success decrease with
age. Between the ages of forty and forty-two, the chances of success are 10.6% compared
with 28.2% for women under the age of thirty-five. In some countries birth rates have
fallen to 1.3 babies per woman, a value significantly lower than that required to maintain a
population. Denmark’s birth rate is being maintained with the help of artificial
reproduction technologies, which boost the younger population, helping to prevent the
burden of an ageing society [11].
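To put the per-cycle success rates quoted above into perspective, the short sketch below estimates the chance of at least one live birth over repeated cycles. It makes the simplifying (and optimistic) assumption that every cycle carries the same, independent chance of success, which is not true in practice, so treat it purely as an illustration of the arithmetic.

```python
# Illustrative only: cumulative chance of at least one success over repeated IVF cycles,
# assuming (unrealistically) a constant and independent per-cycle success rate.
def cumulative_success(per_cycle_rate: float, cycles: int) -> float:
    """Probability of at least one success in `cycles` attempts."""
    return 1 - (1 - per_cycle_rate) ** cycles

for label, rate in [("under 35", 0.282), ("age 40-42", 0.106)]:
    estimates = ", ".join(f"{cumulative_success(rate, n):.0%} after {n}" for n in (1, 2, 3))
    print(f"{label}: {estimates} cycle(s)")
```

Even on this optimistic model, an older woman's chance after three cycles only just matches a younger woman's chance after a single cycle, underlining why success cannot be taken for granted.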
IVF has provided happiness to thousands of couples around the world but the effects of
this selective procedure have not fully unfolded as it is relatively new. It will be
interesting to observe if IVF has any notable effect on our gene pool. In the meantime, all
we can do now is watch and wait as the potential consequences of IVF reveal themselves
to future generations.
Fact Box
• Louise Brown was the first successful IVF-conceived baby and was born on 25th
July 1978 [13]
• Each individual IVF cycle is estimated to cost the NHS between £4,000 and
£8,000, although costs can be higher if donor eggs or sperm are required.
• The oldest woman from the UK to conceive via IVF is Elizabeth Munro, who
received treatment in the Ukraine and was 66 when she gave birth.
• IVF is currently only offered on the NHS to women aged 23 to 39;
privately, the upper age limit is 50 years.
• Over 3 million IVF-conceived individuals exist worldwide and more than 12,500
babies are born in the UK each year through IVF.
Lisanne Stock is a first year medical student at St Anne’s College
References:
1. ‘Do I have to pay for IVF Treatment?’ (2008) [Online]. Available from:
http://www.nhs.uk/chq/Pages/889.aspx?CategoryID=54&SubCategoryID=127
2. The Centre for Human Reproduction. Mark Dixon (2009) [Online]. Available from:
http://www.centerforhumanreprod.com/pdf/pr_patients_entitled_maximal_050409.pdf
3. ‘IVF’(2008) [Online]. Available from: http://news.bbc.co.uk/1/hi/health/308662.stm
4. ‘Beckwith-Wiedemann syndrome and assisted reproduction technology.’ E R Maher
et al. (2003) [Online]. Available from: http://jmg.bmj.com/content/40/1/62.full
5. Ruth Deech and Anna Smajdor. Chapter 6, ‘From IVF to Immortality: Controversy in
the Era of Reproductive Technology’. Oxford University Press (2007)
6. Spriggs ‘Couple create a child who is deaf like them’ M J Med Ethics (2002)
7. John W. Malo ‘Ovulation induction Drugs risk Ovarian Cancer’[Online]. Available
from: http://www.ivf.com/ovca.html
8. Sarah Spain ‘66 year-old women is Britain’s oldest mother to be’ (2009) [Online]. Available from:
http://www.ivf.net/ivf/66_year_old_women_is_britain_s_oldest_mother_to_beo4188.html
9. ‘I had to lie about my age’ The Telegraph (2006) [Online]. Available from:
http://www.telegraph.co.uk/news/main.jhtml?xml=news/2006/05/05/nmumo5.xml
10. Human Fertilisation and Embryology Authority ‘How likely am I to get pregnant?’
[Online]. Available from: http://www.hfea.gov.uk/en/979.html
11. Rachel Nowak ‘More IVF keeps the birth rate up’ New Scientist, issue 2598 (2007) [Online]. Available from:
http://www.newscientist.com/article/mg19425984.500-more-ivf-keeps-the-birth-rate-up.html
12. ‘How IVF is performed’ (2008) [Online]. Available from:
http://www.nhs.uk/Conditions/IVF/Pages/How-is-it-performed.aspx
13. Ruth Deech and Anna Smajdor. Chapter 1, ‘From IVF to Immortality: Controversy in
the Era of Reproductive Technology’. Oxford University Press (2007)
All you need is love
Matt Harris ponders the importance of not losing that loving feeling.
“Nothing in the world is single,
All things by a law divine
In one spirit meet and mingle –
Why not I with thine?
And the sunlight clasps the earth,
And the moonbeams kiss the sea –
What are all these kissings worth
If thou kiss not me?”
Extract from Love's philosophy by Percy Bysshe Shelley
“I can feel my love for you growing stronger day by day
And I can’t wait to see you again
So I can hold you in my arms
Is this love that I’m feeling?
Is this the love that I’ve been searching for?
Is this love, or am I dreaming?
This must be love, ‘cos it’s really got a hold on me”
Extract from Is this love? by Whitesnake
What is love?
Most people have experienced love at some point in their lives, whether towards family,
friends, partners or all of the above. Love is an intensely personal and individual
experience and any attempt to define it would lead to both inaccuracies and inadequacies.
Suffice it to say that love is a complex state of emotions that can include contentment,
desire, lust, exhilaration, euphoria, jealousy, rage, and despair. There are clearly different
types of love, such as romantic and parental. According to Sternberg, there are as many
as eight types, based on different combinations and weightings of intimacy, passion and
decision-commitment [1]. However one defines love, most would agree that it is an
extremely powerful force that can drive individuals to acts both wonderful and terrible
alike.
How do we love? Let me count the ways!
Although love is traditionally the domain of artists and poets, science has begun to turn
its attention to love in recent years, particularly in the fields of psychology, neuroscience
and evolutionary biology. Recent advances in functional magnetic resonance imaging
have allowed neuroscientists to elucidate the neural correlates of love [2]. Such
experiments involve detecting areas of increased brain activity in response to a visual
input, for example, a photograph of a loved one. The brain areas involved essentially
constitute the reward system and include the medial insula, anterior cingulate, and
hippocampus [2]. These contain high concentrations of dopamine, which is released from
the ventral tegmental area and is experienced as a ‘high’. The same process happens when we
ingest opioid drugs, such as heroin. This ‘high’ reinforces the reward-seeking behavior
and soon we crave the ‘hit’ of our loved one. This similarity between love and drug
addiction should come as no surprise to us. The etymological derivation of ‘love’ actually
comes from words meaning ‘desire’, ‘yearning’ and ‘satisfaction’ [3]. It is a familiar
concept that has been used as a metaphor many times in popular culture, whether in Roxy
Music’s 1975 hit song Love is the drug, or Robert Palmer’s 1986 rock classic, Addicted
to love.
“Whoa, you like to think that you’re immune to the stuff, whoa yeah!
It’s closer to the truth to say you can’t get enough
You know you’re gonna have to face it, you’re addicted to love!”
And who could forget Seal’s Kiss from a rose? “Love remained a drug that's the high and
not the pill…to me you're like a growing addiction that I can't deny.” Quite right, Seal. In
addition to a rise in dopamine, love also involves a fall in serotonin to levels commonly
found in patients with obsessive compulsive disorder [2]. Love, it seems, is not just an
addiction but a form of mild insanity, which would explain romantic notions such as ‘the
madness of love’. Furthermore, when in love, increased activity of the reward system is
associated with decreased activity of regions in the frontal cortex involved in negative
emotions. This leads to a relaxation in critical judgment towards a loved one that we
would not normally extend to others [2]. Could this be the neural basis of the notion that
‘love is blind’?
Why do we love?
There is a spectrum of mating systems in the natural world ranging from polygamy (more
than one mate; promiscuous) to monogamy (only one mate; devoted). One would expect
polygamy to be a more successful strategy, as it leads to a greater number of reproductive
encounters. In such transient couplings, there is no purpose for individuals to be bound
together by love. However, in certain animal societies, particularly birds, monogamy is
more successful, allowing males and females to pool their resources when raising young,
and affording them better reproductive success. Interestingly, studies in prairie voles have
shown that a single gene related to vasopressin may determine whether a species is
polygamous or monogamous, and that by manipulating this gene, slutty little voles can be
transformed from promiscuous to devoted. However, before pairing, monogamous
animals are faced with the dilemma of who to take as a mate. Once chosen, they will
commit a significant amount of time and resources to this mate, so the decision is an
important one. One of the most convincing theories for the evolution of love suggests that
love has evolved to solve this problem of commitment [4]. If we detect love in a potential
mate, we know that they will be committed to us, that they will not leave us in sickness or
in health, for richer or poorer. Indeed many signs of love involve acts of self-sacrifice.
Love may also be the reward we experience when the commitment problem is solved,
which ties in with the neural reward circuits discussed above. However, the commitment
may not necessarily be for life. If love is a drug, once the drug wears off, the love has
gone.
Just like any other strategy, love is open to exploitation by cheaters. Men may deceive
women about the extent of their feelings in order to gain short-term sexual access [4].
Result! In response, women have evolved defence mechanisms against such deception,
by imposing a longer courtship process before consenting to sex and developing a
superior ability to detect nonverbal signals [4]. Denied!
Another emotion that co-evolved with love is jealousy [4]. At first glance, one
might consider jealousy to be the opposite of love, but we must not confuse it
with hate. In fact jealousy is a component of the deepest love. If one does not feel
jealous when they discover that their lover is seeing someone else, how can they
be in love with them? It is likely that jealousy evolved to guard against loss of
love to a rival [4]. Many signs of jealousy are simultaneously acts of devotion and
vigilance, for example, unexpectedly dropping by to visit a partner. Paradoxically,
jealousy can rip apart even the most harmonious relationships. At its worst, it can
lead to murder. But why would such a behaviour evolve? If a cuckolded man kills
his unfaithful lover, he wipes out the potential to have further children by her,
halves the care that any existing children will receive, and risks retribution from
others close to her. In practice, it is the younger, healthier and more attractive
women who tend to be murdered by their husbands for infidelity. This is because
the loss to the cuckolded man is compounded by the simultaneous gain to his
rival. This explains why we feel jealousy specifically in the context of loss of love
to a rival as opposed to loss of love in general, for example, when death takes
them from us or when circumstances change such that the relationship is no
longer compatible [4].
This illustrates the eternal struggle between the sexes. Love is not truly altruistic.
Although love involves self-sacrifice, this is repaid by the reproductive rewards.
Individuals do not consider ‘the good of the species’, only themselves. Both males
and females desire certain goals from a pairing, but often these goals are not
compatible. When a couple’s goals are in line with each other’s, they will
experience a loving relationship, but once they start to diverge or clash, that love
will die.
In opposition to this, one might claim that love occasionally involves the ultimate
sacrifice, where an individual gives their own life to save that of a mate. One might
wonder how such behaviour could evolve and the author would like to offer the following
possible explanations. Firstly, it may be that only older individuals will engage in this, by
which time they have mated and passed on their self-sacrificing genes. Secondly, it may be
that their mate can provide better care than themselves to any existing children they have.
Finally, it may be an oddity of human society that has evolved due to its insulation from
natural selection.
The healing power of love
Many references have been made in popular culture to the healing power of love and sex.
In his 1982 soul classic Sexual healing, Marvin Gaye tells the listener that he became
“sick this morning” and that “the love you give to me will treat me”, emphasising that
“sexual healing, baby, is good for me, sexual healing is something that’s good for me”.
He says that he can “get on the phone this morning and call you up baby” (NHS Direct
provides a variety of services both online and over the phone) and that it “helps to relieve
the mind” (clearly there are benefits for mental health as well). He goes on to state that
“you’re my medicine, open up and let me in” (patient compliance with any prescribed
medication is an important consideration) and that “soon we’ll be makin’ it honey, oh
we’re doin’ fine” (prognosis following treatment is good). Finally he mentions that “I
can’t wait for you to operate”, although the indications for this are not explained.
In the short-term, ‘acute’ love can be quite stressful, leading to symptoms such as
sweating, palpitations, and even diarrhoea and vomiting. However, in the longer-term,
love can be a powerful coping mechanism, leading to states that are anxiolytic, stress-relieving and health-promoting [3]. So will love ever be prescribed in the future? Will
Mrs Smith find a hefty dose of love at the top of the management plan for her
osteoarthritic hip? Of course not; the notion is utterly preposterous. Then again, is
anything off limits to complementary medicine?
Conclusions
That love has evolved demonstrates that the neural circuitry to love exists in every human
being on Earth. Whilst it is feasible, and tragic, that an individual might go an entire
lifetime without ever having experienced love, the potential to love is there. Throughout
history, various societies have attempted to ban love, claiming it to be undignified or
disguised lust, for example, the Mormons [4]. All such attempts have been monumental
failures, which is a testament to the raw power and universality of love. In the words of
Huey Lewis and the News:
“You don't need money, don't take fame
Don't need no credit card to ride this train
It's strong and it's sudden and it's cruel sometimes
But it might just save your life
That's the power of love”
Matt Harris is a third year graduate-entry medical student at Magdalen College
The author would like to thank Dame Helen Mirren in Calendar Girls and Sue Barker
MBE on A Question of Sport for teaching him all about the true meaning of love.
References
[1] Sternberg RJ. A triangular theory of love. Psychological Review 1986; 93: 119-135
[2] Zeki S. The neurobiology of love. FEBS Letters 2007; 581: 2575–2579
[3] Esch T, Stefano GB. The neurobiology of love. Neuroendocrinology Letters 2005;
3:175-192
[4] Buss DM. The evolution of love. In: Sternberg RJ, Weis K, editors. The new
psychology of love. Yale University Press; 2006. p. 65-82.
Love is the drug
Oxytocin. Noun.
1. A hormone that is released from the posterior pituitary gland in the brain. Its
principal roles are to stimulate uterine contractions and milk ejection during
labour and lactation, respectively.
2. ‘The cuddle hormone’. Creates feelings of love and attachment between couples,
parents and their babies, and can increase the propensity to trust strangers [1].
3. The altruistic hormone. Its levels correlate with feelings of empathy and
compassion.
Touch me, baby!
Everyone has realised the importance of human touch at some point in their lives. Babies
need it, grandparents thrive on it and bankers are more likely to lend money when they
get it. This is because touch boosts the oxytocin response and increases feelings of
emotional attachment in both the giver and receiver by activating reward pathways in the
brain [2]. In addition, regularly boosting the oxytocin response ensures that it is
optimized and ready to fire when required later in personal relationships with a partner or
child [3], leading to an improved love life and better parenting skills. This means even
the quintessential ‘tough guy’ from Slough should hug people occasionally.
If hugging feels like too deep an intrusion of personal space, there are other things that
can be done to demonstrate warmth and raise oxytocin levels, such as relaxing the arms
and exposing the ventral side of the body. Turn your palms to face strangers when you
meet them and you’ll leave them feeling well disposed, and more likely to become a
friend. The flip side to this is that oxytocin levels and the associated feelings of love or
attachment increase according to the amount of touch received. This is why you might
feel incredibly attached to a tactile individual even if you haven’t known them for long.
The cuddle hormone can definitely complicate sex. Highly dedicated research teams
locked couples in a dimly lit room and politely requested that they procreate whilst taking
blood samples before, during and after the process to measure oxytocin levels. They
found that a surge of oxytocin can be released during orgasm, and then questioned their
subjects on their views regarding their relationship using the Adult Attachment Scale.
They concluded that the oxytocin surge can create stronger feelings of love in a woman
than in a man, possibly because women have more responsive oxytocin receptors [4]. The
woman may therefore end up far more attached than she would like in the morning. For
this reason, women who work as prostitutes would be safer with a lower baseline level of
oxytocin - they run the risk of becoming emotionally attached to their clients if their
oxytocin response is too strong [5].
Hold me, baby
Oxytocin stimulates uterine contractions and milk ejection during labour and
lactation, respectively. Given this involvement between a mother and her baby, it will
come as no surprise to learn that the stimulus of a baby’s smile also produces a strong
oxytocin surge in the mother. This creates a general urge towards loving which is focused
specifically at the baby due to the simultaneous release of another hormone, prolactin,
which is required for milk production.
Other hormones aid this newfound motherly love, most notably dopamine and endorphins
[5]. Dopamine is released in the limbic and reward pathways of the brain and rises during
the anticipation of a reward. In this case, dopamine levels rise in the mother’s brain just
before she knows her little darling’s toothless grin is about to come out, and together with
oxytocin, reinforces the rush of love she feels for her baby. How cute is that?
Recent research suggests that the strength of the mother-child bond can be predicted by
measuring the levels of oxytocin during pregnancy [6]. Mothers with higher levels of
oxytocin during the first trimester show greater bonding behaviours after birth, such as
constant affectionate touch and a focused gaze on the baby. So it would seem that some
women are naturally more ‘motherly’ than others and it reiterates the idea that boosting
the oxytocin response earlier in life by cuddling others can make you a better parent. This
could have massive ramifications for a child’s development and hence the rest of their
life!
Survival of the most compassionate?
Oxytocin has a lesser-known reputation as ‘the altruistic hormone’ because its levels are
thought to correlate with feelings of empathy and compassion. A number of studies have
shown that individuals with a particular variant of the oxytocin receptor gene are better
able to read the emotional state of others and make more altruistic decisions in difficult
scenarios. In one experiment, subjects with this variant were asked to answer a set of
questions involving moral dilemmas such as “If you could save one family member from
death by letting five innocent people die, would you do it?” [7]. When compared with a
control group, it was found that the group with the oxytocin receptor variant were
significantly more likely to save the five innocent people.
Given natural selection’s preference towards ‘survival of the fittest’, it’s interesting that
this variant persists. Does this imply that there are survival benefits to being altruistic?
Perhaps compassionate people command more respect over others, leading to better
interpersonal relationships and eventually reproduction. On the other hand, people who
have this variant of the gene may, by being more compassionate, be exploited by others,
conferring a survival disadvantage! Either way, if you know someone who is incredibly
generous, they’re likely to have high oxytocin responses. Make the most of it as they
can’t help being generous and probably rather like it anyway.
Oxytocin can make you feel affectionate to strangers, fall in love, bond with your baby
and reveal your Samaritan tendencies. It’s a powerful but potentially dangerous hormone.
Treat with caution: love is the drug.
Tanya Deb is a fourth year medical student at Green Templeton College
References
1. Uvnas- Moberg K. ‘Oxytocin may mediate the benefits of positive social
interaction and emotions’ Psychoneuroendocrinology 1998 Nov; 23(8):819-35
2. Komisaruk BR, Whipple B ‘Love as sensory stimulation: physiological
consequences of its deprivation and expression’ Psychoneuroendocrinology 1998
Nov; 23(8):819-35
3. Cushing B.S., Carter C.S. ‘Prior exposure to oxytocin mimics the effects of social
contact and facilitates sexual behaviour in females’ J Neuroendocrinol 1999
Oct;11(10):765-9
4. Carmichael M.S., Humbert R., Dixen J., Palmisano G., Greenleaf W., Davidson J.
‘Plasma Oxytocin Increases in the Human Sexual Response’ Journal of Clinical
Endocrinology & Metabolism Vol. 64, No. 1 27-31
5. Insel, T.R. ‘Is social attachment an addictive disorder?’ Physiol Behav. 2003
Aug;79(3):351-7
6. Lothian, J. ‘The birth of a breastfeeding baby and mother’ J Perinat Educ. 2005
Winter; 14(1): 42–45.
7. Zak P.J., Stanton A.A., Ahmadi S. ‘Oxytocin increases generosity in humans’
PLoS ONE. 2007; 2(11): e1128.
8. http://www.verolabs.com/ (peruse for your own amusement)
Spirituality and healthcare: a match made in heaven?
There is mounting evidence that ‘having faith’ may directly benefit people’s health, and
spirituality is increasingly capturing the interest of psychiatrists as an adjunct to therapy
for mental illness. Specifically, mindfulness-based cognitive behavioural therapy
(MBCBT), rooted in Buddhist meditation practice, reduces the severity of symptoms [1]
and recurrence rates in major depression [2]. In light of this, the Mental Health
Foundation advocates the prescription of NICE-approved MBCBT in its recent Be
Mindful campaign [3]. Given this, could religion have a place in healthcare too?
Interestingly, patients who pray regularly have better ‘global functioning’ after cardiac
surgery, while patients who turn to emergency prayer during or afterwards do worse [4].
It is clear that spirituality has undeniable potential to help certain patients cope with
illness, even if acting as a stress-reducing “placebo” [5]. In addition, prescriptions for
anti-depressants have doubled in the last decade [6]; this highlights the need for a more
cost-effective approach, which spiritual intervention could provide. Importantly, this
potential will not be realised if spiritual therapies are not discussed with patients at the
point of access. Is the message filtering through? Recent figures show that only one in
twenty GPs prescribe meditation [3] and few would be willing to discuss faith with their
patients. A solution could be the adoption of a spiritual history, whereby patients are
routinely questioned about their thoughts on spirituality. Whether this would lead to a
spiritual intervention such as MBCBT or not, it would allow doctors and patients alike to
consider the profound effect that spirituality can have on health.
Lucy Farrimond is a fourth year medical student at Keble College
References
[1] Barnhofer, T et al, Mindfulness-based cognitive therapy as a treatment for chronic
depression: A preliminary study. Behaviour Research and Therapy (2009) 47: 5, 366-373
[2] Teasdale JD et al, Prevention of relapse/recurrence in major depression by
mindfulness-based cognitive therapy. Journal of Consulting and Clinical Psychology.
(2000). 68(4), 615-623.
[3] Mental Health Foundation. Be Mindful, (2010) [Online]. Available from:
http://www.bemindful.co.uk/ [Accessed: 14th March 2010].
[4] Ai AL et al. Depression, faith-based coping, and short-term postoperative global
functioning in adult and older patients undergoing cardiac surgery. Journal of
Psychosomatic Research 2006; 60: 21–28
[5] Dawkins, R. The God Delusion (1st edition). Bantam Press; 2006.
[6] Prescription Costs Analysis, NHS Information Centre, 2009. Reported in Mental Health
Foundation. Be Mindful (2010) [Online]. Available from:
http://www.bemindful.co.uk/ [Accessed: 14th March 2010].
Stepping off the treadmill
Lance Rane looks at what to do if you decide medicine isn’t for you.
Training for medicine is a long, demanding process. As students, many of us will come to
question our commitment at one time or another. For some, a sense of disillusion
remains. As careers go, there are few that require such reserves of motivation and drive,
yet we are asked to commit at an age when we are still maturing and our personalities
still developing. What seemed like the right thing to do as a 17 year old can come to feel
like a naive mistake for the disheartened student facing several unhappy years at medical
school. For the de-motivated, medicine rapidly becomes an alienating slog. Doubt can be
compounded by a feeling of helplessness; for such an important problem, it may seem
that there is remarkably little support available, indeed that there is very little recognition
of the problem at all. For students who find themselves in such a situation, what is the
best thing to do?
Reflect and analyse
First and foremost, it is essential to analyse carefully why you have fallen out with
medicine. The purposes of this reflection are twofold; firstly, it may reveal that your
reasons are remediable, and secondly, introspection of this sort is crucial to informing
your decision about the type of career that you’ll eventually enter, whether that be within
or outside of medicine.
Perhaps things will be different once you’ve qualified. Perhaps it is only a certain aspect
of medicine that is the problem. As students we are only properly exposed to a regrettably
small number of specialities. It is easy to forget that in the myriad different paths that any
particular medical career might take, it is almost inevitable that you will find satisfaction
in some area. Books like the ubiquitous So You Want to be a Brain Surgeon?, by Eccles
and Sanders, can be worthwhile sources of information on medical specialities. If you’d
like to explore further there is an online psychometric instrument, Sci 59, available in the
Cairns Library in the JR, which helps students identify the specialities to which they
might be best suited.
With regard to choosing a career to which you are suited and will be content, it is clear
that you must know yourself, your strengths and your weaknesses equally, if you are to
make the most of your potential. To this aim the Medical School is an excellent source of
support. Dr Peggy Frith in particular is an exponent of sound careers advice, and the first
port of call for those with worries. The fifth year appraisal is an especially useful
opportunity for voicing any concerns.
For those with persisting doubts it is a good idea to visit the Careers Service to discuss
your problems and explore your options early on. Located on Banbury Road, their offices
house a plethora of invaluable resources; books about self-assessment and changing
career, information on all of the main employment sectors and personal advisors on hand
to answer your questions. Appointments can be booked in person or online. Claire
Chesworth specialises in advising on medical careers. Medicine is a tough slog for the
de-motivated, but as Claire is quick to point out, it is almost invariably worth completing
your training. A medical degree shows commitment and opens up all kinds of
opportunities, both within and outside of the profession. It also allows you to come back
to the subject later, if you so wish. Nevertheless, if you are at all considering leaving, you
should explore your options early on.
Transferable skills
There are of course huge psychological barriers to leaving medicine. However, the oft-perceived notion that a ‘vocational’ training in some way limits your career options is
largely untrue. According to Claire Chesworth, some 40% of graduate jobs are open to
any degree discipline. As medics, however, we enjoy a particularly privileged position;
we have an extensive set of transferable skills and are widely employable. Problem-solving, leadership, communication skills and team-working ability, all important
components of the doctor’s repertoire, are widely sought out by potential employers.
Skills in reading and critically appraising evidence are transferable to a multitude of
employment sectors. An Oxbridge training brings the particular advantage of a strong
scientific background and with it greater prospects in a range of science-based fields.
Browsing the jobs sections of the BMJ and www.nhs.co.uk will give some idea of the
range of different jobs available to the medically qualified. The website
www.prospects.ac.uk has a useful job-exploration tool for helping decide on the types of
careers to which you might be best suited (see www.prospects.ac.uk/links/pplanner).
Many employers will be sympathetic to your change of heart. Banking and management
consultancy are invariably popular with medics, with employers attracted by such skills
as leadership and the ability to work effectively as part of a team. Most areas of the
scientific sector are open to those who hold a science BA. Then of course there are a
multitude of non-clinical roles for which a medical degree is either desirable or indeed
essential. And if you really are set on something completely different, it is never too late
to retrain, but you must have firm ideas of what you want to do. To this aim, speaking to
people in the profession and gaining work experience is essential. The stories of those
interviewed here are illustration enough of the fact that your medical training need not be
prescriptive of your eventual career.
Final advice
Too often medical students feel an expectation of unthinking commitment and servitude to
medicine. Such an attitude is damaging and does not acknowledge
the reality that our motivations are often tried. If you are having doubts, finish your
training, but in the meantime pursue your interests, become familiar with your strengths,
weaknesses and motivations, and do not be afraid to reflect and consider your options.
Self-assessment of this sort is healthy; indeed it is crucial to gaining the most out of
yourself and your prospective career. And if you do decide to stick with medicine, you
will be no lesser a doctor for it.
Lance Rane is a fifth year medical student at New College
References
Houghton, A. Should I give up Medicine? Student BMJ June 2009
Interview: the banker
Name: Wished to remain anonymous
Age: 28
Medical School: Oxford University
Stage of dropout: ST1
Current occupation: Investment banking with a focus on drug companies
Why did you move away from medicine and into your current role?
Number one, the NHS annoyed me: the way it was mismanaged. Number two,
Modernising Medical Careers; they were creating a second tier, a junior consultant grade,
which I didn’t want to become. The other side is I actually like science, and combining
science and medicine is hard to do well. With this job I get to do lots of science, but you
can give your opinion.
Is there anything else that drew you to your current role?
I guess there’s this presumption that you get compensated better. It wasn’t actually
money, because I was offered the same amount. It was more about the linking between
performance and some kind of appreciation. In the NHS you are not rewarded in
accordance with your performance. In this job if you perform, you get paid and you get
promoted and you get recognised. But if you don’t perform you get kicked out.
How did you find the transition?
Bloody hard. What I’m doing now, combining studying for a new qualification with long
hours, makes for a very difficult life. Anyone who thinks that the grass is greener needs
to think long and hard about it, rather than thinking it’s going to be rosy.
So there wasn’t a prerequisite for a financial grounding?
No there wasn’t, what’s more important is that someone can learn, is receptive and has a
genuine interest in the job. You can learn the finance on the job.
How have you found that your medical training has helped you?
Sticking power and the ability to work unsociable shifts. Also being able to talk to
people; bedside manner is key to winning clients.
What do you think are the advantages and disadvantages of your new job compared
with medicine?
The disadvantages include insecurity: there comes a point when even the best guys get
fired. And the hours. It’s a lot more difficult in that fewer people make it. In medicine
everyone makes it to consultant. The upsides would be creativity and you’re influencing
what’s happening in the world of medicine. Pay as well; there is the potential to be paid
more but even that is not guaranteed.
Any regrets?
I do think now and again, maybe I should have carried on. I jumped off at the worst
point. And now I’ve come to this and I’ve been at the worst point in this job all over
again. But I think I would regret it if I hadn’t taken that opportunity. So it works both
ways.
Do you have any advice for someone thinking about doing the same kind of thing?
Find out what the job involves. Keep your eyes open to the pros and cons - the grass is not
always greener. Do an internship to really get an idea.
Interview: the actor
Name: Prasanna Puwanarajah BA BM BCh
Age: 28
Medical School: Oxford University
Stage of dropout: After F2
Current occupation: Combining acting, writing and directing with locum work
What projects are you currently working on?
A lot of stuff, mainly writing, acting, directing. I did a lot of acting as a student and then
as a junior doctor, I just never stopped it. At the moment I’m doing 90% arts to 10%
locums.
How did you find the transition in ‘stepping away’ after F2, and did your medical
training help you in any way?
It’s a very peculiar industry to get established in and the difficulty was wondering when I
was going to get back into medicine. So in a way [my training] was a bit of a hindrance
as opposed to something that really helped. But there are a lot of things you can bring to
bear as a director and an actor that you may have picked up from some work in medicine.
But then that’s always the case I think; there’s always material you can draw on. So yes
and no.
How have you found combining the two careers?
It’s brilliant, the one informs the other, the one is a break from the other. I think spending
your life turning one wheel can get very ‘samey’ and you can forget why you wanted to
do something in the first place.
Do you see any particular advantages or disadvantages in comparing your two
professions?
There are wonderful things about medicine that you don’t get in any other profession, and
similarly as an actor or a director there are extraordinary moments of creativity and
excitement. And [acting] is much more autonomous. You navigate your own path, but the
caveat with that is professional insecurity. You have to work hard at getting yourself
work.
Do you have any regrets?
Not at all. The art is really quite hand to mouth - you never really know what projects are
potentially there and finding money is a perpetual headache. But no regrets at all. It’s
always better to risk regretting things that you’ve done than to regret things that you
haven’t.
Do you have any definite plans for the future?
I’m not tying myself to any particular timescale. The key thing about being an artist is not
to worry too much about how long things might take. In medicine you’re thinking “I’ll be
a consultant by the time I’m 32” and, quite happily, that’s not how it works in the
creative world.
Do you have any advice for students in a similar position to the one you were in:
balancing their studies with serious hobbies or interests outside of medicine?
Just do it. Particularly when you’re just out of medical school and both personally and
professionally you’re a bit more of a stem cell. I think people generate these myths that
you’ll never work again if you don’t spend your annual leave going to conferences.
Most of the myths really are myths and people should just crack on and do what they
enjoy.
Interview: the pharmaceutical advisor
Name: Dr Soumya Mukherjee MA MBBS MRCS
Age: 27
Medical School: Preclinical: Oxford University, Clinical: Imperial College London;
graduated 2007
Stage of dropout: ST1 in cardiac surgery
Current occupation: Medical advisor at major pharmaceuticals company since October
2009
What does your job involve?
I work in between the clinicians and industry. It’s about making sure that the products
[drugs] are safe, they’ve got efficacy and that further clinical trials are developed to
expand their use. And I work with marketing, building relationships with doctors.
Why did you leave medicine?
I found that ward life was repetitive and pathway-driven. There’s a lack of independence
and creativity as an F1 or an F2, minimal development and they don’t pay for training.
Now, I’ve more choice, options and resources. In industry you can do anything and the
support is there.
What drew you to your current role?
You can have a broader impact. And I’m getting extra skills that as a medic you never
get: management skills, real communication skills, real teamwork and leadership.
How difficult was the transition? What did you do by way of preparation?
It was quite tough to get interviews. In terms of preparation, I spent months researching
it: I met medical directors in New York, I visited the offices in Paris, and I read about the
company. You’ve got to do your due diligence. I was walking away from a solid surgical
post. I’d spent money on training courses and weekends writing papers.
How did your medical training help you?
They like you to have your MRCS/MRCP, it’s like a seal of approval. You need to show
them that you know enough. As a medic you have more credibility in my role. You’re
much more likely to build a rapport when you approach other medics than if you were
someone in marketing. Also, the fact that you’re used to reading papers is helpful. As a
doctor you can look at that material critically and evaluate it and plan clinical trials. You
do that every day in pharmaceuticals.
What do you consider to be the advantages and disadvantages of your new job
compared to medicine?
In industry, you don’t have the job stability. You carry a lot of work home. You do a lot
of travelling. There’s more politics than I experienced in the NHS. In surgery, your
success rates speak for themselves. The salary doesn’t compare, it’s a joke [in medicine].
In industry, you can be on so much more in only a couple of years. And it’s a nicer
working environment.
Any regrets?
No. I’ve got a plan and people to support me in case things go wrong. It’s so interesting,
business. I’m fascinated by it and I’ve got no regrets at all.
Do you have any advice for students considering leaving medicine?
Think about what you want. Is it a push away, or the attraction of something new? Assess
your own skills and attributes: does it match what I want? Look at what’s out there. You
need to talk to people. Plan ahead.
Interview: the solicitor
Name: Wished to remain anonymous
Stage of dropout: ST2 in Obs & Gynae
Current occupation: Trainee solicitor
What drew you to Law?
I’ve always been a very strong scientist and for me it was going to be a challenge. It
requires completely different skills, which for me was a big attraction. And in Law
everything you do has to be done meticulously to a high standard. You get to really
analyse and ponder over things. As a doctor, a lot of the skill is based around doing a lot
of things quickly, and ‘making do’. It’s all very protocol driven, and it really frustrated
me. I like to be made to think around things, to be challenged. In law, there’s more
opportunity for that.
How difficult was the transition? Was there any specific preparation you had to do?
It was quite difficult because not many people leave medicine; there was no-one for me
to turn to. But because I was so clear about what I wanted to do, that made it a lot easier.
They asked for my reasons [for leaving medicine], but because I was leaving for all the
right reasons, they didn’t hold it against me.
How does your medical training help you?
Definitely communication skills and staying calm in difficult situations.
Any regrets?
None. The more I learn about the career, the more I love it. I’m really glad I went through
my medical training, I wouldn’t change that for the world. It’s an amazing degree to
have.
Do you have any advice for students considering leaving medicine?
You need to know exactly what you want to do. You should research all the careers out
there and spend time talking to people and doing work experience. Have a plan and
realistic expectations of what you’re going into. I knew that all the training I’d done had
not been in vain, and for that reason I didn’t see it as that big a jump. A lot of the skills
are transferable. You can look at it either way. You can say you have to be brave or you
can consider how exciting it is, how many opportunities there are. It’s exciting to think
about all the things you can do with a medical degree.
Opting out of destruction?
Charis Demetriou argues against conscientious objection to abortion.
Let’s get one thing straight: I believe abortion is murder. I believe that once we die, on
the one side angels will list all the good deeds we did in life, and on the other demons
will list all the bad things we did in life. Or some variant of this. Depending on which
side the scales tilt we get to go to heaven or hell. Having performed abortions would, to
me, bring the demons’ side down significantly. Yet I also believe that doctors who decide
to follow a career in obstetrics and gynaecology should have no choice but to perform
these procedures.
Yes, I believe that abortions are wrong, but they are also legal - and for as long as they
remain so, patients have the right to access them. The proportion of abortions performed by the private sector doubled from 20% in 1997 to 40% in 2007 because demand increasingly outstrips NHS capacity [1]. The situation has reached such
alarming levels across the continent that the Council of Europe is attempting to limit the
freedom of health care providers to refuse abortion services on grounds of conscientious
objection [2].
But in addition to these logistical arguments against gynaecologists opting out of abortions stand several arguments of principle as to why it is inappropriate. Firstly, it is unfair on colleagues who may be neutral on the topic, and who might then be reduced to ‘abortion doctors’ simply to make up the numbers. Presumably nobody enters this field in order to be confined to performing abortions, and it is unfair for any medical professional to have their career path determined by the preferences of their colleagues.
It is understandable that, for anyone with sincere conscientious objections to abortion, such as myself, participating in them is no solution to the above problems. But what I have less sympathy for is the choice such people make to work in the very field where they are most likely to encounter these issues. Indeed, there
is some evidence that even before their specialist training commences only about half of
trainee obstetricians and gynaecologists are willing to perform abortions later in their
training [3]. It is true that you will encounter this issue in many areas of medicine, such as primary care, but in no other field will it be a daily occurrence. And even if one opts out of performing abortions themselves, questions of a similar nature will still crop up left, right and centre, for example regarding pre-natal screening and diagnosis. So
why enter this field?
The usual response from those who do so is that one of the main reasons people choose Obs & Gynae is that they want to bring life into the world. Therefore, the people most likely to follow this path are also the people most likely to object to abortions. Fair enough, but such is life: things come in packages. We cannot pick only the things we want to do and disregard the things we dislike. Just imagine the
reaction of the medical profession to a trainee colorectal surgeon who wanted to resect
bowel cancer but didn’t wish to be trained in haemorrhoidectomies.
To complicate matters, it is suspected that, while many have genuine conscientious objections, others simply do not want to admit that they are involved in this unpleasant procedure, a situation dubbed ‘dinner party syndrome’ [1]. Although such suspicions
haven’t been quantified, this is an opinion shared by the Royal College of Obstetricians
and Gynaecologists, as expressed by their spokesperson Kate Guthrie: “We have always
had conscientious objectors, but more doctors now […] don’t see abortion care as
attractive” [4]. She continues: “You get no thanks for performing abortions. Who admits
to friends at a dinner party that they are an abortionist?” [5]. Such feelings probably arise because younger doctors have not been exposed to the horrors of backstreet, botched abortions and therefore lack the motivation to carry out this work [6].
I am pro-life. But I also don’t want to allow my personal views to affect the quality of
care that any patient deserves. And so, even though a career in obstetrics would have
been a dream come true for me, I simply have to make the difficult decision of saying no
to it. Do I think that doctors should be given the option not to perform abortions?
Absolutely. But that option needs to be synonymous with making the decision,
unpleasant as it may be, to give up a career in obstetrics and gynaecology. Or not making
that decision, but making sure to do buckets of good as well. You know, just to balance
the scales out.
Charis Demetriou is a fifth year medical student at Brasenose College
References
1. Laurance J. Abortion crisis as doctors refuse to perform surgery. The Independent, 16th April 2007.
2. Parliamentary Assembly, Council of Europe. Draft resolution: Women’s access to
lawful medical care: the problem of unregulated use of conscientious objection.
(2008). [Online]. Available from:
http://www.eclj.org/PDF/Motion_for_resolution.pdf [Accessed: 12th March
2010]
3. Roy G., Parvataneni R., Friedman B., Eastwood K., Darney P.D., Steinauer J.
Abortion training in Canadian obstetrics and gynecology residency programs.
Obstet Gynecol 2006; 108(2):309-14
4. Abortion ‘crisis’ threatens NHS. BBC News 16th April 2007. Available from:
http://news.bbc.co.uk/2/hi/6558823.stm [Accessed 10th March 2010]
5. Britain’s abortion backlash. Irish Independent 19th May 2007. Available from:
http://www.irishcatholicdoctors.com/documents/AbortionintheUKMay2007.htm
[Accessed 14th March 2010].
6. Roe J., Francome C., Bush M. Recruitment and training of British obstetrician-gynaecologists for abortion provision: conscientious objection versus opting out.
Reproductive Health Matters 1999; 7(14):97-105.
Technology and tradition: the supersedence of the surgical knife
The surgical scene is changing. Technology is sweeping in, set to provide new
minimally invasive and non-invasive alternatives that promise to significantly reduce
complications and hospital stay. This revolution is heralded by the beginnings of robotic
surgery in the UK and entirely non-invasive techniques like extracorporeal lithotripsy for
kidney stones and gallstones, which mean that the surgeon’s knife may be becoming ever
more redundant. With technology infiltrating surgery at an accelerating pace, how will
the roles of the traditional surgical team change to accommodate it?
New technologies
The Hippocratic Oath makes a distinction between surgeons and physicians, asking
doctors to swear “I will not cut for stone” [1] (referring specifically to the removal of
kidney stones; lithotomy). Given this ancient promise, it is ironic that, in Oxford at least,
urology is the field leading the way in surgical alternatives that eliminate the need to cut.
For example, extracorporeal shock wave lithotripsy (ESWL) is a technique where a high
energy acoustic wave is propagated with low attenuation through soft tissue and water,
releasing its energy at a change of tissue density (the stone) with a force that shatters it.
The small fragments remaining are passed out in urine. With a short catheterization time,
no need for anaesthesia or hospital stay, and success rates as high as 90% [2], it is no
wonder that ESWL has risen rapidly to become a first or second line therapy since its
inception in the 1980s.
Of course, ESWL isn’t perfect. There is evidence to suggest renal tissue is damaged by
the shock waves, leading to major complications like hypertension or diabetes mellitus in
later life [3]. Although newer versions of ESWL may cause less damage, this highlights
the importance of long-term follow-up of outcomes for new and innovative therapies.
This is highly relevant to robotic surgery, where the big question – is it worth it? – has
yet to be answered. The da Vinci robotic surgical system, recently acquired by the
Urology Unit at the Churchill Hospital, is now in widespread use in cardiac valve repair,
gynaecological procedures and, in Oxford, radical prostatectomy for prostate cancer. The
surgeon controls the robot using a console, the movements of their fingers and hands
being translated into finer micro-movements of four robotic arms. The system reduces
tremor, and the design permits both a greater range of movement than the human wrist allows and alignment of the direction of gaze with the hands (this is not the case in laparoscopic surgery, where surgeons must look away from the patient to glance at
images on screens). The hope is that robotic prostatectomy will allow for an improvement
in functional outcome without reducing rates of complete tumour resection.
However, just as the introduction of laparoscopic surgery required the re-training of
experienced surgeons, studies so far have shown that robotic surgery has a steep learning
curve, though less steep than that of traditional laparoscopic surgery [4]. This could be one
reason why the functional outcomes of robotic prostatectomy have yet to be proved better
than those of the alternatives [5]. In one US study, even excluding the initial purchase, maintenance and repair costs, a net loss was made on every case in which the da Vinci system was used, and the shorter hospital stay reduced costs by only a third. Hopefully, with time, the
benefits will begin to outweigh the costs.
New personalities
In the meantime, another issue is how the newest member, the robot, will impact the
dynamic of the traditional surgical care team. Its introduction comes at a time when the
team is changing in other respects. An example is the introduction of surgical care
practitioners, defined in the Department of Health framework as:
“A non-medical…member of the extended surgical team, who performs surgical
intervention, pre-operative and post-operative care under the direction and supervision of
a consultant surgeon”[6].
The introduction of this role in 2004 was met with some resistance by the surgical
community. There were concerns [7] that surgical trainees would have new competition
for training and exposure to surgery, opportunities already limited by the European
Working Time Directive [8].
These concerns are disputed by Gillian Basnett, a surgical care practitioner in the Cambridge
Urology Unit, who believes the introduction of her role has “definitely” benefited the
system. Gillian has participated in the training of surgical consultants and registrars in the
use of the da Vinci robot, and taught medical students the principles of robotic surgery.
She is one of very few permanent members of the surgical team, and says “I am able to
travel through various clinical areas; the ward, the theatre, the clinic. This gives the
consultants confidence… we help with continuity of care.” With respect to concerns
raised about reduced exposure to surgical training, surgical care practitioners release junior doctors, allowing them to keep within the new restrictions on working hours. Most important of all, however, according to Gillian, are the benefits to the patient.
The advantages of the surgical care practitioner in the robotic team are evident; a
permanent team member with the technological expertise necessary to train surgeons in
the use of the robot will undoubtedly smooth out the ‘teething’ process associated with
the inception of a new medical procedure. Perhaps we will also see non-medical
practitioners of this sort conducting procedures like ESWL, as part of a general trend to
improve continuity of care in the face of changing surgical technology.
In conclusion, it is clear that the surgical skill-set is evolving. Surgeons are relinquishing
their knives in favour of technology that, it is hoped, will facilitate better outcomes for the
patient. Surgical care practitioners are being trained in surgical intervention, and
becoming experts in new technology; non-medically trained practitioners are proving
invaluable members of the surgical team. Interestingly, the Hippocratic Oath asks doctors
to swear “I will leave this operation to be performed by practitioners, specialists in this
art”. So in a way, surgery is moving backwards too.
Lucy Farrimond is a fourth year medical student at Keble College.
References
[1] National Institutes of Health. The Hippocratic Oath. (2002) [Online]. Available from:
http://www.nlm.nih.gov/hmd/greek/greek_oath.html. [Accessed: 13 March 2010]
[2] Rassweiler JJ, et al. Treatment of renal stones by extracorporeal shockwave
lithotripsy: an update. Eur Urol 2001; 39:187-199.
[3] Krambeck, AE et al. Diabetes mellitus and hypertension associated with shock wave
lithotripsy of renal and proximal ureteral stones at 19 years of follow-up. J Urol 2006;
175:1742.
[4] Ahlering, TE et al. Successful Transfer of Open Surgical Skills to a Laparoscopic
Environment Using a Robotic Interface: Initial Experience With Laparoscopic Radical
Prostatectomy. J Urol 2005; 1738-1741.
[5] DiMarco DS, Ho KL, Leibovich BC: Early complications and surgical margin status
following radical retropubic prostatectomy (RRP) compared to robotassisted laparoscopic
prostatectomy (RALP). J Urol. 2005; 173: 277
[6] The Department of Health. The National Curriculum Framework for Surgical Care Practitioners (2006) [Online]. Available from: http://www.dh.gov.uk/en/Publicationsandstatistics/Publications/PublicationsPolicyAndGuidance/DH_4133018. [Accessed 13 March 2010].
[7] CA Bruce, IA Bruce, L Williams. The impact of surgical care practitioners on
surgical training. J R Soc Med 2006; 99:432-433.
[8] Council Directive 93/104/EC. Official Journal of the European Community 1993;
L307: 18-24
Deskilling me softly
Medicine is a skilled profession. These skills are diverse, from history taking to elaborate
surgical procedures, and form the basis of professional competence. However, junior
doctors’ skills are under threat.
Firstly, as mentioned in ‘Technology and tradition’ by Lucy Farrimond, there is the
European Working Time Directive. Not only does this impact on patients by hampering
continuity of care, it also harms doctors by limiting their hours and therefore ability to
gain experience.
Secondly, there is the introduction of specialist nurses and phlebotomists. Because they
perform a specific task, these roles have dedicated training and more opportunity to
practise. As such, they become specialists in their field and so, in theory, deliver better
patient care. The problem is that in removing doctors from the equation, juniors lose the
opportunity to practise. However, when a phlebotomist fails to get blood from a shocked
heroin addict, who ya gonna call? The junior doctor. It’s analogous to Da Vinci asking for
a hand with the Mona Lisa from a toddler with a crayon.
Why do phlebotomists think doctors will be any better? Perhaps because doctors have no
option; they have to take blood or else it’s not going to happen. Perhaps it is this
unwillingness to admit defeat that means doctors will always be the last port of call, even
if they are less skilled at the task.
So how can we overcome this problem of deskilling? As juniors become increasingly like
office staff, we will no longer be immersed in the care of patients, and so can no longer hope to gain skills by osmosis. Instead, we must be more proactive and aggressive in seeking out teaching and practising procedures. The opportunities are still out there; we’re just
going to have to fight for them.
Jack Pottle is a third year graduate-entry medical student at Magdalen College
Who’s looking after number one?
Primum non nocere, or ‘first, do no harm’, is one of the earliest maxims we are taught as
medical students and one of the fundamental principles of medicine that can be traced
back in writing as far as 5th century BC Greece. But, just how good are we as a profession
at following that advice when it comes to our own physical and mental well-being?
Anecdotal evidence that doctors fail to practise what they preach with regard to looking after themselves is widespread, from medical students embarking on heavy binge-drinking bouts to the increased suicide rates seen in doctors. But how much of it can actually be backed up by hard evidence, and how much is urban myth?
Lessons learned
In some areas, doctors have been leading the way in promoting a healthier lifestyle. Ever
since Richard Doll published his 1954 study (using doctors as subjects) concluding that
cigarette smoking led to the development of lung cancer [1], smoking rates amongst
physicians have dropped dramatically. The difference in mortality risk between medicine
and other professions is one of the most significant in public health statistics.
Hitting the bottle
There has long been an association between doctors and alcohol, and this has given rise to the perception that doctors, more than members of other professions, are prone to drinking heavily. This has been perpetuated by some high-profile cases and media reports on the subject.
Studies in the 1970s claimed that alcoholism occurred more often in male doctors than in
other men of the same social class. In 2005, BBC One’s Real Story reported that 750
hospital staff in the past 10 years had been disciplined for alcohol-related incidents. The
BMA subsequently estimated that one in fifteen medics has a problem with alcohol or
drugs at some point in their lives.
However, not everyone agrees. Elliott found that ‘doctors and nurses are less
likely than the general population to drink above the 21/14 units/week level’ [2].
The lack of consistency and comparability between the data, and the high variability in the groups studied, make it very difficult to interpret the various studies on this subject objectively.
There is a perception among some that the culture of heavy drinking begins, if not before,
then at medical school. Numerous studies have described the so-called ‘binge-drinking
culture’ amongst undergraduates. Some data suggest that the proportion of female students drinking more than 14 units/week was more than three times that in the general population. Medical students are certainly not exempt from this. Sinclair reported high
levels of drinking and a bingeing culture in his study of a London medical school [3]. A
study of US medical schools showed that 18% of female and 11% of male medical
students claimed their alcohol intake had increased throughout their training. A common trend across studies over the past 30 years or so has been the increase in the number of female students drinking more than the recommended amount. This is seen particularly clearly among female medical students, whose drinking patterns resemble those of their male counterparts more closely than do those of women in the general population. While the evidence does not seem to suggest that
medical students drink significantly more than other students, there is plenty of evidence
to suggest that many drink heavily and often. A longitudinal study at Newcastle
University showed that this habit of drinking to excess continued into the house officer
years and beyond [4].
Regardless of whether or not more doctors than average drink heavily, some certainly do, and the inescapable fact is that these doctors can put their patients at risk through impaired professional skills and judgement. Doctors with an alcohol problem may be in denial and are less likely than non-medics to seek help for their addictions, perhaps because of the stigma that still surrounds alcohol abuse. This, coupled with a reluctance within the medical profession to confront or blow the whistle on colleagues, means that doctors with alcohol problems often present late. In treatment, however, doctors do very well and often recover fully.
Depression: the silent killer
Doctors are not immune from mental health issues besides alcohol abuse. As early as
1858, it was noted that doctors have higher rates of suicide than the general population
[5]. Although suicide rates have been declining in the UK over the past 25 years, the rate
for doctors remains significantly above the average, especially for female doctors.
Underdiagnosed depression is cited as one of the principal causes of this strikingly high
suicide rate. Depression is an incredibly common illness, with a lifetime prevalence
estimated to be 10-20%, and there is no reason why this should be any different within
the medical population. Indeed, medics actually have higher rates of depression than most
other professional groups. A number of reasons for this have been suggested:
• High pressure, high stress career dealing with emotionally draining topics
• Pressure of constantly striving to provide a high-quality service to the public
• The ‘medical type’, i.e. perfectionist, high-achieving, unwilling to admit failure or ask for help from colleagues
• Perceived stigma of admitting mental illness
• Culture of medicine affording low priority to physician mental health
All these factors mean that depression in doctors frequently goes undiagnosed and therefore untreated, and in some cases leads to suicide. Some studies have shown that the usual risk factors for completed suicide also apply to physicians, i.e. drug or alcohol abuse,
bereavement, divorce, family history of suicide. Another important factor is the access to
means of suicide. Doctors have access to lethal drugs and the knowledge to use them
effectively and thus their ratio of completed versus attempted suicides is much higher
than across the general population. An important point to consider here is that female suicides, both attempted and completed, usually involve overdose, whereas male suicides usually involve violent means and so are more often completed, despite a lower overall rate of attempts. The fact that female doctors have the knowledge to overdose successfully may partially account for their higher suicide rate compared with the general population, simply because more of their attempts are successful.
Irrespective of this, it remains the case that becoming a doctor increases your risk of depression
and suicide. While we are becoming better at diagnosing depression and preventing
suicide in our patients, it seems we are not doing the same for ourselves. The collective
failure to recognise this and do something about it can have tragic consequences for
some. Things are beginning to change, albeit slowly. In America, a number of medical
schools have implemented awareness programs and a recent meta-analysis recommended
the implementation of an early-intervention programme that has been used successfully
in the Canadian Air Force [6]. It would seem sensible to have, or to consider at least, the
introduction of similar awareness and intervention schemes in the UK. Increasing
education might also help de-stigmatise the issue.
‘Physician, heal thyself’ appears as early as the Bible, showing that the concept of doctors being poor patients is not new. However, the idea that we all believe ourselves to
be invincible superheroes is probably a little too simplistic. There is a sharp contrast
between doctors’ heightened awareness of the dangers of cigarette smoking and the
inattention given to depression and suicide. The evidence is not wholly concrete on all
fronts, especially with regards to alcohol consumption amongst doctors. However, it
remains important because there are some doctors who suffer from alcohol/drug abuse or
mental illness which can impact on their practice. Further investigation into these issues
ought to be carried out and measures put in place to deal with them. Medicine is a richly
rewarding but high-pressured career and by failing to fully address the physical and
mental health needs of doctors themselves, it seems we may be doing ourselves and those
under our care a disservice.
Charlie Skinner is a third year graduate-entry medical student at Worcester College
References
[1] Doll R, Bradford Hill A. The Mortality of Doctors in Relation to Smoking. BMJ
1954; 1(4887): 1451-1455.
[2] Elliott
[3] Sinclair S. Making doctors – an institutional apprenticeship. Oxford. Berg Publishers.
1997.
[4] Newbury-Birch D, Walshaw D, Kamali F. Drink and drugs: from medical students to
doctors. Drug Alcohol Depend. 2001; 64:265-270.
[5] Bucknill J, Tuke D. A Manual of Psychological Medicine. London. John Churchill
Publishers. 1858
[6] Schernhammer ES, Colditz GA. Suicide Rates Among Physicians: A Quantitative and
Gender Assessment Meta-analysis. Am J Psychiatry. 2004; 161:2295-2302.
Why do we swear? The Purpose of the Hippocratic Oath
The Hippocratic Oath is renowned throughout history for being recited by medical
students on completing their studies and commencing their jobs as doctors. It is
commonly thought of as a code of ethical conduct written to benefit patients, but could it
be that the Oath was written for more selfish reasons?
According to the historian Sir Geoffrey Lloyd, the Oath “stood for an ideal”, providing
physicians with a gold standard of conduct. A patient in Ancient Greece could be safe in
the knowledge that their physician had sworn the Oath and was therefore obliged to act
by its standards of care. This protected the patient and meant the physician’s skills
remained in favour, providing job security. Importantly, the Oath also protected the
physician should things go wrong; if the physician had done all he could, his reputation would not be tarnished.
“I will use regimens for the benefit of the ill in accordance with my ability and
judgement, but from...harm or injustice I will keep [them].” [1]
Perhaps subsidiary to this are the ethics stated in the Oath. Ethical conduct does much to augment a physician’s reputation, and maintaining that reputation would safeguard his clientèle and income.
The concept of confidentiality is emphasised: “about whatever I may see or hear...I will
remain silent”.
The Oath documents the prominent role of a physician within the medical marketplace,
stating “I will not cut, and certainly not those suffering from stone, but I will cede [this]
to men [who are] practitioners of this activity”. Lithotomy is specifically mentioned as a
metaphor for all surgery [2] so that even in the case of a patient suffering the extreme
pain of stones, a physician must not cut but refer to those with the necessary surgical
skills, despite the lower rank of these men. This upholds the relative status of the
physician in society.
The Oath describes the closest relationship possible between a teacher and a pupil, that of
a father and son. Whether this was meant literally or more figuratively, it depicts a closed group of physicians [2] within which expertise, and consequently power, would be retained, thereby protecting the physician’s income.
“I swear...to regard him who has taught me this téchnē (art) as equal to my parents, and to
share, in partnership, my livelihood with him...[and to]...teach [his offspring] this téchnē
should they desire to learn [it]”
Is the Oath still relevant today? In our modern society, legal proceedings in a court of law
have required physicians to break confidentiality [3]. If physicians can break one clause
in the Oath, can they be expected to uphold all others? This inconsistency highlights the
need for a new version. The Oath in its original form is also directly at odds with procedures carried out in today’s medical practice, particularly abortion and euthanasia. Pro-life campaigners may cite the Oath amongst the reasons for prohibiting such controversial procedures. Despite the public view that the Hippocratic Oath is commonly
recited by physicians entering their profession, in fact only half of UK medical schools
uphold this tradition [4]. Would a modern version be more popular? Unsurprisingly, there is debate over how the Oath should be modernised; could abortion be included, and if so, how? Euthanasia is fast becoming the subject of discussion as clinics in Switzerland
become destinations for patients in the UK. How would the Oath incorporate assisted
suicide? Simply leaving out the line “I will not give a drug that is deadly to anyone if
asked [for it]” would be a glaring omission and may make one wonder why the Oath
should be reinstated at all.
The act of swearing the Oath places one amongst the “giants who have pledged it in the
past, and contributed so profoundly to advances in the art and science of medicine” [5].
This rite of passage fills one with a sense of pride and aspiration, and thus the value of
the literal meaning of the Oath is engulfed by its symbolism. Is this reason enough to
restore the Oath to its former glory, regardless of content? Whatever the answer, since its
conception, the Hippocratic Oath has fulfilled its purpose: providing society with a code
of conduct and standards for physicians, which by protecting the patient, has protected
the physician.
Emily Clark is a fourth year medical student at Balliol College
The author would like to thank Professor Arnott for his invaluable help in the preparation
of this article. He is always reminding students of the importance of the statement in the
Oath that they must look after their teachers in old age!
References
1. Translation after Heinrich von Staden, ‘In a pure and holy way: Personal and
Professional Conduct in the Hippocratic Oath’ Journal of the History of Medicine
and Allied Sciences, 1996, 51: 404-437.
2. L. Edelstein, The Hippocratic Oath: Text, Translation and Interpretation,
Baltimore, The Johns Hopkins University Press, 1943, pp. 1-74.
3. GMC. Confidentiality: Protecting and Providing Information. 2004
4. Hurwitz, B; Richardson, R. Swearing to care: the resurgence in medical oaths.
BMJ 1997; 315:1671-1674
5. J. Stanton, The Value of Life Committee, The Reinstatement of the Hippocratic
Oath, June 1995: http://www.hmc.org.qa/heartviews/VOL6NO2/history.htm
Obituary: Professor Carl Pearson (01/06/53 – 08/07/08)
Carl Pearson encompassed within his medical career a diversity of research, teaching and
clinical experience, in this country, the USA and Tanzania. He has died aged 55 from
metastatic kidney cancer. He was born and educated in the North of England, where his
father was a GP for a small colliery village. The family was very medically biased and
although initially attracted to Law, he changed to Medicine after A level studies in the
humanities, when he found he was being excluded from family conversations about
medicine. He passed the 1st MB examination at St Thomas’s Hospital and then went up to
Trinity College, Oxford in October 1971, to read Physiological Sciences. Early on in his
medical studies, he developed a passionate interest in the function of the brain, which was
encouraged by his tutor in neuroanatomy, Dr T.P.S Powell. In his final year, with some
temerity, he approached Dr Powell to enquire about the possibility of delaying clinical
studies to enrol for a research degree in neuroanatomy and was gratified to be awarded an
MRC postgraduate research studentship.
There followed a rigorous three years’ training in traditional research methods in
neuroanatomy with a series of papers, culminating in the award of the degree of D.Phil.
On entry into clinical medical studies in Oxford in 1977, he took to clinical work and
patient contact with alacrity, enjoying immensely all opportunities to talk to patients.
Contemporaries at Oxford will probably remember him best of all, however, immersed in
a game of bridge in Osler House, chuckling wickedly over a winning hand; and for his
portrayal of Dr Sidney Truelove, the eminent physician, to whom Carl bore an uncanny
resemblance, at the medical student Tyngewick Pantomime of 1979.
He loved clinical work and therefore some were surprised when on graduating in 1980,
he announced his intention to return to Tom Powell’s laboratory, to continue with his
research, teaching and demonstration of anatomy and neuroanatomy, initially as a
University demonstrator and subsequently as a Wellcome Research Fellow. This was a
very productive research period, with many contributions to the knowledge of the
development of Alzheimer’s disease. During this time he also became very popular as a
college tutor, with a prodigious reputation for lucid lecturing and tutorial style, which
continued throughout his university career. He received many tutorial appointments
amongst the Oxford colleges, but valued most highly his association with Corpus Christi
College, as Research Fellow (1983-7).
The Wellcome Fellowship afforded the chance of working abroad, and thus in 1986-7,
he enjoyed a highly productive year in the laboratory of Professor Sol Snyder, in the
department of neurosciences at Johns Hopkins University Medical School in Baltimore,
USA. Here he developed the technique of quantified in situ hybridisation of the brain,
which consolidated his research reputation and led to a string of papers and
collaborations, in which he was ever generous. He returned from the USA to an
appointment as Senior Lecturer at St Mary’s Hospital Medical school in London and
soon after, was appointed as Professor of Neuroscience at Sheffield University, at the
age of 35. His collaborative style and infectious enjoyment of the science attracted a
stream of medical and scientific postgraduate students to work in his laboratory, all of whom obtained the degrees they enrolled for and remained loyal to him. He
continued in Sheffield as a legendary teacher; the lucidity of his style was undimmed by
the burgeoning numbers in the lecture theatres and was undoubtedly aided by his
grounding in the humanities.
Carl made significant contributions to clinical neuroscience in three areas. Firstly, with
Tom Powell, he made substantial discoveries regarding connectivity in the brain,
particularly of the cholinergic pathways, which at the time were emerging as central to
understanding the pathogenesis of Alzheimer’s disease. Secondly, working with clinical
colleagues in Oxford, he was the first to describe clearly how Alzheimer’s disease
spreads within the human brain, and to hypothesise how the process might begin. The key
paper he published on this subject in 1985 has become a citation classic, and ensures that
his name will never be forgotten in this field. Thirdly, following his return from his
sabbatical year at Johns Hopkins, Carl was the first in the UK to use molecular techniques
for the study of gene expression in the human brain, and to apply them productively to
the study of Alzheimer’s disease. The latter studies were conducted at St Mary’s Hospital
Medical School in London in the late 1980s, where there was a remarkable concentration
of emerging names in clinical neuroscience, all of whom would go on to eminence
elsewhere.
The 1990s were difficult times for those engaged in research and teaching in the
universities, and Carl felt the compromises were unacceptable and would not permit
standards of teaching and research to be diminished. In hindsight, it was perhaps a
mistake that he accepted the Chair of Neuroscience in Sheffield, since the exceptional
productivity and research environment which he had created at St Mary’s could not be
recreated, and the increasing administrative and teaching burdens diminished his ability
to do research of the quality and originality of which he was capable. By 1999, it was
clear that he felt the need of a radical change of direction in his career.
By chance, he noticed an advertisement in the BMJ for teachers of undergraduate
physiology and anatomy in the newly created Kilimanjaro Christian Medical College
(KCMC) in Tanzania. A totally different sabbatical year then followed in 2001, spent
lending his teaching expertise to undergraduate medical students at the developing Third World medical school, for which he retained a great affection thereafter. Carl would have been the first to admit that he gained more from Africa than he had given - a common
experience amongst those who have worked in Third World countries. The sabbatical
year gave time for reflection on the possibilities of a radical career change and he
returned to England resolved to resume a career in clinical medicine, which he did in
August 2002.
He took house jobs in Medicine and Surgery in Hull, where he was remembered for his
meticulous and, by then, old-fashioned attention to all his patients, with no care for
clockwatching. Despite being then aged 50, he completed his preregistration year without
a single day off sick.
Considering his background, it would have been a natural choice to consider pathology or
neuropathology to complement his research activity. However the contact with patients
had been too attractive. His gregarious nature, humour, excellent approach to patients and
his strong moral and socialist principles made General Practice a natural choice. He was
delighted to be accepted on the training program for General Practice in the North East,
where again, he fulfilled a series of rigorous junior training posts with aplomb. He
obtained a position as General Practitioner in Sunderland, where he spent the last 18
months of his working life, amongst the happiest and most fulfilling periods of his entire
career. It gave him great pleasure to be awarded the MRCGP in April, an award which
his daughter received on his behalf. He continued his interest in teaching and was glad to
give tutorials in neuroanatomy to a young medical student friend during the final few
weeks of his life.
A generously hospitable and sociable Yorkshireman, Carl loved good wine, cricket and
brass bands. He played the cornet brilliantly, receiving his initial training as a boy in the
Salvation Army. As an undergraduate, he was a founder member of the Broad Street
Stompers jazz band, which was later reprised in his clinical years as the Tyngewick Band
and in recent years as the St Georges Pantomime Band. He was also a proud member of
the Morris Motors Concert Band, playing under the baton of Sir Harry Mortimer in the
mid 1970s. A man of strong Christian faith, he became a confirmed Anglican in 1997,
and devoted much time and energy to his local church, St George’s, Jesmond, serving as
Church Warden from 1997-2000. On returning from Tanzania in 2001, he established a
fund for the sponsorship of medical education at KCMC, which, to date, has put six students
through the medical course and is still going strong.
He met Enid, a fellow medical student, in November 1971 during their first term at
Oxford; they married in 1975 and he was an unfailingly loyal supporter of her career in
Obstetrics and Gynaecology. They have one daughter, Grace, aged 15, who also does
pantomime.
Text and photographs kindly provided by Enid Pearson
Photographs show Carl and Dr Sidney Truelove (or ‘Fairy Freelove’) in the 1979
Tyngewick pantomime
Ends