Issue 5 - University of Winchester

The ‘privacy arms race’ is the theme of our next conference in April 2015 and this bulletin aptly
illustrates many of the contemporary battlegrounds over information rights: disputes around
‘relational contracts’; apps that up the ‘creepy factor’; the ‘right-to-be-forgotten’ and freedom of
expression in employment; concerns around loss of control over data, big or small; the debate
over real identities online; the question of how transparent the family courts should be. We look
forward to exploring these and other issues at TRILCon15; the call for papers and early-bird
booking are now open.
IT based rights - the law of contract regains control
Opinion piece: an App too far?
Freedom of expression and employment
Can employers ever ‘forget’ their former employees?
Opinion Piece: A Privacy Precariat? Considering the Future Denial of Information Rights
Big Data: should we worry about the data or the decision or both?
“Be yourself; everyone else is already taken”
Family court transparency – but at what cost?
November 2014
Issue No 5

SAVE THE DATE!
Second Winchester Conference on Trust, Risk, Information & the Law
Tues 21 April 2015
Theme: ‘the privacy arms race’
#TRILCon15 #privacyarmsrace
Keynotes from Professor Christopher Hankin, Imperial College London, and Dr Kieron O’Hara, University of Southampton
CALL FOR PAPERS NOW OPEN
Click here for more information
Early-bird conference booking available here.
Follow us on Twitter: @_UoWCIR

IT based rights - the law of contract regains control
By David Chalk, Centre for Information Rights
The importance of the law of contract in the protection of IT based rights and interests has been a theme of the CIR bulletins since March 2013, and indeed in the last edition (March 2014) I had to admit that the law of contract had met its match. But that didn’t last long: this month sees a relatively recent and not uncontroversial development in the general law of contract being used to protect invention rights in respect of software. The contractual principle in question is that of good faith.
It may seem odd that a general principle of good faith does not run through the English law of contract
but its absence does reflect the historical context in which our rule book was written – namely the
nineteenth century and the ethos of non-intervention – businesses were well able to look after their own
interests without the need for a paternalistic law of contract breathing down their necks.
So what has changed? Richard Spearman Q.C. sitting as a Deputy High Court Judge in Bristol
Groundschool Ltd v Intelligent Data Capture Ltd [2014] EWHC 2145 (Ch) has applied the earlier
decision of Leggatt J in Yam Seng Pte Ltd v International Trade Corp Ltd [2013] EWHC 111 (QB) and
found there to be an implied obligation of good faith in a contract between an inventor and his partners
that had as its object the creation of a computer based version of the inventor’s system.
Relational contracts
In Yam Seng, Leggatt J referred to ‘relational’ contracts: those contracts where the essence is an ongoing relationship between the parties. Such contracts require ‘a high degree of communication, cooperation and predictable performance based on mutual trust and confidence and involve expectations of loyalty which are not legislated for in the express terms of the contract but are implicit in the parties’ understanding and necessary to give business efficacy to the arrangements. Examples of such relational contracts might include some joint venture agreements, franchise agreements and long term distributorship agreements.’ [142]
In Bristol Groundschool Ltd v Intelligent Data Capture Ltd the parties made a contract to exploit a pilot training system by developing a computer-based training module from Bristol’s existing system. The contract provided that Bristol owned and retained the copyright to the textual material of the system, as well as to static artworks produced by Intelligent Data Capture at Bristol’s expense. The contractual arrangements were in place over a period of years but eventually the parties fell out. Mr Whittingham, who had set up Bristol and invented the system, now feared that his ability to continue in business would be completely destroyed if Intelligent Data decided, as he thought they would, to cease to co-operate. He therefore downloaded materials from Intelligent Data’s system in order to create his own version of the product.
It was held, following Yam Seng, that there was an implied term of good faith, as this was a ‘relational’ contract. Mr Whittingham had acted in a commercially unacceptable manner in downloading materials: he caused Intelligent Data’s computer to perform functions with intent to secure access to at least some data that was, and that he knew to be, unauthorised. His failure to explore alternatives to these self-help measures did not accord with the normally accepted standards of honest conduct.
The practical significance of the breach was that Intelligent Data relied upon it in its defence when sued for breach by Bristol. In the event the judge concluded that Bristol’s breach of the implied term of good faith was not repudiatory, and Intelligent Data were therefore liable to Bristol for breach.
Conclusion
The significance of the implied term outside these particular contractual arrangements is potentially enormous. The so-called ‘relational’ contract in this case is commonplace: the development of software is always going to involve long-term relationships. The extension of Leggatt J’s decision in Yam Seng in reality provides the courts with a policing power that sits very uneasily with the English approach to the law of contract.
An App too far?
By Helen James, Centre for Information Rights
As a society we appear to be becoming increasingly hooked on the use of mobile apps not only as
sources of entertainment and information but as a means of monitoring our health and wellbeing. For example, there are apps to manage asthma and COPD, apps to monitor heart rate and blood pressure,
apps with total body exercise programmes and even, I kid you not, apps to monitor your menstrual
cycle complete with emoticons! Healthcare insurers have long used apps as a means of encouraging and rewarding healthy behaviour and exercise (and, no doubt, of reducing the draw on the claims fund). In February 2012, according to the Daily Telegraph, Health Secretary Andrew Lansley was so convinced of the potential benefits of apps to healthcare provision that he apparently compiled a list of nearly 500 tools that would be recommended by the NHS. This included apps to scan bar codes to
identify potentially harmful ingredients for those who suffer from food allergies, an app to spot potential
breast cancer and another to monitor diabetes.
Whilst many of these apps will undoubtedly bring benefits to users, others are more concerning. A new
generation of apps is under development in the US that will apparently, through techniques such as
analysis of changes in voice patterns and the social activity of users, assist with the early recognition of
possible depression and the development of destructive behavioural patterns. This can then be used to
alert those at risk or to supplement other clinical diagnostic tools.
If this is not evidence enough of the creepy factor at work, it gets worse. The UK-based charity, the
Samaritans, recently launched an app (called RADAR) that scans Twitter timelines and will alert, it
seems, just about anyone to the fact that you’ve tweeted a couple of miserable messages and may be
about to hurl yourself prematurely into the long goodnight. In terms of privacy this is a tricky one. A
tweet most definitely has more of an air of the public than the private about it. To that extent the
information can hardly be regarded as confidential. However, surely what’s at stake here is the processing of the tweeted information in a way that the account holder may never have intended. Through algorithmic analysis of the tweet and the identification of certain key words and phrases, an individual thought to be at risk of harm is singled out and that information is sent to third-party followers, perhaps leaving the subject in blissful ignorance of events.
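Purely by way of illustration, the mechanism at issue can be sketched in a few lines of Python. This is a hypothetical sketch only: the phrase list, the function names and the alert behaviour below are all invented, and none of it is the Samaritans’ actual implementation.

# Hypothetical sketch of keyword-triggered alerting; the phrases, names
# and notification mechanism are invented for illustration only.
RISK_PHRASES = ["hate myself", "can't go on", "no way out"]

def flag_tweet(tweet_text):
    """Return True if the tweet contains any watched phrase."""
    text = tweet_text.lower()
    return any(phrase in text for phrase in RISK_PHRASES)

def scan_timeline(tweets, notify):
    """Scan a followed account's tweets; alert the follower on a match.

    The privacy problem identified above is visible here: the alert goes
    to a third-party follower, never to the account holder.
    """
    for tweet in tweets:
        if flag_tweet(tweet):
            notify(tweet)

if __name__ == "__main__":
    timeline = ["Lovely day out.",
                "Bad day at the office, I just can't go on."]
    scan_timeline(timeline, notify=lambda t: print("ALERT:", t))

Note that the sketch happily flags a throwaway ‘bad day at the office’ remark, which is precisely the false-positive problem discussed below.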
As Gareth Cornfield, writing online in the Register, points out, using an analytical tool which triggers an automated alert in the presence of certain key words and phrases is a potential breach of s.12 of the Data Protection Act 1998, a provision initially designed to protect employees from disciplinary processes triggered by purely automated decisions but one which, he argues, can be applied in this case.
Aside from data protection issues, the sharing of such information has other implications. How often, for example, are we casual and perhaps careless with the terminology we use to describe relatively trivial events, a bad day at the office for instance? To find a throwaway phrase, however inappropriately we may have used it, treated as evidence of an apparently suicidal tendency to which our loved ones are suddenly and alarmingly alerted is really a step too far. Those of us with a tendency to the dramatic may find that we cry wolf too often, setting off alerts in such quantities that, when we really need help, we are ignored by those inured to our bleating. What about the right to self-determination? If I am of full capacity and wish to prematurely hasten my shuffle from this mortal coil, it is arguably my absolute right to do so (in the absence perhaps of causing harm or creating a danger to others). And on it goes… as Gareth Cornfield says, not all Twitter followers are good Twitter followers: do we want those who follow us for less than virtuous reasons to be alerted to our most vulnerable moments?
It should be noted that the charity, which pronounces itself as keeping ‘everything confidential’, has, as of 7th November, suspended the app in response to the serious concerns raised about it, many of them by people with mental health conditions. However, this will not be the end of the story. Whether or not Radar is re-launched, there will be others, and worse. Still, at least no-one has yet thought of a tool capable of visually recording us at our most vulnerable without our knowledge!
Freedom of expression and employment
By Megan Pearson, Centre for Information Rights
Megan Pearson summarises an article which will appear in the December edition of the
Industrial Law Journal
Although there might be some doubt as to what it covers at the margins, the right to freedom of
expression is well established in English law. Freedom of expression as a right during employment,
however, is less clearly established and so far has given rise to little case law. This is surely not
because of its lack of practical importance. It is likely that conversations in the workplace may turn to
controversial matters and that employees may sometimes express their views in a way which their
colleagues find highly offensive or at least irritating. Some employers may also wish to control
employees’ expression that takes place outside work on the grounds that it leads to workplace
disharmony or affects an employer’s reputation.
Always permitting an employer to restrict such speech could evidently have severe consequences for
freedom of expression, both for the individual concerned and for society in general. There will be a
substantial ‘chilling effect’ if employees fear that they will lose their job if they express their views.
As things stand, unfair dismissal law does not necessarily adequately protect freedom of expression. In
considering whether a dismissal was fair, the question for the Employment Tribunal is whether the
employer’s decision was within ‘the range of reasonable responses’. There is a distinct difference in
emphasis between taking into account concerns about freedom of speech when considering
reasonableness, especially such an attenuated assessment of reasonableness, and a starting point
that all interferences with freedom of speech must be justified.
Smith v Trafford Housing Trust [2012] EWHC 3221 demonstrates the risk to free expression that disciplinary action in employment can pose, although the court ultimately decided in the employee’s favour.
A Housing Manager, who had listed his employment on his Facebook page, put a link on Facebook to
a news article entitled ‘Gay church ‘marriages’ set to get the go-ahead’, with the comment ‘an equality
too far’. After a colleague posted ‘does this mean you don’t approve?’ he replied that while the
existence of civil same-sex marriage was up to the state, ‘the bible is quite specific that marriage is for
men and women… the state shouldn’t impose its rules on places of faith and conscience.’ As a result,
Smith was demoted to a non-managerial position with a 40% reduction in pay, an action he claimed
was a breach of contract. He did not bring a case for unfair dismissal because he could not afford to do
so in the short time limits available for bringing such a claim.
In deciding the case, the High Court did not directly consider his rights to freedom of expression or
religion. However, it held that the Trust was in breach of contract since, ‘his moderate expression of his
particular views about gay marriage in church, on his personal Facebook wall at a weekend out of
working hours, could not sensibly lead any reasonable reader to think the worst of the Trust for having
employed him as a manager’ and thus he did not bring the Trust into disrepute.
Smith demonstrates the problems that can arise for employers and employees in seeking to balance
freedom of expression against other interests. While it demonstrates a concern for rights in
employment, the courts’ overall approach remains to be worked out.
Can employers ever ‘forget’ their former employees?
By Louise Randall, Shoosmiths LLP, Louise.Randall@shoosmiths.co.uk
Marion Oswald interviews Louise Randall about the Google Spain ‘right to be forgotten’ decision and its
implications in an employment context
What would you say are the key points that employers should note from the Google Spain case
(“Google”), in particular in relation to the so-called right to be forgotten?
Employers should note that the Google Spain case was very fact-specific and related to an individual being able to require Google to remove the direct link between his name and a newspaper article within the Google search engine.
The outcome of the case was that the claimant was able to request that Google remove links to material concerning him that was old, irrelevant and found to be without a significant public interest.
The original newspaper article was not removed from the internet as a result of this case, and the original court records may well remain in existence. Accordingly, the claimant was not “forgotten”; instead, the link on the Google search engine between his name and the newspaper article was erased.
Effectively, the Google case created a right to be found less easily on the internet. Such a right gives individuals some degree of control over what is found when their name is searched for on the internet.
Does the Google Spain case change anything for employers in how they deal with employee
data?
From an employment law perspective, the Google case makes no real difference to the operation of the
Data Protection Act 1998 in practice. Employees (both current and former) have the right to expect
employers to ensure that their personal information is accurate, adequate, relevant, up to date and not
excessive. What the Google case made clear was that a case-by-case assessment is needed to
consider the type of information in question, its sensitivity for the individual’s private life and the interest
of the public in having access to that information.
Going forwards, the concept of a “right to be forgotten” in the workplace will continue to need to be
balanced against an employer’s legitimate interests in retaining certain information. In view of regulatory and insurance requirements which continue even after the end of the employment relationship, employers may not be able to ‘forget’ their employees entirely.
For those employers who research potential employees via the internet, the results they obtain in future may not be as comprehensive as they once were, where the prospective employee has exercised this right to be found less easily.
There have been many serious case reviews and other investigations in the public sector,
where public sector services have failed, for instance, to provide proper care or to detect child
abuse. If an ex-employee was named on an employer's website in such a report, can they use
the right to be forgotten to have their name removed?
The Google decision upholds the position that a case-by-case assessment is needed to consider the
type of information in question, its sensitivity for the individual’s private life and the interest of the public
in having access to that information.
The Google case found that an individual’s privacy rights will override, as a rule, not only the economic
interest of the data controller but also the interest of the general public. However, this will not always be
the case. For example, if it appeared, because of the role played by the data subject in public life, that the interference with his fundamental rights was justified by the overriding interest of the general public in having access to the information in question, such an individual would be unlikely to succeed in insisting on their right to be forgotten.
Relevant factors to consider in this scenario would include the amount of time that has elapsed since
the case review was conducted and / or since the employee left the employer.
The employer would need to examine the reason why the report was being displayed on their website. The decision whether or not to remove the employee’s name would be influenced by whether a regulatory authority requires the employer to publish the employee’s name in the report, although this is likely to be the exception rather than the rule.
In light of the Google case, unless the employer can establish either a regulatory or a public interest reason for publishing the former employee’s name in the report on their website, it would be advisable in these circumstances either to redact the report or (where redaction would not be adequate to prevent the identification of the ex-employee) to remove the report from the website altogether.
The original report would still exist and would remain in the public domain (assuming it had been
published by the original author of the report). It would simply be the link to the report from the
employer’s website that is removed.
If an employee had a grievance raised against him/her, and the grievance was not upheld in the
subsequent internal investigation, can the employee insist on all records of the grievance being
deleted from the employer's files?
It is unlikely that the employee would be able to insist that the records are deleted immediately after the grievance outcome and/or in their entirety. For example, the person who raised the grievance in the
first place may seek legal recourse in light of the employer’s decision not to uphold the grievance.
Accordingly, the employer would want to be able to retain the records to show what steps it took to
address the grievance in any subsequent litigation.
However, it might be the case that the employee in this scenario could reasonably require the employer
to redact their identity from the records.
The employer would also need to think carefully about where the records are stored. For example,
HMRC, regulatory bodies (such as the Financial Conduct Authority and organisations responsible for
safeguarding of children and young people), employer liability insurers, pension fund administrators
and the Health & Safety Executive all require employers to maintain certain information in relation to
their employees extending beyond the life of the employment relationship, meaning that the employee
cannot be ‘forgotten’ entirely. However, this does not mean that employers should retain full and
detailed HR files on former employees ad infinitum. Indeed, best practice would be for the employer to
conduct regular file reviews to ensure that only relevant information is retained on HR files.
Employees (both current and former) have the right to expect employers to ensure that their personal
information is accurate, adequate, relevant and not excessive. Accordingly, the employee could
dispute the accuracy of the records. In these types of cases, I would not recommend that the original records are destroyed where there is a dispute over accuracy; instead, it would not be unreasonable for the employee to request that their written observations about the accuracy are retained alongside the records.
A Privacy Precariat? Considering the Future Denial of Information Rights
By Robin Smith, Head of Privacy at UHL and Co-Founder of the Health 2.0 Nottingham think-tank
@robinsmith64
Eminent Hungarian sociologist Frank Furedi stated “Without privacy no other rights have much value”.
Despite its amorphous characteristics, privacy is the essential right in our information age. It protects
individuals and enables the conduct of private life without interference, particularly from commercial or
state interests. It can be managed according to one’s current circumstances: relaxed at sociable times and tightened when one does not wish to engage with a diverse range of interests. Consider how the state responds to security threats by increasing surveillance or intervention to meet the ‘national interest’.
This fluidity is why privacy matters; it is something that in a diverse and complex world can be
controlled directly to allow us to conduct our lives in line with our wishes. To destroy privacy is to lose
something essential to our lives: control.
But an emergent risk for our digital society is whether everyone retains the ability to protect their privacy. With NHS England suffering a public humiliation when its much-planned ‘Care.data’ programme failed at the first hurdle, a debate began about the denial of information rights within the health sector. Many NHS organisations are now fielding individual queries and freedom of information requests seeking to clarify how personal medical data is being used and shared, as public concern about this matter increases. One of the key concerns for the public is the notion that individual information rights, including the right to limit processing, are being neglected or circumvented by government bodies.
Concerns have been expressed about a ‘privacy precariat’ being created by current central government policy: a stratum of society who, through engagement with the public sector, will see their individual and family privacy threatened and reduced by excessive sharing across services.
The threat to individual identity, as the public sector seeks to share more information with partners or with commercial organisations, has a high impact on the lives of a great number of individuals, particularly in urban communities. With public bodies intervening in lives to discuss medical, sexual and psychological affairs, the concern is that this data will become part of aggregated data sets, published under the ‘open standards’ philosophy pursued by many at national government level. If private information relating to children is shared with an unscrupulous private sector organisation that may trade in identity theft, what are the long-term implications for individuals who will need to be vigilant for the rest of their lives to avoid repeated instances of identity fraud?
The crux of this problem is information literacy. How is the public sector advising individuals who have many interactions with increasingly mixed services, where the public sector devolves duties to private partners? Should there be data loss or the sale of personal information, how will the most vulnerable individuals seek a remedy for this breach of rights, and how can they be assured that frauds won’t be repeated?
Private interests pursue an open government philosophy because there is a genuine belief that it can fuel
better innovation and knowledge generation across the economy. Certainly the Coalition government
since 2010 has promoted increased financial transparency. What has been missed is feedback from
the increased numbers of data breaches in the UK and how this should inform government privacy
standards, despite excellent efforts by the Information Commissioner.
Big Data – should we worry about the data or the decision or both?
By Marion Oswald, Centre for Information Rights
This is a transcript of Marion Oswald’s speech at the ‘Big Data: Big Danger?’ debate at the
‘Battle of Ideas’ at the Barbican on 19 October organised by the Institute of Ideas
In the context of big data where personal information is involved, I would like to talk about three ‘big
dangers’:
- First, the danger of generalisation: the debate becomes polarised into a black and white ‘big data is
good’/’big data is bad’ argument;
- Secondly, a danger of too much focus on the technology and the collection of data, and not enough
on how personal data is being used;
- And thirdly, a danger that we admit defeat when it comes to legal regulation on the basis that it’s all
a bit too hard and we don’t understand how the technology or the algorithms work anyway.
Returning to my first danger, the danger of generalisation, I would like to read to you this definition: “Big
data is high-volume, high-velocity and high-variety information assets that demand cost-effective,
innovative forms of information processing for enhanced insight and decision making.” (Gartner)
I have to say that this sort of thing makes me rather cross. I would encourage all of us here to move
away from such buzz-wordy definitions which do not help us make judgements about data use. Even
the term ‘big data’ is an unhelpful one in my view. We may be talking about very large datasets or
different sources of data being brought together, or a little bit of data about lots of people or lots of data
about a small number of people, but in many ways, the fundamental questions remain the same. To
judge whether big data in any particular case poses a danger, we could ask what sort of data is being
collected or generated: provided by the individual; observed e.g. through cookies; derived using a
combination of data; inferred from analytics? Why is it being collected? What is being done with it?
What decisions are made because of it? How do those decisions impact on individuals or on
society? Is the use of the data outside an individual’s likely privacy expectations? What harm might
occur from the use of the data?
Which brings me onto my second danger, of too much focus on technology and data collection, and not
enough on how personal data is used, often the point at which most harm to individuals can
occur. Data protection law has tended to regard data collection as the key point in the information
lifecycle at which individuals will be given choices and suppression of personal data as a way of
protecting individuals from harm, such as the recent ‘right to be forgotten’ decision. In today’s
information environment though, we are often talking about information that already exists, for instance
it’s been collected by a mobile phone business as part of the provision of its services, or it’s been
posted on social media by individuals, or it’s been generated by a hospital as part of patient treatment.
So I would argue that it is how data is used that should be our focus, and the focus of regulation. Take
mobile phone data collected by a provider: that data may be used by the provider to market other
products to its customers. Fair enough, we might say, provided that they stop if we ask them
to. Aggregated data might be passed to a local council to enable it to monitor traffic speeds and to take
decisions about which roads need repairing. Fair enough, we might again say, provided that no names
are attached – we can see the public benefit.
What if the mobile provider itself combines the data with other available information in order to predict where the next crime hot-spot will be? It then sells the results of its analysis (not the data itself) to an insurance company, which ups the home insurance premiums of people living in that predicted crime hot-spot. What’s the harm? The mobile provider has not passed on any individual details, and it anonymised the data before carrying out its analysis.
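To make the scenario concrete, here is a minimal sketch in Python of the kind of aggregate-then-score pipeline just described. Everything in it (the records, the area codes, the scoring rule) is invented for illustration and is not drawn from any real provider:

# Purely hypothetical sketch of the pipeline described above; the data,
# field names and scoring rule are all invented for illustration.
from collections import Counter

records = [                    # (customer, area) pairs held by the provider
    ("alice", "N1"), ("bob", "N1"), ("carol", "N1"), ("dan", "SW4"),
]

areas = [area for _, area in records]         # 'anonymise': drop the names
activity = Counter(areas)                     # aggregate: counts per area
scores = {a: n / len(areas) for a, n in activity.items()}  # derived score

# The insurer's side: premiums rise for everyone in a predicted hot-spot,
# even though no individual's data ever changed hands.
def premium(base, area):
    return round(base * (1 + scores.get(area, 0.0)), 2)

print(premium(300.0, "N1"))    # a resident of N1 now pays more

The point of the sketch is the point of the argument: each step looks innocuous and no named individual’s data is ever passed on, yet the final line is a concrete harm to identifiable people.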
This type of analysis has recently been carried out by MIT and others using human behavioural data
derived from mobile phone activity in London combined with demographic data, the authors claiming
that their analysis increases the accuracy of prediction when compared with other methods
(Bogomolov et al, Once Upon a Crime: Towards Crime Prediction from Demographics and Mobile
Data, 10 Sep 2014). But alarm bells might be ringing with some of you now. Why? Perhaps we are
concerned about decisions that affect individuals being taken purely on the basis of an algorithm that
we don’t understand and that may turn out not to be 100% accurate. Is this something that should be
allowed? This is the question that must be asked in every context and we should be prepared to judge.
So thirdly and finally, the law. Are the issues raised by big data really so difficult that we cannot
possibly expect the law to tackle them? I would say not. I would say however that if we continue to
focus on consent or hiding data as the way of regulating big data, we will ultimately fail. There is often
no real choice for the individual but to provide his/her data in order to receive a service. Therefore, the
requirement for consent at the point of collection is weakened as a way of ensuring fairness. Of
course, we do not want a data collection free-for-all but in my view, equal or even more focus needs to
be put on the use/misuse of data. If society wishes to prevent or minimise use of information that may
be damaging in certain contexts, it should implement rules to do that, prioritising the most serious
harms and risks. The Government is attempting just that in relation to jurors searching electronically for information relating to the case they are trying, and in relation to so-called ‘revenge porn’. No
law will be successful in preventing all harmful activity and there is always the risk of unintended
consequences, but I think we must at least have a go at making new and better law when it comes to
big data.
“Be yourself; everyone else is already taken”
By Carol Kilgannon, Centre for Information Rights
While they might make peculiar philosophical bedfellows, Oscar Wilde and Mark Zuckerberg have both advocated the vital importance of being yourself. Even so, while Wilde’s rationale (quoted above) is flawless and universal, Facebook’s CEO roots his viewpoint firmly in the context of internet identity and has been hotly criticised. After a very public campaign by the Lesbian, Gay, Bisexual and Transgender community earlier this year, Facebook famously changed its “real identity” policy for users’ accounts and now requires “authentic identities”. This may reduce, to an extent, the discrimination which was so keenly felt by the drag queens who forced the change, but it doesn’t fundamentally change the requirement.
Zuckerberg’s policy is legal and, to him and his company, a sign of integrity: you should not hide your online activities behind anonymity. This reasoning is likely to gain greater popularity as the problem of
trolling grows and gains greater publicity. There is no doubt at all that trolling can ruin lives, operates
under cover of anonymity and is an increasing problem. And, although the Malicious Communications
Act 1988 (among others) is capable of dealing with the problem of trolling, it does not make the
problem of finding the real identity of the troll any easier.
It is not unlikely that this issue will gain a toehold in the pre-election battle for public opinion: Chris
Grayling recently announced a quadrupling of sentences available under the Act in the wake of another
celebrity trolling incident. One potentially popular call may well be for a Zuckerberg type “real” online
identity requirement. This could make identifying trolls easier and, after all, if you’re doing nothing
wrong, you have no need for anonymity. This is a simple, but also simplistic, response to a real problem, and it ignores the legitimate reasons some people use an online persona: whistleblowers, for example, or domestic abuse victims hiding from their abusers. It also fails to identify the wrong: it is the misuse by some individuals that is the harm here, not the use of an alter ego. If detection is the problem, then we need to educate our police service in the means of identifying the potential crime and in the grounds on which a disclosure order can be sought; there is no need for a sledgehammer to crack this nut.
Family court transparency – but at what cost?
By Sarah Meads, Centre for Information Rights
Sarah Meads comments on the consultation paper issued by the President of the Family
Division on 15th August 2014
In his consultation paper, Sir James Munby, the President of the Family Division, sets out further plans to open up the Family Courts to the media. He justifies such steps on the basis of improving the public’s understanding of the Court process and on the basis that he feels the public has a legitimate interest in being able to read what the Judges are doing.
He is asking for the profession's views about adding a catchphrase of a few words after each case
number on the Court list to help identify what the case is about. Normally the parties' names are
included as well as the specific Court case number that is allocated to every matter. If it is a children
case, then the names of the parties are not listed, only the matter number. It is hard to think exactly what catchphrase could be used to help a member of the media identify whether a case is something he or she wishes to sit in on. The words "matrimonial" or “divorce”, for example, will not give sufficient information to reveal whether this is something in which our friends on Fleet Street may be interested. In addition, the words "children dispute" would again be insufficient to decipher whether
the case is newsworthy. It remains to be seen whether any professional responding is able to suggest
a form of wording which would encapsulate the President's idea.
The President refers in his recent consultation paper to the media having a watchdog role. It is difficult
to see how the media perform the role of a watchdog when in fact one would imagine their presence is
simply at cases which are newsworthy. In over eight years of being a qualified Family Law Solicitor I
have never known the media to be interested in any of my cases. Presumably it is cases involving celebrities, or shock-value cases concerning children, in which the media would be interested. Given this, whether they really perform the role of a watchdog is questionable.
Another area contained in the latest document from the President is his consideration that some
experts' reports, or extracts of reports, be made available to the media. He says:
"It will not be every expert's report that will be released but only those identified by the Judge, having
heard submissions".
Given that it is now much more difficult to obtain an expert due to the recent changes to the Family
Procedure Rules (experts must now be necessary, as opposed to reasonably required), and given the
more or less complete abolition of legal aid for private cases by the Legal Aid, Sentencing and Punishment of
Offenders Act 2012, funding for experts is hard to come by. The President seems to be suggesting that
further Court time and parties' precious resources be used for submissions to the Judge about what
parts of an expert's report should be disclosed. It is difficult to see how parties are going to be able to
fund their legal representatives' fees for preparing and presenting such submissions in this age of
austerity.
He is also seeking preliminary views about hearing certain types of family case in public. He seeks
views as to what type of family case might initially be appropriate for hearing in public and what
restrictions and safeguards would be appropriate. It is always a source of comfort to my clients to know
that their cases generally cannot be known about by anyone other than their former spouse or partner,
the legal representatives and the Judge. Family cases often have sensitive dimensions, not least where
there are children concerned. Even with the anonymisation of children's names, in a local area any
article in a local newspaper may lead to the discovery of the child or children concerned as it may not
be too difficult to work out to which family the case refers. Untold damage to children may follow.
It remains to be seen what further steps the President will take and, even with further steps to open up
the Courts, whether representatives from the media will wish to attend or read published judgments
more than before. One suspects that it will just remain the obviously juicy cases which attract the eye of
our friends from Fleet Street.
Thanks go to all the contributors to this issue. Contributions to future issues are welcome and suggestions can be emailed to the Editor,
Marion Oswald
You have received this email because you have expressed an interest in the work of the Centre for Information Rights.
To unsubscribe, please email cir@winchester.ac.uk
The contents of this e-bulletin are for reference and comment purposes only as at the date of issue and do not constitute legal advice.
© The Centre for Information Rights, Department of Law, University of Winchester, Winchester, SO22 4NR