Item 5 Paper No: CM/01/13/04

Developing a strategic framework to guide the Care Quality Commission's programme of evaluation

REPORT

Kieran Walshe and Denham Phipps
January 2013

Contents

Executive summary
1 Introduction
  Background and context
  Analysis of the regulatory model
  Learning from research
  Comparisons with other regulators
  Using existing data
  Structure of the report
2 The regulatory model: an analysis
  Introduction
  Regulatory mission and purpose
  Registration
  Standard setting
  Information gathering and risk assessment
  Inspection and reporting
  Enforcement
  Information provision
  Reviewing the regulatory model
3 Differentiation in regulatory design
  Introduction
  Learning from research
  Comparisons with other regulators
  Using existing data
  Conclusions
4 Regulatory standard setting
  Introduction
  Learning from research
  Comparisons with other regulators
  Using existing data
  Conclusions
5 Risk based regulatory approaches
  Introduction
  Learning from research
  Comparisons with other regulators
  Using existing data
  Conclusions
6 The competencies of the regulatory workforce
  Introduction
  Learning from research
  Comparisons with other regulators
  Using existing data
  Conclusions
7 Conclusions
References

Executive summary

This report sets out to show how evidence and research can be used by CQC to evaluate how well its current regulatory arrangements in health and social care work, and plan future changes in those arrangements to improve their effectiveness and efficiency. We are very thankful for the help, advice and support we received from many CQC staff in its preparation. Of course, we remain responsible for its content and conclusions, and for any errors or omissions.

We develop a "logic model" of the current regulatory arrangements which tries to map out how they are intended to work, and what assumptions are made or consequences flow from those arrangements. We conclude that as currently configured, CQC is a "safety-net" regulator, focused on dealing with poor performance but with limited capacity or capability to drive or support wider performance improvement. We note that even in those terms the current regulatory model has some inherent problems, but that the direction of future strategy signals a shift towards a more ambitious interpretation of CQC's remit and purpose, which has important implications for the regulatory design/arrangements.

We then discuss four areas – differentiation in regulatory design, standard setting, risk-based regulation, and the regulatory or inspection workforce – and look at what evidence is available from several sources about how current regulatory arrangements work. We conclude:

- CQC's generic regulatory model is unusual (compared with other regulators) and hard to make work. There are good reasons to consider greater differentiation between sectors (such as NHS/healthcare and adult social care) and within large sectors like adult social care. This might mean the development of more specific and tailored standards and guidance, and greater specialisation among the inspection workforce.
- CQC's current standards are largely construed as an essential or minimal level of performance.
- CQC could within its existing legislative and regulatory framework create a more differentiated, demanding and service specific set of standards, and it could consider making more use of standards developed in or with the sectors it regulates.
- CQC had used a risk-based model of regulation, in which it adjusted its use of regulatory interventions like inspection with providers based on an assessment of risk and performance, but has recently returned to a universal schedule of annual inspection in most sectors. We find that even modestly proportionate or risk-based regulation requires a strong and stable database of performance data which has clear predictive validity, and a graduated range of regulatory interventions short of full inspection.
- CQC's use of a generic inspection workforce is, as far as we can see, not emulated by other regulators and is problematic for several reasons. Regulatory staff need content expertise in the area they regulate, methods expertise in the regulatory system, and interpersonal or behavioural expertise in dealing with people and organisations in sometimes difficult and contested circumstances. Specialisation has many advantages, and investment in the development of the inspection workforce is worthwhile.

We conclude that CQC could make more use of evaluations than it has in the past, both when introducing innovations in its regulatory arrangements so that they are properly tested, and in the routine working of its regulatory arrangements so that it has ongoing evidence of their effectiveness and impact.

Chapter 1 Introduction

Background and context

This report is the result of a short research project commissioned by the Care Quality Commission (CQC) from the University of Manchester. Its aim was, firstly, to help CQC to bring evidence to bear on examining and exploring the effectiveness and efficiency of its current regulatory arrangements; and secondly to help CQC to develop its own internal capacity to undertake and use research and evaluation so that, going forward, it can make better use of opportunities to test, trial and assess regulatory changes and innovations, improve its efficiency and effectiveness, and gain a more robust understanding of its impact on the quality of health and social care in England.

Analysis of the regulatory model

Our first step was to develop and test a "logic model" which mapped out the underlying "programme theory" for each major component of the regulatory arrangements. In this approach we drew both on established methods for programme theory explication (see for example Bickman 1987; Rogers 2008) and recent developments in realist or theory driven evaluation (see Pawson and Tilley 1997; Pawson and Manzano-Santaella 2012; Marchal et al 2012). The purpose here was to make explicit the assumptions or presumed mechanisms by which these regulatory components bring about change in regulated organisations. In our experience, there may be multiple and sometimes contradictory theories in use, and part of the value of mapping out the programme theory is to express these alternatives in forms which then make them testable, and which bring to the surface areas of contestation or inconsistency. We developed the logic model around the main statutory functions of CQC – registration, compliance, enforcement, and information provision.
For each function, we used documents and interviews with CQC staff to explore the mechanisms at work, and to try to understand the regulatory model and the choices and consequences it represents. We were acutely conscious that this was a time of change within CQC, and the organisation had in September 2012 published a strategic review on which it was consulting stakeholders. That strategic review signalled important, even fundamental changes to the regulatory model, and so in our work while we focused on mapping the current regulatory model we tried to take explicit account of the likely future direction as well.

As part of this work, we sought to establish what were the areas where research and evaluation might make an important difference to CQC's decision making – where there was significant uncertainty about how (or how well) the regulatory model worked, and where change seemed likely. On that basis we chose four topics for further research:

- Generic versus differentiated regulatory standards and processes – whether regulators use the same methods, standards and processes for organisations in different sectors or of different types, or whether and how they differentiate and use different standards or processes for different organisations.
- How regulatory standards are set and measured – issues like whether standards are minimal, median or maximal, how they are measured, whether compliance/achievement is measured dichotomously or on a scale, how compliance is defined and how a threshold for acceptable performance is set, and whether different standards are used/applied for different organisations.
- Risk based or proportionate regulation – to what extent regulators try to make the regulatory process responsive to organisational performance, and so focus more attention on organisations which perform less well or represent greater risk, and how this is done.
- The regulatory or inspection workforce – what competencies are required, what kind of people are used to undertake regulation and inspection, how they are recruited and trained/developed, and how their performance is appraised.

In each of these four areas, we set about drawing together what was already known from the research literature; comparing practice at CQC with four other regulators; and exploring what was known or might be found from using existing data sources within CQC.

Learning from research

We looked for existing research evidence in the four topic areas using a wide range of bibliographic databases. The general strategy for the literature review was to identify relevant literature in public administration, healthcare, management and safety science, and where appropriate, governmental policy documents. The researchers searched the MEDLINE (1946-2012), EMBASE (1980-2012), ABI Inform (1971-2012), ASSIA (1987-2012), British Nursing Index (1993-2012), Social Services Abstracts (1979-2012), and HMIC (1979-2012) databases. For the regulatory workforce topic, PsycInfo (1806-2012) was also consulted. The keywords used for each topic were as follows:

- Differentiation of models: regulat* model*
- Regulatory standard setting: regulat* AND standard*; ("nursing home" or "care home") and regulation
- Risk-based regulation: risk based regulat*
- Regulatory workforce: regulat* AND inspect* AND (train* OR compet*)

In addition to the search of academic databases, public domain repositories (for example, government and NHS websites, and relevant professional organisations) were consulted.
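To illustrate how truncated and boolean search terms of this kind operate, the sketch below re-expresses two of the topic searches as patterns and applies them to a small list of candidate titles. This is a minimal, hypothetical illustration in Python, not the syntax of any particular bibliographic database and not the authors' actual search workflow; the titles and the screen() helper are invented for the example.

```python
import re

# Illustrative only: each topic's search terms re-expressed as regular expressions,
# where the database wildcard "regulat*" becomes the pattern r"regulat\w*".
TOPIC_PATTERNS = {
    "Differentiation of models": [r"regulat\w*", r"model\w*"],
    "Regulatory workforce": [r"regulat\w*", r"inspect\w*", r"(train\w*|compet\w*)"],
}

def screen(title, patterns):
    """Return True if the title matches every pattern (an AND of the terms)."""
    return all(re.search(p, title, flags=re.IGNORECASE) for p in patterns)

# Hypothetical candidate titles for screening.
candidate_titles = [
    "Training and competence of inspectors in regulatory agencies",
    "Risk models in hospital regulation",
]

for topic, patterns in TOPIC_PATTERNS.items():
    hits = [t for t in candidate_titles if screen(t, patterns)]
    print(topic, "->", hits)
```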
Also, the authors' previous work on related topics, and the reference lists of retrieved articles, were consulted for relevant material.

Comparisons with other regulators

We identified four other regulatory agencies with whom to compare CQC in the four topic areas. We chose two healthcare regulators – the Joint Commission for the Accreditation of Healthcare Organisations in the United States and the Inspectie voor de Gezondheidszorg (the Healthcare Inspectorate) in the Netherlands. We chose two UK-based non-healthcare regulators – the Office for Standards in Education, Children's Services and Skills (OFSTED), which regulates children's services, particularly schools, and the Homes and Communities Agency, which regulates social housing providers. Our aim was to provide a range of examples of regulatory policy and practice, in both the health and social care setting and elsewhere. Some important characteristics of the comparator regulators – their overall remit/purpose, their size and scale, and who and what they are responsible for regulating – are set out in table 1.1.

We gathered data about the comparator regulators through a review of published and unpublished documents, and interviews with a member of staff from each regulator. The interpretation of the comparator regulators' arrangements contained in this report is, of course, ours rather than an official statement of the agencies concerned.

The table illustrates that the four regulators vary in some important ways, though they have much in common. Their statements of regulatory remit/purpose – taken from their own documents such as annual reports – generally all focus on performance improvement though some are more ambitious than others. They range in scale from about 120 staff to 1,400 or more, and in annual turnover from £48 million (€55 million) to £167 million pa. Most though not all are public/state organisations (JCAHO is a private not-for-profit foundation). All regulate thousands of organisations, and all but HCA regulate across multiple sectors or service areas.

Table 1.1. Comparing CQC with four other regulators

Care Quality Commission (CQC)
- Sector: Health and social care
- Country: England
- Organisational form: Non Departmental Public Body
- Formal regulatory remit/purpose: to "protect and promote" the health, safety and welfare of service users and the "general purpose of encouraging the improvement of health and social care services"
- Annual turnover: £149 million
- No of staff: 1,885
- Type of organisations regulated: health and social care providers
- No of organisations regulated: 22k in health and social care

Joint Commission for the Accreditation of Healthcare Organizations (JCAHO)
- Sector: Healthcare
- Country: USA
- Organisational form: Private not-for-profit foundation
- Formal regulatory remit/purpose: "To continuously improve health care for the public, in collaboration with other stakeholders, by evaluating health care organizations and inspiring them to excel in providing safe and effective care of the highest quality and value."
- Annual turnover: $165 million+
- No of staff: 800
- Type of organisations regulated: hospitals, long term care, behavioural healthcare, labs, homecare, ambulatory care
- No of organisations regulated: 20k

Inspectie voor de Gezondheidszorg (Dutch Healthcare Inspectorate) (IGZ)
- Sector: Healthcare
- Country: Netherlands
- Organisational form: Part of the Ministry for Health, Welfare and Sport, but partially independent
- Formal regulatory remit/purpose: "Promotes public health through effective enforcement of the quality of health services, prevention measures and medical products."
- Annual turnover: €55 million
- No of staff: 500
- Type of organisations regulated: curative healthcare, long term care, public health, pharmaceuticals and medical devices, health professionals
- No of organisations regulated: 3k organisations, 800k professionals

Office for Standards in Education, Children's Services and Skills (OFSTED)
- Sector: Children's services
- Country: England
- Organisational form: Non ministerial government department
- Formal regulatory remit/purpose: "inspects and regulates to achieve excellence in the care of children and young people, and in education and skills for learners of all ages, thereby raising standards and improving lives"
- Annual turnover: £167 million
- No of staff: 1,400 plus contracted staff
- Type of organisations regulated: schools (maintained and independent), further education, adult learning, early years/childcare, children's homes, children's social care, adoption/fostering
- No of organisations regulated: 23k schools and about 100k other providers

Homes and Communities Agency (HCA)
- Sector: Social housing
- Country: England
- Organisational form: Non Departmental Public Body
- Formal regulatory remit/purpose: "focus of our activity is on governance, financial viability and value for money as the basis for robust economic regulation; maintaining lender confidence and protecting taxpayers"
- Annual turnover: £55 million (whole of HCA – regulation function not separated)
- No of staff: 120 in regulation function
- Type of organisations regulated: registered social landlords – incl LAs, housing associations and for-profit providers
- No of organisations regulated: 1,500, though 400 large RSLs are 90% of provision

Using existing data

We were keen to see whether existing data held by CQC, particularly that produced for or as a byproduct of the regulatory process, could be used to understand current practice in CQC in the four topic areas. To that end, we met with CQC staff responsible for intelligence and data analysis, to get a better understanding of data availability, and they undertook some analyses which we draw on in this report by way of examples, where possible. Our aim here has been to establish the principle that existing or routine data can be used to serve the purposes of evaluation, either retrospectively or prospectively.

Structure of the report

The next chapter of this report – chapter 2 – sets out the results of our work on the regulatory model. It offers a detailed examination of the current regulatory model, organised around the four main regulatory functions of CQC – registration, compliance, enforcement, and information provision.
We break down compliance, which is a large and complex function, into three components of standard setting, information gathering and risk assessment, and inspection and reporting. For each of these functions, the chapter aims to describe what CQC does, set out the alternate theories or logic models which underlie the function and which one seems plausibly to "fit" CQC, and comment on how CQC's approach and the theory or logic model fit together, and what might be the consequences of the model at work. The chapter concludes by reflecting on the overall current regulatory model, and noting some of the key changes to that model signalled in CQC's strategic review.

Chapters 3, 4, 5 and 6 of the report then take the four topic areas in turn – differentiation in regulatory design, regulatory standard setting, risk-based regulation, and the regulatory workforce. For each topic, the chapter first reviews the research evidence and seeks to create a framework for analysis, then explores how our four comparator regulators deal with the topic in hand, and then considers whether CQC's existing data could be used to understand how the current regulatory model works in this area. Each chapter finishes with some brief conclusions.

Finally, chapter 7 of the report draws together our conclusions, and outlines some brief recommendations for the future development of CQC's framework for evaluation and research.

Chapter 2 The regulatory model: an analysis

Introduction

In this chapter we set out to map out and describe CQC's regulatory arrangements in terms which make explicit the underlying assumptions or mechanisms that are (or are intended to be) in operation. This is sometimes described as creating a "logic model" or mapping the "programme theories" at work, and it can be valuable in two ways. First, it requires us to be explicit about the way that regulation is meant to work, and so allows those underlying assumptions or mechanisms to be questioned or contested and questions about how the theoretical intent plays out in practice to be asked (Prosser 1999). Second, it provides a framework for evaluating the effectiveness of those regulatory arrangements, and deciding what data is needed to evaluate how well they work in practice. Our methods for undertaking this analysis are described in chapter 1, but relied mainly on interviews with CQC staff and a review of relevant CQC documents.

Our starting point for this analysis was the four main regulatory functions of CQC – registration, compliance, enforcement, and information provision – and in the rest of this chapter we set out our understanding of the regulatory model in these areas. We break down compliance, which is a large and complex function, into three components of standard setting, information gathering and risk assessment, and inspection and reporting. For each of these functions, we try to do three things – describe what CQC does, set out the alternate theories or logic models which underlie the function and which one seems plausibly to "fit" CQC, and comment on how CQC's approach and the theory or logic model fit together, and what might be the consequences of the model at work.

Before starting to discuss these regulatory functions, we raise three issues that essentially concern CQC's regulatory philosophy and its mission or purpose. They arise from our interviews and document review, and are important determinants of regulatory design and development.
Regulatory mission and purpose

A discussion of the regulatory model in use and how it works has to recognise that the choice of model is shaped by the regulatory mission and purpose – what CQC is here to do. While that is defined in a number of places – in legislation, in CQC's strategic plans and annual reports, and in communications with regulated organisations – we think there are three aspects of mission and purpose which need to be highlighted because of their impact on the regulatory model.

1. Is CQC's regulatory purpose to ensure that all regulated organisations meet certain minimum or essential requirements, or is it to raise performance and standards of care across all organisations, or does it seek to do both? This is a crucial question because minimal or safety-net regulation requires quite different regulatory processes, standards, measurement systems and so on from maximal or improvement-oriented regulation. We understand that while the legislation which established CQC gave it a remit to "protect and promote" the health, safety and welfare of service users and the "general purpose of encouraging the improvement of health and social care services" (Health and Social Care Act 2008), its strategy has focused in recent years primarily on acting to deal with poor quality care. However, the recently published strategic review seems to signal a substantial change in direction, with CQC once more seeking to drive improvement.

2. Does CQC see its mission and purpose as determined by the legislative powers and duties it has been given, or does it see those legislative powers and duties as a means through which to achieve its mission and purpose? In other words, does CQC conceive of its mission in narrow terms as to fulfil the legal duties set out in the Health and Social Care Act 2008 and, where it wishes to, to exercise the discretionary powers that legislation gives it? Or does CQC seek more broadly to embrace the wider mission and purpose defined in its legislative objectives, and regard the legislative provisions as providing a set of tools and a framework within which to enact that wider mission and purpose? Again, this question matters because it shapes the approach CQC takes to regulatory design and development, particularly how much discretion or room for manoeuvre it has (or believes it has) in regulatory design decisions. A legislatively-led regulator will tend to retreat to the core functions explicitly enshrined in legislation, and will not seek to do things, even if they are necessary to its mission and purpose, which it is not explicitly empowered to do by the legislation. A mission-led regulator will be more willing to use its informal and quasi-legal "soft" powers, and the leverage it gets from positional authority, professional standing, market pressures, media attention and the like to achieve its mission – and its strict legislative functions may be only part of what it does.

3. What view does CQC take of the organisations it regulates, and what kind of relationships does CQC seek to have with regulated organisations? Does CQC see the provider organisations it regulates as partners or as adversaries; as generally well-intentioned and honest or as amoral, utility-maximising and calculating?
Does CQC see the relationships it has with providers as essentially transactional ones which exist just to deliver the immediate purposes of any regulatory interaction, or does it see these relationships – and some of their "softer" characteristics such as mutual trust and respect, credibility, and longitudinality – as materially important to the regulatory process and its effectiveness? This question matters once again because whether you take an instrumental or a socio-organisational view of these regulatory processes and relationships fundamentally affects the approach to the regulatory model.

Of course, there may be other aspects of CQC's regulatory philosophy and mission which deserve debate too, but our point is that these issues – whether debated or unspoken – are important influences on the design and development of regulatory arrangements, as we hope the analysis of the CQC regulatory model in the rest of this chapter helps to illustrate.

Registration

CQC has a statutory duty to register providers of health and social care, and the workload of registration associated with its growing regulatory scope (such as dental services and general practice) has been considerable. The process of registration is well defined, and is delivered by a separate operational workforce from that responsible for ongoing regulation. However, our interviews suggested some divergence of views about the purposes of registration, and the models in use which these mechanisms might suggest. We identified three different and not necessarily compatible models which might be at work:

Registration as a threshold for new providers to meet. Here, registration sets a standard which new providers have to reach before they are allowed to enter the market. It should deter poor providers from coming in (when they see what will be expected of them and realise they cannot meet required standards) and it should drive aspirant providers to improve in order to get their registration - a potentially powerful incentive. In this model, some providers might take quite a while to achieve the required standards, and some might well be rejected.

Registration as the start of a regulatory relationship with the provider. Here, registration is the first opportunity to get to know a new provider, to begin building up knowledge about their performance which will then be used in ongoing regulatory intervention, and to start to establish a constructive working relationship with them. It is likely to involve extended and face to face contact with the provider, as well as initial data collection to establish a baseline performance. Again, it might take some time for providers to complete the process and get registered, and providers might be risk rated or triaged at registration in this model in ways that then shape subsequent regulatory interactions.

Registration as administrative data capture. Here, registration is primarily an administrative process, in which the basic facts about the provider are established or checked – issues like financial standing, the identity of key individuals, and conformance with essential and statutory requirements. It is a transaction which can be accomplished quickly and efficiently, perhaps with little or no actual face to face contact, but at its completion there is limited assurance about performance, and no established relationship. In a sense, this model postpones dealing with any concerns about performance from registration to ongoing regulation.
We think that as it is currently configured, CQC's registration process and its metrics are primarily constructed around the third model – to provide an administrative record of registered providers. There is a focus in the registration metrics on speed and efficiency of process. Interviewees did not generally think that registration required providers to improve to meet the essential standards, or that registration provided assurance that newly registered providers were meeting the essential standards. Because registration is conducted by a separate team, it does not seem to initiate the regulatory relationship, and some questions emerged about the transition of responsibility and the handover of information to those responsible for ongoing regulation. However, this probably means that CQC has foregone opportunities for improvement in the registration process, that registration does not provide much assurance about the performance of new providers, and that regulatory work which could be achieved through registration is essentially deferred and transferred to those responsible for ongoing regulation.

Standard setting

CQC works to a set of regulations set out in secondary legislation by the Department of Health, which have then been mapped into a separate set of 28 outcomes in CQC's compliance framework which is designed to be used across all providers. For each outcome, there is some guidance about its interpretation in the compliance framework. CQC inspections focus on what are often called its 16 essential standards (items drawn from the set of outcomes) and on each inspection compliance with at least five of the 16 essential standards is measured dichotomously – one in each of five areas or dimensions. If a provider is found to be non-compliant, enforcement action is taken using the statutory regulations, not the outcomes.

Almost every regulator sets out its expectations of providers in some form of standards, rules, or regulations, often accompanied by further guidance on interpretation and implementation. The processes for standard setting, the way they are written, and how they are communicated are all connected to the wider regulatory purpose – what they are for. We can identify at least four potential models underlying the setting of regulatory standards:

Standards as a mechanism to set or frame stakeholders' values and performance expectations. Here, the regulator sets out its requirements, often at quite a high or conceptual level, both to communicate them to providers and also to inform others – commissioners, patients/users, and the public. In so doing the regulator helps to set stakeholder expectations and to give prominence to important issues or concerns – shaping the climate of regulation rather than necessarily setting detailed requirements. These standards may be expressed in quite abstract and high level terms – more like principles or expressions of purpose or intent - and may not be particularly designed for measurement. In a sense, such standards are an explicit expression of values more than they are a tool for performance measurement.

Standards as a mechanism for improvement through self-enforced compliance.
It can be argued that much regulatory compliance results not from regulatory interventions like inspections but from providers seeing what the regulator wants, understanding those requirements and responding – possibly in anticipation of future inspection, but also for other reasons such as competitive pressures or professional motivations. For this to work, the standards need to require improvement in most providers, who in turn need to understand the standards well, largely agree with what they require, and know how to and be able to implement them. So these standards are likely to be maximal, explicit, detailed, and accompanied by further guidance.

Standards as a mechanism for compliance through measurement and enforcement. In this model, the standards are seen as primarily providing a framework for measuring providers' performance, often via inspection, and then for tackling non-compliance or poor performance. This means that standards need to be expressed in terms which facilitate valid and reliable measurement. There may be a particular focus on setting standards in areas where enforcement is seen as needed, and on determining non-compliance reliably in ways that then allow enforcement. Standards are likely to be minimal in nature since the regulator is unlikely to be able to take enforcement action with more than a small proportion of providers.

Standards as a mechanism for differentiating between providers. In this model, the standards are used to measure performance, not just to assess compliance and initiate enforcement. Often, the intention is that comparative performance against the standards will be used by providers themselves and by others (such as users or commissioners of services) to make decisions about services they use, and that this will act as an incentive to improve performance. These standards are likely to be expressed in terms which facilitate valid and reliable measurement and to be designed to measure a wide range of performance levels.

These four models are not mutually exclusive, and a regulator might seek to draw on more than one of them in its approach to standard setting. In our interviews with CQC staff and review of documents, it seemed that perhaps the first and more probably the third model were in use – standard setting for compliance through measurement and enforcement. But a number of quite contested issues emerged, suggesting that there are some tensions or internal inconsistencies in the regulatory model which merit some discussion:

Generic standards. CQC has a single set of generic standards/outcomes which it uses to assess the performance of quite a diverse range of providers in health and social care. It does not produce interpretative guidance or specific information for particular sectors or service areas. The case for doing this seems to be that the fundamental dimensions of performance are generic or universal and so can be understood and applied in any sector. This may be so, but the more heterogeneous regulated organisations are, the more difficult it is likely to be to make generic standards valid and reliable, which they need to be if they are to be used in measurement and enforcement (as in the third model above). Moreover, if providers are to change in response to the regulator's standards, they need to have a good understanding of their content and implementation, and for this purpose detailed sector-specific guidance is probably also helpful.

Minimal, median or maximal standards.
CQC describes its 16 standards as "essential", and non-compliance with any of those standards may lead to compliance actions and enforcement. Compliance with most of the standards is actually quite high (on average 73% of organisations inspected comply with all essential standards). CQC has chosen to focus its attention in recent years on finding and dealing with poor practice. These observations all suggest that CQC's standards are minimal ones – a kind of safety net standard below which no provider should fall. Most but not all our interviewees thought this was the case. Some regulators adopt median standards – representing typical rather than minimal performance – and others set maximal or optimal standards, which are likely to require many organisations to improve in order to meet them. Minimal standards would not be helpful in the second and fourth model outlined above.

Content or focus of standards. CQC's standards, and particularly its outcomes, are intended to focus on patient or user experience, and their measurement is therefore predominantly undertaken through observing care or talking to patients/users and staff/caregivers. Most regulators have to decide how much they use their standards to define and then measure directly the quality of services delivered, or how much they use them to define and then measure systems and processes which are intended to ensure quality services are delivered - an important distinction. The former is problematic because of the costs and complexity of direct observation at any scale, especially in a larger regulated organisation with many services, and because it provides only a measure of performance at the point and place of inspection and observation with no indication of whether this is likely to be typical or sustained performance. The latter is problematic if those systems and processes are not good and reliable proxies for the quality of service. Many regulators end up measuring both – with the balance reflecting those measurement challenges and the scale and complexity of the organisation.

Understanding and implementation. CQC provides quite limited guidance in its compliance framework on how to interpret or implement the generic standards and outcomes. By introducing a separate set of "outcomes" which are, essentially, a relabelling and resequencing of the statutory regulations (and most of which are not, by most usual definitions, outcomes) it has also added an additional layer of complexity. If regulated organisations are expected to understand the standards and know how to implement them, then the lack of sector-specific guidance and the apparent complexity of the compliance framework may be unhelpful.

Measurement. We come later to examine the model underlying the inspection process, but it is worth considering not just the measurability of standards, but the nature of measurement used (a dichotomous judgement of compliant/not compliant versus a more discriminating scale of some kind) and the way that data from observation, interview and other sources is integrated into the data item of measurement. It has been quite hard for us to understand, through our interviews and our review of documents like the compliance and judgement frameworks, how CQC measures organisations against the standards/outcomes and particularly how it reaches a reliable and valid compliance/non-compliance judgement based on quite limited and largely observational data.
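One way to make the reliability question discussed above empirically tractable would be to have pairs of inspectors independently judge the same provider against the same standard, and then quantify their agreement. The sketch below is a minimal, hypothetical illustration of that idea in Python, computing Cohen's kappa from first principles; the paired judgements shown are invented for the example and do not come from CQC data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters on dichotomous compliant/non-compliant
    judgements, corrected for the agreement expected by chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n)
                   for c in set(rater_a) | set(rater_b))
    return (observed - expected) / (1 - expected)

# Hypothetical paired judgements for one essential standard across ten providers:
# 1 = compliant, 0 = non-compliant.
inspector_1 = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
inspector_2 = [1, 1, 0, 1, 0, 0, 1, 1, 1, 1]

print(f"Cohen's kappa: {cohens_kappa(inspector_1, inspector_2):.2f}")
```

A kappa close to 1 would indicate that inspectors reach the same judgement independently of one another; much lower values would support the concern that the dichotomous judgement depends heavily on individual inspector interpretation.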
Perhaps the key conclusion from this discussion of the model underlying standard setting is that different regulatory purposes require quite different kinds of standards. A regulator who sets out mainly to prevent poor practice – what is often called a safety-net regulator – probably wants minimal standards, couched in simple and measurable terms without much added guidance or interpretation, which are then straightforward to enforce by just assessing compliance/non-compliance and then acting accordingly. A regulator who seeks to drive improvement across the whole sector probably wants maximal standards, with a higher level of detail and description, and accompanied by quite specific guidance on implementation. Assessment probably involves a graduated rating of performance against the standard, along with feedback on the areas for improvement.

Information gathering and risk assessment

CQC collates information about regulated organisations in a Quality and Risk Profile. Some information is drawn from routine data, and this is supplemented by information from other sources such as CQC's own inspections, feedback from other regulators, complaints, "whistleblower" reports, etc. Data is statistically aggregated to form "z-scores" which are normalised quantitative estimates of risk. The quantity and quality of information varies across different types of regulated organisations. The idea is that compliance inspectors use the QRP to decide when to inspect organisations and what areas to focus on during their inspections. Our interviews suggested that in practice, QRPs were not well used, for a number of reasons discussed below.

The idea of collecting information about regulated organisations and using it to tailor the regulatory process to individual organisational circumstances or characteristics is a very common one, and is often called risk-based or proportionate regulation. It is intuitively attractive, and seems on the face of it to be a way to make the best use of limited regulatory resources, by focusing them on providers where there is a demonstrated need for performance improvement. There are three basic models at work here:

Using information to determine when regulatory interventions are used. By collecting information about ongoing performance, the regulator is able to measure current performance and/or predict performance trajectory or future performance, and this information is used to decide where to focus regulatory resources for interventions such as inspections or visits. Poor or declining performance is likely to trigger regulatory intervention. Importantly, this model assumes that performance can be measured sufficiently accurately and in a timely fashion, and that valid predictive measures of performance are available. It also assumes that regulatory staff are able to use that information effectively in their decision making about when to intervene.

Using information to focus or direct attention during regulatory interventions. By collecting information about performance, the regulator is able to identify areas of poor or questionable performance which it can then focus on during a regulatory intervention such as a visit or inspection. For example, it may identify particular services or functions which appear to be poorly performing in comparison with others, in this organisation or elsewhere.
This model assumes again that valid performance measures are available, and that regulatory staff are able to use the information effectively in inspections.

Making regulated organisations aware that information on performance may lead to regulatory interventions. Because regulated organisations know their performance is being monitored by the regulator, and that poor or declining performance may lead to an unwelcome regulatory intervention, they act themselves to monitor their own performance and to respond to emerging concerns before regulatory intervention occurs. This model assumes that valid performance measures are available, that the data is provided not just to the regulator but also to the regulated organisation, and that they have the internal capacity to use it to change their performance.

From our interviews, it seems that CQC has sought to apply both the first and second models in its development of QRPs. However, a number of factors have contributed to the use of QRPs being less effective than they might have been. First, the quality, completeness and timeliness of information have been questionable. For some organisations the QRP contains very little information, while for others (notably NHS organisations) it contains a large amount of information which is difficult to make sense of. It is difficult to manage and use "soft" intelligence about performance in the QRP, and it was not clear to us how the soft intelligence that inspectors acquire is recorded, shared or integrated. Second, it is far from clear that the information in the QRP has sufficient predictive value – in other words, that it is useful in identifying individual organisations that are performing poorly now or will perform poorly in future – and this does not seem to have been demonstrated. Third, it is not clear that compliance inspectors have the capacity and resources to use information from the QRP in their inspections, as the second model assumes. Fourth, the reintroduction of annual inspections for most organisations has undercut the main purpose of the first model outlined above, which is to direct regulatory resources and interventions.

The principle of risk-based or proportionate regulation is straightforward, but its practice or implementation requires the availability of timely ongoing monitoring data which is both a valid measure of current performance and a valid predictor of future performance. This is a challenging requirement to meet, which may explain why many regulators use regulatory arrangements which are only weakly or partially risk-based/proportionate.

Inspection and reporting

CQC undertakes inspections of regulated organisations. It has committed to inspecting most organisations at least once every 12 months. Inspections are unannounced, with inspectors simply arriving at the organisation with no advance notice. The length of the inspection and the number of inspectors involved varies with the sector and the size and complexity of the organisation, from less than a day with a single inspector to two or three days with a small team. Inspectors carry a mixed portfolio of organisations and are not expected to have any particular content expertise in the sectors they inspect. They can call on specialist inspectors with content expertise if they wish to, though this is fairly rare. Most time during inspections is used in direct observation of care, talking with patients/users and talking with staff.
On each inspection five of the 16 essential standards are checked (with the idea that over a three year cycle all of the standards will have been checked). Compliance with each standard is recorded, and a narrative report of the inspection is produced.

Inspection is by far the commonest form of regulatory intervention. At its simplest, regulators use inspections to find out whether regulated organisations are conforming with the requirements set out in their regulations or standards. But inspection can serve wider purposes than measurement, and we identify four important models in use:

Inspection as a driver for improvement in advance of inspection. Foreknowledge of the prospect of inspection or of the actual date of inspection leads the regulated organisation to assess itself against the regulator's requirements and to seek to demonstrate compliance. Changes are made to deal with areas of non-compliance in advance of the inspection. This links to the model of standard setting for self-enforced compliance which was outlined earlier.

Inspection as a measure of compliance to support enforcement. This is the straightforward use of inspection to measure whether a regulated organisation is conforming to the regulator's requirements set out in their regulations or standards. If it is not, enforcement action – or the prospect of enforcement action – is used to make the organisation implement necessary changes. Often there is some form of follow-up or reinspection to check that the organisation has become compliant.

Inspection as a measure of performance to support improvement. This sees inspection as an external evaluation or assessment of the performance of the regulated organisation which provides a diagnostic opportunity for the organisation to then make improvements. There may be – in some cases – formal enforcement action taken when there is serious non-compliance or poor performance, but in most cases formal enforcement action is not taken, and for most organisations the inspection would identify at least some opportunities for improvement.

Inspection as a driver for other regulated organisations to improve. Here, what matters are the potential effects of inspection on other providers – those not being inspected, but who anticipate that they may or will be in the future. Inspection results are published and publicised in forms which are designed for other providers to learn from them – either as examples of poor practice or examples of good practice.

CQC seems to use only the second model in its inspection process. This is consistent with a view of the regulatory process that sees it as primarily focused on preventing and dealing with non-compliance with minimal standards, through enforcement, as discussed earlier in the sections dealing with standard setting and regulatory purpose and mission. It is not consistent with any wider regulatory purpose which, for example, sees it as concerned with improving performance. But even in those terms, there are at least four further issues to do with the inspection process which seem worthy of more consideration.

Firstly, the process of inspection is largely oriented towards direct observation and assessment of care processes, and it was not clear to us how well that works in larger organisations, where only a very small proportion of care delivery can ever be observed or assessed.
It seems that there is an underlying but largely untested presumption that observed care processes in one part of an organisation are a good proxy for unobserved care processes elsewhere in that organisation. Secondly, the decision to inspect a limited set of essential standards (generally 5 of the 16 standards) at each inspection sets the scope of inspection and enforcement to a subset of requirements rather than to all of them. We understand the rationale for this decision was essentially predicated on the assumption that inspecting against fewer standards would take less time, though that does not seem to have been tested. It means that longitudinal and interorganisational comparisons of performance are problematic (as the standards tested at each inspection vary). Thirdly, as already noted, we found it difficult to understand how the dichotomous judgement about whether or not an organisation is compliant with a given standard is actually reached, and what evidential burden or threshold was required to deem an organisation non-compliant. The reliability of inspection judgements of compliance does not seem to have been tested, and the absence of explicit guidance about the interpretation of standards for particular sectors or in particular circumstances means there is much reliance on inspector judgement. Fourthly, the decision to give inspectors mixed portfolios of organisations and not to require inspectors to have any content knowledge or expertise in the sectors is predicated on the assumption that content knowledge or expertise is not needed to reach a valid and reliable judgement, and that too has not been tested, was much contested by interviewees, and seems intuitively implausible.

Overall, it seems that CQC's inspection process is designed around the inspection for compliance and enforcement model outlined above (and CQC does not attempt to use the other inspection models listed), but that it is difficult to reconcile even the fairly limited terms and purposes of that model with its execution in practice, for the several reasons outlined above.

Enforcement

CQC has a range of enforcement powers which it can use with non-compliant providers. These range from compliance actions and warning notices up through civil penalties and placing conditions on registration to the suspension or cancellation of registration. When a provider is found to be non-compliant, a judgement framework is used to determine the enforcement response. In practice, the great majority of enforcement actions have been compliance actions, though latterly CQC has made increasingly frequent use of warning notices.

It is worth noting that formal enforcement is just one of the ways that regulators secure improvement or compliance, and as the earlier sections of this chapter suggest, other regulatory interventions (such as standard setting and inspection) have important roles alongside enforcement. However, in understanding how enforcement brings about compliance we can identify four main models at work:

Informal enforcement or the prospect of enforcement as an incentive to drive compliance. Here, it is the prospect of enforcement action, or informal action short of enforcement, which causes the provider to make changes and achieve compliance.
Informal actions may include inspectors communicating the issues of concern verbally in advance of a formal inspection report which would lead to enforcement, while mechanisms like notice of deferred action may be used to give providers the prospect of enforcement. In both cases, it is assumed that the prospect of enforcement action is sufficient incentive for providers to change, and that they have the capacity to change.

Enforcement as an incentive to drive compliance. Here, the straightforward rationale is that providers, faced with the costs associated with enforcement (for example, fines, loss of business or damage to reputation, continuing or increased regulatory scrutiny and attention, etc.) act to make changes and achieve compliance because doing so will cost them less in the short or long run. This means that the actual impact of enforcement action needs to be greater than the cost of compliance (or a provider might rationally decide to accept the enforcement action as a business cost), and it again assumes that providers have the capacity to change.

Enforcement as a symbolic action to drive compliance. In this model, the actual content of enforcement action is seen as less significant, and enforcement is seen as having a symbolic purpose – in publicly identifying and labelling a provider as non-compliant. The actual penalty or cost associated with enforcement for the provider may be relatively trivial. This "naming and shaming" view of enforcement assumes that providers place a high value on their reputation and public standing, and perhaps see themselves as good corporate citizens with a social purpose which would be incompatible with being non-compliant with the regulatory requirements, so they will respond to the symbolic impact of enforcement action. It also, again, assumes that providers have the capacity to change.

Enforcement as a driver for other regulated organisations to achieve compliance. In this model, enforcement action against one provider is seen as providing a lesson – and a deterrent to non-compliance – to other providers. Here, enforcement action is likely to be made public and publicised, with the intention that other providers will take notice. Again, it presumes that the prospect of facing similar enforcement action is sufficient incentive for providers to change, and that they have the capacity to change.

It seems to us that CQC makes some use of a number of these models, but its enforcement actions are generally relatively trivial in terms of their actual impact on providers, so the dominant model might be thought to be that of enforcement as symbolic action. It does publicise its enforcement actions, and this might be thought to influence other providers to achieve compliance. However, some of the assumptions underlying these models deserve further consideration. Firstly, relying on enforcement action to secure compliance is an expensive approach to driving change, because enforcement action is usually resource intensive for the regulator. This means there is an effective and relatively low limit on how many providers the regulator can feasibly use enforcement action against. Secondly, for enforcement action to have a symbolic value and deterrent effect as the models suggest, it probably has to be used quite rarely in any case. Used routinely, enforcement risks becoming an accepted cost of doing business, and an irritant rather than an incentive to providers to comply.
Thirdly, higher level enforcement powers are difficult to use in practice – they often have spillover effects on stakeholders other than the provider, and their use can therefore appear disproportionate – so regulators tend to make use mainly or only of lower level enforcement powers, as indeed CQC does. It is important to know whether such lower level enforcement powers are indeed achieving their aim of getting the provider to comply. Fourthly, and perhaps most importantly, all the enforcement models assume that the provider has the capacity to change and comply. If it does not, repeated enforcement actions are unlikely to secure sustained compliance, may even result in a deterioration of performance, and other ways to bring about change and improvement in performance may be needed.

Information provision

CQC publishes information from its regulatory processes on its website, in the form of reports about providers and other outputs such as its annual reports and other publications. The website has been designed to allow users to find information about individual providers and to find reports of inspections and their results. CQC also publishes a wide range of other information not specific to particular providers, through its annual report, reports on themed inspections, and guidance on its compliance and enforcement policies. CQC does not publish all the information it holds about providers – for example, the content of the Quality and Risk Profile (QRP) discussed earlier is not published. Information provision might be thought to work in three ways:

Information provision to be used by other stakeholders in their decision making. Here, the information published by the regulator is designed to be used by other stakeholders such as service users and their families, service commissioners/funders (such as local authorities, PCTs/CCGs), the local and national media, etc. The intended use of this data might be quite instrumental – for example in choosing a nursing home or hospital – or might be more symbolic – for example, in a local newspaper publicising findings from an inspection. In both cases the intention would be that the use of information from the regulator exerts direct or indirect influence on the provider to comply with regulatory requirements. For this model to work, it assumes that the information is provided in a content, form and timescale that these target audiences will find comprehensible and usable, and that they have the capacity and motivation to access and use it.

Information provision to be used by providers in compliance and improvement. Here, the target audience for information provision is the provider community, and the publication of information about both poor and good practice may be intended to help them understand regulatory requirements and to encourage self-enforced compliance and improvement. For this model to work, it assumes that the regulatory process produces information that providers will find useful in understanding and conforming to regulatory requirements, and again that they have the capacity and motivation to access and use it.

Information provision as a mechanism for public accountability. Here, the publication of information is an end in itself – making both the regulator itself and regulated organisations accountable to the public and demonstrating that the regulatory arrangements work.
There is not necessarily an expectation that publication will lead to any particular outcomes, or that there is a functional mechanism at work – publication is the purpose in itself.

It seems that CQC seeks to use mainly the first model in its information provision. We are not aware of any information that has been collected on whether and how much target audiences such as patients/service users, families, commissioners and others use CQC's published reports. However, we would observe that the constraints which have already been discussed in the sections on compliance, to do with standard setting and the volume and quality of information collected through the inspection process, may significantly limit its utility for others in their decision making. The dichotomous judgements of compliance or non-compliance cover only a partial set of the essential standards and in any case are not particularly discriminating since most providers are fully compliant. The narrative content of reports is mostly focused on evidencing compliance or non-compliance, and does not provide a coherent account of the quality of care or the overall performance of the provider. In short, the reports are designed for use in enforcement by CQC, not for use by other stakeholders in their decision making. The QRP, which may contain a wider data set on performance, is not published.

Reviewing the regulatory model

The purpose of this chapter was to set out the regulatory model in use, alongside common alternative models, and to outline the consequences and implications of those regulatory choices. Those models are summarised in table 2.1 below, in which the models which appear to be currently in use in CQC are shaded. Overall, it appears that CQC's current regulatory model is largely consistent with that of a "safety net" regulator, focused mainly or only on securing compliance with minimal standards by providers at the bottom end of the performance distribution through enforcement actions.

Table 2.1. A summary of the regulatory model analysis

Registration: threshold to meet for performance; relationship building with provider; administrative data capture.
Compliance – standard setting: framing values and expectations; improvement through self-enforced compliance; compliance through measurement and enforcement; differentiating performance of providers.
Compliance – risk assessment: determining when to use regulatory interventions; focusing or directing attention during regulatory intervention; making providers aware that the regulator may intervene.
Compliance – inspection: driver for improvement in advance of inspection; measure of compliance to support enforcement; measure of performance to support improvement; driver for other providers to improve.
Enforcement: informal enforcement or the prospect of enforcement drives compliance; enforcement as incentive to drive compliance; enforcement as symbolic action to drive compliance; enforcement as driver for other providers to achieve compliance.
Information provision: information to be used by other stakeholders in decision making; information to be used by providers in compliance and improvement; information as mechanism for public accountability.

Even within its own terms, there are some important inconsistencies within the current regulatory model.
First, the adoption of generic standards/outcomes with little or no sector-specific definition or guidance makes understanding and interpretation by providers and the regulator difficult, and is likely even with minimal, enforcement oriented regulation to make valid and reliable measurement problematic. Second, the model envisages proportionate or risk-based regulation with most attention focused on the worst performing providers, but in practice this has been difficult to operationalise and has now been partly abandoned. Third, the use of a generic inspection process and inspection workforce, with no assumed content knowledge of health and social care or specialist expertise is likely, even with minimal, enforcement oriented regulation to make valid and reliable measurement problematic. Fourthly, the likely utility of information provision is 21 Item 5 Paper No: CM/01/13/04 fundamentally constrained by the limited nature of the data about performance gathered through the regulatory compliance process. One consequence of adopting the current regulatory model – focused on “safety net” regulation, compliance with minimum standards, and poor performing providers – is that the overall impact of the regulator is likely to be low. A lot of regulatory resource is expended on overseeing the majority of organisations, who are compliant anyway. While regulatory resource is meant to be concentrated on poorly performing providers, it may be quite difficult for the regulator to secure sustained improvements in their performance, and even if it does the overall effect on quality in the sector as a whole is probably quite limited. Enforcement is the main tool for securing improvement, but as noted earlier enforcement is a resource intensive way to secure improvement. Overall, this regulatory model may not seem like good value for money to important stakeholders in regulation. CQC’s current strategic review anticipates a number of important changes to the regulatory model which has been described in this chapter. In particular it proposes that CQC will: “Develop a model of regulation based on what drives the greatest improvements in the quality of care” “Move towards a model of differentiated regulation. This means we will regulate different sectors in different ways. To do this we will make greater use of information, including an evaluation of the impact of our regulatory activities.” “Need to regulate different services in different ways and at different times to make sure we achieve the greatest improvement in quality. In terms of the frequency and intensity of inspections, this may mean revisiting our regulatory approach, including adapting our current annual and bi-annual inspection regime.” The first of these three changes especially seems to us to imply some profound changes to the regulatory model, and to each of its components outlined in this chapter and summarised in table 2.1. With these changes to the regulatory model in mind, the following chapters of this report examine the evidence base in four areas – differentiation in regulatory design, regulatory standard setting, risk based regulation, and the competencies of the inspection workforce. 22 Item 5 Paper No: CM/01/13/04 Chapter 3 Differentiation in regulatory design Introduction In chapter 2 we noted that CQC has up to now adopted a generic regulatory model, and has striven to sustain genericism in regulatory processes, standards and methods, and in the inspection workforce. 
We also noted that the recent CQC strategic review suggests that in future there will be greater regulatory differentiation – between sectors, or types of organisation. In this chapter, we first review the research literature on how regulatory agencies approach the task of designing their regulatory regime, and what might be the benefits and disbenefits of genericism versus differentiation. We then examine how our four comparator regulators – which have varying levels of heterogeneity in the organisations they regulate – approach regulatory design and the issues of differentiation, and explore what the existing CQC data on inspections shows about the extent of existing differentiation in regulatory design. Learning from research Given the diversity of health and social care services, activities and settings and the diversity of organisational forms and types which deliver health and social care services, a question confronting any organisation that seeks to regulate in this area is whether, in regulatory terms, “one size fits all”. In answering this question, it is helpful to consider the service or organisational characteristics which may vary, the degree of heterogeneity which may exist, and the implications of that variation or heterogeneity for the purposes of regulation. Of course, it is important to recognise that the mission or purpose of regulation may itself vary across sectors or organisations. Day and Klein (1987) compared British and American models to nursing home regulation, and found that each was informed by different philosophies as well as differences in the structure and nature of the nursing home sector in the two countries. At a basic level, a distinction could be made between a centralised, legalistic approach and a more informal and voluntary one. An alternative distinction could be made between deterrence-based models (which are focused on enforcing the regulatory requirements) and compliance-based models (which are focused on developing a working relationship with the regulated organisations). Day & Klein summarise these two “dimensions” in terms of a more general distinction between a technological model (that emphasises administrative control and sophisticated assessment methods) and a social interaction model (that emphasises social control and assessor judgement). They note that whether one model, the other, or a combination of the two should be emphasised in a given 23 Item 5 Paper No: CM/01/13/04 regulatory system depends on what the expectations of the system are given the social and political environment. One way to explore the way that regulatory mission and purpose might vary across sectors is to use Hood’s (1991) framework for describing the core values of schemes for public administration. These values are: “Keep it lean and purposeful” – this value emphasises the allocation of resources to tightly defined tasks. Success is defined as frugality, and failure as waste; “Keep it honest and fair” – this value emphasises fairness and mutuality. Success is defined as the proper discharge of one’s duties, and failure as abuses of office; “Keep it robust and resilient” – this value emphasises reliability and adaptability. Success is defined as robustness, and failure as a breakdown or the presence of risk. Hood argues that any or all of these might be represented in a given administrative scheme, although each has different implications for the design of a scheme and so may conflict with each other. 
He further alludes to the different value sets being associated with variations in organisational design; for example, the third set is characterised by having spare capacity for dealing with crises and a culture that encourages learning and creativity. The question to consider is the extent to which each value set is represented within a regulatory scheme, and whether this dictates or is dictated by the nature of the organisations that are being regulated. For example, the regulators of a sector that is especially cost-sensitive might emphasise efficiency, while regulation in a different sector with greater perceived or actual risk might focus on robustness and resilience due to the consequences of salient technical hazards. A regulator with responsibility for a number of sectors might legitimately construe its mission and purpose – and so its regulatory design – differently for different sectors. Turning to the characteristics of regulated organisation, one characteristic which has been seen as important is organisational type – often categorised as public or private; and not-for profit or for-profit. Bartlett and Phillips (1996) observed a shift in healthcare from a predominantly publicly-funded and publicly-provided system towards one based on a mixed economy – that is, with increasing levels of input from private and voluntary organisations. In addition, they noted that service provision itself has diversified, with an increasing range of specialist services. Bartlett and Phillips noted that the increase in private sector provision led to an increase in administrative control of care homes, including revised and more tightly applied regulations and the introduction of lay assessors. However, at the time this was not matched by any change to the regulation of NHS long-stay wards, which was comparatively less comprehensive. Hence, a differentiation in regulation between public and private care developed; that this was apparently driven by concerns about exploitation and standards of care in the private sector (Bartlett and Phillips, ibid.) suggests that the regulatory approach was (possibly implicitly or unintentionally) a response to perceived organisational characteristics of the providers. 24 Item 5 Paper No: CM/01/13/04 Another characteristic of organisations which may be important is organisational size or scale. For example Lindøe, Engen & Olsen (2011) noted that the Norwegian petroleum industry comprises a limited number of large and bureaucratic organisations; as such, it lends itself well to cooperation with legal authorities and it is easy to identify the partners with whom regulators should work. Coastal fishing in Norway is somewhat different – there are large numbers of small fishing boats that are autonomous, informally organised, and traditionally risk-takers. In addition, there is greater public awareness of adverse events in the petroleum industry than there is of adverse events in coastal fishing. Hence, coastal fishing is more resistant to the efforts of state regulators to impose safety standards. While Lindøe et al. compared different industries, Nielsen (2006) compared different regulatory areas in Denmark (county environmental regulation; municipal environmental regulation; fire precautions; occupational safety and health). 
This study found that inspectors in each area used data in different ways when making decisions about the level of sanction to apply to a given breach: for example, inspectors in occupational safety and health, and those in fire precautions, placed more weight on the gravity of a regulatory breach than did inspectors in other areas. Meanwhile, inspectors in county environmental regulation and fire precautions placed more weight than did the others on the company’s “track record” of breaches and its “will to improve”. Nielsen attributes these patterns to differences in the type of responsiveness applied by inspectors across the different areas. For example, inspectors were employing different degrees of “short-memory” responsiveness (to proximal issues concerning the instance of the breach under consideration) versus “longmemory” responsiveness (to distal issues concerning previous interactions with the other organisations). A question that Nielsen poses, which is pertinent for the current discussion but not taken much further in that study, is how these differences might be further explained by the characteristics of each regulatory area – for example, the types of institutions that are involved. Another characteristic of organisations that might affect regulatory design is their systemic complexity and risk. In other words: some organisations involve technically sophisticated tasks, performed by specialized professionals who are organised into highly interdependent groups and departments (in effect, a subset of the wider system described earlier by Rasmussen). As Wiig and Lindoe (2009) explain, such characteristics could be ascribed to hospitals, meaning that risk in these settings needs to be viewed in the context of these interconnected elements. Lynxwiler et al. (1983) also noted that in the mining sector, large, complex organisations were believed to have more resources to deal with regulatory demands, and so were more likely to benefit from leniency on the part of inspectors. Hence, regulators need to be capable of recognising and working with the complex interactions that may occur within a regulated organisation. For example, hospitals may have greater complexity than nursing homes. In a study of risk factors in pharmacy practice (Phipps et al., 2010), found that community pharmacies were perceived to have particular risks due to their being commercial enterprises and less “institutionalised” than hospital 25 Item 5 Paper No: CM/01/13/04 pharmacies, and a review of the pharmacy regulator’s disciplinary hearings (Phipps et al., 2011) offered qualified support for this view – referred pharmacists were more likely to be from community than hospital pharmacies. However, it was not clear whether this was because community pharmacy was intrinsically “riskier” or because there were more effective systems for risk control in hospitals. When regulators deal with large and complex organisations, there may also be a trade-off between the regulatory activity that could be carried out and the regulatory activity that the organisations involved are able or willing to fund. The more a regulator relies on material and human resources to conduct its activities (for example, because of the administrative burden associated with regular use of inspection and enforcement measures) the higher the cost will be (Emery et al., 2000). 
It may be the case that a more complex organisation requires more resources to carry out an inspection to the same level of thoroughness as is conducted in a less complex organisation. Netten et al. (1999) recommend that regulatory fees should be set in a way that reflects cost variations and is transparent to providers. While their argument was made with specific regard to care home regulation, it would presumably be just as valid (and possibly even more so) for a regulator that was covering different types of healthcare provider.

Given that regulatory interactions and organisational characteristics might differ between professions and sectors of work, the question arises of whether a single regulatory approach can be adopted for a sector as diverse as healthcare. Comparing hospital with nursing home regulation in the United States, Walshe & Shortell (2004) found that both were similar with regard to the regulatory objectives (to promote high quality care), method of direction (a manual containing the required standards and assessment methods) and method of detection (a periodic inspection visit during which the provider is assessed against the standards). However, they differed in terms of the regulatory models and approaches to enforcement; the hospital regulator is more focused on compliance, emphasising educational activities and drawing from the professional expertise of providers and inspectors to facilitate improvement. Nursing home regulators, though, are more focused on deterrence, emphasising the threat or actual application of sanctions in order to force providers to make improvements. As these studies suggest, different types of provider appear to lend themselves to different approaches. However, whether these differences reflect differences in regulatory requirements per se between the two sectors, or are a product of differences in their social and political context, is an interesting question to consider. For example, care homes appear in general to have been regulated in a more authoritarian manner than hospitals. However, is this difference a response to greater difficulties in controlling risks or maintaining quality in care homes, or is it a product of a greater inequality of power between regulatory agencies and healthcare providers in this setting? Or might it be due to neither of these, but instead to a belief that private sector providers are more risky than public sector providers?

In summary, it seems that a range of organisational, service or system, and regulatory agency characteristics, as well as aspects of the wider social and political context, are likely to result in legitimate and necessary regulatory differentiation. Some of the key dimensions are set out in table 3.1 below. Overall, while a single sector regulator, dealing with a relatively homogeneous set of regulated organisations, might well adopt a single regulatory design, it seems likely that single sector regulators dealing with highly heterogeneous regulated organisations and multi-sector regulators dealing with diverse sectors will differentiate to at least some degree, and the extent to which they do this will reflect trade-offs between the factors summarised in table 3.1.
Table 3.1. A typology of factors resulting in differentiation in regulatory design

Social and political context: public views/attitudes to the sector(s); past history of quality/service performance; relative power of professional and other stakeholders – providers, users, funders and others.
Regulatory agency: regulatory philosophy; mission and purpose and how it is construed; level of resourcing for regulation; organisational form and powers; governance and accountability.
Industry, service or system characteristics: nature of service and service users (observability/specification of service, technicality, user autonomy/empowerment); nature and form of any market or competition/contestability in services; extent of other forms of control and accountability (choice, voice and exit for users, democratic oversight etc); numbers of providers – degree of diversification or concentration in service provision; heterogeneity among providers – degree of variation in the organisational characteristics below.
Organisational characteristics: size or scale; ownership and governance; complexity and level of risk; capacity of internal management, organisational development and change capabilities.

Comparisons with other regulators

Table 3.2 below sets out a structured summary of the way that four comparator regulators – the Joint Commission for the Accreditation of Healthcare Organisations, the Dutch Healthcare Inspectorate, OFSTED and the Homes and Communities Agency – use differentiation in their regulatory design. It is immediately evident that all, with the exception of HCA (which is really a single sector regulator), have quite highly differentiated regulatory arrangements. They have different regulatory standards for different sectors, though some note that it is important to have consistency of approach to standard setting, language/terminology, and other aspects especially where some regulated organisations work across multiple sectors. All produce guidance on the interpretation of those standards which, again, is sector specific. There tends to be somewhat greater commonality in their approaches to regulatory processes – with broad similarities for example in methods for undertaking surveys and inspections and for reporting, though again there is differentiation in response to organisational scale/complexity and risk – with differences in the intensity and periodicity of inspection activities between sectors. There is also rather greater commonality in their approaches to enforcement, perhaps because the enforcement powers are generally defined in statute, and so relate to the regulatory agency as a whole rather than to an individual sector. However, the two English regulators, OFSTED and HCA, both have limited enforcement powers for public sector organisations and rather greater formal regulatory power over private sector organisations. For example, while OFSTED regulates independent schools according to a set of regulations in statute, it inspects (not regulates) maintained schools, for which that set of regulations does not apply. Having said that, the inspection manuals and processes for maintained and independent schools have much in common, and OFSTED's lack of legislative regulatory powers over maintained schools does not appear to constrain its ability to secure change where it is needed. It is notable that all the comparator regulators use regulatory staff who have content expertise in the sectors in which they work.
Some further specialise – for example having more experienced staff who deal with the more complex or difficult provider organisations, or having some inspection staff who undertake particular kinds of activities such as handling investigations or enforcement.

Table 3.2. Comparison of approaches to regulatory differentiation

Do they regulate different sectors or types of organisations?
JCAHO: All in healthcare, but include hospitals, long term care, behavioural healthcare, home care, labs, ambulatory care etc.
IGZ: All in healthcare, but include hospitals, long term care, public health, pharmaceuticals/devices, and health professions.
OFSTED: Yes – schools, further education, adult learning, early years/childcare, childrens homes, childrens social care, adoption/fostering.
HCA: All in same sector – social housing – but large variations in organisational scale, and include public (LAs), not for profit (housing associations) and for profit companies.

Do they have different regulatory standards for different sectors?
JCAHO: Yes, though aim for some commonality of design, consistency in definition and numbering.
IGZ: Yes – some commonality but standards set by sector and overseen by IGZ. History is that IGZ formed from merger of 4 inspectorates in 1995.
OFSTED: Yes – quite separate inspection manuals and standards for different sectors. Limited commonality.
HCA: No, though standards applied differently for larger and smaller RSLs (cut off is <1000 units).

Do they produce guidance for different sectors?
JCAHO: Yes – separate accreditation manuals and guidance notes for sectors.
IGZ: Yes – again sector take lead in setting standards, guided and overseen by IGZ.
OFSTED: Yes – emphasis on driving improvement and spreading good practice through guidance.
HCA: No – focus is on safety-net regulation, not on driving improvement.

Do they have different regulatory processes for monitoring or inspection for different sectors?
JCAHO: Yes – broadly similar survey process but different indicator sets, and different periods between survey and survey length.
IGZ: Yes – broadly similar inspection processes but different indicator sets and used differently.
OFSTED: Yes – similarities in overall inspection process but content, timing and length vary.
HCA: Yes for larger and smaller RSLs. Smaller ones are not inspected routinely.

Do they have different enforcement powers for different sectors?
JCAHO: Main formal power is to refuse or put conditions on accreditation. Matters as accreditation deemed to satisfy Medicare/Medicaid participation requirements.
IGZ: Yes – powers vary across sectors (organisational, device/pharmaceutical and professional regulation).
OFSTED: Yes – note that some sectors are regulated and others inspected (eg no regulations in statute and no formal enforcement powers).
HCA: No – same enforcement powers though use focused on larger RSLs.

Do they have different regulatory/inspection staff for different sectors?
JCAHO: Yes – surveyors and HQ staff in sector have to have background in that content area.
IGZ: Yes – inspectors have background in content area.
OFSTED: Yes – inspectors have content background and some specialise in particular inspection or investigation activities.
HCA: Regulatory staff mostly have background in social housing or in financial analysis (qualified accountants).

Using existing data

We can examine the extent to which CQC's current generic regulatory model varies in the way it is used across the sectors it regulates through a range of existing data from inspections, enforcement actions and other sources.
The purpose of such an examination is really to raise questions about how much variation there is, and where differences are found, what their causes might be. Such variations in the application of the current generic regulatory model might help to inform decisions about how and where to introduce greater differentiation.

Does the use made of inspection vary across sectors? For example, does the frequency of inspection or the use of different types of inspection (planned, responsive or followup) vary across sectors?
Do rates of compliance with the 16 essential standards vary across sectors? Are some sectors more or less likely to demonstrate overall compliance?
Does the use of enforcement (compliance actions, warning notices, and other enforcement actions) vary across sectors? How does the response to enforcement, in terms of return to compliance, vary by sector?

These questions might also be asked within sectors – as noted above, some regulators differentiate their regulatory design based on organisational characteristics such as size/scale, risk, or ownership type. For example, existing data could be used to examine whether compliance rates vary within a sector such as adult social care on factors such as service/client group type, ownership (public/private, for-profit/not-for-profit, single provider/chain of providers, etc).
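One way of putting these questions to the existing data, sketched below, is to work from a flat extract of inspection records and compare inspection activity and compliance rates across and within sectors. The sketch is purely illustrative: the file name and the column names (location_id, sector, service_type, ownership, inspection_type, compliant) are assumptions made for the purpose of the example rather than a description of CQC's actual data model.

```python
# Illustrative sketch only - assumes a hypothetical flat extract of inspection
# records; the column names are invented and would need mapping onto the
# fields CQC actually holds.
import pandas as pd

inspections = pd.read_csv("inspections_oct2010_sep2012.csv")  # hypothetical extract

# Inspection activity and overall compliance by sector (cf. table 3.3 below)
by_sector = inspections.groupby("sector").agg(
    inspections=("location_id", "count"),
    locations=("location_id", "nunique"),
    pct_compliant=("compliant", "mean"),
)
by_sector["inspections_per_location"] = by_sector["inspections"] / by_sector["locations"]

# Mix of inspection types (planned, responsive, follow-up) by sector
type_mix = pd.crosstab(inspections["sector"], inspections["inspection_type"], normalize="index")

# Within-sector variation, e.g. adult social care broken down by service type and ownership
asc = inspections[inspections["sector"] == "Adult social care"]
within_asc = asc.groupby(["service_type", "ownership"])["compliant"].agg(["mean", "count"])

print(by_sector, type_mix, within_asc, sep="\n\n")
```

The same grouping logic could be applied to enforcement data – substituting an indicator of enforcement action for the compliance flag – to address the questions above about the use of, and response to, enforcement across sectors.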
Table 3.3 below provides an initial exploratory analysis of data on rates of inspection and rates of compliance across six sectors, using data for inspections carried out between October 2010 and September 2012. It can be seen that there is considerable variation in both the use of inspections and in rates of compliance. For example, the intensity of inspection (in terms of number of inspections per location in the period), and the use of responsive and follow-up inspections varies substantially, and does not necessarily seem related to rates of compliance. Overall compliance rates vary from 75% to 94% by sector, and there are also obvious variations in compliance rates for some standards. Of course, the sectors themselves vary in size and one question to consider might be whether there is substantial variation within the largest sector of adult social care, and if so how it might be helpful to subdivide or analyse this sector in more detail.

Table 3.3. Comparison of inspection and compliance rates across sectors for inspections carried out between October 2010 and September 2012

No of regulated locations as at 30/9/12 – social care 25234; NHS 2259; independent healthcare 2896; independent ambulance 327; dental care 10118; all sectors 40834
No of inspections undertaken – social care 25152; NHS 1500; independent healthcare 1636; independent ambulance 55; dental care 2242; all sectors 30585
Inspections per location – social care 0.99; NHS 0.66; independent healthcare 0.56; independent ambulance 0.17; dental care 0.22; all sectors 0.75
% planned inspections – social care 68.1; NHS 56.4; independent healthcare 78.5; independent ambulance 56.4; dental care 94.6; all sectors 70.0
% responsive inspections – social care 16.3; NHS 26.5; independent healthcare 10.0; independent ambulance 30.9; dental care 1.6; all sectors 15.4
% follow-up inspections – social care 15.6; NHS 17.1; independent healthcare 11.6; independent ambulance 12.7; dental care 3.8; all sectors 14.6
Overall % compliance – social care 84.2; NHS 88.0; independent healthcare 90.4; independent ambulance 75.2; dental care 94.4; all sectors 85.2
% compliance with each essential standard (1, 2, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 16, 17, 21), sixteen figures per sector:
100.0; 100.0; 90.5; 100.0; 93.3; 84.0; 76.3; 58.3; 85.7; 66.7; 44.8; 71.4; 71.0; 73.8; 100.0; 44.4
99.4; 92.3; 97.7; 100.0; 100.0; 93.0; 93.1; 79.4; 73.1; 82.8; 77.1; 82.1; 92.8; 89.7; 94.3; 72.2
92.5; 87.2; 82.8; 87.1; 96.1; 88.8; 83.5; 73.2; 78.6; 91.5; 86.9; 83.0; 83.7; 84.6; 94.5; 73.2
91.7; 86.6; 81.1; 86.6; 96.0; 88.4; 78.3; 72.5; 77.8; 91.1; 86.9; 82.6; 83.0; 83.9; 94.3; 71.1
90.9; 88.3; 80.9; 89.2; 95.6; 89.4; 94.0; 80.7; 84.6; 95.0; 95.2; 85.6; 87.0; 87.3; 96.1; 85.1
95.8; 90.6; 88.9; 98.6; 98.0; 88.8; 92.5; 79.1; 89.5; 95.1; 85.4; 89.5; 89.8; 92.6; 95.1; 83.5

Conclusions

Most regulators – particularly those who are responsible for multi-sector regulation – make much more use of differentiation in regulatory design than CQC does at present. However, the way that differentiation is used varies, and it seems that regulators often have different regulatory standards, guidance and staff for different sectors, but try to keep basic regulatory processes, methods and powers common across sectors. It is clearly important to have a robust underlying logic to differentiation, which both explains and justifies the differences in use of regulatory resources, intensity of scrutiny and oversight, and response to non-compliance. Table 3.1 provides a useful starting point for that logic, and identifies that differentiation is perhaps necessary not just to deal with different sectors (such as nursing homes, hospitals, or homecare providers) differently, but also to differentiate within a sector on the basis of organisational characteristics like ownership, size/scale, complexity, risk and known past performance or performance trajectory – a theme which we return to in chapter 5 which examines risk-based or proportionate regulation.

Chapter 4 Regulatory standard setting

Introduction

In chapter 2 we discussed the way that CQC sets and uses regulatory standards, and the place of standards within the regulatory model. We highlighted the generic nature of those standards, which are intended for use across all health and social care providers, and the absence of sector-specific guidance or description. We noted the indirect relationship between the statutory regulations, CQC's outcomes, and the 16 essential standards which feature prominently in the compliance process. We discussed at what level the standards were set (whether they were minimal, median or maximal standards) and how dichotomous judgements of compliance were reached. In this chapter, we first review the research literature on regulatory standard setting, and seek to use it both to provide a framework for analysis and to identify important common themes in standard setting.
We explore how our four comparator regulators set and use standards in regulation, and then we turn to asking whether CQC's existing data could be used to explore how the current standards are set and used. We conclude by considering the implications for future standard setting and identifying some areas for future research and evaluation.

Learning from research

The setting of standards – sometimes also called rules, regulations, directives or other terms – is a central feature of regulation (Black 1997). It is the main mechanism by which the regulatory agency communicates its requirements or expectations of the regulated community, both to regulated organisations individually and collectively and to other stakeholders including the wider public whom regulation is usually there to protect (Hood et al 1999). Cornock (2012) considers it important that a regulatory scheme has a clear and achievable purpose that is understood by both regulated organisations and those that regulation seeks to protect, and standards are an essential tool in defining that purpose. For example, advocates of regulation may assume that standard setting will lead to improved performance and value for money, and that the application of standards will lead to improved accountability (Braye and Preston-Shoot, 1999; Sutherland and Leatherman, 2006). Wiener (2003) notes a different assumption underlying "traditional" regulatory models, that there is a known minimal acceptable level of care, and that this is represented by standards, in order to ensure that providers do not fall below it.

Before examining the nature and characteristics of regulatory standards, it is worthwhile first reflecting on their purpose and place in regulation. First, some regulators adopt a "control" approach, in which the focus is on monitoring an organisation's compliance with predefined standards of practice and applying sanctions if compliance is not found. Others adopt a "support" approach, in which the focus is on advising and assisting an organisation to set and improve its own standards of practice. Although they may seem to be alternative approaches which are likely to result in quite different types and uses of standards, and some regulatory policies favour one over the other, they can be used together in practice (Bruhn and Frick, 2011). According to Dodds and Kodate (2011), regulatory approaches in healthcare are the manifestation of two "institutional logics". One is a logic of accountability in which healthcare professionals and organisations are accountable to the state and public, and as such should be subject to administrative controls in which explicit standards and requirements are likely to be central. The other is a logic of organisational learning in which organisations should learn from previous mistakes and near misses, and as such they should be encouraged to develop methods of organisational inquiry and learning, such as prospective and retrospective hazard analysis, and it is rather harder to see a substantial role for explicit standards here.

Regulators adopting the "control approach" often place considerable emphasis on compliance – assessing the extent to which a practitioner or organisation meets set standards of practice. This carries the advantage of being a standardised approach, so that people and organisations involved in regulation know what is expected of those that are being regulated and it is, in theory, possible to ensure that regulatory decisions are consistent.
However, Wiener (2003) highlights some limitations to a “compliance regime” of this kind: The standards are not always evidence-based or seen to reflect what is “important” (for example, they may look at administrative capacity rather than care processes or outcomes); The “rules” might be applied inconsistently across inspectors; It can lead providers to focus their energies on meeting minimum requirements rather than striving for excellence; It can lead to an adversarial relationship between providers and the regulator, which in turn can create a negative public image of the sector; It requires a large amount of resources to execute; Regulatory sanctions can have a further negative effect on the care that the provider is able to provide to service users. 35 Item 5 Paper No: CM/01/13/04 Wilpert (2008), commenting on compliance regimes in nuclear regulation, adds that it is difficult to achieve compliance without detailed and frequent regulatory input. In large or complex organisations, or in a sector containing many organisations, regulatory inspectors can find themselves with a high workload as they attempt to cover as many of the compliance topics as possible, checking compliance with large numbers of standards. Furthermore, as outlined earlier, quality and safety concerns can be rooted within an organisation’s systemic complexity – the danger of attempting to reduce these concerns into a set of static, narrowly-defined compliance topics is that they can obscure one’s understanding of the problems at hand (Nichols and Wildavsky, 1987; Wiig and Lindoe, 2009). Similarly, Carroll (1995) cautions that an overemphasis on compliance can limit the value and impact of regulation. Writing in relation to health and safety at work, he argues that the inspector can be overwhelmed by the burden of regulatory measurement against standards, and constrained by limited resources may respond by only “doing the minimum”. The concern is that this turns the safety professional, highly skilled in problem solving and preventing accidents at work, into a checklist-driven operative of a regulatory regime which is so focused on measuring compliance it may miss many opportunities for improvement. Brennan (1998) makes a similar case in the healthcare sector, arguing that most regulatory attention is devoted to what he calls “culling” – that is, removing defects from the system through a standards-based, compliance focused approach. He highlights four other regulatory tasks – tackling improvement, disseminating best practice, stimulating learning and encouraging creativity – which he argues are usually de-emphasised in regulation, and this may be why external regulation and internal quality improvement activities are not aligned or integrated. Black and Baldwin (2012) list a range of regulatory methods that could be used which include both “compliance-focused” methods (such as inspections and audits) and more facilitative methods (such as advisory visits and incentive strategies). The latter ideas are explored by Wilpert (2008) who suggests that regulated organisations could be encouraged, with oversight from the regulator, to set their own goals for performance improvement. Quality and safety could be assessed by means of self- or peer-assessment against performance indicators which might be linked to regulatory requirements but could also go beyond them if appropriate (Power, 2012). 
Regulators could further facilitate organisational learning by encouraging organisations to use feedback, reflection and experience to evaluate their current performance standards (Wiig and Lindøe, 2009). This idea of the regulator as a member of and contributor to a community of practice (Wenger 1998) rather than as an external agent is developed by Waring (2007) who found that doctors were more willing to embrace patient safety initiatives which came from their own community of practice rather than from outside. This might mean participants setting and evolving their own standards in partnership with the regulator and other stakeholders. The regulator might contribute by facilitating the learning processes of the community, and 36 Item 5 Paper No: CM/01/13/04 promoting the sharing of information and experiences within and between providers (Macrae, 2008). This alternative approach has the advantage of being more participatory, in that the regulated organisation collaborates in the standard setting and performance monitoring. Having discussed the purpose of standards and standard-setting in regulation, we now turn to consider the nature of regulatory standards, and for this we use a typology which identifies some of their key characteristics (see table 4.1). If standards exist to communicate the regulator’s expectations or requirements, then it is important that their meaning is communicated clearly and that providers understand and accept them. This will be facilitated if there is good evidence to demonstrate that the standards are valid measures of performance, applicable to the provider’s context and setting. It may not always be necessary for standards to be accompanied by measurement and monitoring – the standards can still act to communicate requirements. This can be an advantage in setting standards for diffuse or abstract attributes, or a regulator may choose to set far more standards than they might feasibly measure and monitor through, for example, inspection. However, if standards are to be measured, it is clearly essential that measurement conforms to the conventional expectations of reliability within and between raters or observers, especially where measurement is used to make judgements or decisions about individual organisations. In seeking to frame standards so that they are measurable, regulators often face a tension between generalisability (avoiding standards which are particular to contexts or settings) and specificity (setting standards to a level of detailed definition which permits accurate measurement). The focus or subject of standards is an essential concern, not only because it communicates what the regulator thinks is important but also because, indirectly, it may signal that other aspects of service performance which are not the subject of standards are not (as) important. For this reason, regulators often seek to set comprehensive sets of standards covering many domains. The table distinguishes between standards which focus on aspects of structure, process and outcome – an issue which was touched on in chapter 2 in discussing the balance between standards which require direct observation of care processes or standards which measure structures or systems of care. Standards communicate not just the fact of the requirement, but the level of expectation of the regulator – what the table terms “stringency”. 
Again, we discussed in chapter 2 the idea that standards might be set at minimal, median or maximal levels, and this would affect both their likely impact on performance and their value in measurement and differentiation between providers. A connected concern is the form of measurement used – whether it takes the form of a dichotomous judgement (compliant or not compliant; or met/not met), or an ordered or Likert-style interval rating scale (such as excellent, good, satisfactory, poor).

Table 4.1. A typology of regulatory standards (Walshe 2003).

Validity: What evidence is there that the attributes or behaviours which are the subject of the standard actually contribute towards the intended objective of improved performance?
Measurability: To what extent can the attributes or behaviours which are the subject of the standard actually be measured or assessed, by collecting or using qualitative or quantitative data?
Reliability: If these attributes or behaviours are measured, how consistent is the measurement process and to what extent will differences in ratings be a function of the regulated organisation or of the measurement process?
Generalisability: How far can the attributes or behaviours which are the subject of the standard be said to be universally applicable, or will there be differences in organisational or environmental context which mean that the standard is not applicable in some settings?
Specificity: In how much detail are the attributes or behaviours which are the subject of the standard specified? Is the standard a broad statement of overall principles or aims, or a detailed and prescriptive statement of required systems, processes and structures?
Subject: What is the subject of the standards? Do they focus on structure – organisational arrangements, facilities or the environment; or on process – the systems of care delivery themselves; or on outcomes – the results of healthcare services and their impact on patients or users?
Stringency: How hard is it for organisations to comply with the standard? Is it set at a minimal or safety-net level, at which all but a few organisations would comply; a normative level, at which the average organisation would be in compliance; or at a maximal or aspirational level, at which few if any organisations are currently able to comply?

Comparisons with other regulators

Table 4.2 below sets out a structured summary of the way that our four comparator regulators – the Joint Commission for the Accreditation of Healthcare Organisations, the Dutch Healthcare Inspectorate, OFSTED and the Homes and Communities Agency – go about setting standards, and the form that those standards take. It is a complex picture, and no single approach predominates. JCAHO, perhaps the longest established healthcare regulator in the world, has a formal, complex and highly routinised approach to standard setting which is partially driven by the wider legal context (its accreditation decisions are used by government healthcare funding programmes and so it needs to ensure that its standards meet those legislative requirements). Notably, as the only private rather than public regulator in our comparison, JCAHO does not have to follow the sometimes onerous legislative processes laid down for rule-making by government agencies, though it still consults widely on changes to its standards.
It has a comprehensive accreditation manual for each of its main areas of activity (hospitals, behavioural healthcare, long term care, home care, laboratories, etc) which sets out detailed standards and how they are to be measured through elements of performance. It looks like an extremely detailed set of checklists, but descriptions of the survey process suggest that surveyors exercise professional judgement, and use the standards more as a framework for their findings than as a checklist for measurement. In contrast, Dutch law gives responsibility for setting standards not to the regulator, IGZ, but to the professional associations of healthcare organisations and professional groups operating in each sector, though IGZ has the remit to oversee and facilitate standard setting. This means that approaches to standard setting vary across and within sectors. OFSTED does not describe its inspection framework as a set of standards, but it does publish a very detailed inspection framework and manual – again specific to each sector which it inspects or regulates. For maintained schools, that framework sets out the four main areas of assessment – achievement, quality of teaching, behaviour and safety, and leadership – and provides "grade descriptors" for the four point scale used to rate schools in each area. These can be seen as four standards, set and assessed at a relatively high level. OFSTED provides quite extensive guidance and examples of good practice in relation to each of these areas. As was noted earlier, OFSTED does not have formal regulations governing its operation in all sectors, but where it does (for example in early years/childcare and in independent schools) its inspection manual is not necessarily closely based on those regulations. HCA sets out a regulatory framework with economic standards focused on governance, financial viability, value for money and rent setting and consumer standards dealing with issues like tenant involvement, standards of homes, neighbourhood and community relations, etc. For each it provides a fairly detailed narrative description of requirements. It does not routinely monitor or measure performance on the consumer standards and will only act in that area if it thinks there is "serious detriment" to tenants. It does monitor all larger RSLs against its governance and viability standards annually, and rates them on a four point scale though in practice most are rated 1 (meeting all requirements). The standards are seen as minimal, defining a base level of performance for all organisations, and HCA does not have a remit for seeking improvement beyond this basic compliance.

Table 4.2. Comparison of approaches to regulatory standard setting

How are regulatory standards set?
JCAHO: JCAHO sets multiple very detailed standards each containing a number of elements of performance.
IGZ: IGZ does not set standards itself, but works with professional association and sector for them to set standards which are then used in inspection visits.
OFSTED: Schools assessed in four domains – achievement; teaching; behaviour and safety; and leadership.
HCA: Regulatory framework sets out economic standards (focus on governance and viability) and consumer standards.

Are standards minimal, median or maximal?
JCAHO: Standards are intended to be achievable for all organisations, but focus is on promoting improvement and excellence.
IGZ: Standards are set by professional association and so level/stringency varies. IGZ will facilitate standard setting.
OFSTED: Standards designed to be meaningful across whole performance distribution – show what is outstanding to what is inadequate.
HCA: Standards are minimal – all RSLs should clear the bar.

How are organisations rated or measured against each standard?
JCAHO: Elements of performance rated compliant, partially compliant or not compliant.
IGZ: Indicators developed and used in each sector are based on the standards.
OFSTED: Rated in each area on four point scale – outstanding, good, requires improvement or inadequate.
HCA: Rated against governance and viability standards on 4 point scale. Consumer standards not measured.
Is there a defined threshold or required level for compliance for each standard?
JCAHO: All must be compliant or be addressed in evidence to be accredited.
IGZ: Not necessarily – indicators may measure proportion of compliance with elements of standards.
OFSTED: No, but ratings of 3/4 (requires improvement, inadequate) lead to further regulatory action.
HCA: Ratings of 1, 2 are compliant; 3, 4 are non-compliant. In practice noncompliance rare.

Are all organisations measured against standards in the same way?
JCAHO: Yes, though standards are specific to sector.
IGZ: Within a sector, the indicator data set will be used to collect information uniformly from organisations.
OFSTED: Yes, though standards are specific to sector.
HCA: Yes.

How are results of measurement against standards aggregated to give overall assessment?
JCAHO: Accreditation decision made on basis of whole report – accredited, accredited with followup, contingent accreditation, preliminary denial, or denial.
IGZ: There is no defined connection between indicator data set performance and decisions made by the regulator, though serious noncompliance is likely to result in regulatory intervention or further scrutiny.
OFSTED: Implicit consensus process across inspection team – professional judgement – no rubric for combining scores.
HCA: Need to get G1/G2 and V1/V2 to be deemed overall compliant. G2 and V2 ratings will result in increased regulator scrutiny.

Table 4.3 below provides a short summary of the standards for each of the four regulators, and it illustrates the divergence in relation to the typology of standard setting set out earlier. Some standards are clearly set at a very high and abstracted level, while others are far more detailed and precise. Some seem to try to cover all the important domains of service while others are focused on fewer, key topics. Some measure structures, systems and policies while others assess directly the nature and quality of service provision. Finally, some are clearly set at a minimal level and focused on compliance, while others are more demanding and intended to improve or differentiate performance.

Table 4.3. Summary of standards for comparator regulators.

JCAHO: The Comprehensive Accreditation Manual for Hospitals sets out 18 areas for standards, listed below, and in each defines a set of standards and elements of performance: Accreditation Participation Requirements (APR); Environment of Care (EC); Emergency Management (EM); Human Resources (HR); Infection Prevention and Control (IC); Information Management (IM); Leadership (LD); Life Safety (LS); Medication Management (MM); Medical Staff (MS); National Patient Safety Goals (NPSG); Nursing (NR); Provision of Care, Treatment, and Services (PC); Performance Improvement (PI); Record of Care, Treatment, and Services (RC); Rights and Responsibilities of the Individual (RI); Transplant Safety (TS); Waived Testing (WT).

IGZ: List of standards not available in English at this time.
OFSTED: Overall effectiveness – inspectors evaluate the quality of the education provided in the school. In doing this, they consider all the evidence gathered to support the judgements they must make. These cover four areas:
Achievement of pupils at the school – inspectors have regard both for pupils' progress and for their attainment. Particular consideration is given to the progress that the lowest attaining pupils are making.
Quality of teaching in the school – inspectors consider the planning and implementation of learning activities across the whole of the school's curriculum, together with teachers' marking, assessment and feedback to pupils.
Behaviour and safety of pupils at the school – this judgement takes account of a range of evidence about behaviour and safety over an extended period.
Quality of leadership in, and management of, the school – inspection examines the impact of all leaders, including those responsible for governance, and evaluates how efficiently and effectively the school is managed.

HCA: Economic standards – Governance and Financial Viability; Value for Money; Rent. Consumer standards – Tenant Involvement and Empowerment; Home; Tenancy; Neighbourhood and Community.

Using existing data

We can examine some of the issues raised in this chapter through the existing data from CQC's inspection processes, but many would require some further data collection. For example, using the framework set out in table 4.1 to structure areas of potential inquiry there are perhaps three main sets of concerns:

How valid and meaningful are the standards, as measures of the quality of care? This is difficult to assess from existing data sources, though work could be done to explore the relationship between compliance with the standards and other routinely available measures of performance, where they exist, on the presumption that some association would be seen as evidence of validity. It would be relatively straightforward to assess the face validity of the existing standards through surveys of key stakeholders (users of services, provider organisations, health and social care professionals, and so on) and it would not be difficult to incorporate such data collections into a routine survey process.

How measurable are the standards – in other words, do they conform to the conventional expectations of any measurement instrument in terms of their inter- and intra-rater reliability, temporal stability, and applicability across the domains in which they are used? How is compliance with the standards best measured – through a dichotomous judgement as at present, or through some form of rating scale? Here, the existing data is probably of limited value, but it would be straightforward to undertake conventional tests of issues like rater reliability, scale validation, and so on. Some work already undertaken by CQC suggests that inspector judgements about compliance are complex and probably quite variable.

How stringent are the standards – how difficult is it for organisations to meet them? Here existing data (presented in table 3.3) suggests that compliance with the standards is generally quite high – averaging about 85%, but with compliance with some standards (such as 1, 11 and 17) well over 90%. This seems on the face of it to mean the standards as set – or as currently measured – are minimal ones. However, it might be argued that the regulations on which the essential standards are based actually allow substantial room for the regulator to define what constitutes compliance, through its guidance and its judgement processes. So it might be quite feasible for CQC to develop more discriminating or demanding measurement processes associated with these existing regulations and standards.
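As an illustration of the kind of rater reliability testing referred to above, the sketch below computes raw agreement and Cohen's kappa for a set of paired dichotomous compliance judgements, where two inspectors independently assess the same locations. The figures are invented for illustration; a real exercise would require a properly designed sample of paired or duplicated inspections.

```python
# Illustrative sketch only: inter-rater reliability for dichotomous compliance
# judgements, using invented paired ratings (1 = compliant, 0 = not compliant).
from sklearn.metrics import cohen_kappa_score

inspector_a = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1]
inspector_b = [1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1]

# Raw percentage agreement between the two inspectors
agreement = sum(a == b for a, b in zip(inspector_a, inspector_b)) / len(inspector_a)

# Cohen's kappa discounts the agreement expected by chance alone
kappa = cohen_kappa_score(inspector_a, inspector_b)

print(f"raw agreement = {agreement:.2f}, Cohen's kappa = {kappa:.2f}")
```

Because most providers are judged compliant, raw agreement between inspectors will tend to look reassuringly high; a chance-corrected statistic such as kappa is a more demanding test of whether dichotomous compliance judgements are genuinely reliable.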
However, it might be argued that the regulations on which the essential standards are based actually allow substantial room for the regulator to define what constitutes compliance, through its guidance and its judgement processes. So it might be quite feasible for CQC to develop more discriminating or demanding measurement processes associated with these existing regulations and standards.
Conclusions
As was observed in chapter 2, in the discussion of the standard setting component of the compliance process in the regulatory model, the form and nature of the standards to be used flows from the regulatory mission and purpose, and the resulting regulatory model adopted. We noted that a "safety net" model of regulation requires simple, minimal and fairly generalisable standards which are easy to measure and enforce, while a more improvement-focused model requires more detailed and specific standards, set maximally, and accompanied by detailed interpretive guidance. The variation in approach to standard setting observed across our four comparator regulators in this chapter at least in part reflects differences in regulatory purpose – both JCAHO and OFSTED have explicitly improvement-oriented missions, while CQC and HCA have less ambitious regulatory aims, largely focused on detecting and dealing with poor performance. This means that the shift in CQC's regulatory purpose signalled in its strategic review probably requires a fundamental review of the current "essential" standards, and a revisiting of its approach to measurement. Finally, it is worth noting that regulators do not necessarily have to set the standards which they then use in the regulatory process – IGZ being an example of this approach being embedded in the legislative framework for Dutch healthcare regulation. In the context of CQC, this might lead one to consider the roles of Royal Colleges, professional associations, and other agencies such as NICE.
Chapter 5 Risk based regulatory approaches
Introduction
In chapter 2, we set out the way that CQC has sought to gather and use information about the performance of health and social care providers in the regulatory process, and particularly to guide when regulatory interventions like inspections are used and how they are focused or deployed. We noted that since April 2012 CQC has been seeking to undertake annual inspections of most health and social care providers, and so has to some extent retreated from risk-based regulation, though the recently published strategic review indicates that the organisation still wishes to adopt a more risk-based or proportionate inspection regime in the future. In this chapter we first review the published research literature on risk-based, proportionate or intelligent regulation – the idea has been an important theme in the regulatory field for at least twenty years – and then examine the practices of four other regulators in this area. We then draw on some of CQC's existing data sources to examine how the risk assessment models set out in chapter 2 seem to work, and we conclude by identifying some future directions for research and evaluation in this area.
Learning from research
Adil (2008) notes that several regulatory organisations in the UK have shown an interest in risk-based regulation – that is, a regulatory system in which the level of scrutiny applied to individuals or organisations under regulation is proportionate to the level of risk that each is judged to pose.
Adil (2008) and Hampton (2005) argue that a particular advantage of risk-based regulation is that it provides a way of targeting limited regulatory resources to the areas that are most in need of them. Ross and Hannan (2007) note that it can reduce the problem of information overload due to "defensive reporting" by registrants; rather than routinely requesting the same amount of information from all registrants, making requests based on the perceived level of risk can help to focus the gathering and reporting of information onto that which is of actual intelligence value. Black and Baldwin (2010) make a similar point – rule-based regulation (where regulatory activity is driven by rules rather than risk) can cause inspectors to become overburdened if there are many rules. Finally, Ross and Hannan argue that risk-based regulation has greater flexibility and sensitivity in responding to complex regulatory problems – rule-based regulation is rendered ineffective by any mismatch between the regulations and the risky activity. For example, the 1974 Health and Safety at Work Act (HASAWA) specifies a general responsibility for employers and employees to take reasonable steps to ensure health, safety and welfare at work, while the health and safety legislation that preceded HASAWA attempted to prescribe safety requirements for specific items of factory equipment, but in doing so quickly became obsolete due to technological developments and changes in practice.
What does it mean to have "risk-based regulation"? Black and Baldwin (2010) see its key characteristic as being that regulatory activity is driven primarily by a consideration of risks rather than rules. Such a regime does not render rules irrelevant in regulatory activity, but it subordinates them to the risks that the regulator intends to manage. According to Black and Baldwin, there are five core elements to risk-based regulation:
A definition of the risk(s) to be controlled;
A determination of the types and level of risk that the regulator is prepared to tolerate;
An assessment of the risk(s);
A standardised risk scoring or ranking of organisations and/or activities;
A means of linking supervisory, inspection and enforcement requirements to the score or rank (for example, what level of risk would trigger information gathering, inspection and/or enforcement activities respectively?)
In a discussion of how risk-based regulation might apply to pharmacists, Phipps et al (2010; 2011a) noted that a key issue to consider is how risk is defined – the first requirement in Black and Baldwin's framework. For example: what is the risk that is to be managed? Who is it that creates the risk, and who is subject to it? Who defines and controls the risk? With regard to the latter question it is worth noting that in healthcare, "risk" can mean different things to different stakeholders – for example, service users and healthcare professionals might have different perceptions of what constitutes a risk (Phipps et al., 2010; Rudkin, 2009). There are different ways of assessing the risk associated with a given provider (Phipps et al., 2011a). Some methods rely on quantitative data, whether objective (such as incident rates) or subjective (such as probability and severity estimates). Others are more qualitative in nature, based for example on analysis of critical tasks and the processes conducted to achieve these tasks.
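To make the shape of such a regime concrete, the sketch below (in Python, purely for illustration) shows one minimal way in which Black and Baldwin's five elements might fit together: scored risk domains, an assessment for each provider, a standardised overall score, and a link from score bands to graduated interventions set by the regulator's risk tolerance. The domains, thresholds and intervention labels are invented for the example; they are not drawn from CQC's model or from any of the comparator regulators.

```python
# Illustrative sketch only: a toy risk-tiering scheme in the spirit of Black and
# Baldwin's five elements. The risk domains, thresholds and intervention levels
# below are invented for the example, not CQC's or any regulator's actual model.

from dataclasses import dataclass

# (1) definition of the risks to be controlled, expressed as scored domains
RISK_DOMAINS = ("safety", "governance", "workforce")

# (2) the levels of risk the regulator is prepared to tolerate before escalating,
# and (5) the link from score bands to regulatory interventions
INTERVENTION_BANDS = [
    (0.0, "routine monitoring of submitted data"),
    (1.5, "request for further information / telephone meeting"),
    (2.5, "focused inspection"),
    (3.5, "full inspection and possible enforcement"),
]

@dataclass
class Provider:
    name: str
    scores: dict  # (3) an assessment of each risk domain, e.g. standardised z-scores

def overall_risk(provider: Provider) -> float:
    # (4) a standardised scoring/ranking rule - here simply the mean domain score
    return sum(provider.scores[d] for d in RISK_DOMAINS) / len(RISK_DOMAINS)

def intervention_for(provider: Provider) -> str:
    # pick the most intensive intervention whose threshold the score reaches
    score = overall_risk(provider)
    chosen = INTERVENTION_BANDS[0][1]
    for threshold, action in INTERVENTION_BANDS:
        if score >= threshold:
            chosen = action
    return chosen

providers = [
    Provider("Provider A", {"safety": 0.2, "governance": -0.5, "workforce": 0.1}),
    Provider("Provider B", {"safety": 2.9, "governance": 3.1, "workforce": 2.4}),
]
for p in sorted(providers, key=overall_risk, reverse=True):
    print(f"{p.name}: score {overall_risk(p):.2f} -> {intervention_for(p)}")
```

As the studies discussed in this chapter suggest, the difficult part in practice is not the mechanics of such a scheme but whether the underlying scores have any real predictive validity.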
Allsop and Jones (2006) examined methods for detecting poorly performing medical practitioners, and identified three general models for identifying who should be given greater scrutiny: The investigation and learning model, in which doctors who are the subject of a complaint or report are reactively scrutinised; The performance assessment model, in which indicator variables (such as demographic factors or deviations from practice norms) are used to single out for greater scrutiny doctors that are likely to fall short of required standards; The surveillance model, in which doctors that have been the subject of complaints, negligence claims or disciplinary action are subsequently given greater scrutiny. 45 Item 5 Paper No: CM/01/13/04 Any of these models could be adopted, but the second and third lend themselves particularly to the use of risk assessment methods. Within the performance assessment model, for example, an organisation could be evaluated against “proactive indicators” (process measures that precede the occurrence of an adverse event) and “reactive indicators” (process measures that follow an adverse event). Phipps et al. (2011a) suggest some indicators for the assessment of pharmacies. Meanwhile, an argument for the surveillance model in healthcare comes from the work of Papadakis and colleagues (Papadakis et al., 2005; Papadakis, Arnold, Blank, Holmboe & Lipner, 2008) who found that, amongst US physicians, previous poor performance or unprofessional conduct, both at medical school and during training, was a predictor of subsequent behaviour resulting in disciplinary action. Phipps et al. (2011b) also found that, amongst a sample of registered pharmacists in the UK who had been before a disciplinary hearing, one fifth had previously been disciplined by the professional regulator. The healthcare studies cited thus far are focused on the regulation of individual registrants. However, substantially the same arguments apply to the regulation of healthcare organisations. For example, several studies (James et al., 2009; Fogarty & McKeon, 2006; Ashcroft et al., 2005) have identified characteristics of organisations that increase the risk of patient safety incidents. Griffiths (2004) lists characteristics that were found to be common to mental health trusts that performed well or poorly in Commission for Health Improvement inspections; these are summarised in Table 5.1. Table 5.1. 
Characteristics of trusts performing well or poorly in clinical governance reviews (adapted from Griffiths, 2004)
Trusts performing well: lower vacancy rates or active attempts to resolve vacancy problems; high staff morale; good progress with developing national service frameworks/NHS plan services and the care programme approach; leadership is cohesive, visible and well-regarded; strong relationships between clinicians and managers; cohesive structures between different parts of the organisation; strong structures to support clinical governance; well developed clinical information systems and progress with performance management; good progress on organisational and operational integration with social care; effective communication systems.
Trusts performing poorly: serious problems with staff recruitment; low staff morale; limited or partial developments of new services and limited implementation of the care programme approach; leadership perceived as remote or weak; lack of engagement of clinicians in management; disconnection between different parts of the organisation; limited structures to support governance; fragmented information systems and little development in performance management; limited progress with organisational and operational integration with social care; poor communication systems.
An alternative way to target risk-based regulation is on the basis of high- versus low-risk activities or services. For example, the Financial Services Authority selects organisations for inspection not just on the basis that they give cause for concern, but also in order to obtain a representative sample of organisations involved in a particular activity. In fact the FSA seeks out organisations within the sampling frame that it believes will demonstrate best practice as well as those it has concerns about (Ojo, 2010). In effect, then, it operates a form of risk-driven "thematic" inspection. Cohen et al. (2012) demonstrate how risk assessment methods – in particular, event trees with error probability estimates – can be used to identify high-risk activities in primary care medicines administration.
Whether the subject of risk-based regulation is individuals, organisations or activities, its effectiveness depends crucially on the quality of data available to the regulator (Lloyd-Bostock and Hutter, 2008). An obvious source of data is the record of enforcement actions held by each regulator. Unfortunately, studies that have examined such records (e.g. Lloyd-Bostock, 2010; Strong, 2011; Phipps et al., 2011b) have found that the dataset often does not lend itself to making predictions about risk. For example, it is at an inappropriate level of aggregation (Strong, 2011) or provides incomplete coverage of predictor variables across the sampling frame (Phipps et al., 2011b). A fundamental problem is that existing data sources were usually intended for purposes other than inferring risk factors (Lloyd-Bostock, 2010). Even if a comprehensive dataset is available, there are potential technical limitations to the determination of predictive models. For example, there may be statistical artefacts within the dataset, such as non-linear interdependencies between variables (Daníelsson, 2003; Nebeker et al., 2007). If the outcome variable has a low likelihood of occurring, then the predictive model may be weakened by small cell sizes, outliers or restriction of range in the variables when compared to the population.
Hence, the definition of risk at the outset – from which one presumably defines the outcome variable – is crucial. While the investigation and learning model is less reliant on retrospective data analysis, it still (like the other models to some extent) depends on there being a reliable mechanism for making complaints or reporting adverse events. In that respect, it is worth noting the findings of studies that look at reporting patterns or the factors that influence reporting behaviour. Phipps et al. (2011b) found that the most frequent source of reporting to the regulator was someone who had direct oversight of the registrant such as their employer, followed by the police, regulatory inspectors and members of the public. Boyle et al. (2010) and Williams et al. (2013) identify a range of factors that could determine whether or not an individual reports a patient safety incident including individual factors (such as one’s ability to cope with the challenge of reporting an incident), group factors (such as one’s perceived relationship with colleagues), technical factors (such as the accessibility and confidentiality of incident reporting systems), and organisational factors (such as a culture that encourages incident reporting). Smith et al. (2006) observed that, amongst anaesthetists, formal incident reporting was constrained by considerations about what constituted a “critical” incident and a desire to emphasise the informal learning value of incidents rather than 47 Item 5 Paper No: CM/01/13/04 invoke formal sanctions. These studies all suggest issues to be considered when designing reporting and complaints procedures for use in a regulatory regime. Boyle et al. (2010) suggest that characteristics and actions of the regulator can also influence reporting behaviour. These include the efforts made to inform registrants about legislative changes and an orientation towards supporting, rather than punishing, registrants. Meanwhile, Miceli et al. (2012) examine the practice of whistle-blowing about organisational wrongdoing and found that employees were more likely to do this when they felt that they had sufficient evidence, a belief that they had leverage over the situation and the moral support of their colleagues. While risk-based regulation has potential benefits for the use of regulatory resources, Nichols and Wildavsky (1987) noted an unintended consequence of risk-based regulation for nuclear inspectors in the United States. They found that inspectors were unable to predict the level of workload, and a critical incident would create a demand for reactive inspections, which had the knock-on effect of diverting manpower from routine inspections. Meanwhile, Bruhn and Frick (2011) noted that the health and safety regulator in Sweden did not have a clear policy with regard to how inspectors should divide their efforts (that is, whether inspectors they should focus on in-depth scrutiny of a few high-risk workplaces or a less comprehensive examination of more workplaces). The lesson to draw from these examples is that any regime for risk-based inspection should be both clear and workable. Bardsley et al. (2009) demonstrated how a risk-based inspection programme could be applied to healthcare providers. They created a dataset from the then available quantitative and qualitative information about NHS trusts, from which they computed a set of standardised scores for each trust. The scores corresponded to the care standards then in use. Bardsley et al. 
inspected a sample of trusts that had been identified as "high risk" on the basis of the scores, and a sample of randomly selected trusts. They found that the inspections of risk-selected trusts identified more undeclared qualifications of standards than did the inspections of randomly selected trusts. Notwithstanding the lack of a blinded allocation of inspectors to trusts, these findings demonstrate the potential for a risk-based inspection scheme to be developed.
Finally, Hampton (2005, p.31) provides a set of recommendations for risk-based regulation, which reflects the points made in this review and so serves as a useful summary. His recommendations are that risk assessment should: be open to scrutiny; be balanced in including past performance as well as potential future risk; use all available good quality data; be implemented uniformly and impartially; be expressed simply, preferably mathematically; be dynamic, not static; be carried through into [regulatory] decisions; incorporate deterrent effects; and always include a small element of random inspection.
Comparisons with other regulators
Table 5.2 below sets out a structured summary of the way that our four comparator regulators – the Joint Commission for the Accreditation of Healthcare Organisations, the Dutch Healthcare Inspectorate, OFSTED and the Homes and Communities Agency – gather information and use it in risk assessment or as part of their regulatory regime. It is worth noting that while the two healthcare regulators both collect and publish a great deal of performance information about providers, and have invested heavily in developing sophisticated performance measures and establishing bespoke data collection and analysis systems, neither of them then uses this information to reach a determination of risk which is then used to vary the intensity or periodicity of inspection. Our understanding is that they feel that the information and performance measures are important parts of the regulatory regime and are used by them in inspections and by other stakeholders, but they do not believe it is possible to use them quantitatively to assess risk and determine whether or when an inspection is needed, for two reasons. Firstly, the measures assess many characteristics of different service areas within providers, and their aggregation into a single risk measure is problematic and of questionable validity. Secondly, the predictive value of the measures is thought likely to be limited – and insufficient for making judgements at the level of the individual organisation of whether or not to inspect. In particular, there was scepticism about the ability to predict low-frequency major service failures/problems and concern about the risk of "overpromising" what risk-based surveillance could deliver. It is worth noting that both organisations do use risk-based regulation at the bottom end of the performance distribution, with providers who have failed or qualified inspection/survey results. But here, the data used to determine risk is data from inspection itself, and it is generally used to determine whether or when there should be a follow-up inspection.
Table 5.2. Comparison of approaches to risk-based regulation. For each regulator, the summary addresses three questions: is regulation intended to be risk based or proportionate; what information is used to assess risk/performance and how is it measured; and how are regulatory arrangements varied to take account of risk/performance?
JCAHO – Risk based or proportionate? Only marginally – all organisations have a 3 year (36 month) survey cycle. If not fully accredited, follow-up survey in 30 days to 6 months. Information used to assess risk/performance: since 1997, has been collecting an extensive set of performance measures through ORYX. This data is also published online. Variation of regulatory arrangements: only to a limited degree. In the process of introducing "intra-cycle monitoring" at 12 and 24 months between surveys.
IGZ – Risk based or proportionate? To some degree. All hospitals subject to a visit annually. If problems are identified then follow-up visits are arranged. Information used to assess risk/performance: has had a hospital performance indicators programme for 12 years. Data published by IGZ and by hospitals. Variation of regulatory arrangements: IGZ states that it uses indicators and inspector judgement to identify providers to visit, but in practice the regime is only weakly proportionate.
OFSTED – Risk based or proportionate? Yes – a school's rating determines the period to reinspection (up to 5 years, but outstanding schools not routinely inspected now). Information used to assess risk/performance: inspection judgement from last inspection, exam/results analysis, parent complaints, other intelligence. Variation of regulatory arrangements: schools in good and outstanding categories desk reviewed annually after 3 years to decide whether to trigger inspection.
HCA – Risk based or proportionate? Limited focus on risk-based regulation. More regulatory engagement (annually at least) with larger RSLs (>1000 homes). Information used to assess risk/performance: data gathered routinely on economic standards through a web portal from RSLs – consumer standards not routinely monitored. Variation of regulatory arrangements: financial analysis, assessment of governance and viability, and other data used to decide when and whether to take regulatory or enforcement action.
JCAHO is in the process of introducing a new system of "intra-cycle monitoring" which will involve a telephone meeting/conference call with each provider at 12 and 24 months after their last survey. This will involve a review of accreditation status, a discussion of risk areas raised either by JCAHO or the provider, and a review of performance measurement data. However, this is not risk-based – it will apply to all providers, and will simply serve as an additional regulatory intervention of lower intensity than a full survey. In comparison, the Homes and Communities Agency which regulates social housing providers (RSLs or registered social landlords) places some emphasis on risk-based regulation in its regulatory framework. It focuses most of its effort on the larger providers with over 1000 homes, and undertakes an annual regulatory engagement and assessment of governance and viability of each of these providers. Based on this assessment, it may then seek further information on aspects of the provider's performance and on compliance with the economic standards (which concern rent setting and related issues). It sets consumer standards but does not monitor these routinely, only engaging with providers when it finds evidence of "serious detriment" to consumers. It monitors financial returns from providers quarterly, and can intervene or seek further information if these returns give cause for concern.
OFSTED in its oversight of maintained schools does have a broadly risk-based approach to inspection. Schools that are judged outstanding do not have to be routinely inspected again within a defined period; those judged good will be inspected again within 5 years; those judged to require improvement are inspected again within two years; and those judged inadequate are subject to monitoring and are inspected again within 18 months.
For good and outstanding schools, a desk review of data such as exam results, attendance, parent views/complaints and any significant concerns is undertaken annually from 3 years after an inspection and may lead to an inspection.
Using existing data
We can examine the use of risk-based or proportionate regulation through CQC's own data – information about provider performance contained in the Quality and Risk Profile, information about the use of regulatory interventions, primarily the different forms of inspection, and information about compliance with the essential standards gathered through inspections. There are three basic questions we might want to tackle, related to the models for risk assessment outlined in chapter 2:
Does the data that CQC has about providers in the QRP give a sufficiently valid and reliable assessment of current provider performance, and/or prediction of future performance trajectory, such that it can be used to determine whether or when to use a regulatory intervention such as an inspection?
Does the data that CQC has about providers in the QRP help compliance inspectors to determine the focus of their inspections, by selecting as the standards against which they will inspect a provider those where they are most likely to find non-compliance?
How much does the frequency of use of regulatory interventions such as inspections vary across provider organisations, and is there a relationship between provider performance (as measured by the QRP or by inspection judgements) and inspection frequency or periodicity?
Taking the first of these issues – whether the QRP gives a sufficiently valid and reliable assessment of current provider performance and/or prediction of future performance trajectory such that it can be used to determine whether or when to use a regulatory intervention such as an inspection – we can test this by looking at the relationship between the "z-scores" and risk categories in the QRP and subsequent inspection judgements on scheduled inspections. If the z-score categories are valid and reliable indicators, we would expect providers with higher z-scores to be less compliant at inspection. Table 5.3 below contains an analysis of data for adult social care providers inspected between May 2011 and March 2012. It takes the provider z-score ratings immediately before inspection, and analyses all inspection judgements by risk category.
Table 5.3. Analysis of compliance rates by QRP risk category for adult social care providers inspected between May 2011 and March 2012
Risk category | z-score range | Inspection judgements | Compliant % | Not compliant % | Minor concern % | Moderate concern % | Major concern % | Odds ratio
Low Green | < -1.6 | 269 | 77 | 23 | 10 | 10 | 3 | 0.82
High Green | -1.6 to < -1.2 | 600 | 76 | 24 | 13 | 8 | 3 | 0.86
Low Yellow | -1.2 to < 0 | 11679 | 76 | 24 | 13 | 10 | 2 | 0.86
High Yellow | 0 to < 1.2 | 10422 | 68 | 32 | 15 | 13 | 4 | 1.14
Low Amber | 1.2 to < 1.6 | 715 | 62 | 38 | 19 | 14 | 4 | 1.36
High Amber | 1.6 to < 2.0 | 391 | 65 | 35 | 19 | 13 | 4 | 1.25
Low Red | 2.0 to < 2.3 | 247 | 66 | 34 | 17 | 11 | 6 | 1.21
High Red | 2.3 or more | 198 | 65 | 35 | 11 | 12 | 12 | 1.25
Total | – | 24521 | 72 | 28 | 14 | 11 | 3 | 1.00
If the QRP risk category was a valid and reliable measure/predictor of performance, we would expect to see a strong gradient in non-compliance across the risk categories. In fact, we see a relatively weak and non-linear relationship.
About a quarter of judgements are non-compliant in the lower risk categories (low green to low yellow) while about a third of judgements are non-compliant in the higher risk categories (high yellow to high red). The odds ratio in the right hand column provides a useful measure of how likely an inspection judgement is to find non-compliance compared with the average. Inspection judgement at providers categorised as red are about 25% more likely to be found non-compliant, while those at providers categorised as green are about 18% less likely to be found non-compliant. In short, it seems that the QRP risk category is measuring something that is related to provider performance, but it is not – even at the extremes of the scale – sufficiently predictive to use it to decide whether or not a provider requires inspection. It is also worth noting that although there are eight QRP risk categories, it is not a particularly discriminating scale – about 90% of providers are categorised as low/high yellow – and to be useful, a risk rating tool needs to differentiate between providers. A further analysis of this data set is contained in table 5.4 below, which breaks down the analysis by outcome. It shows non-compliance rates for each QRP risk category for each outcome separately. For some outcomes, the numbers of inspection judgements are quite small and so caution in interpreting the results is needed. However, again it can be seen that there is often little apparent relationship between the QRP risk category and the rate of non-compliance. 52 Item 5 Paper No: CM/01/13/04 Table 5.4. Analysis of compliance rates by outcome and QRP risk category for adult social care providers inspected between May 2011 and March 2012 Outcome 1 2 4 5 6 7 8 9 10 11 12 13 14 17 21 Total inspection judgements 3777 83 5800 639 17 4469 571 1231 1061 196 827 1943 3098 228 581 Overall % Low green High green Low yellow 17 25 31 28 18 19 47 48 49 27 22 29 28 24 57 5 0 46 0 9 15 37 17 45 17 17 4 15 29 100 20 17 57 29 28 0 17 20 49 36 22 22 26 26 33 63 41 32 50 17 33 0 17 26 50 % not compliant High Low yellow amber 17 26 34 27 17 22 49 48 52 24 24 34 29 22 59 32 0 41 29 0 22 34 48 51 35 23 29 40 33 58 High amber Low red High red 18 14 60 38 0 21 57 54 65 14 18 31 26 27 33 23 20 50 57 50 32 28 41 50 29 19 35 41 25 17 0 33 70 14 0 17 52 52 67 40 0 13 47 50 38 The data from table 5.4 can also be used to shed some light on the second question raised above – does the data that CQC has about providers in the QRP help compliance inspectors to determine the focus of their inspections, by selecting as the standards against which they will inspect a provider those they are most likely to find non-compliance. If this were the case, then we would plausibly expect there to be some variation in the frequency with which different standards were chosen for inspection, with inspectors “targeting” areas of likely non-compliance so that standards which are inspected more frequently would have non-compliance rates that are at least as high as, if not higher than, those which are inspected less frequently. As the graph in figure 5.5 shows, there is a great deal of variation in how frequently standards are chosen for inspection, but there is no apparent positive relationship between frequency of inspection and non-compliance. Indeed, it appears that some of the standards which are inspected most frequently (for example outcomes 1 and 7) have relatively low non-compliance rates. 
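A minimal sketch of how this kind of check on inspection targeting could be run is shown below, in Python and purely for illustration: the outcome labels and counts are invented, not the published CQC figures. It computes, for each standard, how often it is inspected and its non-compliance rate, and then a Spearman rank correlation between the two; if inspectors were successfully targeting likely non-compliance, that correlation would be expected to be clearly positive.

```python
# Illustrative sketch only: testing whether frequently inspected standards are also
# the ones with higher non-compliance rates. The counts below are invented for the
# example and are not the published CQC figures.

from scipy.stats import spearmanr

# outcome -> (number of inspection judgements, number found non-compliant)
judgements = {
    "Outcome 1": (3800, 650),
    "Outcome 4": (5800, 980),
    "Outcome 7": (4500, 790),
    "Outcome 9": (1200, 560),
    "Outcome 14": (3100, 890),
    "Outcome 21": (580, 330),
}

outcomes = list(judgements)
frequency = [judgements[o][0] for o in outcomes]
non_compliance_rate = [judgements[o][1] / judgements[o][0] for o in outcomes]

rho, p_value = spearmanr(frequency, non_compliance_rate)
print(f"Spearman rank correlation: {rho:.2f} (p = {p_value:.2f})")
# A clearly positive rho would suggest inspectors are targeting standards where
# non-compliance is more likely; a near-zero or negative rho would not.
```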
It seems that whatever is guiding or influencing inspectors' decisions about which standards to inspect, valid estimates of the likelihood of non-compliance (either based on the QRP or on other intelligence) are not being used.
Figure 5.5. Graph comparing non-compliance rates (y axis: % non-compliance) with inspection judgement frequency (x axis: number of inspection judgements on each regulation), by outcome, for adult social care providers inspected between May 2011 and March 2012.
These brief analyses are intended really as a demonstration of how CQC's routine data sources can be used to examine the functioning of risk-based regulation, and to decide to what extent regulatory interventions can be deployed on the basis of information gathered for risk assessment. Further analyses could examine how risk and performance vary in different sectors, whether providers categorised as higher or lower risk are inspected at different frequencies, whether providers categorised as higher or lower risk are inspected against different standards, how poorly performing providers in particular are dealt with through inspection and how their performance changes over time, and so on.
Conclusions
Most regulators seek to be risk-based or proportionate in their approach to regulation, and it has been recommended as good practice by many taskforces and groups looking at better regulation. However, in practice risk-based regulation has been problematic because measuring and predicting risk is complex. Risk-based regulation requires valid, reliable and timely data about provider performance, but it seems unlikely that quantitative methods for calculating and predicting risk can be designed that work sufficiently well to make decisions about the regulation of individual providers. Valid risk assessment probably needs to make better use of qualitative data, soft intelligence, and the professional judgement of regulatory staff.
Perhaps mistakenly, risk-based regulation has often been interpreted as meaning that low-risk or high performing providers get less or even no regular regulatory scrutiny, and that exposes the regulator to challenge and criticism if (or rather when) a provider it has deemed low risk and so has not scrutinised turns out to be performing poorly. It may be more helpful to conceive of risk-based regulation as the use of a differentiated and responsive regulatory regime, which tailors regulatory interventions to the performance and responses to regulation of the provider. This implies two things. Firstly, the regulator needs to have differentiated regulatory interventions to deal with high risk/low risk or high performing/low performing providers. For example, the process and content of an inspection might be quite different with providers at different ends of the performance distribution. Secondly, the regulator needs to have a graduated range of regulatory interventions so that it can respond to information about risk in ways that are appropriate to the level of risk and resource, and which will allow it to gather more information and make a better and more robust assessment of risk and performance. If the only regulatory intervention available is an inspection, which is quite a resource intensive exercise, it is difficult to be responsive.
A range of regulatory responses might go from requests for further information or reports, to telephone meetings, to face-to-face meetings at the regulator's offices, through informal visits to the provider, brief inspections, normal inspections and extended inspection, through to full-scale investigations, or an ongoing onsite regulatory presence/monitoring.
Chapter 6 The competencies of the regulatory workforce
Introduction
In chapter 2, we noted in discussing several components of the regulatory model the importance of some characteristics of the regulatory workforce – particularly of the regulatory staff who undertake regulatory interventions like inspections and interact with providers. For example we discussed their role in interpreting and using information from the Quality and Risk Profile and making decisions about whether and when to use regulatory interventions; their central role in the inspection process, particularly in making judgements; and their role in both deciding on and implementing enforcement actions. Towards the end of the chapter we noted some of the principal changes in the regulatory model set out in CQC's strategic review, which have significant implications for the roles of compliance inspectors and for the competencies they need. In this chapter we first review the published research literature on the roles, competencies, training and development of regulatory staff, and then examine how our four comparator regulators select, train, develop and assess their inspection workforce. Next we examine how CQC could use its own data to explore the performance of compliance inspectors, though we are not able to present any example analyses because no data is currently collected routinely about inspectors. Finally, we conclude by identifying some future directions for research and evaluation in this area.
Learning from research
The knowledge, skills, attitudes and attributes of the workforce that is responsible for carrying out regulatory duties are crucial to the effective execution of regulatory policy. A number of regulators have considered what knowledge, skills, attitudes and attributes the workforce requires, and whether it is possible, or desirable, to standardise the characteristics of inspectors. Table 6.1 lists some examples of competency frameworks for regulatory staff, from the nuclear industry (International Atomic Energy Authority, 2001), from quality management system auditing and from healthcare (Plebani, 2001). The competencies listed in Table 6.1 could be summarised as "technical skills", focusing on the application of knowledge, theories and methods, and "non-technical" skills, which are the personal and interpersonal skills such as situation awareness, decision-making, communication, team working, leadership, managing stress, and coping with fatigue (Flin et al., 2008). The technical skills are possibly the easier of the two to codify, and the issues associated with them are relatively straightforward – the challenge is to ensure that inspectors have sufficient knowledge of the legal powers and responsibilities, the regulatory techniques, and the domain being regulated. Table 6.1.
Examples of competency frameworks for regulatory staff Nuclear Quality management Healthcare Understanding of legal powers and responsibilities Understanding of the activity being regulated Understanding of regulatory methods Personal and interpersonal effectiveness Quality-specific areas of competence Environment-specific areas of competence General areas of competence Personal attributes Administrative capacity Human (interpersonal) relations Credibility (professional authority and credentials) Personal attributes Knowledge and commitment Analytical capacity Communication and consulting skills Sources: IAEA (2001); ISO/CD.2 19011:2001 (cited by Plebani, 2001); Canadian Council of Health Services Accreditation (cited by Plebani, 2001) It is evident that the technical skills of inspectors are very important to their – and the regulator’s – ability to fulfil their remit. For example, Bruhn and Frick (2011) describe an attempt by the Swedish health and safety regulator to introduce a psychosocial inspection programme. The existing regulatory workforce had little experience of dealing specifically with psychosocial issues, and so the regulator recruited additional inspectors with the relevant technical knowledge. However, these inspectors had less expertise in the organisational inquiry and quality management that is fundamental to health and safety improvement, which limited their effectiveness. A similar problem was described by Hoel and Einarsen (2010), again concerning Swedish occupational health regulation. This study looked at the regulator’s attempt to enforce anti-bullying regulations; this too proved to be less successful than expected, partly because the inspectors were not given a clear set of strategies, methods and procedures to address any problems that they did find. A third study, of health and safety regulators’ use of enforcement in cases of workplace fatalities, found that few HSE inspectors had experience of being involved in manslaughter prosecutions, fewer still of prosecutions that were successful, and that police officers (who are expected to work alongside HSE inspectors in criminal investigations) had little experience of dealing with work-related fatalities (Almond 2006). Nevertheless, stakeholders saw such prosecutions as being within the regulator’s remit. 57 Item 5 Paper No: CM/01/13/04 There is not a simple dividing line between technical and non-technical skills – the two overlap to at least some degree. For example, Macrae (2009) examined the process by which flight safety investigators identify risks. He noted that investigators used four main strategies: making patterns of failure; drawing connections between minor incidents and broader safety issues; seeing discrepancies or inconsistencies in operational processes; and perceiving novel occurrences. Underlying these strategies are three areas of knowledge or expertise. These are knowledge of aviation practice, knowledge of the organisation being investigated and knowledge of risk. Interestingly, the decision-making behaviour of the investigators appears to be closer in spirit to risk-based than to rule-based regulation, in the sense of Black and Baldwin’s (2010) distinction in the chapter on risk-based regulation. Macrae’s study also leads to some interesting questions with regard to the knowledge that inspectors should either have on entry to, or develop during, their role (and which will be revisited later in this chapter). 
To what extent is it necessary for regulators to have had “first-hand” experience of the area that they are regulating in order to work effectively? Alternatively, are there aspects of regulatory expertise that are independent of any domainspecific experience – for example, generic skills in investigation and risk assessment? Much research illustrates the important – but often rather unmeasured and unobserved – place of interpersonal skills in the inspection workforce. May and Wood (2003) describe the regulatory inspector as a “street-level bureaucrat”, who makes on-the-spot decisions about how best to get the desired behaviour from whoever he or she is dealing with. According to May and Wood, an individual inspector may use a variety of approaches to achieve this goal, which could be broadly classified in terms of formalism (the rigidity with which rules are applied) and facilitation (willingness to help participants). Gormley (1998) investigated the enforcement style of child care inspectors (that is, their leniency, flexibility in the use of enforcement measures, and inclination to offer technical support) across the United States. He found that older inspectors and those with previous experience of child care were more lenient and more likely to offer technical support to providers. Those with a generally positive perception of the child care industry were more likely to show flexibility in the use of measures, while those reporting high job satisfaction also tended to be more lenient. The main message of these studies is firstly that inspectors can adopt a variety of styles, and secondly that the demographic and professional background of inspectors can influence the style that they adopt. Interestingly, though, May and Wood found only an indirect effect of inspector style on homebuilders’ compliance with building regulations – it influenced compliance insofar as it affected homebuilders’ knowledge of the regulations and the degree of cooperation between the homebuilder and the inspector. If a street-level bureaucrat has to be “alternatively informative, cajoling, educating or punitive as needed to produce the desired levels of cooperation and compliance” (May and Wood, 2003, p.118), then one issue to consider is how regulatory staff relate to those individuals and organisations that they regulate. Currie et al. (2009) in studying medical 58 Item 5 Paper No: CM/01/13/04 device use and regulation contrast two schools of thought. One they call “control and surveillance”: this is favoured by managers, and places reliance on administrative controls such as reporting, audit and practice guidelines. The other is “clinical judgement”: this tends to be favoured by clinicians, and is centred on notions about what the clinicians believed be in the best interests of the patient and the efficient use of organisational resources. So, for example, while a control and surveillance narrative would consider reuse of single-use devices unacceptable in any circumstance if a guideline said so, the clinical judgement narrative would consider it acceptable if it maximises the benefit to the patient or the organisation without incurring any obvious risk to either. The point of this comparison for the purpose of the present discussion is that, while a “control and surveillance” approach to risk regulation has its merits, an over-reliance on it may not be well received by those being regulated, a situation that could hinder a working relationship between regulator and regulated. 
There is a parallel and informative strand of research in the area of policing. Brown (1981) notes that, when using discretion in the application of their legal powers (that is, deciding whether to resort to these powers or to deal with the situation informally) police officers are trading off two imperatives. One is to meet the demands of the administrators that impose bureaucratic controls upon them; the other is to meet the demands of the diverse, and often unpredictable, situations that they encounter. In practical terms, this leads to a tension between impersonal authority (which derives from legally enacted powers and sees the officer as a dispassionate civil servant) and personal authority (which derives from the officer’s sensitivity to social norms and conventions, and emphasises his or her relationship with the community being policed). It is argued that a balance between these two forms of authority is optimal: impersonal authority encourages a consistent and standardised approach to enforcement decisions; exercising personal authority can generate goodwill and cooperation from members of the public. In fact, the officers in Brown’s study varied in their style – specifically, how selectively and how aggressively they applied enforcement actions. The most aggressive and least selective officers appeared to be the most adversarial and legalistic in their interactions with the public, while the least aggressive and most selective appeared to be the least adversarial and legalistic. In effect, the former type of officer is demonstrating what Bardach and Kagan (1982) in writing about regulation call “adversarial legalism” – a tendency to deal with problems by recourse to aggressively applied formal controls. This does, according to Bardach and Kagan, have its place, in that it provides a mechanism by which institutions can be challenged for providing a substandard service. However, it carries the disadvantage of being potentially a costly and long-winded diversion, as judicial decisions are contested and appealed by the opposing parties. A further problem is that it does not lend itself well to cooperation between the different parties involved – rather, it generates stress, conflict and resistance between them (Marshall et al. 2004). 59 Item 5 Paper No: CM/01/13/04 The point to be made here is that there are different “ways and means” by which inspectors can encourage high standards amongst the organisations being regulated (Lynxwiler et al., 1983; Almond, 2006). Inspectors could adopt a dispassionate, distant, formal approach, or they could adopt a personal, involved and informal approach; either or both may be felt to be appropriate depending on the inspector and the regulated organisation. A case in point is the studies of Lynxweiler et al. (1983) and Nielsen (2006), both described earlier. The challenge for a regulatory organisation is how to reconcile detailed regulations and standardised inspections, which may be required to make inspections consistent and defensible, with allowing inspectors to use their own judgement and initiative in the pursuit of their aims (e.g. May and Wood, 2003; Bruhn and Frick, 2011). We turn now to the issue of what knowledge and skills regulatory staff require in the regulatory domain or content area they regulate. 
Some have argued that the movement of personnel between a regulated industry and its regulator leads to the latter’s activities favouring the industry’s interests over those of other stakeholders (such as the general public) – what Meghani and Kuzma (2010) label the “revolving door effect”. Others have asserted that inspectors need content expertise in order to make informed and valid judgements, and to have the necessary credibility and authority with regulated organisations. For example, if health and social care inspection were seen as a branch of healthcare practice, then that would suggest that inspectors should come from health and social care organisations, if not from the health and social care professions. This would carry the advantage of inspectors having domain knowledge and credibility with providers, which might help to gain their cooperation (Bohigas et al., 1998). On the other hand, domain knowledge may be less relevant if the task of inspectors is simply to “check off” organisations against a checklist of basic standards, and there is also the possibility that inspectors that are closely associated with the practitioners and health or social care organisations being regulated may be insufficiently detached to apply formal enforcement measures when necessary (in other words, they fail to exercise impersonal authority). If health and social care inspection were seen as a lay activity, then inspectors could come from outside health and social care organisations. This would be likely to result in inspectors who were more detached from providers, relied more on formal methods and rules, and were less embedded within the practice community and so would need to work harder to establish a working relationship with those parties that they are regulating (ref needed). Plebani (2001) argues that the decision as to whether to take inspectors from within or from outside healthcare is ultimately dictated by the purposes of the regulatory scheme. If the emphasis of the scheme is on quality improvement, education or the facilitation of selfregulation, then he suggests that inspectors should come from the healthcare professions as they will need domain knowledge. He suggests that currently practising healthcare 60 Item 5 Paper No: CM/01/13/04 professionals (brought in on a part-time or voluntary basis) would be of particular value because they have a first-hand understanding of contemporary practice. However, there would need to be measures in place to prevent a conflict of interest between their inspection work and their “day jobs”. But he argues that where the inspection work is more administrative in nature – for example, registration or compliance – then the work would lend itself to being performed by full-time staff. Whether these staff need to be healthcare professionals or not depends on the required trade-off between technical knowledge and independence. One option could be to deploy a team of part-time or voluntary inspectors from the professions that is supervised by a full-time lead inspector. Plebani (2001) and Bohigas et al. (1998) describe the inspector training regimes of several healthcare accreditation programmes. These typically include initial training of between 1 and 15 days, followed by continuation training of between 1 and 25 days per year (the number of days varies between organisations, and often assumes a certain level of knowledge on the part of those recruited as assessors). The training includes both classroom teaching and the observation of assessments. 
Some of the accreditation programmes also require assessors (where they are not full-time employees) to commit a minimum number of days per year to assessment work. The International Atomic Energy Authority (2001) recommends that, in order to obtain the workforces that they need, regulatory organisations use job analysis and training needs analysis to distil relevant staff competencies. This information can then be used to inform staff selection, training and development systems. By way of illustration, the IAEA’s report provides examples from organisations in the UK and internationally. The examples are from the nuclear sector, but the principles are relevant to regulators in other settings. Finally, it should be noted that while this chapter deals primarily with the competencies of the inspection workforce, the activity of inspectors takes place in and is mediated by the organisational context. Hutter (1989) and May (1993) argue that regulatory style is influenced by organisational, political and social factors. These include the agenda set by the regulator through organisational policies and strategies and the nature and frequency of interaction between regulators and regulated organisations. Rothstein (2003) cautions about the risk of “institutional attenuation” – a lack of awareness on the part of the regulatory workforce about the risks that they should be controlling. He suggests three organisational factors which might lead to such attenuation, to which we have added a fourth: Responsibility for controlling risks is divided between different people or parts of the regulatory agency, which leads to either “too many cooks” (all with their own agendas) or “no cook at all” Incentives and rewards for the regulatory workforce are misaligned with regulatory requirements (e.g. inspectors receive no credit for investigating certain hazards 61 Item 5 Paper No: CM/01/13/04 despite their apparent importance, or are given targets for activity which perversely lead them to focus on low-risk but easy regulatory tasks) The organisational culture more broadly discourages inspectors from investigating hazards or problems properly (for example, certain beliefs about sources of risk may be promulgated, or the organisation shows a preference for “doing the doable” rather than “doing what needs to be done”). The organisation focuses on rule-based regulation in search of consistency and certainty of process, and leaves insufficient space for regulatory staff to exercise discretion and use their judgement in assessing and managing risk Comparisons with other regulators Table 6.2 below sets out a structured summary of the way that our four comparator regulators – the Joint Commission for the Accreditation of Healthcare Organisations, the Dutch Healthcare Inspectorate, OFSTED and the Homes and Communities Agency – recruit and develop the people who act as their inspection workforce. There is considerable commonality of approach across the four regulators in the area of inspector expertise. All require the people they recruit as inspectors to have substantial and senior experience in the content area they are regulating. The two healthcare regulators use senior medical and nursing staff, complemented by senior healthcare managers. OFSTED requires school inspectors to have substantial teaching and senior leadership experience – lead inspectors in particular are likely to have been headteachers in one or more schools. 
Most HCA regulatory engagement staff have a background in social housing, and the financial analysts who work with them on economic regulation are qualified accountants. The argument for using such staff in inspection was generally that they needed to be able to make – and stand by – complex professional judgements about performance, and to bring a depth of knowledge and understanding to what they observed or found during inspections. It was also argued that regulated organisations would not respect or give credence to inspectors who did not have strong content expertise. This makes the inspection workforce an expensive resource – all reported providing remuneration commensurate with market rates for the senior leaders and professionals they wished to recruit. 62 Item 5 Paper No: CM/01/13/04 Table 6.2. Comparison of approaches to the inspection workforce JCAHO Senior people with 10-15 yrs experience in field – CEO, board director, medical director, chief nurse etc IGZ 10-15 years in the field as doctor or specialist. OFSTED Senior leadership experience in one or more schools – usually at head level for lead inspector. What are the main roles and responsibilities of inspectors? Undertaking surveys and writing reports. Undertaking inspections and writing reports. What kind of people are recruited to roles as inspectors? Most surveyors part-time – have substantive ongoing posts in healthcare. Full-time field directors oversee surveyors work. All inspectors employed by IGZ. They have all worked in the sectors they regulate. How are inspectors trained and developed? Surveyors trained in classroom for 5 days with 2 days prep. Surveyors then go on 3 precepted surveys, last of which is an evaluation of their performance. IGZ has own academy. Inspectors do one year initial training (1 day pw) under supervision of senior inspector plus ongoing CPD. How is workload allocated to inspectors and what is a typical workload? Varies widely – part-time inspectors commit 25% of their time minimum Varies by sector. Hospitals – caseload 5 or 6 orgs. Nursing homes – caseload 50-100. Undertaking inspections, and writing reports. HMIs do oversight of decisions and QA of inspectors and inspections Most inspectors employed by inspection providers not OFSTED. Small number of HMIs employed by OFSTED Inspectors undergo training by their inspection provider – assessed 6 month course, with face to face and online learning. Followed by mentored inspections then sign-off by an HMI. Varies very widely and depends on how many days they are contracted for. How is performance of inspectors assessed? Ongoing performance monitored by field directors. Annual evaluations use peer ratings, onsite evaluations, provider evaluations and survey data. CPD, oversight of inspections, checking of decisions like enforcement. What competencies are required of those who undertake regulation/ inspection? 63 HMI observe insoections, feedback from school after each inspection, oversight of decisions and evidence HCA Most staff in regulatory engagement have background in social housing. Financial analysts are qualified accountants Analysts and engagement staff work together with portfolio of RSLs – reviewing data and engaging with RSLs. Very little recruitment since recent reorganisations. All regulatory staff employed by HCA. No formal inhouse training programme in HCA – noted above very little recruitment in last 3 years. More senior staff get more complex RSLs. Typical portfolio of 5-10 RSLs per member of staff. 
However, there was more variation in the way that people were recruited to the inspection workforce, and in the level of commitment they were expected to have. Here there seem to be two competing concerns. On the one hand, inspectors need to be part of the regulatory agency, to have a good understanding of its processes and procedures, and to have some critical distance from the organisations they regulate – all of which suggests a full-time employed inspector model. On the other hand, inspectors need current, up-to-date content expertise and credibility with regulated organisations – which suggests the use of inspectors who still work in the sector. JCAHO, having experimented with different models over the years, now has a mostly part-time inspection workforce (between 25% and 50% of their time spent working for JCAHO), most of whom have ongoing senior roles in the healthcare sector, but it has full-time field directors who oversee surveyors’ work. In contrast, IGZ has moved to a fully employed inspector workforce, and has tried to make the inspector role an attractive one which can form part of a medical or nursing professional’s career development. OFSTED uses a hybrid model – it has a relatively small number of employed, full-time inspectors (Her Majesty’s Inspectors, or HMIs), with a much larger number of “additional inspectors” who are employed or contracted by the inspection providers who carry out most inspections on behalf of OFSTED. Traditionally most additional inspectors were retired senior school leaders, but OFSTED is now aiming to have at least half of the additional inspectors working part-time while still holding a senior role in a school. HMIs undertake quality control and assurance of inspectors and inspections, and directly oversee critical inspection decisions. HCA has a relatively small number of regulatory staff, who are all employed directly by the organisation.

The issues of inspector expertise and employment status/level of commitment affect how the comparator regulators use their inspection workforce, particularly in relation to workload and specialisation. For example, IGZ inspectors specialise in one or more of its regulated areas (acute care, nursing homes, homecare, etc) and there are also inspectors who specialise in functions such as undertaking investigations or developing thematic inspections. Although they are all full-time, inspectors therefore carry very different workloads – an acute care inspector might have a caseload of 5-6 acute hospitals while a nursing home inspector might have a caseload of 50-100 nursing homes. JCAHO and OFSTED, which both use part-time inspectors, require them to commit to undertaking a certain volume of inspections each year, usually expressed as a commitment of working days. In both cases, full-time regulatory agency staff oversee the performance of part-time inspectors and quality assure their work. All the comparator regulators apart from HCA have selection and training programmes for new inspectors, which generally combine classroom or distance learning with inspection observation and mentored inspection activity. Some of these are formal educational programmes with summative assessments (for example, OFSTED’s programme provides 30 credits towards a postgraduate qualification).
They also generally have continuing professional development requirements for inspectors, designed to maintain their skills and to update them on new regulatory developments. HCA – which has been through a number of reorganisations and staff reductions – has a relatively small number of regulatory staff and does not currently run a formal training programme. In conclusion, one of the senior regulatory agency staff we spoke to said that there were three essential components to the regulatory regime – the standards, the inspection process, and the inspectors themselves – but that in their view the last of these was both the most important component and the one which was most difficult to get right.

Using existing data

CQC does not have any routine data available about the 900+ people who work as compliance inspectors and compliance managers, which means we are not able to provide any example analyses here. This is unfortunate, because the inspection data set does contain a field identifying the inspector (though we think this is the lead inspector only – if more than one inspector takes part in an inspection, it would not identify the others). If information about the inspection workforce were collected, it could therefore be linked to the inspection data set and used to explore many of the issues to do with inspector expertise, training, development and performance which have been raised in this chapter. In addition, CQC has two other relevant data sets, though they are much more recent developments. First, it has started to collect activity data from inspectors, which could be used to examine differences in workload and productivity. Second, it is starting to routinely survey providers after inspection, and this data could be used to examine providers’ views of inspector performance.

There are four areas in which information about the inspection workforce is needed, which could be collected through a survey relatively easily:

- Education, professional qualifications and training – education level and area, whether they have a professional qualification in health and social care and if so what qualifications, and what other (non-qualification based) training in relation to health and social care services they have undertaken
- Sector expertise – whether they have worked in one or more of the sectors that CQC regulates (NHS, independent healthcare, adult social care, general practice, dentistry etc) and if so for how long, in what role (seniority and responsibility) and how long ago
- Regulatory expertise – how long they have worked for CQC and/or its predecessor health and social care regulators, and in what role (seniority and responsibility)
- Regulatory training and development – what training and continuing professional development in health and social care regulation they have had during their time working for CQC

This would then allow us to examine empirically some important and interesting areas. For example, we could find out to what extent inspectors already “specialise” in particular sectors or carry a mixed portfolio of providers, and we could see how often inspections are undertaken by inspectors who have relevant content/sector expertise. We could also look at how much continuity there is in the inspection process, and whether providers deal with the same inspector each time there is a regulatory intervention or with different ones.
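To illustrate the kind of linkage analysis this would make possible, the sketch below shows how a workforce survey could be joined to the inspection data set using the inspector identifier. It is a minimal sketch only, written in Python with the pandas library: the file names and field names (inspector_id, expertise_sectors, provider_sector, provider_id) are hypothetical, and CQC’s actual data sets may be structured quite differently.

```python
# Illustrative sketch only (not CQC's actual data model): assumes a hypothetical
# workforce survey extract and inspection extract, each carrying an "inspector_id"
# field, and shows the kinds of linkage analyses described above.
import pandas as pd

# Hypothetical extracts: one row per surveyed inspector, one row per inspection.
survey = pd.read_csv("inspector_survey.csv")        # inspector_id, expertise_sectors, years_in_sector, ...
inspections = pd.read_csv("inspection_records.csv") # inspection_id, inspector_id, provider_id, provider_sector, ...

# Link each inspection to the characteristics of the (lead) inspector who carried it out.
linked = inspections.merge(survey, on="inspector_id", how="left")

# 1. How specialised are inspectors? Count the distinct sectors each inspector covers.
sectors_per_inspector = linked.groupby("inspector_id")["provider_sector"].nunique()
print(sectors_per_inspector.describe())

# 2. How often is an inspection carried out by someone with matching sector expertise?
#    (assumes expertise_sectors is a semicolon-separated list declared in the survey)
def has_matching_expertise(row):
    declared = [s.strip() for s in str(row.get("expertise_sectors", "")).split(";")]
    return row["provider_sector"] in declared

linked["expertise_match"] = linked.apply(has_matching_expertise, axis=1)
print(linked["expertise_match"].mean())  # proportion of inspections led by a sector-expert inspector

# 3. Continuity: does a provider see the same lead inspector across successive inspections?
inspectors_per_provider = linked.groupby("provider_id")["inspector_id"].nunique()
print((inspectors_per_provider == 1).mean())  # share of providers always inspected by the same person
```

Once workforce data of this kind existed, cross-tabulating a field such as the hypothetical expertise_match flag against compliance judgements or post-inspection provider survey scores would be a straightforward extension of the same approach.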
Perhaps more centrally, we could start to examine whether the educational/professional background, content expertise and regulatory expertise of the inspector is associated with any differences in inspection processes or judgements. We might hypothesise that inspectors with more experience and expertise would reach more valid judgements, and we could test empirically whether they find more or less compliance. We could also explore whether there are relationships between inspector characteristics and providers’ views on the quality of inspection. We could also use this data set, alongside the inspector activity data, to start to examine variations in inspector productivity and, for example, what impact greater inspector specialisation is likely to have on the time inspections take and on workload.

Conclusions

It is striking that neither the research literature nor the comparisons with other regulators provide much if any support for the use of a generic inspection workforce of the kind that CQC has sought to create over the last three years, even with the current regulatory model. It is also notable that the evidence presented in this chapter suggests that the changes signalled in CQC’s recent strategic review require substantial changes to the inspection workforce. The consensus – from the literature and from practice in other regulators – is that the quality of the inspection workforce, in three main domains (content expertise, regulatory expertise, and interpersonal effectiveness), is very important. For the regulatory agency to have credibility and authority in the sectors it regulates, and for people to give credence to its judgements and assessments, it seems necessary to have a high quality inspection workforce which performs consistently well. This is not a straightforward undertaking. First, it is expensive, as experienced and senior inspectors need commensurate remuneration to attract them. Second, it requires careful initial selection and screening, and training and development. Third, it demands ongoing performance appraisal and development.

Chapter 7
Conclusions

This report is intended to help CQC to bring research and evidence to bear on its remit and purpose – the regulation of health and social care in England. We have used logic modelling in chapter 2 to set out an explicit description of the current regulatory model and to examine the underlying assumptions and likely consequences of key decisions in that regulatory design. We have then explored four areas in more depth and detail – differentiation in regulatory design, standard setting, risk-based regulation, and the regulatory or inspection workforce. In each we have tried to bring together what is known about the area from a number of sources – research, the practice of other regulators and CQC’s own practice – both to review the current regulatory model and to help inform decisions about its future. The purpose of this concluding chapter is not to repeat or summarise the findings from chapters 2 to 6, but to draw some conclusions about what they tell us about CQC’s current regulatory model, and how it might change. CQC has already signalled some quite fundamental changes in its current strategic review, and other developments arising from the national health reform agenda and from the Francis Inquiry report are likely to bring further change. We draw three main conclusions.
First, it seems that the current regulatory model, which was designed largely to achieve “safety net” regulation in health and social care, is probably not sustainable and requires change and development. There seems to be an absence of good evidence to support the current regulatory model, and to back up important changes to that model which have been made over time. It may be that we are unaware of the full background to, for example, the decision to return to 12-monthly inspections of all regulated organisations in most sectors, or the decision to inspect against a limited subset of the essential standards rather than against all the standards at each inspection; but it appears to us that those changes were made without strong supporting evidence in advance, and without much subsequent evaluation of their effects or impact. This has left CQC in a difficult position when, for example, explaining the rationale behind decisions, justifying necessary levels of investment in the regulatory process, or demonstrating value for money.

Second, given that changes to the regulatory model seem both desirable and inevitable, our most important recommendation is that those changes should be well grounded in an empirical analysis of current practice and wider regulatory knowledge; that their intended mechanisms should be spelt out explicitly so that they can be tested and challenged; and that they should be implemented experimentally so that they can be evaluated properly before they are used more widely. We think there is great potential to design and execute evaluations alongside the piloting and implementation of such initiatives, and the size of CQC’s regulatory programme and the range of sectors which it regulates mean that evaluations at scale could be undertaken relatively quickly. We think this would require CQC to develop or allocate more internal capacity in research and evaluation, and to create a culture of decision-making in which empirical evidence is more central.

Third, because the nature of regulation is complex and the interaction between regulatory agencies, regulated organisations and other interests is a dynamic and evolving one, we would predict that the effectiveness and impact of regulatory initiatives will vary over time, and between sectors and organisations. This does not mean that research is not needed or helpful, but it does mean that it should not be assumed that interventions which work well during evaluations will continue to work in the same way when they are used in practice. There is a need to design the regulatory process itself to be “autoevaluative” – so that the data routinely collected through regulatory processes like compliance and enforcement provide an ongoing evaluation of impact and effectiveness. This means, for example, routinely evaluating inspections through post-inspection provider surveys, which CQC is starting to do, and routinely evaluating the impact of follow-up inspections in cases of noncompliance, which CQC has also begun to do. But taking this further means aiming to ensure that each component of the regulatory model has data to demonstrate the validity and reliability of measurement and the impact or effectiveness of intervention on the quality of health and social care.

References

Adil, M. (2008). Risk-based regulatory system and its effective use in health and social care. Journal of the Royal Society for the Promotion of Health, 128, 196-201.
Allsop, J., & Jones, K. (2006). Quality assurance in medical regulation in an international context. Unpublished report, University of Lincoln.
Almond, P. (2006). An inspector’s eye view: the prospective enforcement of work-related fatality cases. British Journal of Criminology, 46, 893-916.
Ashcroft, D.M., Quinlan, P., & Blenkinsopp, A. (2005). Prospective study of the incidence, nature and causes of dispensing errors in community pharmacies. Pharmacoepidemiology and Drug Safety, 14, 327-332.
Ayres, I., & Braithwaite, J. (1992). Responsive Regulation: Transcending the deregulation debate. Oxford: Oxford University Press.
Bardach, E., & Kagan, R.A. (1982). Going By The Book. Philadelphia: Temple University Press.
Bardsley, M., Spiegelhalter, D.J., Blunt, I., Chitnis, X., Roberts, A., & Bharania, S. (2009). Using routine intelligence to target inspection of healthcare providers in England. Quality and Safety in Health Care, 18, 189-194.
Bartlett, H.P., & Phillips, D.R. (1996). Policy issues in the private health sector: examples from long-term care in the UK. Social Science and Medicine, 43, 731-737.
Bevan, G., & Hood, C. (2006). Have targets improved performance in the English NHS? British Medical Journal, 332, 419-422.
Black, J., & Baldwin, R. (2010). Really responsive risk-based regulation. Law & Policy, 32, 181-213.
Black, J., & Baldwin, R. (2012). When risk-based regulation aims low: approaches and challenges. Regulation & Governance, 6, 2-22.
Bohigas, L., Brooks, T., Donahue, T., Donaldson, B., Heidemann, E., Shaw, C., & Smith, D. (1998). A comparative analysis of surveyors from six hospital accreditation programmes and a consideration of the related management issues. International Journal for Quality in Health Care, 10, 7-13.
Boyle, T.A., Scobie, A.C., MacKinnon, N.J., & Mahaffey, T. (2012). Implications of process characteristics on quality-related event reporting in community pharmacy. Research in Social and Administrative Pharmacy, 8, 76-86.
Braye, S., & Preston-Shoot, M. (1999). Accountability, administrative law and social work practice: redressing or reinforcing the power imbalance? Journal of Social Welfare and Family Law, 21, 235-256.
Brennan, T.A. (1998). The role of regulation in quality improvement. Milbank Quarterly, 76, 709-731.
Brown, M.K. (1981). Working the Street: Police discretion and the dilemmas of reform. New York: Russell Sage.
Bruhn, A., & Frick, K. (2011). Why it was so difficult to develop new methods to inspect work organization and psychosocial risks in Sweden. Safety Science, 49, 575-581.
Campbell, A.C., Foggin, T.M., Elliott, C.T., & Kosatsky, T. (2011). Health promotion as practiced by public health inspectors: the BC experience. Canadian Journal of Public Health, 102(6), 432-436.
Carroll, M.M. (1995). Is our role changing? Professional Safety, 40(4), 7.
Cohen, M.R., Smetzer, J.L., Westphal, J.E., Comden, S.C., & Horn, D.M. (2012). Risk models to improve safety of dispensing high-alert medications in community pharmacies. Journal of the American Pharmaceutical Association, 52, 584-602.
Cornock, M. (2012). Not another missed opportunity: regulation of health care professionals. Journal of Commonwealth Law and Legal Education, 8(1), 1-8.
Currie, G., Humphreys, M., Waring, J., & Rowley, E. (2009). Narratives of professional regulation and patient safety: the case of medical devices in anaesthetics. Health, Risk & Society, 11(2), 117-135.
Daníelsson, J. (2003). On the feasibility of risk regulation. CESifo Economic Studies, 49, 157-179.
Day, P., & Klein, R. (1987). The regulation of nursing homes: a comparative perspective. Milbank Quarterly, 65, 303-347.
Dodds, A., & Kodate, N. (2011). Accountability, organisational learning and risks to patient safety in England: conflict or compromise? Health, Risk & Society, 13(4), 327-346.
Elvey, R.E., et al. (in press). Who do you think you are? Pharmacists’ perceptions of their professional identity. International Journal of Pharmacy Practice.
Emery, R.J., Charlton, M.A., & Mathis, J.L. (2000). Estimating the administrative cost of regulatory noncompliance: a pilot method for quantifying the value of prevention. Radiation Protection Journal, 78 (Suppl. 2), S40-S47.
Flin, R., O’Connor, P., & Crichton, M. (2008). Safety at the Sharp End. Aldershot: Ashgate.
Fogarty, G.J., & McKeon, C.M. (2006). Patient safety during medication administration: the influence of organizational and individual variables on unsafe work practices and medication errors. Ergonomics, 49, 444-456.
Foley, M., Fan, Z.J., Rauser, E., & Silverstein, B. (2012). The impact of regulatory enforcement and consultation visits on workers’ compensation claims incidence rates and costs, 1999-2008. American Journal of Industrial Medicine, 55, 976-990.
Gormley, W.T. (1998). Regulatory enforcement styles. Political Research Quarterly, 51(2), 363-383.
Griffiths, H. (2004). From CHI to CHAI: what a difference an ‘A’ makes. The Psychiatrist, 28, 235-237.
Hampton, P. (2005). Reducing administrative burdens: effective inspection and enforcement. Norwich: HMSO.
Health Foundation (in press). Using safety cases in industry and healthcare. London: Health Foundation.
Healthcare Commission (2007). Is anyone listening? A report on complaints handling in the NHS. London: Healthcare Commission.
Hirsch, D. (2009). Does litigation against doctors and hospitals improve quality? In J. Healy & P. Dugdale (Eds.), Patient Safety First: Responsive regulation in health care (pp. 254-272). Crows Nest, NSW: Allen & Unwin.
Hoel, H., & Einarsen, S. (2010). Shortcomings of antibullying regulations: the case of Sweden. European Journal of Work & Organizational Psychology, 19, 30-50.
Hofstede, G. (1994). Cultures and Organizations. HarperCollins Business.
Hood, C. (1991). A public management for all seasons? Public Administration, 69, 3-19.
Hutter, B.M. (1989). Variations in regulatory enforcement styles. Law & Policy, 11(2), 153-174.
Hutter, B.M. (1997). Compliance: Regulation and enforcement. Oxford: Clarendon Press.
International Atomic Energy Agency (2001). Training the staff of the regulatory body for nuclear facilities: a competency framework. Report IAEA-TECDOC-1254. Vienna: IAEA.
James, K.L., Barlow, D., McArtney, R., Hiom, S., Roberts, D., & Whittlesea, C. (2009). Incidence, type and causes of dispensing errors: a literature review. International Journal of Pharmacy Practice, 17, 9-30.
Lindøe, P.H., Engen, O.A., & Olsen, O.E. (2011). Responses to accidents in different industrial sectors. Safety Science, 49, 90-97.
Lloyd-Bostock, S., & Hutter, B.M. (2008). Reforming regulation of the medical profession: the risks of risk-based approaches. Health, Risk & Society, 10, 69-83.
Lloyd-Bostock, S. (2010). The creation of risk-related information: the UK General Medical Council’s electronic database. Journal of Health Organization and Management, 24, 584-596.
Lodge, M. (2011). Risk, regulation and crisis: comparing national responses in food safety regulation. Journal of Public Policy, 31, 25-50.
Lynxwiler, J., Shover, N., & Clelland, D.A. (1983). The organization and impact of inspector discretion in a regulatory bureaucracy. Social Problems, 30, 425-436.
Macrae, C. (2008). Learning from patient safety incidents: creating participative risk regulation in healthcare. Health, Risk & Society, 10(1), 53-67.
Macrae, C. (2009). Making risks visible: identifying and interpreting threats to airline flight safety. Journal of Occupational and Organizational Psychology, 82, 273-293.
Macrory, R.B. (2006). Regulatory justice: making sanctions effective. London: Better Regulation Executive.
Marshall, B.K., Picou, J.S., & Schlichtmann, J.R. (2004). Technological disasters, litigation stress, and the use of alternative dispute resolution mechanisms. Law & Policy, 26, 289-307.
Masso, M., & Eagar, K. (2009). Do public inquiries improve health care? In J. Healy & P. Dugdale (Eds.), Patient Safety First: Responsive regulation in health care (pp. 318-340). Crows Nest, NSW: Allen & Unwin.
May, P.J. (1993). Mandate design and implementation: enhancing implementation efforts and shaping regulatory styles. Journal of Policy Analysis and Management, 12(4), 634-663.
May, P.J., & Wood, R.S. (2003). At the regulatory front lines: inspectors’ enforcement styles and regulatory compliance. Journal of Public Administration Research and Theory, 13(2), 117-139.
Meghani, Z., & Kuzma, J. (2011). The “revolving door” between regulatory agencies and industry: a problem that requires reconceptualising objectivity. Journal of Agricultural and Environmental Ethics, 24, 575-599.
Miceli, M.P., Near, J.P., Rehg, M.T., & Van Scotter, J.R. (2012). Predicting employee reactions to perceived organizational wrongdoing: demoralization, justice, proactive personality, and whistle-blowing. Human Relations, 65, 923-954.
Nebeker, J.R., et al. (2007). Developing indicators of inpatient adverse drug events through non-linear analysis using administrative data. Medical Care, 45 (Suppl. 2), S81-S88.
Netten, A., Forder, J., & Knight, J. (1999). Costs for regulating care homes for adults. PSSRU Discussion Paper 1496/2. Canterbury: Personal Social Services Research Unit.
Nichols, E., & Wildavsky, A. (1987). Nuclear power regulation: seeking safety, doing harm? Regulation, 11(1), 45-53.
Nielsen, V.L. (2006). Are regulators responsive? Law & Policy, 28, 395-416.
Ojo, M. (2010). The growing importance of risk in financial regulation. Journal of Risk Finance, 11, 249-267.
Papadakis, M.A., Arnold, G.K., Blank, L.L., Holmboe, E.S., & Lipner, R.S. (2008). Performance during internal medicine residency training and subsequent disciplinary action by state licensing boards. Annals of Internal Medicine, 148, 869-891.
Papadakis, M.A., Teherani, A., Banach, M.A., Knettler, T.R., Rattner, S.L., Stern, D.T., et al. (2005). Disciplinary action by medical boards and prior behavior in medical school. New England Journal of Medicine, 353, 2673-2682.
Phipps, D.L., Noyce, P.R., Walshe, K., Parker, D., & Ashcroft, D.M. (2010). Risk assessment in pharmacy practice. Report to the RPSGB. University of Manchester.
Phipps, D.L., Noyce, P.R., Walshe, K., Parker, D., & Ashcroft, D.M. (2011a). Risk-based regulation of healthcare professionals: what are the implications for pharmacists? Health, Risk & Society, 13(3), 277-292.
Phipps, D.L., Noyce, P.R., Walshe, K., Parker, D., & Ashcroft, D.M. (2011b). Pharmacists subjected to disciplinary action: characteristics and risk factors. International Journal of Pharmacy Practice, 19, 367-373.
Plebani, M. (2001). Role of inspectors in external review mechanisms: criteria for selection, training and appraisal. Clinica Chimica Acta, 309, 147-154.
Power, I. (2012). Inspection: are we content with simple compliance or do we want real school improvement? Accessed from http://www.hmc.org.uk/hmc-blog/inspection-are-wecontent-with-simple-compliance-or-do-we-want-real-school-improvement/ on 24th Oct 2012.
Prosser, T. (1999). Theorising utility regulation. Modern Law Review, 62(2), 196-217.
Rosen, R., & Dewar, S. (2004). On Being A Doctor: Redefining medical professionalism for better patient care. London: King’s Fund.
Ross, S., & Hannan, M. (2007). Money laundering regulation and risk-based decision-making. Journal of Money Laundering Control, 10, 106-115.
Rothstein, H.F. (2003). Neglected risk regulation: the institutional attenuation phenomenon. Health, Risk & Society, 5(1), 85-102.
Rudkin, D. (2009). Healthcare regulators and risk: assessing the evidence. Clinical Risk, 15, 61-63.
Smith, A.F., Goodwin, D., Mort, M., & Pope, C. (2006). Adverse events in anaesthetic practice: qualitative study of definition, discussion and reporting. British Journal of Anaesthesia, 96(6), 715-721.
Strong, D.E. (2011). Access to enforcement and disciplinary data: information practices of state health professional regulatory boards of dentistry, medicine and nursing. Journal of Health and Human Services Administration, 33, 534-570.
Sujan, M.A., Koornneef, F., Chozos, N., Pozzi, S., & Kelly, T. (in press). Safety cases for medical devices and health IT: involving healthcare organisations in the assurance of safety. Health Informatics.
Sutherland, K., & Leatherman, S. (2006). Regulation and quality improvement: a review of the evidence. London: The Health Foundation.
Uzzi, B., & Spiro, J. (2005). Collaboration and creativity: the small world problem. American Journal of Sociology, 111, 447-504.
Walshe, K., & Shortell, S.M. (2004). Social regulation of healthcare organizations in the United States: developing a framework for evaluation. Health Services Management Research, 17, 79-99.
Waring, J. (2007). Adaptive regulation or governmentality: patient safety and the changing regulation of medicine. Sociology of Health and Illness, 29(2), 163-179.
Wenger, E. (1998). Communities of Practice: Learning, meaning, and identity. Cambridge: Cambridge University Press.
Wiener, J.M. (2003). An assessment of strategies for improving quality of care in nursing homes. The Gerontologist, 43 (Special issue II), 19-27.
Wiig, S., & Lindøe, P.H. (2009). Patient safety in the interface between hospital and risk regulator. Journal of Risk Research, 12(3-4), 411-426.
Williams, S.D., Phipps, D.L., & Ashcroft, D.M. (2013). Understanding the attitudes of hospital pharmacists to reporting medication incidents: a qualitative study. Research in Social and Administrative Pharmacy, 9, 80-9.
Wilpert, B. (2008). Regulatory styles and their consequences for safety. Safety Science, 46, 371-375.

Bibliography

Dewing, I.P., & Russell, P.O. (2012). Auditors as regulatory actors: the role of auditors in banking regulation in Switzerland. European Accounting Review, 21(1), 1-28.
Hughes, G., Mears, R., & Winch, C. (1997). An inspector calls? Regulation and accountability in three public services. Policy and Politics, 25(3), 299-313.
Hunter, S., & Waterman, R.W. (1992). Determining an agency’s regulatory style: how does the EPA water office enforce the law? The Western Political Quarterly, 45(2), 403-417.
Johnson, K. (2012). Reducing administration errors: a case study. Nursing & Residential Care, 14(6), 296-299.
Kagan, R.A. (1999a). Adversarial legalism: tamed or still wild? NYU Journal of Legislation & Public Policy, 179, 217-245.
Kagan, R.A. (1999b). Trying to have it both ways: local discretion, central control, and adversarial legalism in American environmental regulation. Ecology Law Quarterly, 25, 718-732.
Lindøe, P.H., & Olsen, O.E. (2009). Conflicting goals and mixed roles in risk regulation: a case study of the Norwegian Petroleum Directorate. Journal of Risk Research, 12(3-4), 427-441.
Putnam, M., Tang, F., Brooks-Danso, A., Pickard, J., & Morrow-Howell, N. (2007). Professionals’ beliefs about nursing home regulations in Missouri. Journal of Applied Gerontology, 26, 290-304.
Tuijn, S.M., Robben, P.B.M., Janssens, F.J.G., & van den Bergh, H. (2011). Evaluating instruments for regulation of health care in the Netherlands. Journal of Evaluation in Clinical Practice, 17, 411-419.