Predictive Modelling – Frequently Asked Questions

Q Why is MSD undertaking this work?
A The Government's 2012 White Paper for Vulnerable Children included a range of proposals aimed at preventing child maltreatment. One of the proposals was to use predictive modelling tools to assist professionals in identifying which children are most at risk of abuse or neglect, subject to feasibility study and trialling. MSD undertook the feasibility study, commissioned the ethical reviews, and is beginning the process of testing and trialling as part of its contribution to the implementation of the White Paper. Implementation of the White Paper has been led by the Children's Action Plan Directorate, with support from a range of agencies including Government Departments and Non-Government Organisations.

Q Were any other White Paper proposals subject to feasibility study and trialling?
A No. The predictive modelling proposal was the only one subject to feasibility study and trialling. At the time, the approach looked promising, but the Government wanted more certainty about the benefits and potential risks.

Q Has trialling started?
A No. All the work to date has been in a research capacity. The focus has been on examining the feasibility and ethics of the approach in order to inform decisions on whether to proceed to testing and trialling.

Q What was the advice given to the Minister for Social Development in the November 2014 briefing, Vulnerable Children Predictive Modelling: Design for Testing and Trialling?
A The advice identified three applications of predictive modelling that could be progressed towards trialling:
1: Use in triage (after concerns have been raised about a child)
2: Use in early identification, subject to capacity (potentially before concerns have been raised about a child)
3: Use in determining neighbourhood-level service needs.

An inter-agency Working Group advised that any operational use of predictive modelling in early identification (i.e. the use of predictive modelling outlined in Volume II of the White Paper) should be deferred until there was capacity to respond appropriately to the children referred, and there was stronger evidence that predictive modelling could add value in early identification and help improve outcomes. In the meantime, the Working Group had developed a proposal for building some of the evidence needed. The proposal involved testing and trialling in three phases:
Phase 1: Completion of technical design by December 2014
Phase 2: Parallel streams of testing and piloting by the end of 2016
Phase 3: Large-scale trial of an operational predictive model starting by the beginning of 2017.

The Phase 2 parallel streams of testing and piloting included:
1: a two-year prospective observational study that would aim to provide stronger evidence that predictive modelling could add value in early identification
2: user testing and pilot testing of the use of predictive modelling in triage.

The advice noted that the Vulnerable Children's Board (comprising social sector Chief Executives and the National Children's Commissioner) had agreed to proceed with Phase 1 and with preparation for Phase 2, with a report back in December 2014. The Board was awaiting further feedback and ethical advice in the December 2014 report back before agreeing that Phase 2 should proceed. The briefing noted that the Board had expressed concerns about both the ethical risks of the observational study and the proposed timing of the next phases.
Q Why did the Working Group advise that operational use of predictive modelling in early identification should be deferred?
A This reflected concern that predictive modelling used in early identification for Children's Teams:
- would refer children and their families and whānau into a system for which the benefits were as yet unknown - it was considered important for the work of the Children's Teams to become better embedded, for early learnings from the demonstration sites to be put into practice, and for positive impacts to be established before a predictive model was used to proactively refer children
- may not add value, because a range of new systems may have improved early identification (including 'vulnerable pregnant women' initiatives working in many District Health Boards, and heightened awareness as a result of the Children's Action Plan)
- would risk referring children before there was sufficient capacity to serve the needs of children and their families and whānau, or sufficient services that are acceptable to families and whānau.

Q What decision was made following the November 2014 briefing and the Minister's reaction to it?
A The proposed studies were discussed with the new Minister for Social Development in November 2014. The Minister considered the ethical risks of undertaking a prospective observational study unacceptably high and instructed that all work on the study should stop. Her concern centred on the risk associated with generating risk scores in real time while taking no action over and above provision of business-as-usual services. In December 2014 a decision was made to proceed with user testing of the use of predictive modelling in triage.

Q Had the proposal for a prospective observational study got as far as an ethics committee?
A Yes. A proposal for the study was submitted to the Central Region Health and Disability Ethics Committee for approval, but was withdrawn following the Minister's instruction and before it was considered by the Committee.

Q What would the prospective observational study have involved?
A The prospective observational study would have simulated application of predictive modelling to new cohorts of children aged under two and:
i) pre-tested close-to-live operational systems
ii) investigated whether the children identified as most vulnerable were already being identified as needing intensive preventive services via existing early identification pathways, in order to establish whether predictive modelling would add value in early identification
iii) followed the children's administratively recorded outcomes to establish whether the predictive accuracy found in the feasibility study was matched in a prospective test.

The aim was to provide empirical evidence that would help inform later decisions about whether and how to proceed with the White Paper proposal to use predictive modelling for early identification and referral to Children's Teams of children who may be at high risk of abuse or neglect but who have not yet been reported to agencies. The study would have had a strong focus on monitoring and addressing possible over-identification of Māori and other subpopulations of children relative to known shares of adverse outcomes. It would have been carried out under research exceptions to the Privacy Act. An independent Data and Safety Monitoring Board would have been established. The role of the Board would have been to advise on the continuing safety of the study for participants and the continuing validity and scientific merit of the study.
Families would not have received any additional services as a result of the study, nor would they have been denied any services. However, the Data and Safety Monitoring Board would have had the ability to recommend and require intervention in, for example, cases of very high risk. This would have occurred with reference to all relevant guidelines, and in consultation with the Office of the Privacy Commissioner.

High-level research questions the study would have addressed were as follows:
- Can predictive modelling be successfully operationalised?
- Is administrative data available quickly enough for it to be usefully included in an operational predictive model?
- Can administrative data be linked to identify unique individuals with sufficient accuracy?
- Can the potential for over-prediction relative to shares of known adverse outcomes for selected population sub-groups be successfully mitigated?
- What proportion of infants identified by predictive modelling as high needs could already be identified as needing intensive preventive services via existing early identification pathways?
- Can predictive modelling scores be used to prospectively risk stratify children aged under two according to their likelihood of going on to have a report of concern, or other outcomes that indicate high needs, before age two with good accuracy?

Q What is happening now?
A Use of predictive modelling in triage is about to be tested within the Child Youth and Family Contact Centre. We will be testing whether predictive model information can enhance decision-making at intake for children who have already been reported to Child Youth and Family because of concerns about abuse or neglect. The plan is to test in a non-operational intake setting, using historic cases rather than current cases, before any decisions are made about trialling in a real-life setting. This reflects the care with which MSD is advancing on this front.

Some National Contact Centre intake social workers participating in the testing will be trained in predictive modelling (how it was developed, what it can and cannot tell us, and how the information provided by predictive modelling can be used in decision-making). Development of training materials will begin soon.

Participants will be divided into two groups and asked to make intake decisions on each of the case studies: 1) the control group will receive the case studies only; 2) the experimental group will receive the case studies plus the training and the predictive modelling score/information for each case. The decisions made by participants in both groups will be compared with a consensus view of the optimal decision in each of the case studies. The consensus view will be developed by an expert panel.

It is currently envisaged that 20 case studies based on historical cases will be developed: 8 for use in training, 8 in the testing proper and 4 to be kept as reserves. These numbers may be subject to change. Interviews will be used to explore the decision-making process for participants in each of the groups, and to examine the role and usefulness of the predictive modelling score/information and training in decision-making.

If the Minister decides to proceed following the first non-operational trial, the next step may be a small trial involving a small number of cases in the contact centre. There would need to be several small trials to build a clearer picture of the usefulness of predictive model information before it could be used in a live, business-as-usual environment.
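To illustrate the comparison described above, the following is a minimal sketch in Python of how decisions made by the control and experimental groups might be scored against the expert panel's consensus view. The case identifiers, decision labels and data are entirely hypothetical; this is not MSD's actual analysis plan, and in practice the comparison would also draw on the interviews described above rather than agreement rates alone.

    # Hypothetical sketch: comparing intake decisions made by the control group
    # (case studies only) and the experimental group (case studies plus training
    # and predictive modelling information) against an expert panel's consensus.
    # All case IDs, decision labels and data below are invented for illustration.

    # Consensus "optimal" decision for each historical case study, agreed by the expert panel.
    consensus = {
        "case_01": "further_action",
        "case_02": "no_further_action",
        "case_03": "further_action",
    }

    # Decisions recorded for each participant, keyed by group.
    decisions = {
        "control": [
            {"case_01": "no_further_action", "case_02": "no_further_action", "case_03": "further_action"},
        ],
        "experimental": [
            {"case_01": "further_action", "case_02": "no_further_action", "case_03": "further_action"},
        ],
    }

    def agreement_rate(participant_decisions, consensus):
        """Proportion of a participant's decisions that match the consensus view."""
        matches = sum(1 for case, decision in participant_decisions.items()
                      if consensus.get(case) == decision)
        return matches / len(participant_decisions)

    for group, participants in decisions.items():
        rates = [agreement_rate(p, consensus) for p in participants]
        print(group, "mean agreement with consensus:", sum(rates) / len(rates))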
At this time, there are no active plans to test or trial use of predictive modelling to support proactive early identification of vulnerable families for referral to the Children's Teams (i.e. where potentially no report of concern has yet been made about a child, as outlined in Volume II of the White Paper). The Children's Teams are still bedding in, and associated changes at the local level may already be strengthening systems of early identification of vulnerable families.

Q Who is overseeing the new work?
A The testing is being undertaken as a joint effort between Insights MSD and Child, Youth and Family, with governance provided by the Proof of Concept Trials Governance Group.

Q Has this kind of approach been tried before as part of child maltreatment prevention or child welfare decision-making?
A Not as far as we know, but agencies in the United States are beginning to develop and test the approach to see if it can help improve decision-making and service prioritisation there. See:
Los Angeles County Department of Children and Family Services:
http://www.scpr.org/news/2015/01/13/49191/can-an-algorithmpredict-child-abuse-la-county-chi/
https://chronicleofsocialchange.org/news/preventive-analytics/8384
Allegheny County:
http://www.alleghenycounty.us/2015/20150210.pdf

Q Has it been tried in other Government services?
A Yes. Predictive models are being used in health to, for example, predict the risk of re-hospitalisation so that preventive services can be offered. MSD has two predictive models already in operation. One predicts the risk of long-term benefit receipt and helps case managers decide how to allocate more intensive employment services. The other assigns a score to each 15-17 year old student leaving school, representing the predicted risk that they will enter the benefit system within three years. Young people with scores above a certain threshold are encouraged to join the Youth Services programme, which has recently been evaluated and found to have positive impacts on outcomes for young people.

Q Who has been consulted on the vulnerable children predictive modelling work?
A A range of stakeholder groups and experts have been consulted. Plans for the work were reviewed by the Central Region Health and Disability Ethics Committee and the National Ethics Advisory Committee. A Senior Officials Group, a Reference Group and a Māori Reference Group provided oversight and advice as the work progressed.

The Children's Action Plan established He Korowai Tamariki, the Advisory Expert Group on Information Security (AEGIS), to provide assurance that the information-sharing arrangements in place for the Children's Action Plan, including predictive modelling, appropriately balance the public interest in safeguarding children with the protection of individuals' rights to privacy. AEGIS was briefed and consulted on the feasibility and ethics review work at regular intervals. At the end of the process, AEGIS recommended that predictive modelling should be used by the Children's Teams on a trial basis, subject to the mitigations recommended by the Dare ethics review being put in place. Their report is available on the Children's Action Plan website:
http://www.childrensactionplan.govt.nz/assets/Uploads/CAP-AEGISReport.pdf

International and local experts provided independent reviews of the work. These reviews are being released alongside the feasibility study and ethics review reports.
Groups consulted since the work was completed include:
- the Children's Action Plan Expert Advisory Group (EAG)
- the Whangarei Children's Team (in March 2014)
- representatives of Non-Government Organisations (at the January 2014 Children's Action Plan NGO workshop)
- participants at a round table hosted by the Institute for Governance and Policy Studies at Victoria University of Wellington, chaired by the Privacy Commissioner (October 2014).

Q How would predictive modelling work?
A A predictive model would look at administrative information held for a child, their parent or caregiver, and the other children in the family. It would provide a risk score or priority estimation based on the statistical relationships seen in the past between this information and the likelihood that a child will have a certain outcome. Because it would miss some children who should be assessed as high risk or high priority, and because it would over-state risk or priority in some cases, a predictive model would need to inform, but not replace, practitioners' professional judgement.

Q How accurate is it?
A No predictive modelling tool is perfect in its assessment of future risk and priority for services. At the same time, no front-line practitioner has perfect foresight in the assessments of risk and priority they make. Evidence shows that tools based on research tend to be more accurate than professional judgement. But tools can also over- and under-state risk. What we want to test is whether a predictive modelling tool used in combination with professional judgement results in improved decision-making.

The feasibility study found that, based on historical data, the accuracy of a predictive model was good compared to other tools developed to assess the risk of future harm for new-born children. For example, looking at the 5% of new-borns assessed as being the highest priority for preventive services in the feasibility study:
- under the status quo, by age 10 an estimated 7 in 10 would have been the subject of a report of concern to Child Youth and Family, and 4 in 10 would have substantiated findings of maltreatment (see figure)
- these children would account for 40 percent of children with substantiated findings of maltreatment in infancy.

[Figure: Expected outcomes, by given ages, for the 5% of new-borns assessed as highest priority: report of concern to CYF and substantiated finding of maltreatment. Based on actual outcomes to age 5 for children born in 2007 who were retrospectively risk scored soon after birth in the feasibility study.]

An important initial stage of testing and trialling use of predictive modelling at intake within Child Youth and Family will be developing potential models tailored to this purpose, and assessing their accuracy using historical data.
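To make the idea of a risk score and this kind of "top 5%" evaluation more concrete, here is a purely illustrative sketch in Python. The feature set, model, data and threshold are assumptions for illustration only and are not the model or data used in the feasibility study. It shows a model trained on historical administrative indicators producing a score for each child, and then asks, for the 5% of children with the highest scores, what share later had the outcome of interest and what share of all such outcomes they account for (the style of figures quoted above), computed here on invented data.

    # Purely illustrative sketch of a predictive risk model and a "top 5%" evaluation.
    # Feature names, data and thresholds are invented; this is not the feasibility study model.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # Hypothetical administrative indicators for each child (binary flags / counts).
    X = np.column_stack([
        rng.integers(0, 2, n),   # e.g. prior family contact with Child Youth and Family
        rng.integers(0, 2, n),   # e.g. Police family violence call-out recorded
        rng.integers(0, 60, n),  # e.g. months of benefit receipt
    ])

    # Hypothetical historical outcome (e.g. report of concern by a given age).
    logits = -3.0 + 1.2 * X[:, 0] + 0.8 * X[:, 1] + 0.02 * X[:, 2]
    y = rng.binomial(1, 1 / (1 + np.exp(-logits)))

    # Fit on one historical cohort, then score a later cohort.
    train, test = np.arange(n // 2), np.arange(n // 2, n)
    model = LogisticRegression().fit(X[train], y[train])
    scores = model.predict_proba(X[test])[:, 1]  # risk score for each child

    # Of the 5% of children with the highest scores, what proportion went on to have
    # the outcome, and what share of all children with the outcome do they account for?
    top5 = scores >= np.quantile(scores, 0.95)
    print("Outcome rate among top 5%:", y[test][top5].mean())
    print("Share of all outcomes captured by top 5%:", y[test][top5].sum() / y[test].sum())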
Q What information would go into the modelling?
A In the feasibility study, we looked for administrative information already held by Government agencies that maps to known risk and protective factors for abuse and neglect (for a listing of risk and protective factors, see http://www.cdc.gov/violenceprevention/childmaltreatment/riskprotectivefactors.html). These pieces of administrative information included records that indicated:
- parents, caregivers or other children in the family with a history of contact with Child Youth and Family
- Police family violence call-outs
- duration of benefit receipt (as a proxy for exposure to persistent poverty)
- substance abuse or poor mental health
- location in a deprived area
- single parenthood
- history of imprisonment
- the presence of a non-biological caregiver
- frequent address changes
- high parenting demands (small birth intervals, large numbers of children, multiple births).

These are the types of data items that could be used in the development of models for testing and trialling within Child Youth and Family (an illustrative sketch of how such items might be assembled into model inputs is given at the end of this document).

Q What would be missed by the modelling? What would practitioners need to watch for?
A Cases that might be missed by predictive modelling include those for whom:
- sources of vulnerability have emerged very recently or cannot be observed using administrative information (e.g. the mother has severe post-natal depression or problems bonding with the child, or an abusive unrelated adult has recently moved into the household)
- administrative information that would indicate vulnerability cannot be used by a predictive model because records are held under another name (e.g. the sibling or caregiver history of contact with Child Youth and Family is under another name, or cannot be linked in because names or dates of birth are not accurately recorded).

Cases that might be given over-stated risk or priority by predictive modelling include those for whom:
- parents' past problems that are indicated in administrative information have since resolved (e.g. addiction or mental health problems)
- there are strong protective factors that offset the vulnerabilities indicated in administrative information (e.g. whānau members or local service providers are providing positive practical support).

Practitioners would need to understand the limitations of the modelling and its potential to under- and over-state risk. Training will be key. One of the objectives of testing in a simulated intake setting first is to user-test training materials.

Q Who would see the results of the modelling for a child?
A In the testing that is about to start, the results would be seen by Child Youth and Family social workers who make decisions about how to respond when a report of concern is made about a child. These social workers already gather and view a wide range of often sensitive information to inform their decision-making. Dissemination of the results would be limited to these trusted individuals, and predictive model scores would be generated only when there has already been a report of concern in respect of the child, either from the public or from another agency.
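Finally, as a purely hypothetical illustration of how data items of the kind listed under "What information would go into the modelling?" might be assembled into model inputs, and of why history that cannot be linked (one of the limitations noted above) simply shows up as an absent indicator rather than raising risk, consider the sketch below. All record structures, field names and thresholds are invented for illustration and do not describe MSD systems.

    # Hypothetical sketch of assembling model input features for a child from
    # linked administrative records. Field names and records are invented.
    # History held under an unlinked name or identity never reaches the feature
    # set, so the model under-states risk in such cases.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class LinkedRecords:
        cyf_contacts: int        # prior Child Youth and Family contacts found for the family
        police_fv_callouts: int  # Police family violence call-outs linked to the caregivers
        benefit_months: int      # duration of benefit receipt (proxy for persistent poverty)
        address_changes: int     # address changes in a recent window
        children_in_family: int  # parenting demands

    def build_features(records: Optional[LinkedRecords]) -> dict:
        """Turn whatever linked records exist into model inputs.

        If no records could be linked (e.g. history held under another name),
        every indicator defaults to zero: the model sees "no known history",
        not "unknown history".
        """
        if records is None:
            records = LinkedRecords(0, 0, 0, 0, 0)
        return {
            "any_cyf_history": int(records.cyf_contacts > 0),
            "any_fv_callout": int(records.police_fv_callouts > 0),
            "benefit_months": records.benefit_months,
            "frequent_moves": int(records.address_changes >= 3),
            "large_family": int(records.children_in_family >= 4),
        }

    print(build_features(LinkedRecords(2, 1, 36, 4, 3)))
    print(build_features(None))  # unlinked history: indistinguishable from low risk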