
WHAT KILLED ADAM?
Mark A.R. Kleiman
Draft May 17, 2004
Not knowing about the actual patterns of illicit drug abuse and drug
distribution cripples policymaking. As the subtitle of a National Academy
report put it four years ago, “What We Don’t Know Keeps Hurting Us.” It
hurts more when the most cost-effective data collection programs are
killed, as just happened to the Arrestee Drug Abuse Monitoring (ADAM)
program of the National Institute of Justice.
Knowing about the actual patterns of illicit drug abuse is hard, because
the people chiefly involved aren’t lining up to be interviewed.
The great bulk of illicit drugs is consumed by heavy users, rather than
occasional users. And the vast majority of heavy users are criminally
active. (About three-quarters of heavy cocaine users get arrested for
felonies in the course of any given year.)
Yet the largest and most expensive of our drug data collection efforts,
the National Survey on Drug Use and Health (NSDUH, formerly the
National Household Survey on Drug Abuse, or NHSDA) captures only a
tiny proportion of the criminally active heavy drug users. An estimate of
total cocaine consumption derived from the NSDUH would account for
only about 10% of actual cocaine consumption (about 30 metric tons out
of about 300).
So when the National Drug Control Strategy issued by the Office of
National Drug Control Policy bases its quantitative goals for reducing the
size of the drug problem on changes in self-reported drug use in the
NSDUH (or the Monitoring the Future survey aimed at middle-school and
high-school students), it’s mostly aiming at the wrong things: not the
number of people with diagnosable substance abuse disorders, or the
volume of drugs consumed, or the revenues of the illicit markets, or the
crime associated with drug abuse and drug dealing. All of those things
are arguably more important than the mere numbers of people who use
one or another illicit drug in the course of a month, but none of them is
measured by MTF or NSDUH.
If most drugs are consumed by heavy users, and most heavy users are
criminally active – well-established facts, but not facts visible in the big
surveys – then to understand what’s happening on the demand side of
the illicit drug business we need to study criminally-active heavy drug
users.
The obvious place to look for criminals is the jails and police lockups
where they are taken immediately after arrest. So if most of the cocaine
and heroin in the country is being used by criminals, why not do another
survey, specifically focused on arrestees?
That was the question that led to the data collection effort first called
Drug Use Forecasting (DUF) and then renamed Arrestee Drug Abuse
Monitoring (ADAM).
ADAM's response rates were consistently above 90%,
compared to less than 80% for the household-based NSDUH. (When
you’re trying to measure behaviors, such as heavy illicit drug use,
engaged in by about two percent of the population, missing 20% of your
intended sample is a catastrophe.) Because ADAM was done in a few
concentrated locations, it was able to incorporate what neither of the big
surveys has ever had: “ground truth” in the form of drug testing results
as a check on the possible inaccuracy of self-report data on sensitive
questions.
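The arithmetic behind that "catastrophe" is worth spelling out. The numbers below are illustrative assumptions (a 2% prevalence, an 80% overall response rate, and an assumed 20% response rate among heavy users), not survey estimates; the point is only how fast differential nonresponse destroys an estimate of a rare behavior.

```python
# Illustrative arithmetic (assumed numbers, not survey estimates): why a 20%
# nonresponse rate can wreck an estimate of a 2%-prevalence behavior when the
# people you miss are disproportionately the people you are trying to count.

true_prevalence = 0.02    # heavy users as a share of the population (assumed)
overall_response = 0.80   # roughly an NSDUH-style response rate

# Suppose heavy users respond at only 20% (assumed); the non-users' response
# rate is whatever makes the overall rate come out to 80%.
resp_users = 0.20
resp_nonusers = (overall_response - true_prevalence * resp_users) / (1 - true_prevalence)

# Prevalence as measured among the people who actually respond:
measured = (true_prevalence * resp_users) / overall_response

print(f"non-user response rate: {resp_nonusers:.3f}")  # about 0.812
print(f"measured prevalence:    {measured:.3%}")       # 0.500% -- a fourfold undercount
```

Under these assumptions the survey would report a prevalence of 0.5% against a true 2%, even though it "only" missed a fifth of its sample.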
The good news about DUF/ADAM was that it was cheap (at $8 million
per year, about 10% of the price of either of the two big surveys) and
produced lots of useful information. The bad news is that the program
has now been cancelled.
The proximate cause of the cancellation was the budget crunch at the
sponsoring agency, the National Institute of Justice. The NIJ budget, at
around $50 million per year, is about 5% of the budget of the National
Institute on Drug Abuse (NIDA), which sponsors Monitoring the Future,
and about 10% of the budget of the Center for Substance Abuse
Treatment, which funds the National Survey on Drug Use and Health.
That’s part of a pattern commented on by Peter Reuter: more than 80%
of the actual public spending on drug abuse control goes for law
enforcement, but almost all of the research money is on the prevention
and treatment side of the enterprise. (Private and foundation donors are,
if anything, even less generous to research into the illicit markets.)
But the picture is even worse than that, because most of the NIJ budget
is earmarked by Congress for “science and technology” projects (mostly
developing new equipment for police). When Congress chopped the total
NIJ budget from $60 million in Fiscal 2003 to $47.5 million in Fiscal
2004, it cut the amount available for NIJ's behavioral-science research
from $20 million to $10 million. While the $8 million spent on ADAM
looks like a pittance compared to the two big surveys, NIJ clearly
couldn’t spend four-fifths of its total crime-research budget on a single
data-collection effort.
In addition to that budgetary problem, ADAM had a problem of
inadequate scientific respectability owing to its unconventional sampling
process.
ADAM was a sample of events – arrests – rather than a sample of people.
The frequency of arrest among a population of heavy drug users varies
from time to time in unknown ways, and for causes that may be
extraneous to the phenomenon of drug abuse. For example, if the police
in some city cut back on prostitution enforcement to increase
enforcement against bad-check passers, and if the drug use patterns of
bad-check passers differ from those of prostitutes, the ADAM numbers in
that city might show a drop in, e.g., cocaine use that didn’t reflect any
change in the underlying drug markets. So it isn’t possible to make
straightforward generalizations from ADAM results to the population of
heavy drug users, or even the population of criminally active drug users.
Moreover, since arrest practices and the catchment areas of the lockups
where ADAM took its samples vary from one jurisdiction to another –
Manhattan and Boston, for example, are purely big-city jurisdictions,
while the central lockup in Indianapolis gets its arrestees from all of
Marion County, which is largely suburban – the numbers aren’t strictly
comparable from one jurisdiction to another. (If Indianapolis arrestees
are less likely to be cocaine-positive than Manhattan arrestees, the
difference can’t be taken as an estimate of the difference in drug use
between Indianapolis-area criminals and New York-area criminals.)
In the real world, those are manageable problems, at least compared with
the fact that the big surveys miss most of what’s actually going on in the
markets. But in the world of classical statisticians and survey research
experts, the absence of a known sampling frame – and consequently of
well-defined standard errors of estimate – is the Unforgivable Sin.
“ADAM,” sniffed one of them in my presence, “isn’t actually a sample of
anything. So it doesn’t actually tell you anything.”
(In my mind’s ear I hear the voice of Frederick Mosteller, speaking in the
name of the Rev. Thomas Bayes, saying, “Nothing? Surely it doesn’t tell
you nothing. The question is, what does it tell you? How does your
estimate of what you’re interested in change in the presence of the new
data?”)
What to the guardians of statistical purity looks like a defense of
scientific standards looks to anyone trained in the Bayesian tradition like
the man looking for his lost keys under the lamppost, rather than in the
dark alley where he lost them, because the light is better there.
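Mosteller's question can be made concrete with a toy discrete Bayes update. Every number below is an assumption chosen for illustration: three hypotheses about the true trend, a uniform prior, and guessed likelihoods that allow arrest-mix noise to produce a spurious drop in the ADAM numbers.

```python
# Toy Bayesian update (all numbers assumed for illustration): what does a drop
# in ADAM cocaine-positive rates tell you about the underlying trend, given
# that arrest-mix noise could produce a drop on its own?

prior = {"falling": 1/3, "flat": 1/3, "rising": 1/3}

# Assumed probability of observing a drop in the ADAM numbers under each
# hypothesis about true use (noise makes a drop possible even if use is rising).
likelihood = {"falling": 0.7, "flat": 0.3, "rising": 0.1}

# Bayes' rule: posterior proportional to prior times likelihood, then normalize.
unnorm = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnorm.values())
posterior = {h: p / total for h, p in unnorm.items()}

for h, p in posterior.items():
    print(f"{h:>8}: {p:.2f}")
```

The noisy observation doesn't prove use is falling, but it moves "falling" from 1/3 to about 0.64, which is a long way from telling you nothing.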
Unfortunately, the OMB Office of Information and Regulatory Affairs
(OIRA), which by law must sign off on every federally funded survey
research effort, is a bastion of classical statisticians and survey
researchers, and OIRA’s dim view of ADAM (shared by the staff of the
National Institute on Drug Abuse) seems to have rubbed off on the Office
of National Drug Control Policy, which has been pushing for years to
bring ADAM closer and closer to being a stratified sample of arrestees
nationally by adding more and more jails and lockups to the original
15-city sample.
In an ideal world, of course, we'd have a panel of criminally
active heavy drug users to interview at regular intervals. But that project
is simply not feasible. ADAM was a rough-and-ready substitute, and
provided useful information about both local and national trends.
Outside the classical statistics textbooks, not having a proper sampling
frame isn’t really the same thing as not knowing anything.
But in the face of the NIJ budget crunch, ADAM’s deficiencies in
statistical purity made it impossible to fund. Neither NIDA nor SAMHSA
wanted to pay for a data collection process without a sampling frame,
especially since their basically biomedical focus makes them somewhat
indifferent to illicit-market issues in the first place. ONDCP, which
recognizes the need for some measure of drug use among arrestees, was
too fixated on the need for national representativeness to go to bat for
ADAM as it was.
The situation isn’t hopeless. Both NIJ and ONDCP have expressed their
intention to get some sort of program to measure illicit drug use among
arrestees back in operation. But no one seems quite sure yet what form
that system will take, or who will pay for it. In the meantime, the
national drug control effort continues to fly mostly blind.
Trying to make ADAM into a nationally representative sample of
arrestees is probably more trouble than it’s worth. On the other hand,
the spread of methamphetamine and of illicit markets in prescription
narcotics – both of them substantial problems in rural areas – means
that looking only at the lockups at the centers of metropolitan areas
(SMSAs), as the old DUF program did, is no longer adequate.
That suggests a hybrid approach: get quarterly numbers from lockups in
the twenty or so biggest drug-market cities, and run a separate program
to conduct interviews and collect specimens once a year from a much
larger and more diverse subset – not necessarily an equal-probability
sample – of lockups nationally. That won’t give you a computable
standard error of estimate, but it will provide some cheap and highly
useful data for making and evaluating drug control policies.
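The reason this works without a probability sample is that, with a fixed panel of sites measured the same way each period, the within-site change is informative even though the level comparisons across sites are not. A minimal sketch, with made-up positivity rates (the site names and every number are invented for illustration):

```python
# Sketch of a fixed-site trend index (all rates invented for illustration):
# with the same lockups measured each period, within-site changes in the
# share of arrestees testing positive track the trend, even though the
# sites were never an equal-probability sample of anything.

# site -> (share testing cocaine-positive last period, this period); assumed
sites = {
    "big-city lockup A":  (0.48, 0.44),
    "county lockup B":    (0.22, 0.21),
    "rural lockup C":     (0.08, 0.11),
}

changes = [this - last for last, this in sites.values()]
trend = sum(changes) / len(changes)  # mean within-site change, in points

print(f"mean within-site change: {trend:+.3f}")
```

The cross-site levels (0.48 vs. 0.08) still can't be compared, for exactly the catchment-area reasons described above; only the changes are being used.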