The Source for Crime and Justice Data

Multi-Site Adult Drug Court Evaluation (MADCE), 2003-2009 (ICPSR 30983)

Principal Investigator(s): Shelli B. Rossman, John K. Roman, Janine M. Zweig, Michael Rempel, and Christine H. Lindquist

Summary:

The Multi-Site Adult Drug Court Evaluation (MADCE) study included 23 drug courts and 6 comparison sites selected from 8 states across the country. The purpose of the study was to: (1) test whether drug courts reduce drug use, crime, and multiple other problems associated with drug abuse, in comparison with similar offenders not exposed to drug courts; (2) address how drug courts work and for whom by isolating key individual and program factors that make drug courts more or less effective in achieving their desired outcomes; (3) explain how offender attitudes and behaviors change when they are exposed to drug courts and how these changes help explain the effectiveness of drug court programs; and (4) examine whether drug courts generate cost savings.

Offenders in all 29 sites were surveyed in three waves: at baseline and at 6 and 18 months after enrollment. The research comprises three major components: a process evaluation, an impact evaluation, and a cost-benefit analysis. The process evaluation describes how the 23 drug court sites vary in program eligibility, supervision, treatment, team collaboration, and other key policies and practices. The impact evaluation examines whether drug courts produce better outcomes than comparison sites and tests which court policies and offender attitudes might explain those effects. The cost-benefit analysis evaluates drug court costs and benefits.

Access Notes

  • One or more files in this study are not available for download due to special restrictions; consult the restrictions note to learn more. You can apply online for access to the data. A login is required to apply for access.

    Access to these data is restricted. Users interested in obtaining these data must complete a Restricted Data Use Agreement, specify the reasons for the request, and obtain IRB approval or notice of exemption for their research.

Dataset(s)

DS0:  Study-Level Files
DS1:  Nationwide Drug Court Survey Data
DS2:  Offender Data
DS3:  Cost Benefit Analysis Data

No data files are available for direct download; see the access restrictions above.

Study Description

Citation

Rossman, Shelli B., John K. Roman, Janine M. Zweig, Michael Rempel, and Christine H. Lindquist. Multi-Site Adult Drug Court Evaluation (MADCE), 2003-2009. ICPSR30983-v1. Ann Arbor, MI: Inter-university Consortium for Political and Social Research [distributor], 2012-11-05. doi:10.3886/ICPSR30983.v1

Persistent URL: https://doi.org/10.3886/ICPSR30983.v1

Funding

This study was funded by:

  • United States Department of Justice. Office of Justice Programs. National Institute of Justice (2003-DC-BX-1001)

Scope of Study

Subject Terms:   cost effectiveness, court system, courts, crime control, crime control policies, crime control programs, drug abuse, drug courts, drug law enforcement, drug law offenses, drug offenders, drug related crimes, outcome evaluation, process evaluation

Smallest Geographic Unit:   geographic area (Dataset 1, Nationwide Drug Court Survey Data)

Geographic Coverage:   Florida, Georgia, Illinois, New York, North Carolina, Pennsylvania, South Carolina, United States, Washington

Time Period:  

  • 2004-02--2004-06
  • 2005-03--2006-06
  • 2005-08--2006-12
  • 2006-09--2008-01
  • 2006-09--2008-01

Date of Collection:  

  • 2004-02--2004-06
  • 2005-03--2006-06
  • 2005-08--2006-12
  • 2006-09--2008-01
  • 2006-09--2008-01

Unit of Observation:   adult drug court (Dataset 1, Nationwide Drug Court Survey Data), individual (Dataset 2, Offender Data), individual (Dataset 3, Cost Benefit Analysis Data)

Universe:  

Dataset 1, Nationwide Drug Court Survey Data: All active adult drug courts that had been in operation for at least one year between February and June of 2004.

Dataset 2, Offender Data: All offenders enrolled in the selected 23 drug courts and 6 comparison sites between March 2005 and June 2006.

Dataset 3, Cost Benefit Analysis Data: All offenders enrolled in the selected 23 drug courts and 6 comparison sites between March 2005 and June 2006.

Data Types:   administrative records data, clinical data, survey data

Data Collection Notes:

Users of these data are encouraged to review the Final Report for more detailed information on all aspects of site selection, data collection, and sampling methods.

Methodology

Study Purpose:  

The purpose of the study was to:

  • Test whether drug courts reduce drug use, crime, and multiple other problems associated with drug abuse, in comparison with similar offenders not exposed to drug courts.

  • Address how drug courts work and for whom by isolating key individual and program factors that make drug courts more or less effective in achieving their desired outcomes.

  • Explain how offender attitudes and behaviors change when they are exposed to drug courts and how these changes help explain the effectiveness of drug court programs.

  • Examine whether drug courts generate cost savings.

Study Design:  

The MADCE study entailed several components, including a survey of Adult Drug Courts, a rigorous site selection process for the evaluation, and the collection of process, outcome, impact, and cost data across the evaluation sites. These elements are described in more detail below.

Nationwide Adult Drug Court Survey

The first task of the Multi-Site Adult Drug Court Evaluation (MADCE) was to conduct a nationwide survey of adult drug courts (Dataset 1, Nationwide Drug Court Survey Data). Between February and June 2004, researchers conducted a Web-based survey of drug courts that primarily served adults and had been in operation for at least one year. Courts were contacted numerous times and through many avenues (e.g., postal mail, email, and telephone) to request participation in the study. This task served two purposes: (1) it provided information on adult drug court characteristics and operations throughout the United States, and identified similarities and differences in how the programs work, and (2) findings from the survey were used to guide the selection of the 23 drug courts included in the multi-site impact evaluation.

Site Selection

The MADCE was designed to compare drug court participants to offenders with similar drug use and criminal history profiles in comparison jurisdictions. The comparison jurisdictions either did not offer drug courts or had more drug-involved offenders than the local drug court could enroll; some comparison offenders did not meet the criteria for the local drug court but may have met the criteria of drug courts in other areas of the country. The evaluation framework was designed to maximize the number of court- and individual-level observations while minimizing the costs associated with survey data collection. Researchers identified key court-level components, which allowed substantial heterogeneity across courts while at the same time including geographically clustered courts. An extensive site selection process was undertaken to identify drug courts and comparison sites that met basic evaluability criteria, and that collectively reflected substantial variation in court-level characteristics identified as critical to the definition of drug court operations.

Offender Interviews

One of the key data sources for the MADCE was interview data gathered from drug court participants and comparison offenders. The interviews allowed detailed and timely information to be obtained on participants' attitudes and perceptions of the program, court and supervision experiences, and treatment received over the entire time period that participants were followed. The content of the instruments was similar across the three interviews. The instruments were comprehensive and covered a diverse set of outcomes, background characteristics, "in program" experiences, attitudes and perceptions.

Administration

Enrollment into MADCE took place on a rolling basis between March 2005 and June 2006 (Dataset 2, Offender Data). A computerized case management system was used to assign cases to field interviewers and to track the status of fielded cases. For each assigned case, an end date (the date by which the interview had to be completed or the case coded as ineligible) was listed in the case management system. A six-week cutoff was established to ensure that respondents were interviewed as close to the beginning of their drug court participation (or comparison condition) as possible.
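As a hypothetical illustration of the fielding cutoff described above (the study's actual case management system is not documented here), the end-date logic amounts to simple date arithmetic:

```python
from datetime import date, timedelta

# Hypothetical sketch of the six-week fielding cutoff: each assigned case
# carries an end date by which the interview must be completed or the
# case coded as ineligible.
FIELDING_WINDOW = timedelta(weeks=6)

def case_end_date(assignment_date: date) -> date:
    """Date by which an assigned case must be resolved."""
    return assignment_date + FIELDING_WINDOW

def case_is_open(assignment_date: date, today: date) -> bool:
    """A case stays open until its end date has passed."""
    return today <= case_end_date(assignment_date)
```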

For the baseline interviews, the majority of respondents were interviewed in the community, though some were incarcerated or in a residential treatment facility. For the 6- and 18-month follow up interviews, increasing numbers of respondents were in correctional facilities or residential treatment facilities.

The majority of interviews were conducted via computer-assisted personal interviewing (CAPI). Pencil-and-paper interviews were conducted with a small number of respondents who were incarcerated in facilities that prohibited laptop computers. The interviewer went over a brochure about the study with the respondent and answered any questions. Individuals who indicated they were willing to participate had the consent form read to them and signed it. Throughout the interviews, interviewers read the questions and recorded the respondents' answers. After the interview questions were completed, respondents were administered a separate consent form for the release of administrative criminal justice data.

Oral Fluid Tests

Oral swab drug tests were conducted in conjunction with the 18-month interviews, for non-incarcerated respondents. Field interviewers completed training for the collection, packaging, and mailing of the oral swabs to a drug testing laboratory. The chosen test was a six-panel oral fluid screen for amphetamines, cannabinoids, cocaine, methamphetamines, opiates, and phencyclidine.

Site Visits

Two rounds of site visits were conducted at the participating sites. During the site visits, in-person semi-structured interviews were conducted with as many key stakeholders affiliated with the drug court as possible, including program coordinators, judges, prosecutors, defense attorneys, treatment liaisons, research staff, probation officers, and law enforcement officers.

The first round of site visits was conducted at the conclusion of the site selection process and primarily focused on confirming the viability of the site for inclusion in the impact evaluation and negotiating logistical details pertaining to data collection for the offender interviews. During the initial site visits, program organization and operations were documented including program structure and key staff; enrollment and case flow; availability of administrative data; the intake process; phases and requirements for court hearings, treatment attendance, case management, drug testing, and supervision; and sanctions and rewards. In addition, details such as local research approvals, the need for interviews in languages other than English, and the transfer of contact information for newly enrolled clients were arranged.

Administrative Data

MADCE also collected respondents' official records from the National Crime Information Center (NCIC) at the Federal Bureau of Investigation (FBI) and from germane state-level criminal justice agencies. Two waves of NCIC criminal history records were requested from the FBI. The first request for data was in March 2008, roughly 22 months after the last case entered the sample. The second and final wave of NCIC data was prepared by the FBI in October 2008, roughly 30 months after sample recruitment ended. This allowed for a standard minimum 24-month follow-up for all subjects in the sample and ample time for any new criminal justice contacts during that time frame to have been logged into NCIC's database.

Cost Data

Cost-benefit data (Dataset 3, Cost Benefit Analysis Data) were drawn largely from the latter two waves of the offender interviews and the administrative criminal justice records. Where available, information on salaries, treatment costs, and drug testing costs was gathered directly from the sites through telephone interviews. Often, however, the sites were unable to estimate some or all of this information. The remaining prices were collected from a wide range of extant research and official, publicly available reports. For instance, many salaries were collected from the Occupational Employment Statistics database maintained by the Bureau of Labor Statistics, incarceration costs were collected from financial reports from the sites' Departments of Corrections, and many drug treatment costs were collected from other studies of drug treatment.

Sample:  

Nationwide Adult Drug Courts Selection

To identify courts meeting the criteria of primarily serving adults and of being in operation for at least one year as of February 2004, an initial list of drug courts was developed from reports on active drug courts compiled by the Office of Justice Programs' Drug Court Clearinghouse and Technical Assistance Project at American University. A total of 635 drug courts were identified as meeting these criteria. Contact information for courts was provided by American University, the National Association of Drug Court Professionals (NADCP), and through direct communication with state court administrators. State drug court coordinators also were contacted to verify operational drug courts and clarify any remaining issues. Through this process, a total of 42 courts were determined either to have ceased operation or to have been in operation for less than one year at that time. Those courts were dropped from the sample, yielding a final count of 593 active adult drug courts across the United States in February 2004 that were invited to participate in the survey. A total of 380 drug courts completed the adult drug court survey (Dataset 1, Nationwide Drug Court Survey Data).

Offender Data - Drug Court and Comparison Sites Selection

An extensive site selection process was undertaken to identify drug court and comparison sites (Dataset 2, Offender Data, and Dataset 3, Cost Benefit Analysis Data) that met basic evaluability criteria and that collectively reflected substantial variation in court-level characteristics identified as critical to the definition of drug court operations. Basic evaluability criteria included courts and jurisdictions keeping basic information about clients in management information systems, a regular flow of new clients, and a willingness to participate in the evaluation. Three drug court components were identified to be the focus of site selection procedures: (1) Provision of substance abuse treatment, (2) Leverage the court has in monitoring clients, and (3) Predictability of sanctioning policies of the court.

Using a combination of HotSpot mapping and subjective criteria about how geographically close courts were, 16 potential geographic clusters of drug courts were identified for consideration. From there, researchers prioritized which clusters to pursue for inclusion in the study by doing the following:

  • Examined each cluster closely for proximity of drug courts, client case flow in drug courts to ensure steady study enrollment, and potential for identifying nearby comparison jurisdictions;

  • Identified drug court clusters that reflected geographic diversity across the United States; and

  • Eliminated drug court clusters in California because of Proposition 36, which required all jurisdictions in the state to provide substance abuse treatment to drug-involved offenders as an alternative to incarceration, thus effectively making these jurisdictions different from any others in the country.

Evaluation team members contacted drug courts in the identified clusters to explore drug court practices regarding treatment, sanctioning, and leverage; to begin identifying willingness to participate and possible comparison sites; and to assess evaluability. For each key component, researchers asked if courts implemented a series of operational strategies and, based on the answers, scored each court for each key component. After initial phone conversations and negotiations, the team members visited each potential court and comparison site to finalize the design and logistical considerations related to participation. These visits included meeting with all relevant site personnel, compiling all written policies and materials used for the drug courts, and reviewing court management information systems.

The final sample included 23 drug courts and six comparison jurisdictions located in seven geographic clusters. The participating sites included two courts in Illinois, eight courts in New York, two courts in Pennsylvania, two courts in Florida, six courts in Washington, one court in South Carolina, and two courts in Georgia.

Recruitment of Respondents

Recruitment procedures for all waves of offender interviews (Dataset 2, Offender Data) entailed the following steps. First, potential respondents who were not currently incarcerated were mailed a lead letter describing the study and indicating that an interviewer would be contacting them to determine their interest and potentially schedule interviews. For incarcerated respondents, a lead letter was not sent; the interview was scheduled between the field supervisor and the facility contact, and during the appointment the field interviewer went over the study with the inmates in person. Contact was maintained with respondents between interview waves: interviewers attempted phone contact with each respondent approximately three months prior to the 6- and 18-month follow up interviews to determine whether the contact information on file would still be valid when it came time to schedule the next interview. If the respondent could not be located for this mid-wave contact, the interviewer began in-depth field tracing to locate the potential respondent.

Weight:   To improve the comparability between the drug court and comparison groups, the offender data (Dataset 2, Offender Data) include four 'super weight' variables (SUPER6, SUPER18, SUPER68, and SUPERAD). Researchers implemented standard propensity score modeling procedures using an array of baseline characteristics to predict each offender's statistical probability of falling into the drug court as opposed to the comparison sample. An analogous retention model was developed using baseline characteristics and state cluster variables to predict each offender's probability of retention at follow-up. The models were separately implemented for each follow-up period: retained at 6 months, retained at 18 months, retained at both periods, and retained for the oral fluids test. Researchers then differentially weighted cases based on the product of their propensity and retention scores at each period.
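The weighting procedure can be sketched as follows. This is a minimal illustration assuming a standard inverse-probability construction built on the product of the propensity and retention scores; the function name and example values are hypothetical, and the study's actual models estimated these probabilities from many baseline covariates:

```python
# Minimal sketch of the 'super weight' construction (assumed
# inverse-probability form; names and values are hypothetical).
def super_weight(propensity: float, retention: float,
                 in_drug_court: bool) -> float:
    """Weight a case by the inverse of the product of the probability
    of the group it actually fell into and its retention probability."""
    p_group = propensity if in_drug_court else 1.0 - propensity
    return 1.0 / (p_group * retention)

# A comparison-group member with the same scores receives a larger
# weight, since falling into the comparison group was less likely.
w_court = super_weight(propensity=0.7, retention=0.9, in_drug_court=True)
w_comp = super_weight(propensity=0.7, retention=0.9, in_drug_court=False)
```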

Mode of Data Collection:   record abstracts, computer-assisted personal interview (CAPI), web-based survey

Data Source:

The MADCE research used a variety of data sources, including:

  • Field Visits: multiple site visits were conducted at all 29 drug court and comparison locations to document program characteristics and operations. These visits included interviews with key stakeholders and structured observations of courtroom proceedings.

  • Self-Report Surveys: a sample of 1,781 offenders (1,156 drug court participants and 625 comparison group members) was interviewed at three intervals: (1) baseline, (2) 6 months after baseline, and (3) 18 months after baseline. The interviews lasted between 1.5 and 2 hours and covered a wealth of domains spanning background characteristics, offender perceptions, in-program experiences and compliance, and outcomes.

  • Oral Fluids Test: a buccal swab oral fluids drug test was administered during the 18-month interview for respondents who were not incarcerated or in residential treatment at that time.

  • Administrative Records: official criminal history and recidivism data were obtained from state administrative data sources and the National Crime Information Center (NCIC) of the Federal Bureau of Investigation (FBI) at 24 months after study enrollment.

  • Costs and Benefits: the self-report surveys and official recidivism records were used to estimate the amount of each program activity in which each offender participated (quantities), which were then multiplied by the price of each activity (prices). The prices were developed from a combination of stakeholder interviews, review of official budget and other administrative records, and national estimates of prices.
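The quantities-times-prices logic of the cost-benefit component can be sketched in a few lines; the activity names and unit prices below are hypothetical placeholders, not figures from the study:

```python
# Sketch of the quantities-times-prices costing described above.
# Activity names and unit prices are hypothetical placeholders.
def offender_cost(quantities: dict, prices: dict) -> float:
    """Total cost = sum over activities of quantity * unit price."""
    return sum(q * prices[activity] for activity, q in quantities.items())

quantities = {"court_hearing": 12, "drug_test": 30, "case_mgmt_meeting": 20}
prices = {"court_hearing": 150.0, "drug_test": 15.0, "case_mgmt_meeting": 40.0}
total = offender_cost(quantities, prices)  # 1800 + 450 + 800 = 3050.0
```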

Description of Variables:  

Dataset 1 (Nationwide Drug Court Survey Data) contains a total of 326 variables including general information, program structure (program characteristics, eligibility criteria, substance abuse assessment), program operations (Management Information Systems, entry into drug court program, program staffing, case management, and program contacts), treatment/drug testing (substance abuse treatment services, drug testing), and courtroom processes (courtroom practices, infractions and sanctions, achievements, graduation, other issues).

Dataset 2 (Offender Data) includes variables on demographics, incarceration status and street time, criminal history, current offense, drug court program entry, substance use history and addiction severity, treatment motivation, supervision status and intensity, drug test received, violations, sanctions and rewards received, court hearings and contact, criminal behavior and victimization, substance abuse treatment, support services received, family relationships, physical and mental health, employment, income, and housing.

Dataset 3 (Cost Benefit Analysis Data) draws on variables from the 6- and 18-month follow up waves of offender data focusing on a wide range of resources used, ranging from program inputs (such as hearings and meetings with case managers) to program outcomes (such as use of government support and public services). Cost variables include information on drug court and comparison site salaries, treatment and drug testing costs. Administrative variables were used to estimate the number of arrests, the number of crimes committed, and the length and frequency of incarcerations.

Response Rates:  

Of the 593 active adult drug courts across the United States in February 2004 that were invited to participate in the adult drug court survey (Dataset 1, Nationwide Drug Court Survey), a total of 380 drug courts completed the survey between February and June of 2004, resulting in a 64 percent response rate.

The original offender sample (Dataset 2, Offender Data) comprised 1,781 individuals (1,156 drug court participants and 625 comparison group members). Study attrition rates were low: 13 percent at the 6-month wave and 18 percent at the 18-month wave. Overall, 76 percent of the original sample participated in all three interview waves. Ninety-five percent of eligible (non-incarcerated) respondents consented to the oral fluids drug test, with minimal differences in consent rates between drug court (94.5 percent) and comparison group members (95.5 percent).

Official records data (Dataset 3, Cost Benefit Analysis Data) were collected for only those study participants who provided written consent. Of the 1,781 individuals recruited for the sample, approximately 11 percent declined to provide researchers with access to their criminal justice records, resulting in a final sample size of 1,578.
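The reported percentages follow directly from the counts given above, as a quick check:

```python
# Arithmetic behind the response and consent figures reported above.
def pct(numerator: int, denominator: int) -> float:
    """Percentage of numerator over denominator."""
    return 100.0 * numerator / denominator

survey_rate = pct(380, 593)                # survey response: about 64 percent
records_decline = 100.0 - pct(1578, 1781)  # about 11 percent declined consent
```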

Presence of Common Scales:  

The following scales were used to construct the data collection instruments for Dataset 2, Offender Data:

  • The Addiction Severity Index (Gavin, Ross, and Skinner, 1989);

  • Texas Christian University (TCU) Treatment Motivation Scales (Knight, Holcom, and Simpson, 1994);

  • The Stages of Change Readiness and Treatment Eagerness Scale (SOCRATES) (Miller and Tonigan, 1996);

  • CES-D short-form depression scale (Andresen, Malmgren, et al., 1994); and

  • Anti-Social Personality Disorder (ASPD) and Narcissism scales derived from the Structured Clinical Interview for the DSM-IV-TR (American Psychiatric Association, 2000).

Extent of Processing:  ICPSR data undergo a confidentiality review and are altered when necessary to limit the risk of disclosure. ICPSR also routinely creates ready-to-go data files along with setups in the major statistical software formats as well as standard codebooks to accompany the data. In addition to these procedures, ICPSR performed the following processing steps for this data collection:

  • Standardized missing values.
  • Checked for undocumented or out-of-range codes.

Version(s)

Original ICPSR Release:  2012-11-05
