Evaluation of Multi-Jurisdictional Task Forces in the United States, 1999-2000 (ICPSR 3865)
Principal Investigator(s): Hayeslip, David W., Abt Associates; Dunworth, Terry, Abt Associates; Russell-Einhorn, Malcolm L., University of Maryland
Since the inception of the Edward Byrne Memorial State and Local Law Enforcement Assistance Program in 1988, a large proportion of formula grant program funds has been allocated by state administrative agencies (SAAs) to support multi-jurisdictional drug task forces (MJTFs). MJTFs are a subset of law enforcement task forces created to target the illegal distribution of drugs at the local and regional levels. While many policymakers, researchers, and practitioners express confidence in the task force approach generally, there remains insufficient understanding of the possible community and organizational impact of individual MJTFs and of the kinds of evaluation methodologies that can elicit such information. The goal of this project was to identify several methodologies that could be used by state planning agencies, task forces, and others to assess the work of MJTFs. The project consisted of two surveys designed to ascertain the extent to which SAAs and MJTFs collected various kinds of process and outcome information and conducted evaluations of task forces.
These data are freely available.
Hayeslip, David W., Terry Dunworth, and Malcolm L. Russell-Einhorn. Evaluation of Multi-Jurisdictional Task Forces in the United States, 1999-2000. ICPSR03865-v1. Ann Arbor, MI: Inter-university Consortium for Political and Social Research [distributor], 2004. http://doi.org/10.3886/ICPSR03865.v1
Persistent URL: http://doi.org/10.3886/ICPSR03865.v1
This study was funded by:
- United States Department of Justice. Office of Justice Programs. National Institute of Justice (99-DD-BX-0034)
Scope of Study
Smallest Geographic Unit: states
Geographic Coverage: United States
Date of Collection: 1999-2000
Unit of Observation: Parts 1 and 2: agencies. Parts 3-5: task forces.
Universe: Parts 1 and 2: All State Administrative Agencies in the United States. Parts 3-5: All multi-jurisdictional task forces in the United States.
Data Types: survey data
Study Purpose: Since the inception of the Edward Byrne Memorial State and Local Law Enforcement Assistance Program in 1988, a large proportion of formula grant program funds has been allocated by state administrative agencies (SAAs) to support multi-jurisdictional drug task forces (MJTFs). MJTFs are a subset of law enforcement task forces created to target the illegal distribution of drugs at the local and regional levels. By sharing personnel, equipment, intelligence, and legal/jurisdictional authorities, law enforcement agencies in regions across the country have joined together to expand their investigative and prosecutorial reach. Rigorous studies of task force activities are scarce, and the diversity of task force organization is underappreciated. While many policymakers, researchers, and practitioners express confidence in the task force approach generally, there remains insufficient understanding of the possible community and organizational impact of individual MJTFs and of the kinds of evaluation methodologies that can elicit such information. To encourage further development of such MJTF evaluation methodologies and a better understanding of individual task force implementation and operations across the United States, the National Institute of Justice funded this project. This data collection reflects the first phase of the evaluation. The goal was to identify several methodologies that could be used by state planning agencies, task forces, and others to assess the work of MJTFs. The principal goal of the project was not to develop a universal means of comparing task force effectiveness across MJTFs, but to bolster efforts to track changes in individual task force effectiveness and accomplishments over time based on alterations in policies, resources, tactics, and the external environment.
At the same time, the project sought to have SAAs and task forces develop a broader set of objectives and measures and encourage a closer fit between these objectives and actual achievements. To achieve these goals, the researchers surveyed SAAs and MJTFs. The SAA survey (Parts 1 and 2) was designed to meet three objectives: (1) to identify active Byrne-funded MJTFs, (2) to identify the kinds of information collected by SAAs to monitor program implementation and impact, and (3) to identify characteristics of current or past evaluations of MJTFs. The MJTF survey (Parts 3-5) was also designed to meet three objectives: (1) to identify the functions and organizational characteristics of individual Byrne-funded task forces, (2) to identify what kinds of operational and crime-related information are collected by such task forces, and (3) to identify what kinds of task force evaluations have been conducted, and how the findings were used.
Study Design: This project consisted of two surveys designed to ascertain the extent to which state administrative agencies (SAAs) and multi-jurisdictional drug task forces (MJTFs) collected various kinds of process and outcome information and conducted evaluations of task forces. The SAAs were surveyed first to define the universe of existing task forces and to obtain information about broader evaluations and studies of MJTFs. The survey was mailed to each of the directors of 56 SAAs on September 9, 1999. The recipients of the mailing included all 50 states, the District of Columbia, and five territories and protectorates. The initial response rate was less than 30 percent. Staff from the Bureau of Justice Assistance (BJA) helped improve the response rate by sending follow-up e-mails and making telephone calls to non-responding SAAs to request that they respond to the survey. As of December 1999, after three requests from BJA, the response rate was 73 percent. The SAA survey was divided into three parts. However, data from the first part of this survey were not included in this data collection because they consisted only of a list of MJTFs. Part 1 of this collection contains data from the section of the survey regarding SAA reporting requirements. BJA required the SAAs to conduct fiscal, administrative, programmatic, and evaluative monitoring of their subgrantees and to assess progress under statewide plans. The survey asked SAAs about the types of information they regularly collected from task forces as part of this monitoring. The four main areas of reporting and monitoring examined were goals and objectives, implementation activities, operational activities, and impacts. Part 2 contains data from the section of the SAA survey that focused on evaluation requirements and activities. The SAAs were asked to describe their evaluation requirements and practices with respect to MJTFs.
Each SAA completed this section of the survey for every MJTF in its state. The results of the SAA survey provided a framework for obtaining a kind of census of existing task forces, as well as an understanding of the kinds of information collected on task forces to monitor program implementation and impact. The MJTF survey (Parts 3-5) provided an opportunity to collect more detailed information about task force organization and missions and about data collection and evaluation activities. The researchers sent surveys to 757 individual subgrantee task forces in the winter of 2000. The MJTF survey had a three-part structure that was functionally similar to the SAA survey. The first part of the MJTF survey (Part 3) sought to collect basic background information on the organization and management of each task force. In addition to asking subgrantees to identify their task forces by their official names, the survey asked respondents to state when the task force was officially formed and when it began receiving Byrne funding, as well as how many years it had received such funding. The second part of the MJTF survey (Part 4) asked subgrantee task forces to summarize information about data collection and reporting. The third part of the MJTF survey (Part 5) collected detailed information about formal task force evaluations that had been completed.
Sample: Not applicable.
Description of Variables: Part 1 consists of several dummy variables indicating the types of information SAAs collected about MJTFs, including task force goals and objectives, implementation activities, operational activities, and impact measurements. SAAs also reported how they used the information from MJTFs, such as for programmatic oversight, strategic planning, development of legislation, and dissemination to task forces. Part 2 variables capture information about the evaluations conducted by SAAs, including the start and end dates, the type of evaluation, the type of research design employed, and how the evaluation was funded. Part 3 contains administrative information about the MJTFs, including when the task forces were formed, how long they had been funded, the types of crimes on which they focused, their goals and objectives, the number and types of agencies represented, the number of full- and part-time positions, how they were managed, their memoranda of understanding, and their reporting practices. Part 4 consists of dummy variables designating the types of operational information collected by MJTFs, whether the operational information was disseminated and to whom, the types of general crime information collected by MJTFs, and the types of supporting information gathered by MJTFs. Part 5 variables capture information about the types of evaluations the MJTFs had already conducted, including the type of evaluation, the type of research design, how the evaluation was funded, the starting and ending dates, to whom the findings were disseminated, and how they were used.
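To make the dummy-variable layout concrete, the sketch below shows how a single Part 1 record might be represented in analysis code. The field names here are hypothetical and do not correspond to the actual ICPSR variable names; consult the study codebook for the real ones.

```python
# Illustrative sketch only: field names are hypothetical, not the
# actual ICPSR variable names for Part 1 of this collection.
# Each 1/0 flag marks whether an SAA collected that type of
# information about its MJTFs.
record = {
    "collects_goals": 1,       # goals and objectives
    "collects_implement": 0,   # implementation activities
    "collects_operations": 1,  # operational activities
    "collects_impact": 1,      # impact measurements
}

# With dummy coding, summing the flags gives the number of
# information types this SAA collected.
n_collected = sum(record.values())
print(n_collected)  # 3
```

The same pattern applies to the Part 4 dummy variables, where each flag would instead mark a type of operational or crime-related information collected by a task force.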
Response Rates: The response rate for the SAA survey (Parts 1 and 2) was 73 percent. The MJTF survey was sent to 757 subgrantees. Of this number, 315 answered at least the first section of the survey, which addressed administrative information (Part 3), for a response rate of 42 percent. A slightly lower number, 307, completed the first section and at least some of the second section, which addressed information collection (Part 4). Only 41 subgrantees completed the third section of the survey, the evaluation detail sheet (Part 5), and an additional four subgrantees submitted copies of evaluation reports but did not complete this section for any of those reports. The principal investigators noted that caution should be used in interpreting the results of the task force surveys due to the very low response rate. The findings may not be representative of the entire identified population of task forces to which surveys were originally sent. In particular, there may be an inherent bias in the results if the task forces most experienced and knowledgeable about the benefits of evaluation were the ones more likely to take the time to respond to the survey.
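As a quick sanity check, the MJTF response rates above can be recomputed from the counts given in this record; the variable names below are illustrative only.

```python
# Counts taken directly from the Response Rates section above.
surveys_sent = 757     # subgrantees who received the MJTF survey
answered_sec1 = 315    # completed at least the administrative section (Part 3)
answered_sec2 = 307    # also completed some of the data-collection section (Part 4)
completed_sec3 = 41    # completed the evaluation detail sheet (Part 5)

for label, n in [("Section 1", answered_sec1),
                 ("Section 2", answered_sec2),
                 ("Section 3", completed_sec3)]:
    rate = 100 * n / surveys_sent
    print(f"{label}: {n}/{surveys_sent} = {rate:.1f}%")
```

The first figure, 315/757, rounds to the 42 percent cited in the text; the evaluation detail sheet was returned by only about 5 percent of subgrantees, which underlies the investigators' caution about Part 5.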
Presence of Common Scales: None.
Extent of Processing: ICPSR data undergo a confidentiality review and are altered when necessary to limit the risk of disclosure. ICPSR also routinely creates ready-to-go data files along with setups in the major statistical software formats as well as standard codebooks to accompany the data. In addition to these procedures, ICPSR performed the following processing steps for this data collection:
- Standardized missing values.
- Checked for undocumented or out-of-range codes.
Original ICPSR Release: 2004-04-28
- 2006-03-30 File UG3865.ALL.PDF was removed from any previous datasets and flagged as a study-level file, so that it will accompany all downloads.
- 2006-03-30 File CQ3865.ALL.PDF was removed from any previous datasets and flagged as a study-level file, so that it will accompany all downloads.
- 2005-11-04 On 2005-03-14 new files were added to one or more datasets. These files included additional setup files as well as one or more of the following: SAS program, SAS transport, SPSS portable, and Stata system files. The metadata record was revised 2005-11-04 to reflect these additions.