Damphousse, Kelly R., Laura Pointon, Deidra Upchurch, and Rebecca K. Moore. ASSESSING THE VALIDITY OF VOICE STRESS ANALYSIS (VSA) TOOLS IN A JAIL SETTING IN OKLAHOMA CITY, OKLAHOMA, 2006. ICPSR20625-v1. Norman, OK: University of Oklahoma [producer], 2007. Ann Arbor, MI: Inter-university Consortium for Political and Social Research [distributor], 2008-06-23. https://doi.org/10.3886/ICPSR20625.v1
Persistent URL: https://doi.org/10.3886/ICPSR20625.v1
Subject Terms: voice stress analysis
Date of Collection: February 27 - March 24, 2006
Unit of Observation: Individual
Universe: All men arrested within Oklahoma County between February 27 and March 24, 2006.
Data Type: clinical data, survey data, and administrative records data
Data Collection Notes:
Users are encouraged to refer to the final report cited in the "Related Literature" section of this study and the studies that are part of the Arrestee Drug Abuse Monitoring (ADAM) Program/Drug Use Forecasting (DUF) Series for more detailed information regarding the study design.
The purpose of the project was to assess the validity of two Voice Stress Analysis (VSA) tools currently on the market: the Layered Voice Analysis (LVA) and the Computer Voice Stress Analyzer (CVSA). The goal of this project was to test the effectiveness of each VSA device in a jail setting by evaluating the ability of the two VSA instruments to detect deceptive answers about recent drug use among an arrestee population.
The methodology and sampling protocols for this study were derived from the pre-existing methodology and sampling techniques employed in the NIJ-funded Arrestee Drug Abuse Monitoring (ADAM) program that operated in Oklahoma County from 1998 to 2004. The data collection team had access to booking data to help select the sample and collect basic data (e.g., demographics and charge information) for each respondent. The original ADAM instruments were used as the basis for the data collection in this study. The process for completing the interviews differed slightly depending on the voice stress analysis system being tested. The Computer Voice Stress Analyzer (CVSA) program required direct questions that could be answered yes or no. The Layered Voice Analysis (LVA) program, on the other hand, required that the questions (and the responses) be more open-ended and "conversational." The interviewers were trained in the operation of the CVSA and LVA systems to ensure that the research protocol matched the use protocol of the Voice Stress Analysis (VSA) devices.
The researchers interviewed arrestees in the Oklahoma County jail about their recent illicit drug use during the months of February and March 2006. The voluntary and confidential interviews were conducted only with arrestees who had been in the detention facility for fewer than 48 hours. The researchers collected data using the CVSA program for the first 12 days and then using the LVA program for the second 12 days. In addition to the recordings made by the computers, the interviewers also coded each subject's responses to each question on a paper survey.
The VSA data collected using each of the software systems in this study were sent to certified examiners from CVSA and LVA for analysis. These examiners were referred to as the "expert" examiners in this study. When the project was completed, the interviewers ("novice" examiners) and the expert examiners assessed the computer output for each arrestee to determine whether deception was indicated (DI) or not (NDI). Both sets of examiners analyzed the output for each subject to determine whether the software had detected that the subject was being deceptive about recent drug use. In addition, the "novice" analysts also recorded in the data a prediction of whether the arrestee would test positive for a particular drug.
After the completion of the interview, the subjects were asked to complete the data collection process by supplying urine specimens. Answers from the 319 respondents were compared to the results of a urinalysis test to determine the extent to which they were being deceptive. Then, their "actual deceptiveness" was compared to the extent to which deception was indicated by the VSA programs.
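The comparison just described can be sketched in a few lines of Python. This is an illustration only, not the study's actual code, and all field names (denied_use, urine_positive, vsa_di) are hypothetical:

```python
# Illustrative sketch of the validity comparison (hypothetical field names).
# An arrestee is "actually deceptive" if he denied recent use of a drug
# but his urinalysis came back positive for it.

def score_device(records):
    """Return the fraction of cases in which the VSA reading (vsa_di,
    True = Deception Indicated) matched actual deceptiveness."""
    correct = 0
    for r in records:
        actually_deceptive = r["urine_positive"] and r["denied_use"]
        if r["vsa_di"] == actually_deceptive:
            correct += 1
    return correct / len(records)

sample = [
    {"denied_use": True,  "urine_positive": True,  "vsa_di": True},   # deception caught
    {"denied_use": True,  "urine_positive": True,  "vsa_di": False},  # deception missed
    {"denied_use": False, "urine_positive": True,  "vsa_di": False},  # truthful admission
    {"denied_use": False, "urine_positive": False, "vsa_di": False},  # truthful denial
]
print(score_device(sample))  # 0.75
```

The same per-drug comparison would be repeated for each of the five drugs tested.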
The sample for this study was collected at the Oklahoma County Detention Center in Oklahoma City, Oklahoma during the months of February and March 2006. The voluntary and confidential interviews were conducted only with arrestees who had been in the detention facility for fewer than 48 hours. Using the Arrestee Drug Abuse Monitoring (ADAM) probability-based sampling plan, the total number of men arrested within Oklahoma County during this period (regardless of charge) composed the sampling frame for the VSA study. This method of sampling was devised to allow the selection of arrestees during the time of day with the highest volume of arrests and to allow the random selection of arrestees who were booked during the remaining hours in that 24-hour day.
The sample selected during the high-volume time was referred to as "flow," and the arrestees from the remaining hours of the day were referred to as "stock." These terms were traditionally used because interviews in the ADAM project were conducted during the peak booking time (flow). Thus, flow samples were composed of people who "flowed" into the jail during the eight-hour period that the research team was collecting data. The "stock" sample comprised people booked during the remaining 16 hours, when the research team was not collecting data and booking rates were normally low. The sample was drawn proportionately from both stock and flow throughout each data collection period to reflect the distribution of arrests each day. The site had previously been given a target number of interviews to complete each day, and the researchers used that figure. In Oklahoma City, the sampling plan required the selection of seven "stock" males and a minimum of five "flow" males per day.
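One day's selection under this plan might be sketched as follows. The seven-stock / five-flow figures come from the text, but the routine itself, and names like draw_daily_sample, are hypothetical:

```python
import random

def draw_daily_sample(stock_pool, flow_shift, n_stock=7, n_flow_min=5, seed=None):
    """Sketch of one day's selection, not the study's actual procedure.

    stock_pool : arrestees booked during the 16 hours outside the shift
    flow_shift : arrestees booked, in order, during the 8-hour shift
    """
    rng = random.Random(seed)
    # Randomly select the "stock" cases from the off-shift bookings.
    stock = rng.sample(stock_pool, min(n_stock, len(stock_pool)))
    # "Flow" cases are approached as they are booked during the shift;
    # the protocol set a minimum of n_flow_min completed interviews.
    flow = list(flow_shift)
    met_minimum = len(flow) >= n_flow_min
    return stock, flow, met_minimum
```

For example, with 20 off-shift bookings and 8 shift bookings, the sketch returns seven randomly chosen stock cases and all eight flow cases.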
The original ADAM program participants included both adult males and females. While the researchers originally intended to collect data from both males and females in the Voice Stress Analysis (VSA) study as well, circumstances at the jail precluded this possibility. Because constraints allowed only one interviewer to work at a time, and since there are about five times as many male arrestees booked each day, the research team decided to maximize the sample size by only collecting data from males.
For the VSA project, the researchers randomly selected 356 male arrestees into the sample. Of those, 331 (93 percent) agreed to an interview, and 319 (90 percent of those selected) also agreed to provide a urine sample. These response rates were very similar to those of the original ADAM project, except that a greater proportion of the interviewed arrestees (96 percent) agreed to provide a urine sample. The refusal rate did not vary significantly between the two VSA protocols. Indeed, there were no significant differences between the Computer Voice Stress Analyzer (CVSA) and Layered Voice Analysis (LVA) samples in age, the percent who were deceptive about any drug, or the percent who tested positive for any drug.
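The percentages follow directly from the stated counts, as a quick arithmetic check shows:

```python
selected = 356     # male arrestees randomly selected
interviewed = 331  # agreed to an interview
urine = 319        # also agreed to provide a urine sample

pct_interviewed = round(100 * interviewed / selected)        # 331/356
pct_complete = round(100 * urine / selected)                 # 319/356
pct_urine_of_interviewed = round(100 * urine / interviewed)  # 319/331
print(pct_interviewed, pct_complete, pct_urine_of_interviewed)  # 93 90 96
```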
Description of Variables:
The dataset contains (1) demographic information obtained from the official booking records, (2) responses to survey questions about recent drug use, (3) the results of a urinalysis test for five drugs (marijuana, cocaine, heroin, methamphetamine, and PCP), (4) variables recording "deception" or "no deception" for each of the drugs, and (5) decisions by novice and expert analysts regarding the indication of deception.
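One record per arrestee might be pictured as below; every field name here is hypothetical, chosen only to mirror the five variable groups listed above:

```python
from dataclasses import dataclass, field

DRUGS = ("marijuana", "cocaine", "heroin", "methamphetamine", "pcp")

@dataclass
class ArresteeRecord:
    age: int                                            # (1) demographics from booking records
    charge: str
    reported_use: dict = field(default_factory=dict)    # (2) self-reported recent use, per drug
    urine_positive: dict = field(default_factory=dict)  # (3) urinalysis result, per drug
    deceptive: dict = field(default_factory=dict)       # (4) denied use but tested positive
    novice_di: bool = False                             # (5) examiner readings of the VSA output
    expert_di: bool = False

r = ArresteeRecord(
    age=27, charge="possession",
    reported_use={d: False for d in DRUGS},
    urine_positive={d: (d == "marijuana") for d in DRUGS},
)
r.deceptive = {d: r.urine_positive[d] and not r.reported_use[d] for d in DRUGS}
print(r.deceptive["marijuana"], r.deceptive["cocaine"])  # True False
```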
Of the 356 male arrestees in the sample, a total of 319 arrestees agreed to an interview and also agreed to provide a urine sample, thus yielding a final response rate of 90 percent.
Extent of Processing: ICPSR data undergo a confidentiality review and are altered when necessary to limit the risk of disclosure. ICPSR also routinely creates ready-to-go data files along with setups in the major statistical software formats as well as standard codebooks to accompany the data. In addition to these procedures, ICPSR performed the following processing steps for this data collection:
- Standardized missing values.
- Checked for undocumented or out-of-range codes.