Measuring Perceptions of Appropriate Prison Sentences in the United States, 2000 (ICPSR 3988)

Version Date: Mar 30, 2006

Principal Investigator(s):
Mark A. Cohen, Vanderbilt University. Owen Graduate School of Management; Roland T. Rust, Vanderbilt University. Owen Graduate School of Management; Sara Steen, Vanderbilt University. Owen Graduate School of Management

https://doi.org/10.3886/ICPSR03988.v1

Version V1


This study examined the public's preferences regarding sentencing and parole of criminal offenders. It also investigated the public's willingness to pay for particular crime prevention and control strategies and tested new methods for gathering this kind of information from the public. To do so, the public was asked to respond to a series of crime vignettes involving constrained choices. The study consisted of a telephone survey of 1,300 adult respondents conducted in 2000 in the United States. Following a review by a panel of experts and extensive pretesting, the final instrument was programmed for computer-assisted telephone interviews (CATI). The questionnaire focused specifically on: (1) the attitudes of the public on issues such as the number of police on the street, civil rights of minority groups, and the legal rights of people accused of serious crimes, (2) the randomized evaluation of preferred sentencing alternatives for eight different crime scenarios, (3) parole decisions made in a constrained choice setting, in which respondents assumed that there was only enough space for one of two offenders, (4) the underlying factors that motivate the public's parole decisions, and (5) respondents' willingness to pay for various crime prevention strategies.

Cohen, Mark A., Rust, Roland T., and Steen, Sara. Measuring Perceptions of Appropriate Prison Sentences in the United States, 2000. Inter-university Consortium for Political and Social Research [distributor], 2006-03-30. https://doi.org/10.3886/ICPSR03988.v1

Funding: United States Department of Justice. Office of Justice Programs. National Institute of Justice (99-CE-VX-0001)

Smallest Geographic Unit: state


Time Period: 2000-05-16 -- 2000-08-08
Date of Collection: 2000-05-16 -- 2000-08-08
  1. The user guide, codebook, and data collection instrument are provided by ICPSR as separate Portable Document Format (PDF) files. The PDF file format was developed by Adobe Systems Incorporated and can be accessed using PDF reader software, such as the Adobe Acrobat Reader. Information on how to obtain a copy of the Acrobat Reader is provided on the ICPSR Web site.


This study examined the public's preferences regarding sentencing and parole of criminal offenders. It also investigated the public's willingness to pay for particular crime prevention and control strategies and tested new methods for gathering this kind of information from the public. The study's methodology was based on that used by Wolfgang et al. (1985) and others, in which a sample of the United States public was asked to react to a series of crime vignettes. However, this study made a number of modifications to those vignettes. With regard to gathering information on the public's preferences toward sentencing and parole of criminal offenders, these changes were to: (1) focus on crimes normally encountered by local criminal justice agencies, such as burglary, robbery, and assault, and (2) place the parole decision in a constrained choice setting by asking respondents to assume that there was only enough space for one of two offenders and one had to be let go. With regard to collecting information on the public's willingness to pay for crime prevention and control, the study's changes were to: (1) use paired comparisons in a constrained choice setting where, for example, respondents were asked to choose between the expansion of alternative crime prevention programs and a tax rebate, and (2) elicit information on respondents' willingness to pay out of their own pockets for crime reduction strategies.
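To make the two constrained-choice formats concrete, the following is a minimal Python sketch of how such items might be represented; the class names, field names, and scenario text are illustrative assumptions and are not taken from the actual instrument.

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class ParoleChoiceItem:
    """There is prison space for only one of two offenders; one must be released."""
    offender_a: str
    offender_b: str

    def released(self, choice: Literal["a", "b"]) -> str:
        # Record which offender the respondent chose to release.
        return self.offender_a if choice == "a" else self.offender_b

@dataclass
class SpendingChoiceItem:
    """Paired comparison: expand a crime prevention program or take a tax rebate."""
    program: str
    rebate_per_household: int  # dollar amount of the rebate alternative

# Hypothetical example items.
parole_item = ParoleChoiceItem(
    offender_a="convicted of burglary, served 2 of 5 years",
    offender_b="convicted of robbery, served 3 of 6 years",
)
print("Released:", parole_item.released("a"))

spending_item = SpendingChoiceItem(
    program="drug treatment for nonviolent offenders",
    rebate_per_household=100,
)
print(f"Choose: expand {spending_item.program} OR a ${spending_item.rebate_per_household} rebate per household")
```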

The study consisted of a telephone survey of 1,300 adult respondents conducted from May 16, 2000, to August 8, 2000, in the United States. After conducting a literature review, the researchers prepared an initial draft questionnaire and sent it to a panel of experts. The questionnaire was also subjected to a thorough pretest, which included focus groups and cognitive testing. Three focus groups were held to observe participants' reactions to the first draft of the questionnaire and to obtain feedback on how to create a more effective interview. Focus group participants were screened to obtain a cross-section of the general population. Participants were asked questions that assessed their ability to comprehend the crime scenarios included in the instrument and their ability to make an informed judgment about sanctions. After extensive revisions of the instrument based on the feedback from the focus groups, 11 cognitive interviews were conducted. These interviews allowed researchers to test the structure and content of the questionnaire on a one-on-one basis with respondents. During a cognitive interview, respondents were asked to "think aloud" while determining how to answer questions, which allowed researchers to uncover how respondents interpreted the questions. The final stage of survey development was to pretest the revised instrument with live telephone interviews. A total of 11 pretest interviews were completed, with an average length of 27.5 minutes. Modifications were then made, primarily to shorten the interview.

The final survey was programmed for computer-assisted telephone interviews (CATI). This approach allowed for complex branches, single and multiple responses, open-ended text answers, and random rotation of text insertions for the vignettes. It also reduced the frequency of invalid data by not permitting answers outside the scope of the options provided in a question. The interviews were conducted by the Social and Policy Division of Roper Starch Worldwide, Inc. Training sessions were held to familiarize the interviewers with the study and to teach refusal avoidance techniques. A number of quality control measures were in place during the interviewing process to ensure the accuracy of the data, including the CATI system itself, daily production reports, reviews of interim frequencies, floor supervision, and monitoring of interviews.
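As a rough illustration of two of the CATI features mentioned above, random rotation of vignette text and rejection of out-of-scope answers, here is a minimal Python sketch; the scenario texts and option codes are placeholders, not the Roper Starch instrument or software.

```python
import random

# Placeholder vignette texts standing in for the study's stock of crime scenarios.
VIGNETTES = [
    "A man was convicted of breaking into an occupied home and stealing property ...",
    "A man was convicted of robbing a store clerk while armed ...",
    "A man was convicted of assaulting a stranger, causing minor injuries ...",
    # ... the real instrument rotated among a larger stock of scenarios
]

# Placeholder closed-ended response options (code -> label).
SENTENCE_OPTIONS = {1: "prison", 2: "probation", 3: "fine only", 4: "other"}

def draw_vignette_pair(rng):
    """Randomly rotate which two vignette texts are inserted, as a CATI script would."""
    return rng.sample(VIGNETTES, 2)

def validate_answer(code, options):
    """Reject answers outside the scope of the listed options, mirroring how CATI
    keeps out-of-range entries from ever reaching the data file."""
    if code not in options:
        raise ValueError(f"Out-of-scope answer {code!r}; allowed codes: {sorted(options)}")
    return code

rng = random.Random(3988)            # fixed seed so the rotation is reproducible here
first, second = draw_vignette_pair(rng)
print("Scenario shown first:", first)
recorded = validate_answer(2, SENTENCE_OPTIONS)
print("Recorded response:", SENTENCE_OPTIONS[recorded])
```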

A random-digit dial (RDD) sample, Type B, of 4,966 phone numbers in the United States, including Alaska and Hawaii, was obtained from Survey Sampling, Inc. The sample was produced by selecting randomly generated telephone numbers in proportion to the number of listed telephone numbers in each working telephone block. A block is a contiguous set of 100 telephone numbers within an active telephone area code and exchange combination, such as 555-555-3300 to 555-555-3399. This type of sample yields high efficiency, with the chance of selecting a working number ranging between 55 percent and 75 percent, in contrast to the 24-percent chance of selecting a working number with a pure random-digit sample. Although a potential for bias could exist with the Random Digit B sample, no actual bias has been encountered. To select a representative adult within each household, the last-birthday screening method was used: upon contacting a household, interviewers asked to speak with the adult over 18 who had had the most recent birthday. If the selected person was not available, the interviewer arranged a callback, for a specific date and time if possible, to speak with the eligible person. Once an eligible respondent had been identified in a household, no substitution was permitted.
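A minimal sketch of the block-proportional selection described above is shown below, assuming a made-up set of 100-number blocks and listed-number counts; Survey Sampling, Inc.'s actual frame and procedures are more elaborate.

```python
import random

# Hypothetical 100-number blocks (area code + exchange + first two suffix digits)
# with the count of listed telephone numbers in each block.
LISTED_COUNTS = {
    "615-555-33": 42,
    "615-555-87": 17,
    "212-555-01": 65,
    "907-555-46": 8,   # blocks in Alaska and Hawaii are eligible as well
}

def draw_rdd_type_b_sample(n, seed=3988):
    """Select blocks with probability proportional to their listed numbers,
    then generate a random number within each selected block."""
    rng = random.Random(seed)
    blocks = list(LISTED_COUNTS)
    weights = [LISTED_COUNTS[b] for b in blocks]
    sample = []
    for _ in range(n):
        block = rng.choices(blocks, weights=weights, k=1)[0]
        last_two = rng.randrange(100)          # 00-99 within the chosen block
        sample.append(f"{block}{last_two:02d}")
    return sample

print(draw_rdd_type_b_sample(5))
```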

Adults 18 and over living in the United States.

Individuals

Data were obtained through computer-assisted telephone interviews.

All seven parts of the dataset include a case identification variable that can be used to link the different parts, and Part 1 and Parts 3-7 contain a weight variable. Variables in Part 1 include the actual responses to all of the items on the questionnaire, along with additional variables indicating which of several possible scenarios were read to the respondent in certain sections of the survey.

In Section A of the questionnaire, Preliminary Background Information, respondents were asked about their views on issues such as the number of police on the street, the availability of programs designed to get people off drugs, the civil rights of minority groups in the United States, and the legal rights of people accused of serious crimes. In Section B, Test Screener, respondents were presented with two randomly selected scenarios describing an individual convicted of certain crimes and were asked what the man's sentence should be in each scenario.

In Part 1 of the questionnaire, Imprisonment Decisions, respondents were presented with three pairs of scenarios drawn from a stock of eight. Each scenario described an individual who was convicted of certain crimes and received a specified prison sentence, and respondents were asked to compare the individuals in each pair and decide which of the two should be released from prison given certain budget and prison capacity constraints. In Part 2 of the questionnaire, Criminal Response Decisions, respondents were presented with two more scenarios from the stock of eight and were asked what they considered appropriate punishments for the individuals described, including the type of sentence, the length of sentence, whether a fine should be paid, how large the fine should be, whether the sentence should be reduced if a fine were paid and by how much, and who should receive the fine.

In Part 3 of the questionnaire, Crime Prevention Decisions, respondents were asked to put themselves in the shoes of their local mayor and to suppose that the federal government had just given their city either 100 or 1,000 dollars per household. Respondents could give the money to local residents or spend it on more prisons, more drug and alcohol treatment programs for offenders convicted of nonviolent crimes, more police on the street, or more prevention programs to help keep youth out of trouble, and they were allowed to divide the money in any way they wanted among these five options. In Part 4 of the questionnaire, How Much Would You Be Willing to Pay?, respondents were asked whether they would be willing to pay certain randomly selected amounts of money out of their own pockets to reduce certain crimes. Respondents were then asked about their own history of victimization and experiences with the criminal justice system, and they supplied demographic information.

Parts 2-7 of the dataset contain variables derived from the variables in Part 1. Unlike Part 1, Parts 2-7 do not contain a variable for every question that was asked; instead, they contain one variable for every possible scenario, so a single question in the survey instrument can correspond to several variables in Parts 2-7. Part 8 contains verbatim responses to open-ended survey items.
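For analysts combining the files, the sketch below shows one way to link dataset parts on the shared case identification variable and apply the weight, using toy data and hypothetical variable names (CASEID, WEIGHT, and the item names); consult the codebook for the actual file and variable names.

```python
import pandas as pd

# Toy stand-ins for two dataset parts; the real variable names are documented
# in the ICPSR codebook.
part1 = pd.DataFrame({
    "CASEID": [1, 2, 3],
    "WEIGHT": [0.9, 1.1, 1.0],
    "MORE_POLICE": [1, 0, 1],               # hypothetical Section A attitude item
})
part3 = pd.DataFrame({
    "CASEID": [1, 2, 3],
    "SCENARIO_BURGLARY_WTP": [50, 0, 100],  # hypothetical derived scenario variable
})

# Link the parts on the shared case identification variable.
merged = part1.merge(part3, on="CASEID", how="left")

# Weighted mean of an item, using the weight variable carried in Part 1.
weighted_mean = (merged["WEIGHT"] * merged["MORE_POLICE"]).sum() / merged["WEIGHT"].sum()
print(f"Weighted share endorsing the item: {weighted_mean:.2f}")
```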

Out of the random-digit dial sample of 4,966 phone numbers, a total of 1,300 interviews were completed, a 43 percent response rate.

None.


Original Release Date: 2004-10-01

2018-02-15 The citation of this study may have changed due to the new version control system that has been implemented. The previous citation was:
  • Cohen, Mark A., Roland T. Rust, and Sara Steen. MEASURING PERCEPTIONS OF APPROPRIATE PRISON SENTENCES IN THE UNITED STATES, 2000. ICPSR version. Nashville, TN: Vanderbilt University [producer], 2000. Ann Arbor, MI: Inter-university Consortium for Political and Social Research [distributor], 2004. http://doi.org/10.3886/ICPSR03988.v1

2006-03-30 File CQ3988.ALL.PDF was removed from any previous datasets and flagged as a study-level file, so that it will accompany all downloads.

2006-03-30 File UG3988.ALL.PDF was removed from any previous datasets and flagged as a study-level file, so that it will accompany all downloads.

2005-11-04 On 2005-03-14 new files were added to one or more datasets. These files included additional setup files as well as one or more of the following: SAS program, SAS transport, SPSS portable, and Stata system files. The metadata record was revised 2005-11-04 to reflect these additions.


Notes

  • The public-use data files in this collection are available for access by the general public. Access does not require affiliation with an ICPSR member institution.