National Evaluation of the Safe Start Promising Approaches Initiative, 2006-2010 (ICPSR 34740)

Published: Nov 27, 2013

Principal Investigator(s):
Lisa H. Jaycox, RAND Corporation; Laura J. Hickman, Portland State University; Dana Schultz, RAND Corporation

Version V2

The Safe Start Promising Approaches for Children Exposed to Violence Initiative funded 15 sites to implement and evaluate programs to improve outcomes for children exposed to violence. RAND conducted the national evaluation of these programs, in collaboration with the sites and a national evaluation team, focusing on child-level outcomes. The dataset includes data gathered at the individual family level at baseline and at 6, 12, 18, and 24 months. All families were engaged in experimental or quasi-experimental studies comparing the Safe Start intervention to enhanced services-as-usual, alternative services, a wait-list control group, or a comparable comparison group of families that did not receive Safe Start services. Data sources for the outcome evaluation were primary caregiver interviews, child interviews (for ages 3 and over), and family/child-level service utilization data provided by the Safe Start program staff.

Jaycox, Lisa H., Hickman, Laura J., and Schultz, Dana. National Evaluation of the Safe Start Promising Approaches Initiative, 2006-2010. Ann Arbor, MI: Inter-university Consortium for Political and Social Research [distributor], 2013-11-27.


United States Department of Justice. Office of Justice Programs. Office of Juvenile Justice and Delinquency Prevention (2005-JW-BX-0001, 2009-IJ-CX-0072)


Access to these data is restricted. Users interested in obtaining these data must complete a Restricted Data Use Agreement, specify the reasons for the request, and obtain IRB approval or notice of exemption for their research.

Time period: 2006 -- 2010 (varying timelines across sites)

Date of collection: 2006-07-12 -- 2010-08-05

Additional contributions to the National Evaluation of the Safe Start Promising Approaches Initiative were made by the following RAND Corporation staff: Dionne Barnes-Proby, Claude Messan Setodji, Aaron Kofner, Racine Harris, Joie D. Acosta, and Taria Francois.

Each variable in the dataset corresponds to either (1) individual-level data from the child assessment battery, (2) individual-level data from the caregiver assessment battery, or (3) family/child-level service utilization data derived from Family Status Sheets provided by the Safe Start program staff. Thus, the unit of observation for each record varies across variables and is either the individual (child or caregiver) or family.

Although the 18 separate evaluations of interventions within the 15 sites all focused on children exposed to violence, they varied considerably in terms of the type of intervention, the setting in which it was offered, and the group of children and families targeted for the intervention. For more in-depth information on the Safe Start Promising Approaches (SSPA) outcomes evaluation including program-specific details, users should refer to the National Evaluation of Safe Start Promising Approaches Assessing Program Outcomes Technical Report and Results Appendices.

The purpose of the study was to evaluate promising and evidence-based programs in community settings to identify how well the programs worked in reducing and preventing the harmful effects of children's exposure to violence (CEV).

The outcomes evaluations were designed to examine whether implementation of each Safe Start intervention was associated with individual-level changes in specific outcome domains. The evaluation utilized an intent-to-treat approach that was designed to inform policymakers about the types of outcomes that could be expected if a similar intervention were to be implemented in a similar setting. To prepare for the evaluation, the sites worked with the national evaluation team to complete a "Green Light" process that developed the specific plans for the intervention and ensured that the evaluation plan aligned well with the intervention being offered and was ethical and feasible to implement.

As a result of the Green Light process, a rigorous, controlled evaluation design was developed at each site, either with a randomized control group (wait list or alternative intervention) or a comparison group selected based on similar characteristics. Three sites had more than one study being conducted in their setting, as they delivered two different interventions to different groups of individuals. Overall, there were 18 separate evaluations of interventions within the 15 sites. Most sites utilized an experimental, randomized design (13 of 18), with RAND standardizing and monitoring the randomization procedures. Pre-intervention baseline data were collected on standardized, age-appropriate measures for all families enrolled in the studies. Longitudinal data on families were collected for within-site analysis of the impact of these programs on child outcomes at 6, 12, 18, and 24 months post-enrollment.

The 15 sites collected data with initial training and ongoing support from RAND. Specifically, in order to standardize procedures across each of the 15 Safe Start sites, the RAND evaluation team developed detailed data collection procedures and forms. The supervisor and interviewer training manuals described each step of the data collection process. Using these manuals, the RAND team provided initial on-site data collection trainings for supervisors and research staff employed by each of the 15 Safe Start sites. The sites then implemented the data collection procedures and trained new data collection staff (when turnover occurred). The RAND team provided oversight and delivered refresher training sessions by conference call or on site, as needed. The sites mailed data on a monthly basis to RAND for data entry, cleaning, and analysis.

Data sources for the outcome evaluation were primary caregiver interviews, child interviews (for ages 3 and over), and family/child-level service utilization data provided by the Safe Start program staff. Measures for caregivers and children (ages 3 and up) were assembled into two batteries: a caregiver assessment battery and a child assessment battery. Caregivers completed a battery of instruments comprising between 95 and 249 items, depending on the age of the child; all caregiver assessments were interviewer administered. Child assessments were interviewer administered for ages 3 through 10, while children ages 11 and older completed a self-administered assessment packet with research staff available to assist as needed. The child assessment battery comprised 36 to 165 items and varied in content, depending on the age of the child. A Family Status Sheet (FSS), which documented services received and the current status of the family's engagement with services, was also completed for all families at each assessment point.

The Office of Juvenile Justice and Delinquency Prevention selected 15 program sites across the country to implement a range of interventions for helping children and families cope with the effects of children's exposure to violence. Program sites were located in the following 15 communities:

  • Bronx, New York
  • Broward County, Florida
  • Chelsea, Massachusetts
  • Dallas, Texas
  • Dayton, Ohio
  • Erie, Pennsylvania
  • Kalamazoo, Michigan
  • Miami, Florida
  • Multnomah County, Oregon
  • Oakland, California
  • Providence, Rhode Island
  • San Diego County, California
  • San Mateo County, California
  • Toledo, Ohio
  • Washington Heights/Inwood, New York

Each site recruited participants into experimental or quasi-experimental evaluation studies. Criteria for admission into the studies varied by site. Generally, convenience samples were taken of those who met eligibility criteria at each site.

Across all 15 sites, the dataset comprises 5,951 cases, including:

  • 1,881 baseline records
  • 1,812 Wave 1 (6-month follow-up) records
  • 1,096 Wave 2 (12-month follow-up) records
  • 770 Wave 3 (18-month follow-up) records
  • 392 Wave 4 (24-month follow-up) records
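As a quick consistency check (an illustrative Python sketch, not part of the archive's materials), the per-wave record counts listed above sum exactly to the reported total of 5,951 cases:

```python
# Record counts by assessment wave, as listed in the study description.
wave_counts = {
    "baseline": 1881,
    "wave_1_6_month": 1812,
    "wave_2_12_month": 1096,
    "wave_3_18_month": 770,
    "wave_4_24_month": 392,
}

# Sum across waves and compare against the reported total case count.
total = sum(wave_counts.values())
print(total)  # 5951
```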


All children exposed to violence and their families at 15 program sites across the United States.




Kinds of data:

  • administrative records data
  • clinical data
  • experimental data
  • survey data

The study contains a total of 1,027 variables. Measures for the national evaluation were chosen to document child and family outcomes in several domains: demographics, background and contextual factors, child and caregiver violence exposure, child behavior problems, child post-traumatic stress symptoms, child depressive symptoms, child social-emotional competence, parenting stress, caregiver-child relationship, and child school readiness/academic achievement. The dataset also includes administrative, computed outcome, derived age, and dosage variables.

Response rates varied by site.

To assess outcomes at each site, the research team used a set of measures that captured background and contextual factors, as well as a broad array of outcomes, including PTSD symptoms, depressive symptoms, behavior/conduct problems, social-emotional competence, caregiver-child relationship, school readiness/performance, and violence exposure.

Three measures were completed by caregivers to capture background and context.

  • Basic demographics of the caregiver, such as age, education, employment status, income, primary language, citizenship status, race/ethnicity, and marital status, were collected using the Caregiver Demographics and Service Use instrument, which was adapted from materials used in the Longitudinal Studies of Child Abuse and Neglect (LONGSCAN study; LONGSCAN, 2010), a consortium of longitudinal research studies assessing the etiology and impact of child maltreatment.
  • Basic demographics of the child, such as age, gender, race/ethnicity, primary language, citizenship status, and primary caregiver, were collected using the Child Demographics and Service Use instrument, which was adapted from materials used in the LONGSCAN study.
  • To assess problems faced in everyday life, the Everyday Stressors Index (ESI) from the LONGSCAN study was used.

The research team used two measures to assess child PTSD symptoms, one reported by caregivers for young children, and the second reported by children themselves.

  • Caregivers' perceptions of PTSD symptoms in younger children, ages 3 to 10, were collected using the Trauma Symptom Checklist for Young Children (TSCYC; Briere et al., 2001).
  • Children's own perceptions of PTSD symptoms were collected using the Trauma Symptom Checklist for Children (TSCC; Briere, 1996) among children ages 8 to 18.

Depressive symptoms in children age 8 and older were collected using one self-report instrument -- the Children's Depression Inventory (CDI; Kovacs, 1981).

To assess internalizing and externalizing behavior problems and delinquency, the research team used several measures and combined them using advanced psychometric techniques to develop a score that could be used across a broader age range.

  • To assess conduct problems for children between the ages of 1 and 3, the Brief Infant-Toddler Social and Emotional Assessment (BITSEA; Briggs-Gowan and Carter, 2002) was used.
  • To assess behavior/conduct problems for ages 3-18, the Behavior Problems Index (BPI; Peterson and Zill, 1986) along with four additional items that had been used as part of the National Longitudinal Survey of Youth (NLSY) were used.
  • To assess self-reported delinquency for children ages 11-18, the research team selected items from and modified three instruments: the National Youth Survey (NYS), the Rochester Youth Development Study (RYDS), and the Los Angeles Family and Neighborhood Survey (LA FANS).

Measures of affective strengths, school functioning, cooperation, assertion, self-control, and social and emotional competence in general were selected from four scales, two of which have different versions for different age ranges and respondents.

  • The six-item personal-social scale for children ages 0-2 from the Ages and Stages Questionnaire (ASQ; Squires, Potter, and Bricker, 1997) was used.
  • The BITSEA (Briggs-Gowan and Carter, 2002) was used for ages 1-3 to assess social-emotional competence, with the caregiver responding to statements about the child's behavior in the past month by rating how true or frequent each behavior is. In addition, the Social Skills Rating System (SSRS; Gresham and Elliott, 1990) was used to assess cooperation, assertion, and self-control.
  • A separate cooperation scale from the SSRS (Gresham and Elliott, 1990) was used with ages 3-12.
  • For children ages 13-18, the self-report version of the SSRS (Gresham and Elliott, 1990) was used to assess assertion, self-control, and cooperation.
  • Two scales from the BERS-2 (Epstein and Sharma, 1998) were used to assess school functioning and affective strengths from the perspective of both caregivers (for children ages 6-12) and children (for children ages 11-18).

Measures of parenting stress and family involvement were used to assess caregiver-child relationship.

  • To examine parenting stress, the Parenting Stress Index--Short Form (PSI-SF; Reitman, Currier, and Stickle, 2002) was used.
  • The family involvement scale from the BERS-2 (Epstein and Sharma, 1998) was used as a measure of caregiver-child relationship.

To assess the general domain of school readiness and performance, the Woodcock-Johnson III scale (WJ-III; Blackwell, 2001) was used. The research team used this measure for children between the ages of 3 and 18 and chose three tests: Letter-Word Identification, Passage Comprehension, and Applied Problems.

Three measures were used to capture violence exposure in children and caregivers.

  • To assess exposure to violence among children ages 0-12, the Juvenile Victimization Questionnaire (JVQ; Hamby et al., 2004a, 2004b) was used.
  • To assess caregiver victimization, the research team selected and modified items from the National Crime Victimization Survey (NCVS) and the Traumatic Stress Survey.



2013-11-27 Per instructions from the principal investigators, ICPSR made minor adjustments to the KQ18, KQ19, and NEW_SITE variables and added a collection note acknowledging the contributions of additional RAND project staff to the metadata and the PDF User Guide. ICPSR also updated the codebook notes and data collection instruments notes in the PDF study documentation and created the MULTNOMAH and PROVIDENCE_TIER3 dichotomous flag variables.

2013-08-01 ICPSR data undergo a confidentiality review and are altered when necessary to limit the risk of disclosure. ICPSR also routinely creates ready-to-go data files along with setups in the major statistical software formats as well as standard codebooks to accompany the data. In addition to these procedures, ICPSR performed the following processing steps for this data collection:

  • Standardized missing values.
  • Checked for undocumented or out-of-range codes.


  • The public-use data files in this collection are available for access by the general public. Access does not require affiliation with an ICPSR member institution.

  • One or more files in this data collection have special restrictions; restricted data files are not available for direct download from the website.


This dataset is maintained and distributed by the National Archive of Criminal Justice Data (NACJD), the criminal justice archive within ICPSR. NACJD is primarily sponsored by three agencies within the U.S. Department of Justice: the Bureau of Justice Statistics, the National Institute of Justice, and the Office of Juvenile Justice and Delinquency Prevention.