National Evaluation of the Safe Start Promising Approaches Initiative, 2011-2016 (ICPSR 36610)

Version Date: Mar 14, 2017

Principal Investigator(s):
Lisa H. Jaycox, RAND Corporation; Dana Schultz, RAND Corporation

https://doi.org/10.3886/ICPSR36610.v1

Version V1



The Safe Start Promising Approaches for Children Exposed to Violence Initiative funded 10 sites to implement and evaluate programs to improve outcomes for children exposed to violence. RAND conducted the national evaluation of these programs, in collaboration with the sites and a national evaluation team, focusing on child-level outcomes. The dataset includes data gathered at the individual family level at baseline and at the 6- and 12-month follow-ups. All families were enrolled in experimental or quasi-experimental studies comparing the Safe Start intervention to enhanced services-as-usual, alternative services, a wait-list control group, or a comparable comparison group of families that did not receive Safe Start services. Data sources for the outcome evaluation were primary caregiver interviews, child interviews (for ages 8 and over), and family/child-level service utilization data provided by the Safe Start program staff.

Jaycox, Lisa H., and Schultz, Dana. National Evaluation of the Safe Start Promising Approaches Initiative, 2011-2016. Inter-university Consortium for Political and Social Research [distributor], 2017-03-14. https://doi.org/10.3886/ICPSR36610.v1

United States Department of Justice. Office of Justice Programs. Office of Juvenile Justice and Delinquency Prevention (2010-JW-FX-0001)


Access to these data is restricted. Users interested in obtaining these data must complete a Restricted Data Use Agreement, specify the reasons for the request, and obtain IRB approval or notice of exemption for their research.

Inter-university Consortium for Political and Social Research

2011-11 -- 2016-06 (varies by site)
2011-05 -- 2016-06 (varies by site)
  1. These data are part of NACJD's Fast Track Release and are distributed as they were received from the data depositor. The files have been zipped by NACJD for release, but not checked or processed except for the removal of direct identifiers. Users should refer to the accompanying readme file for a brief description of the files available with this collection and consult the investigator(s) if further information is needed.

  2. Additional contributions to the National Evaluation of the Safe Start Promising Approaches Initiative were made by the following RAND Corporation staff: Lynsay Ayer, Claude Setodji, Ammarah Mahmud, Aaron Kofner, Dionne Barnes-Proby.

  3. Although the 11 separate evaluations of interventions within the 10 sites all focused on children exposed to violence, they varied considerably in terms of community size, location, age range served, type of violence exposure, and type of intervention.

  4. Each variable in the dataset corresponds to either (1) individual-level data from the child assessment battery, (2) individual-level data from the caregiver assessment battery, or (3) family/child-level service utilization data derived from services surveys completed by the Safe Start program staff. Thus, the unit of observation for each record varies across variables and is either the individual (child or caregiver) or family.

  5. The Safe Start Promising Approaches research report is available for download from the RAND Web site.

The purpose of the Safe Start Promising Approaches study was to evaluate promising and evidence-based programs in community settings and to determine how well those programs worked in reducing and preventing the impact of children's exposure to violence (CEV).

The outcome evaluations were designed to examine whether implementation of each Safe Start intervention was associated with individual-level changes in specific outcome domains. The evaluation used an intent-to-treat approach designed to inform policymakers about the types of outcomes that could be expected if a similar intervention were implemented in a similar setting. To prepare for the evaluation, the sites worked with the national evaluation team to complete a "Green Light" process that developed the specific plans for each intervention and ensured that the evaluation plan would align well with the intervention being offered and would be ethical and feasible to implement.

As a result of the Green Light process, a rigorous, controlled evaluation design was developed at each site. Seven sites conducted randomized controlled trials, two of which used a wait-list control design. The remaining three sites used quasi-experimental designs with comparison groups formed either within the Safe Start agency or in the community. One site conducted more than one study, as it delivered two different interventions to different groups of individuals. Overall, there were 11 separate evaluations of interventions within the 10 sites. Pre-intervention baseline data were collected on standardized, age-appropriate measures for all families enrolled in the studies. Longitudinal data on families were collected for within-site analysis of the impact of these programs on child outcomes at 6 and 12 months post-enrollment.

The 10 sites collected data with initial training and ongoing support from RAND. To standardize procedures across the 10 Safe Start sites, the RAND evaluation team developed detailed data collection procedures and forms and provided initial on-site data collection training for the supervisors and data collection staff employed by each site. The sites then implemented the data collection procedures and trained new data collection staff. Once data collection began, the RAND team provided oversight and fielded questions, conducted retraining sessions for sites with significant staff turnover, and held refresher training sessions as needed. The use of the SSTAR (Safe Start Assessment Reporting) system meant that data were stored on the RAND server and reviewed by RAND research staff monthly.

Data sources for the outcome evaluation were primary caregiver interviews, child interviews (for ages 8 and over), and family/child-level service utilization data provided by the Safe Start program staff. Measures for caregivers and children (ages 8 and up) were assembled into two batteries: a caregiver assessment battery and a child assessment battery. Caregivers completed a battery of 150 to 272 items, depending on the age of the child. Children ages 8 and older completed an assessment of 59 to 184 items, also depending on the age of the child. All child and caregiver assessments were interviewer-administered. Most assessments were administered in person, although some were occasionally completed by phone. A computer-assisted personal interviewing (CAPI) system allowed both caregiver and child assessments to be completed either online or offline using desktop or laptop computers. The assessment packets were offered in English and Spanish.

The Office of Juvenile Justice and Delinquency Prevention selected 10 program sites across the country to implement a range of interventions for helping children and families cope with the effects of children's exposure to violence. Program sites were located in the following 10 communities:

  • Aurora, Colorado
  • Denver, Colorado
  • Detroit, Michigan
  • El Paso, Texas
  • Honolulu, Hawaii
  • Kalamazoo, Michigan
  • Philadelphia, Pennsylvania
  • Queens, New York
  • Spokane, Washington
  • Worcester, Massachusetts

Each site recruited participants into experimental or quasi-experimental evaluation studies. Criteria for admission into the studies varied by site. Generally, convenience samples were drawn from those who met the eligibility criteria at each site.

Across all 10 sites, the dataset comprises 8,968 cases (a brief verification sketch follows this list), including:

  • 4,155 baseline records
  • 2,784 6-month follow-up records
  • 2,029 12-month follow-up records
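
A minimal verification sketch follows, assuming the restricted-use data have already been obtained and are delivered as a single pooled file readable with pandas (via the pyreadstat package). The file name "safe_start_outcomes.sav" and the wave indicator "WAVE" are hypothetical placeholders rather than the study's documented names; the actual file and variable names should be taken from the ICPSR 36610 codebook.

    # Minimal sketch, not the study's documented workflow.
    # "safe_start_outcomes.sav" and "WAVE" are hypothetical placeholders;
    # take the real file and variable names from the ICPSR 36610 codebook.
    import pandas as pd  # pd.read_spss requires the pyreadstat package

    df = pd.read_spss("safe_start_outcomes.sav")

    # Tally records by assessment wave; per the documentation, the counts
    # should be 4,155 baseline, 2,784 6-month, and 2,029 12-month records,
    # summing to 8,968 cases.
    wave_counts = df["WAVE"].value_counts()
    print(wave_counts)
    print("Total cases:", int(wave_counts.sum()))

If the files instead arrive as separate per-wave datasets, the same check reduces to reading each file and comparing its row count against the figures above.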

Longitudinal: Panel

All children exposed to violence and their caregivers at 10 program sites across the United States.

Individual, Household

Data sources for the outcome evaluation were primary caregiver interviews, child interviews (for ages 8 and over), and family/child-level service utilization data provided by the Safe Start program staff.

The study contains a total of 1,150 variables. Measures for the national evaluation were chosen to document child and family outcomes in several domains: demographics, background and contextual factors, violence exposure, child behavior/delinquency problems, post-traumatic stress symptoms, depressive symptoms, child social-emotional competence, school behavior/attitudes, family functioning, and caregiver mental health. The dataset also includes administrative and derived variables.
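
Because the case counts above indicate one record per assessment wave, a related structural check is follow-up retention per family, sketched below under the same assumptions as the earlier example. The identifier and wave variables ("FAMILY_ID", "WAVE") and the file name are hypothetical placeholders; the actual names are documented in the codebook.

    # Illustrative retention check; all names below are hypothetical
    # placeholders rather than the study's documented variable names.
    import pandas as pd  # reading .sav files requires the pyreadstat package

    df = pd.read_spss("safe_start_outcomes.sav")

    # Count how many distinct assessment waves each family contributes.
    waves_per_family = df.groupby("FAMILY_ID")["WAVE"].nunique()
    print("Families with any record:", waves_per_family.size)
    print("Families with at least one follow-up:", int((waves_per_family >= 2).sum()))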

Response rates vary by site.

To assess outcomes at each site, the research team used a set of measures that captured background and contextual factors, as well as a broad array of outcomes, including PTSD symptoms, depressive symptoms, behavior/delinquency problems, social-emotional competence, family functioning, school behavior/attitudes, and violence exposure.

Five measures were completed by caregivers to capture background and context.

  • Basic demographics of the caregiver, such as age, education, employment status, income, primary language, and race/ethnicity, were collected using the Caregiver Information instrument, which was adapted from materials used in the Longitudinal Studies of Child Abuse and Neglect (LONGSCAN study; LONGSCAN, 2010), a consortium of longitudinal research studies assessing the etiology and impact of child maltreatment.
  • Basic demographics of the child, such as age, gender, race/ethnicity, primary language, and primary caregiver, were collected using the Child Information instrument, which was adapted from materials used in the LONGSCAN study.
  • To assess problems faced in everyday life, the Everyday Stressors Index (ESI) from the LONGSCAN study was used.
  • To assess caregiver satisfaction with the services received by their child or family, the Client Satisfaction Questionnaire was used.
  • To assess barriers that prevent caregivers from receiving services, the Attitudinal Barriers to Care instrument was used.

The research team used two measures to assess child PTSD symptoms, one reported by caregivers for young children, and the second reported by children themselves.

  • Caregivers' perceptions of PTSD symptoms in younger children, ages 3 to 10, were collected using the Trauma Symptom Checklist for Young Children (TSCYC; Briere et al., 2001).
  • Children's own perceptions of PTSD symptoms were collected using the Child PTSD Symptom Scale (CPSS; Foa et al., 2001) among children ages 8 to 18.

Depressive symptoms in children ages 11 and older were assessed using one self-report instrument -- the Reynolds Adolescent Depression Scale-Second Edition (RADS-2; Reynolds, 1987).

To assess internalizing and externalizing behavior problems and delinquency, the research team used several measures and combined them using advanced psychometric techniques to develop a score that could be used across a broader age range.

  • To assess conduct problems for children between the ages of 1 and 3, the Brief Infant-Toddler Social and Emotional Assessment (BITSEA; Briggs-Gowan and Carter, 2002) was used.
  • To assess behavior/conduct problems for ages 3-18, the Behavior Problems Index (BPI; Peterson and Zill, 1986) was used, along with four additional items that had been used as part of the National Longitudinal Survey of Youth (NLSY).
  • To assess self-reported delinquency for children ages 11-18, the research team selected items from and modified three instruments: the National Youth Survey (NYS; Elliott, 2008), the Rochester Youth Development Study (RYDS; Thornberry, Krohn, Lizotte, Smith, and Porter, 1998), and the Los Angeles Family and Neighborhood Survey (LA FANS; Sastry and Pebley, 2003).
  • To assess substance use and gang involvement for ages 13-18, the research team selected items from sources including the National Longitudinal Study of Adolescent Health (Add Health), the Youth Risk Behavior Survey (YRBS), Monitoring the Future (MTF), the National Survey of Child and Adolescent Well-Being, and the Office of Juvenile Justice and Delinquency Prevention's (OJJDP) National Youth Gang Survey.

Measures of affective strengths, cooperation, assertion, self-control, and general social-emotional competence were drawn from two instruments, each of which has different versions for different age ranges and respondents.

  • The social-emotional scale for children ages 0-3 from the Ages and Stages Questionnaire (ASQ; Squires et al., 2002) was used.
  • Three scales from the Social Skills Improvement System (SSIS; Gresham and Elliott, 2008) were used to assess cooperation, assertion, and self-control. Caregivers of children ages 3-18 completed the parent-report version of the assessment.
  • For children ages 13-18, the self-report version of the Social Skills Improvement System (SSIS; Gresham and Elliott, 2008) was used to assess assertion, self-control, and cooperation.
  • One scale from the Behavioral and Emotional Rating Scale-Second Edition (BERS-2; Epstein and Sharma, 1998) was used to assess affective strengths from the perspective of both caregivers (for children ages 6-12) and children (for children ages 11-18).

To assess school behavior/attitudes, a scale from the Behavioral and Emotional Rating Scale-Second Edition (BERS-2; Epstein and Sharma, 1998) was completed by both caregivers (for children ages 6-12) and children (ages 11-18).

Measures of parenting, family conflict, and family involvement were used to assess family functioning.

  • The Alabama Parenting Questionnaire (APQ; Frick, 1991) was used to assess parenting practices and was administered to both children and caregivers.
  • The family conflict subscale from the Family Environment Scale (FES; Moos and Moos, 1974) was used to assess family conflict.
  • The family involvement scale from the Behavioral and Emotional Rating Scale-Second Edition (BERS-2; Epstein and Sharma, 1998) was used as a measure of family involvement and completed by caregivers (children ages 6-12) and children (ages 11-18).

Two measures were used to capture violence exposure in children and caregivers.

  • To assess children's exposure to violence, the Juvenile Victimization Questionnaire (JVQ; Hamby et al., 2004a, 2004b) was administered to caregivers (for children ages 0-11) and to children themselves (ages 10-18).
  • To assess caregiver victimization, the research team selected and modified items from the National Crime Victimization Survey (NCVS) and the Traumatic Stress Survey.

Two measures were used to capture the caregivers' mental health status.

  • To assess symptoms of depression in caregivers, the Patient Health Questionnaire-Eight Item (PHQ-8; Kroenke et al., 2009) was administered.
  • To assess symptoms of PTSD in caregivers, the Primary Care PTSD Screen (PC-PTSD; Prins et al., 2003) was administered.


2017-02-28

2018-02-15 The citation of this study may have changed due to the new version control system that has been implemented. The previous citation was:
  • Jaycox, Lisa H., and Dana Schultz. National Evaluation of the Safe Start Promising Approaches Initiative, 2011-2016. ICPSR36610-v1. Ann Arbor, MI: Inter-university Consortium for Political and Social Research [distributor], 2017-02-28. http://doi.org/10.3886/ICPSR36610.v1

2017-02-28 ICPSR data undergo a confidentiality review and are altered when necessary to limit the risk of disclosure. ICPSR also routinely creates ready-to-go data files along with setups in the major statistical software formats as well as standard codebooks to accompany the data. In addition to these procedures, ICPSR performed the following processing steps for this data collection:

  • Checked for undocumented or out-of-range codes.

No weights are provided.


Notes


  • The public-use data files in this collection are available for access by the general public. Access does not require affiliation with an ICPSR member institution.

  • One or more files in this data collection have special restrictions. Restricted data files are not available for direct download from the website; click on the Restricted Data button to learn more.