Improvement of Elementary Fractions Instruction: Randomized Controlled Trial Using Lesson Study with a Fractions Resource Kit, United States, 2016-2018 (ICPSR 38205)

Version Date: Oct 6, 2022

Principal Investigator(s):
Catherine Lewis, Mills College; Robert C. Schoen, Florida State University

https://doi.org/10.3886/ICPSR38205.v1

Version V1


This study involved random assignment of 80 school-based teams of educators to one of four conditions: (1) lesson study supported by a fractions resource kit; (2) lesson study without the fractions resource kit; (3) fractions resource kit without lesson study; or (4) practice-as-usual (no fractions resource kit, asked to refrain from lesson study). Teams were mailed the materials for their condition and used them locally. Each educator team included at least one grade 3 or 4 teacher who agreed to video record three fractions lessons, and their students comprise the student sample. Outcome measures focus on instruction, student fractions knowledge, teachers' fractions knowledge, beliefs about teaching-learning, and professional learning perceptions.

Lewis, Catherine, and Schoen, Robert C. Improvement of Elementary Fractions Instruction: Randomized Controlled Trial Using Lesson Study with a Fractions Resource Kit, United States, 2016-2018. Inter-university Consortium for Political and Social Research [distributor], 2022-10-06. https://doi.org/10.3886/ICPSR38205.v1

United States Department of Education. Institute of Education Sciences (R305A150043)

Geographic unit: state

Inter-university Consortium for Political and Social Research

2016-01-01 -- 2018-12-31
2016-08-19 -- 2017-01-17, 2017-02-24 -- 2017-06-20, 2017-11-15 -- 2018-02-07, 2017-12-04 -- 2018-02-05
  1. Participation logs were used to track team members' professional learning about fractions. The logs asked participants to indicate the date, hours, type of activity, number of participants, and materials used. Teachers used the following list to indicate the type of activity: study of materials, lesson planning, lesson observation, and lesson reflection. To indicate the materials used, participants selected from: adopted textbook or curriculum materials, project-supplied materials (asked to specify), other materials (asked to specify).

  2. Teacher knowledge and beliefs were measured through web-based questionnaires. Student achievement was measured by group-administered, paper-and-pencil mathematics tests.

  3. Full information about data collection can be found in the related citations.


This study's outcome measures focus on mathematics instruction, student fractions knowledge, teacher fractions knowledge, beliefs about teaching-learning, and professional learning perceptions.

Randomization occurred at the school level. Student achievement data and teacher knowledge and beliefs were gathered in preintervention and postintervention waves of data collection. Classroom instruction was measured for the Research Lesson Teacher (RLT) at each school site. Each school was represented by one classroom.

Cross-sectional, Longitudinal: Trend / Repeated Cross-section

Eligible teams consisted of at least three and not more than eight educators working in a single U.S. school. Each team was required to include at least one grade 3 or 4 teacher who was willing to video record fractions instruction (3 lessons). Participation was voluntary; educators were not required by their school or school district to participate. No more than one team per school was eligible.

Organization, Individual

A total of 88 sites were recruited to participate in the study. Of these schools, 80 met the eligibility criteria for inclusion. Of the eligible sites, 68 responded to requests for implementation data and 12 did not.

A total of 408 teachers were recruited from within these 88 sites. Once eligibility criteria were evaluated, there were 322 eligible teachers within the 80 remaining sites. Of these, 258 teachers responded to the teacher measures and 64 did not.

One classroom from each site was selected for student data collection. A total of 1,831 students were identified within participating sites. Of these, 1,385 students provided parental consent. Of the consenting students, a total of 1,199 contributed pretest and posttest data, leaving a non-response count of 632 (446 declined consent and 186 did not complete assessments).
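The student-sample counts reported above are internally consistent; a quick arithmetic check, using only the figures stated in the text:

```python
# Student sample flow, values taken directly from the study description.
identified = 1831   # students identified within participating sites
consented = 1385    # students who provided parental consent
completed = 1199    # students with both pretest and posttest data

declined_consent = identified - consented       # students without consent
consented_incomplete = consented - completed    # consented, assessments incomplete
non_response = declined_consent + consented_incomplete

print(declined_consent, consented_incomplete, non_response)  # → 446 186 632
```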

A brief listing and description of some of the scales used to quantify teacher knowledge and beliefs, mathematics instruction, and student achievement is provided here. More extensive description of some of the scales can be found in the Related Publications.

Teacher knowledge of subject matter. The Knowledge for Teaching Elementary Fractions (K-TEF) tests are computer-based tests that measure mathematical knowledge for teaching (MKT) at the elementary level, specifically in the domain of fractions. Based on the results of parallel analysis, both the K-TEF pretest and posttest appear to measure a single dominant factor. With this sample, coefficient alpha was .76 for the K-TEF pretest and .77 for the posttest (Schoen, Yang, Liu, & Paek, 2018; Schoen, Yang, & Paek, 2018; see Related Publications).
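Coefficient alpha, the reliability statistic reported for the K-TEF tests, can be computed directly from a respondents-by-items score matrix. A minimal sketch follows; the function name and the item scores are illustrative, not from the study data:

```python
from statistics import pvariance

def cronbach_alpha(scores):
    """Coefficient (Cronbach's) alpha for a respondents x items score matrix.

    scores: list of per-respondent lists, one entry per test item.
    """
    k = len(scores[0])                       # number of items
    items = list(zip(*scores))               # transpose to per-item columns
    item_var = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical dichotomous item scores for five respondents on four items.
data = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [0, 1, 1, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
]
print(round(cronbach_alpha(data), 3))  # → 0.696
```

Alpha rises as items covary more strongly relative to their individual variances, which is why a single dominant factor (as the parallel analysis here suggests) tends to go hand in hand with acceptable alpha values.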

Teacher beliefs. Teacher beliefs were measured by their responses to a series of statements that were drawn or adapted from various published questionnaires. Teachers responded to each statement by selecting a category on a 6-point, Likert-type scale that ranged from "strongly agree" to "strongly disagree." Scores for the expectations, self-efficacy, support, and growth-mindset scales were generated with models based on item-response theory. Parallel analysis suggested that there was one dominant factor underlying teachers' responses to the items in each of those four scales. A nonlinear SEM reliability coefficient (Green and Yang, 2009) was used to calculate reliability for each of those scales based on 317 teachers' responses during the pretest wave of data collection. The sources of the items on the questionnaires and the factors they measure are described in the following paragraphs. The full set of items and additional psychometric information are provided in Schoen et al. (2020; see Related Publications).

Expectations for student achievement. Six items on a questionnaire designed to measure teachers' expectations for their students' achievement levels were drawn from those published by McLaughlin and Talbert (2001) and CRC (1994). The reliability coefficient was .72.

Self-efficacy for teaching fractions. Four items, designed to measure teachers' self-efficacy about their ability to teach mathematics/fractions, were developed by our research team or adapted from CRC (1994). Sample items include: "I am able to figure out what students know about fractions," and "I have some good strategies for making students' mathematical thinking visible." The reliability coefficient was .72.

Support for understanding and teaching the curriculum standards. Two items, adapted from questions published by Horizon Research, Inc. (2000), were used to measure teachers' perceptions about the level of support they received over the past year with respect to understanding and teaching their state's curriculum standards.

Growth mindset. The developers of the intervention components in the present study hypothesized that the intervention would positively impact teachers' growth mindset with respect to their students' abilities to learn mathematics. Four items were adapted from those used by Rattan, Good, and Dweck (2012) to focus on teachers' perceptions of the malleability of their students' abilities in mathematics. The reliability coefficient was .75.

Perceived collegial learning effectiveness. Four items, designed to measure teachers' perception of the effectiveness of learning with their colleagues, were adapted from CRC (1994) and Horizon Research Inc. (2000). The reliability coefficient was .71.




Weighting was not needed for the randomized controlled trial, because all participating school sites had equal probability of assignment to each of the four conditions.

