Capti Assess is one of the most advanced research-based diagnostic tools currently on the market. Capti Assess, powered by ETS ReadBasix (known as RISE or SARA in research circles), is based on over two decades of research by a team of distinguished reading scientists, assessment researchers, and reading intervention practitioners at ETS and SERP.
The third and most recent version of the technical report for the Capti Assess battery. The results included in this report feature a calibrated item pool based on a national sample of students, an extension of the vertical scale to span Grades 3–12, psychometric analyses of the data for each subtest, an item response theory scaling study for each of the subtests across the entire grade span, an evaluation of multidimensionality, an evaluation of differential item functioning for gender and race/ethnicity, and an expanded review of validity evidence.
The second edition of the technical report for the Capti Assess battery. The results included in this report feature a vertical extension of Capti Assess to span Grades 5–10, psychometric analyses of parallel forms of each subtest, results of item response theory (IRT) scaling studies for each of the subtests across the entire grade span, and an evaluation of differential item functioning (DIF) for gender and race/ethnicity.
The first technical report on the Capti Assess foundational skills battery, describing the history and rationale for the assessment, the constructs measured, and a pilot study of over 4,000 students in Grades 6–8.
Presents evidence that shortened versions of three ReadBasix subtests (vocabulary, morphology, and sentence processing) all strongly predicted high school students’ academic knowledge (r’s between .43 and .57), as well as reading comprehension on both a traditional single-text comprehension test (r’s = .56–.57) and a modern scenario-based multiple-text comprehension test (r’s = .50–.54). The strength of the relation between ReadBasix and either comprehension test was comparable to the relation between the two comprehension tests themselves (r = .57). These results demonstrate that the ReadBasix subtests are valid indicators of students’ academic achievement, single-text comprehension, and scenario-based multiple-text comprehension.
Presents evidence supporting the importance of timing data for the word recognition and decoding subtest. Poor decoders spent more time recognizing real words and pseudo-homophones, but less time on non-words. Study 2 indicated that time spent decoding novel words predicts decoding development. Poor decoders may thus be trapped in a vicious cycle: weak decoding skill combined with less time spent attempting to decode novel words interferes with further decoding development.
Presents evidence supporting the decoding threshold hypothesis. Students who scored below a threshold on the decoding subtest were unlikely to comprehend what they read. In Study 2, students who scored below the decoding threshold were also unlikely to grow in their reading comprehension over time. Inadequate decoding skill may thus limit reading comprehension for middle and high school students.
Describes an early conception of Capti Assess and how it may fit into an RTI framework. Each of the six Capti Assess subtests predicted unique variance in students’ prior state ELA test scores. In other words, Capti Assess can help identify weaknesses in each of the six foundational skills. The battery was also found to be more predictive for struggling readers.
This paper provided evidence for the concurrent validity of Capti Assess. The authors found that the Capti Assess comprehension subtest correlated with external measures of reading comprehension: a standard reading comprehension test, the Gates–MacGinitie reading test, and a scenario-based assessment of reading comprehension. The moderate-to-high correlation of Capti Assess with the scenario-based assessment is notable, as that assessment is designed to cover higher-level comprehension constructs such as multiple-text comprehension, synthesis, critical thinking, perspective taking, and digital literacy. The fact that these higher-level constructs are related to foundational comprehension as measured by Capti Assess underscores its significance as a key component of reading ability.
This study used eye-tracking to examine the quality of a sample of Capti Assess passages, investigating the relation between passage content and the comprehension questions among proficient college readers. Results showed that more time spent reading relevant parts of the passages facilitated answering the comprehension questions, providing evidence for the content validity of the test.
This study examined the structural validity of the Capti Assess reading comprehension subtest by investigating the inter-relations among three aspects of reading comprehension: reading fluency as represented by a maze task, reading comprehension as represented by summary writing, and reading comprehension as represented by answering multiple-choice questions. Results showed convergence among the three tasks: higher fluency was associated with better question answering, and summary writing improved efficiency in answering the comprehension questions.
This paper provides evidence for the value of measuring foundational reading skills when assessing higher-level comprehension. Each of the six Capti Assess subtests predicted unique variance on a scenario-based measure of reading comprehension. There was also evidence suggesting that low levels of foundational skills may limit students’ comprehension. The authors argue that including a measure of component skills alongside a measure of higher-level comprehension is beneficial for interpreting student performance and providing useful information for instruction.
This study provides an independent evaluation of the STARI reading intervention, using Capti Assess subtests as outcome measures. The Strategic Adolescent Reading Intervention (STARI) targets students’ word-reading skills, reading fluency, vocabulary development, and comprehension. In a sample of more than 400 sixth- to eighth-grade students, the authors found that students who participated in the STARI intervention scored higher than control students on the ETS diagnostic subtests of word recognition, morphology, and efficiency of basic reading comprehension. In other words, the skills measured by Capti Assess are malleable and can be improved by interventions such as STARI.
This study provides an independent evaluation of the READi reading intervention for improving students’ comprehension. Capti Assess was used as a pretest measure, and it was related to a measure of deep comprehension (GISA). This suggests that the skills tested by Capti Assess are not independent of the type of deep comprehension required by more modern reading assessments.
This study provided evidence for the concurrent validity of Capti Assess. Foorman et al. (2015) found that component subtests of Capti Assess were predictive of reading comprehension. In particular, the vocabulary and morphology subtests correlated with a state English language arts test and with the Gates–MacGinitie reading test. The authors also found that the vocabulary and morphology subtests demonstrated moderate correlations with the proximal constructs of word identification, vocabulary, and oral language.