Capti Assess is powered by ETS ReadBasix — one of the most researched diagnostic tools on the market. ReadBasix is based on over two decades of research by a team of distinguished reading scientists, assessment researchers, and reading intervention practitioners at ETS and the SERP Institute. ReadBasix is the commercial name for the assessment previously known as RISE (Reading Inventory and Student Evaluation) and SARA (Study Aid and Reading Assessment).
The R&D of ReadBasix was supported by the Institute of Education Sciences (IES) at the U.S. Department of Education, through Grant R305F100005 to the Educational Testing Service (ETS) as part of the Reading for Understanding Research (RFU) Initiative, as well as IES Grants R305G040065 and R305A150176. The R&D of Capti Assess was supported by Grants 91990021C0029, 91990019C0024, and 91990022C0042.
Sabatini, J., Bruce, K., & Steinberg, J. (2013). SARA Reading Components Tests, RISE Form: Test Design and Technical Adequacy (ETS RR-13-08). Princeton, NJ: Educational Testing Service.
Report 1: This is the first technical report on the ReadBasix assessment (SARA / RISE). ReadBasix was originally designed for struggling readers in middle school because teachers within a large, urban district wanted more information about why their students were struggling to read. The battery of assessments includes a subtest for each foundational skill: word recognition and decoding, vocabulary, morphology, sentence processing, and reading efficiency, as well as for basic reading comprehension. This report details the research base that supports the design and development of the reading skills components battery, and describes a pilot study with students in grades 6-8.
Sabatini, J., Bruce, K., Steinberg, J., & Weeks, J. (2015). SARA Reading Components Tests, RISE Forms: Technical Adequacy and Test Design, 2nd Edition (ETS RR-15-32). Princeton, NJ: Educational Testing Service.
Report 2: The second edition of the technical report on the ReadBasix (SARA / RISE) assessment battery expands on the first report by covering grades 5-10 (the original covered grades 6-8). Included in this report are analyses for each subtest (word recognition and decoding, vocabulary, morphology, sentence processing, reading efficiency, and basic reading comprehension), psychometric analysis of parallel forms of each subtest, results of item response theory scaling studies for each subtest across the entire grade span, and an evaluation of differential item functioning for gender and race/ethnicity.
Sabatini, J., Weeks, J., O’Reilly, T., Bruce, K., Steinberg, J., & Chao, S.-F. (2019). SARA Reading Components Tests, RISE Forms: Technical Adequacy and Test Design, 3rd Edition (ETS RR-19-36). Princeton, NJ: Educational Testing Service.
Report 3: This is the third and most recent edition of the technical report for the ReadBasix (SARA / RISE) assessment battery. It expands on the first and second reports by covering a national sample of students from grades 3-12 (the first report covered grades 6-8; the second, grades 5-10). The report gives a theoretical overview of the battery of assessments, which includes a subtest for each foundational skill — word recognition and decoding, vocabulary, morphology, sentence processing, and reading efficiency — as well as basic reading comprehension. It also presents psychometric analyses, an item response theory scaling study, an evaluation of multidimensionality, validity evidence, and an evaluation of differential item functioning for gender and race/ethnicity.
Wang, Z., O’Reilly, T., Sabatini, J., McCarthy, K., & McNamara, D. (2021). A tale of two tests: The role of topic and general academic knowledge in traditional versus contemporary scenario-based reading. Learning and Instruction, 73, 101462.
Presents evidence that shortened versions of three ReadBasix subtests (vocabulary, morphology, and sentence processing) all strongly predicted high school students’ academic knowledge (r’s between .43 and .57), as well as their reading comprehension on both a traditional single-text comprehension test (r’s .56-.57) and a modern scenario-based multiple-text comprehension test (r’s .50-.54). The strength of the relation between ReadBasix and either comprehension test was comparable to the relation between the two comprehension tests themselves (r = .57). These results demonstrate that the ReadBasix subtests are valid indicators of students’ academic achievement, single-text comprehension, and scenario-based multiple-text comprehension.
Wang, Z., Sabatini, J., & O'Reilly, T. (2019). When slower is faster: Time spent decoding novel words predicts better decoding and faster growth. Scientific Studies of Reading.
Presents evidence to support the importance of timing data for the word recognition and decoding subtest. Poor decoders spend more time recognizing real words and pseudo-homophones, but less time on non-words. Study 2 indicated that time spent decoding novel words predicts decoding development. Poor decoders may be trapped in a vicious cycle: poor decoding skill combined with less time spent attempting to decode novel words interferes with decoding development.
Wang, Z., Sabatini, J., O’Reilly, T., & Weeks, J. (2019). Decoding and reading comprehension: A test of the decoding threshold hypothesis. Journal of Educational Psychology, 111(3), 387-401.
Presents evidence to support the decoding threshold hypothesis. Students who scored below a threshold on the ReadBasix decoding subtest were not likely to comprehend what they read. In Study 2, students who scored below the decoding threshold were also not likely to grow in their reading comprehension over time. Inadequate decoding skill may thus limit reading comprehension for middle and high school students.
O’Reilly, T., Sabatini, J., Bruce, K., Pillarisetti, S., & McCormick, C. (2012). Middle school reading assessment: Measuring what matters under an RTI framework. Reading Psychology Special Issue: Response to Intervention, 33(1-2), 162-189.
Describes an early conception of ReadBasix and how it may fit into an RTI framework. Each of the six ReadBasix subtests predicted unique variance in students’ prior state ELA test scores. In other words, ReadBasix can help identify weaknesses in each of the six foundational skills. The battery was also found to be more predictive for students who were struggling readers.
Sabatini, J., O’Reilly, T., Weeks, J., & Wang, Z. (2019). Engineering a 21st Century reading comprehension assessment system utilizing scenario-based assessment techniques. International Journal of Testing. https://doi.org/10.1080/15305058.2018.1551224
This paper provides some evidence for the concurrent validity of ReadBasix. The authors found that the ReadBasix comprehension subtest correlated with external measures of reading comprehension: a standard measure, the Gates–MacGinitie Reading Test, and a scenario-based assessment of reading comprehension. The moderate to high correlation of ReadBasix with the scenario-based assessment is notable, as that assessment is designed to cover higher-level comprehension constructs such as multiple-text comprehension, synthesis, critical thinking, perspective taking, and digital literacy. The fact that these higher-level constructs are related to foundational comprehension as measured by ReadBasix underscores its significance as a key component of reading ability.
O’Reilly, T., Feng, G., Sabatini, J., Wang, Z., & Gorin, J. (2018). How do people read the passages during a reading comprehension test? The effect of reading purpose on text processing behavior. Educational Assessment.
This study used eye-tracking to examine the quality of a sample of ReadBasix passages, investigating the relation between passage content and the comprehension questions in proficient college readers. Results showed that more time spent reading the relevant parts of a passage facilitated answering the comprehension questions, providing evidence for the content validity of the test.
Wang, Z., Sabatini, J., O’Reilly, T., & Feng, G. (2017). How individual differences interact with task demands in text processing. Scientific Studies of Reading, 21(2), 165-178.
This study examined the structural validity of the ReadBasix reading comprehension subtest by investigating the interrelations among three aspects of reading comprehension: reading fluency, represented by a maze task; reading comprehension, represented by summary writing; and reading comprehension, represented by answering multiple-choice questions. Results showed convergence among the three tasks: higher fluency is associated with better question answering, and summary writing improves efficiency in answering comprehension questions.
Sabatini, J., O’Reilly, T., Halderman, L., & Bruce, K. (2014). Integrating scenario-based and component reading skill measures to understand the reading behavior of struggling readers. Learning Disabilities Research & Practice, 29(1), 36-43.
This paper provides evidence for measuring foundational reading skills when assessing higher-level comprehension. Each of the six subtests of ReadBasix predicted unique variance on a scenario-based measure of reading comprehension. There was also evidence to suggest that low levels of foundational skills may limit students’ comprehension. The authors argue that including a measure of component skills alongside a measure of higher-level comprehension is beneficial in interpreting student performance and providing useful information for instruction.
Goldman, S. R., Greenleaf, C., Yukhymenko-Lescroart, M., Brown, W., Ko, M. L. M., Emig, J. M., George, M. A., Wallace, P., Blaum, D., & Britt, M. A. (2019). Explanatory modeling in science through text-based investigation: Testing the efficacy of the Project READI intervention approach. American Educational Research Journal, 56, 1148-1216.
This study provides an independent evaluation of the Project READI reading intervention, which aims to improve students’ comprehension. ReadBasix was used as a pretest measure, and it was related to a measure of deep comprehension (GISA). This suggests that the skills tested by ReadBasix are not independent of the type of deep comprehension required by more modern reading assessments.
Kim, J. S., Hemphill, L., Troyer, M., Thomson, J. M., Jones, S. M., LaRusso, M. D., & Donovan, S. (2017). Engaging struggling adolescent readers to improve reading skills. Reading Research Quarterly, 52, 357-382.
This study provides an independent evaluation of the STARI reading intervention, using measures such as ReadBasix as outcomes. The Strategic Adolescent Reading Intervention (STARI) targets students’ word-reading skills, reading fluency, vocabulary development, and comprehension. In a sample of more than 400 sixth- to eighth-grade students, the authors found that students who participated in the STARI intervention scored higher than control students on the subtests of word recognition, morphology, and efficiency of basic reading comprehension. In other words, the skills measured in ReadBasix are malleable and can be improved by interventions such as STARI.
Foorman, B. R., Koon, S., Petscher, Y., Mitchell, A., & Truckenmiller, A. (2015). Examining general and specific factors in the dimensionality of oral language and reading in 4th-10th grades. Journal of Educational Psychology, 107, 884-899.
This study provided some evidence for the concurrent validity of ReadBasix. Foorman et al. (2015) found that component subtests of ReadBasix were predictive of reading comprehension. In particular, the vocabulary and morphology subtests correlated with a state English language arts test and with the Gates–MacGinitie Reading Test. The vocabulary and morphology subtests also demonstrated moderate correlations with the proximal constructs of word identification, vocabulary, and oral language.