What Educators Want from a Reading Assessment


We surveyed educators about what they really thought of their reading assessments and what they wanted from them. The responses from the 100+ educators who participated surprised us, and the most interesting findings concerned what educators actually wanted from their assessments.

Demographics. The respondents were K-12 administrators representing 36 states; 11% of the respondents were from urban districts, 24% from suburban, and 65% from rural. In terms of district size, 43% served fewer than 600 students, 25% served 600–2,500, 9% served 2,500–5,000, 8% served 5,000–10,000, 11% served 10,000–25,000, and 3% served more than 25,000.

[Figure: two pie charts showing respondents' district sizes and locales]


Survey Findings

Use of Technology. Three quarters of the districts (75%) were using Google Classroom as their learning management system (LMS), followed by Schoology (14%), Canvas (11%), and SeeSaw (7%); the percentages add up to more than 100% because some districts used more than one LMS. A few districts were also using other LMSs, such as Microsoft Classroom, Otus, Edgenuity, and Edmentum. Overwhelmingly, educators wanted to be able to administer their assessments through the LMS they already used.

Purchasing Assessments. Districts used a variety of funding sources to purchase their reading assessments, most often Title I funding, local funding, and grants. However, CARES and ESSER funds are now also available for purchasing high-quality assessments, which can help address learning loss by accurately measuring students' academic progress and helping educators meet students' academic needs, including through differentiated instruction.

Use of Assessments. We learned that 60% of the respondents were using two reading assessments and 20% were using three or more, suggesting there is no one-size-fits-all assessment. Interestingly, despite school closures, only 5% of respondents used no reading assessments at all in the 2020–21 school year. It was no surprise that the majority of districts gravitated toward F&P, iReady, NWEA, and STAR. STAR was the clear leader, used by 40% of the respondents, although 5% were planning to replace it with iReady, DIBELS, NWEA, F&P, or Capti Assess (similar to STAR, but more thorough). F&P, iReady, and NWEA were each used in roughly 25% of districts. The second tier comprised DIBELS and Lexia Rapid, each at under 10% of districts, followed by a long tail of emerging reading assessments, including Capti Assess, DRA, mClass, AimsWeb, and others.

Satisfaction with Assessments. What surprised us was how dissatisfied the respondents were with the popular assessments such as F&P, iReady, NWEA, and especially STAR. Only 10% of the districts were happy with their assessments, and no single assessment emerged as a clear winner among them. More than 50% of the respondents were only somewhat satisfied, and a third (33%) were not happy with their assessments. What is more, many respondents felt that their reading assessments (across the board) took too much time to administer yet were not thorough enough. A number of administrators also complained about inconsistent results, which made educators question assessment accuracy and validity. That is why many administrators were on the lookout for a new research-based assessment that would address their concerns.

The Need for Speed vs. Thoroughness. When asked for the one thing they would change about their assessment, 16% of respondents wanted to make it shorter, either to reclaim instructional time or to assess students more frequently. Interestingly, 9% of the respondents wanted to assess more reading skills, which conflicts with the desire for shorter assessments: assessment designers know that the more questions used to assess a specific skill, the more accurate and consistent the results will be. What we found particularly interesting is this apparent contradiction between the need for a faster assessment and the need for a more thorough one, in both breadth (the variety of skills assessed) and depth (the number of questions and the amount of time devoted to each skill).
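To make this tension concrete, classical test theory offers the Spearman-Brown prophecy formula, which predicts how a test's reliability changes when its length changes. The minimal sketch below is a generic illustration of that standard formula, not a model of any specific assessment named above: doubling the items on a subtest with reliability 0.70 raises predicted reliability to about 0.82, while halving them drops it to about 0.54, which is one reason shorter tests tend to produce less consistent results.

```python
def spearman_brown(reliability: float, length_factor: float) -> float:
    """Predicted reliability when a test's length is multiplied by
    length_factor (classical Spearman-Brown prophecy formula)."""
    return (length_factor * reliability) / (1 + (length_factor - 1) * reliability)

# A subtest with an initial reliability of 0.70:
print(spearman_brown(0.70, 2.0))  # doubled length -> ~0.82
print(spearman_brown(0.70, 0.5))  # halved length  -> ~0.54
```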

Usability Improvement Requests. Additional requests included easier-to-understand reports (12%), help with preparing interventions (11%), and improved ease of administration (9%). Requested report improvements included more detail on the performance of at-risk students, opportunities for students to explain their answers, reports that students and parents can understand more easily, and more focus on areas of struggle. Requests for help with interventions were closely tied to reporting, including calls for actionable reports and automatically generated interventions.

So, what did educators want from a reading assessment? Opinions differed, but most educators were dissatisfied with the "popular" assessments and wanted a faster yet more accurate assessment that produced consistent results and worked with their existing learning management systems.

This and many other topics related to the Science of Reading are covered in the Professional Development training offered for the ETS ReadBasix assessment.