As a former teacher, I remember trying to figure out what was causing my students to struggle with reading. I'd review their previous year's state ELA assessment results to get an overall sense of where each student fell in reading proficiency: highly proficient, proficient, approaching proficient, or below proficient. These four levels told me something about my students, but not enough to identify their strengths and areas of need, so I'd administer a diagnostic reading assessment to find out where to focus my instruction.
Now, after two years of potentially interrupted reading instruction due to the COVID-19 pandemic, it is even more important to identify where students are in their reading development and abilities. This is especially true for students in grades 4-12, who are expected to read to learn but may have gaps in their foundational reading skills due to interrupted instruction. A diagnostic assessment can fill that need, but how do you choose a good one?
When looking for a diagnostic assessment, there are certain characteristics to check. First, look for a diagnostic assessment that matches your purpose (i.e., the reading skills you want to assess). Then, look at the length of the assessment and examine its reliability and validity. To find these, you may have to go beyond marketing materials and look into technical documentation or reports, or ask the publisher's customer representatives for the details.
Foundational skills. Depending on your purpose, select a test that measures the specific reading skills you want to focus on. It is important to measure more than just reading comprehension because one or more of its component skills often impede a reader's ability to comprehend a text proficiently. That's why a solid diagnostic reading assessment should measure foundational reading skills, which are the stepping stones to proficient reading comprehension. These foundational skills include decoding, word recognition, vocabulary development, morphological (word parts) awareness, sentence processing, and reading efficiency. Gathering information on these skills will allow you to pinpoint areas of need and support students in becoming skilled readers.
Length. We all wish that assessments were quick and accurate. Unfortunately, the shorter the assessment, the less reliable and valid it tends to be. A good diagnostic assessment should include 20-30 items for each skill to provide real insight into the reader's abilities (the exact number may vary depending on the item type). It is even better if each skill is assessed with a separate test: you do not have to administer them all at once, and a single short test is easier to fit into your class schedule. Many assessments claim to be diagnostic but do not include enough items to provide instructionally diagnostic information. Some include only enough items to measure a single skill yet claim to measure multiple foundational skills. Beware: they may have low reliability or validity!
Reliability. If you assessed a student multiple times over a short period, you would expect the student's scores to be similar, right? Unfortunately, many assessments produce wildly different scores. Reliability refers to the precision of a score and the consistency of results from one administration to the next. While reading comprehension scores may sometimes fluctuate due to students' background knowledge, a good diagnostic assessment should produce a reliable measure, at least for the foundational skills. A reliable diagnostic assessment should have a Cronbach's alpha of at least 0.7 for each reading skill it claims to measure.
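For readers curious what sits behind that 0.7 threshold: Cronbach's alpha can be computed directly from students' item-level scores. Here is a minimal sketch in Python; the response data is made up purely for illustration, not drawn from any real assessment.

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a set of item-level scores.

    scores: one row per student; each row holds that student's
    score on each item (e.g., 0/1 for incorrect/correct).
    """
    k = len(scores[0])  # number of items
    # Sample variance (n - 1 denominator).
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    # Variance of each item across students.
    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    # Variance of students' total scores.
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical 0/1 responses: 5 students x 4 items.
responses = [
    [1, 1, 1, 0],
    [1, 0, 1, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
]
print(f"alpha = {cronbach_alpha(responses):.2f}")  # alpha = 0.79
```

In this toy example alpha lands around 0.79, so a test with this response pattern would clear the 0.7 bar; technical manuals typically report this statistic so you don't have to compute it yourself.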
Validity. Finally, any good assessment should measure what it claims to measure. High reliability is a prerequisite for validity, but it does not guarantee it: a test can measure something consistently without measuring the right thing. So there are a couple of things to examine when determining an assessment's validity, and you may need to look into the technical manuals or, better yet, peer-reviewed publications about the assessment to see what the researchers who created it say about its validity. First, look at the theoretical framework of the constructs to determine whether the research behind the assessment supports the intended measures and uses; marketing language can easily go well beyond what is supported by research. Then, examine research or technical reports to determine how the assessment correlates with other standardized measures. For instance, if a new assessment designed to assess comprehension is highly correlated with a well-known, standardized comprehension measure, such as the Gates–MacGinitie reading test, that correlation is evidence of the new assessment's validity. Conversely, a weak correlation with an established measure casts doubt on its validity.
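The correlation evidence described above is usually reported as a Pearson correlation coefficient between students' scores on the two measures. A quick sketch of how that number is produced; the score pairs below are hypothetical, invented only to show the calculation:

```python
def pearson_r(x, y):
    """Pearson correlation between two paired score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Covariance numerator and the two standard-deviation terms.
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores for five students on a new comprehension
# test and on an established standardized measure.
new_test = [12, 18, 25, 30, 22]
established = [15, 20, 28, 33, 21]
print(f"r = {pearson_r(new_test, established):.2f}")
```

A coefficient near 1.0 indicates the two tests rank students similarly (evidence of convergent validity), while a coefficient near 0 means the new test is measuring something quite different from the established one.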
So, what does a good diagnostic reading assessment look like? It really depends on your purpose. However, as a general rule, a good diagnostic reading assessment covers foundational skills, includes 20-30 items per skill, and provides a reliable and valid measure of each skill. You may have to do some research to justify your choice of assessment, but it will all be worth it when you get results you can trust.
To learn more about the qualities of a good diagnostic assessment, watch the webinar below.
“As a teacher, what I am looking for in a diagnostic assessment is… to look at all of those different foundational skills to determine if there are weaknesses..."
— Sean Morrisey, Teacher, ELA Standards Leader, Frontier Central SD, NY