transcript
Speaker 1:
[00:00] Hi, listeners, it's Susan. I'll cut right to the chase. I'm launching a Substack. I'll be sharing more insights and takeaways from my conversations with leading literacy researchers, and also some behind-the-scenes looks from my own travels through the world of literacy. My first post is up now, featuring some thoughts I shared during my session at the Plain Talk about Literacy and Learning Conference, as well as a wrap-up of our recent comprehension season. Check it out today and subscribe at scienceofreading.substack.com.
Speaker 2:
[00:38] Really, the driving force behind writing this book was to communicate to teachers how assessment can be useful to them.
Speaker 1:
[00:50] This is Susan Lambert, and welcome to Science of Reading: The Podcast from Amplify. One of my favorite things on this podcast is interviewing scholars about their latest books. Another one of my favorite things is talking about assessment. And today, we get to do both. Dr. Stephanie Stollar and Kate Winn recently authored a wonderful book titled Reading Assessment Done Right: Tools and Techniques for Data-Driven Instruction. And today, we'll talk all about it. We'll discuss the real purpose of assessment. We'll talk about the different types of assessment and when to use each. And we'll address common misconceptions about assessment. Dr. Stephanie Stollar is an educational consultant and the creator of Reading Science Academy. Kate Winn is an elementary teacher with 25 years of classroom experience. She also hosts the podcast Reading Road Trip from IDA Ontario. This is a great conversation that covers both research and best practices around assessment. And I hope you learn as much as I did. Well, I am so excited for today's episode. I have two amazing women joining me today, Stephanie Stollar and Kate Winn, who co-authored a book together. So happy to have you both on. And I think before we jump into the content, I would love it if the both of you could introduce yourselves and tell our listeners a little bit about who you are. Stephanie, maybe we'll start with you.
Speaker 2:
[02:19] Well, thanks for having us, Susan. I'm Stephanie Stollar. I live in Cincinnati, Ohio, with my husband and our dog, Lincoln. He's a Vizsla. There's another new tidbit about me, Susan, for you. Great. I am currently working as a consultant on my own, supporting schools to improve reading outcomes. I have worked in the past for an assessment company. I have worked as a professor in a couple of different programs, the School Psych program and the Reading Science program at Mount Saint Joseph University, which is where you and I met, Susan. I have worked with schools and districts to use assessment to change outcomes my entire career, starting with my first job, which was as a school psychologist. Assessment has always been part of my professional life.
Speaker 1:
[03:09] That's amazing. Our listeners probably know you from your organization. Do you want to refresh our memory about what that is?
Speaker 2:
[03:16] Yeah, so my company is Stephanie Stollar Consulting, and I offer a subscription membership called Reading Science Academy. And we have a lot of fun there learning about the science of reading and supporting each other to implement it to get better results for students. That's what it's all about.
Speaker 1:
[03:34] That's amazing. And I know many of our listeners subscribe to your newsletter. So I see your name in my inbox all the time. So thank you for the work that you're doing.
Speaker 2:
[03:42] Oh, thank you.
Speaker 1:
[03:43] Kate, over to you, a fellow podcast sister here, huh?
Speaker 3:
[03:47] That's right. Yes. I host and co-produce Reading Road Trip from IDA Ontario. And I am in my 26th year of teaching here in Ontario. I started teaching French as a second language. I worked with gifted students for three years. And then I started teaching English homeroom classes. And I've worked my way down in the grades, with a lot of combined classes over the years: grade 4/5, then grade 3/4, then grade 2/3. And now it's my 10th year in kindergarten. It's during this time that I've done the vast majority of my learning about reading science and assessment. I'm excited about our book and so excited to be here talking to you today. Thank you.
Speaker 1:
[04:27] Yeah, thanks so much for joining. And listeners, before we actually hit the record button, we heard that it was big news for Kate in her kindergarten classroom today because the kids were able to go outside for recess. And that's always a big day.
Speaker 3:
[04:40] Yay. Absolutely.
Speaker 1:
[04:42] So the book we're talking about is called Reading Assessment Done Right. I can't tell you how excited I am about this book. We'll dig into more of it. But listeners, if you haven't seen it or haven't gotten it, we have a link in the show notes for you so that you can put it in your cart. It is really amazing. And I'm wondering if you could both share a little bit about why you thought it was important for you to write this book at this time. And maybe how did this co-author situation come together? Stephanie, let's start with you.
Speaker 2:
[05:16] Well, I have been writing this book in my mind for decades. I've wanted this book myself as an educator. I've wanted it as a university trainer. I've been looking for a place where all of this information is housed in one spot. So that's really the driving force behind writing this book: to communicate to teachers how assessment can be useful to them, particularly around prevention and early intervention. We know so much about how reading develops and how we can equip students with the essential skills. But we don't always give teachers the tools that will tell them where a student is on that journey to becoming a reader, that really targeted assessment information. At the same time, schools are collecting too many assessments. We're drowning in data, but it's not targeted and it's not useful to teachers. I really wanted to cut through some of the noise and get down to what's actually helpful in the classroom. I've been thinking about the content of this book for a long time, but hadn't acted on it. When Kate mentioned that she was interested in writing a book about assessment, it was an instant yes from me. To be able to partner with somebody who uses the most helpful tools in her classroom, who has made the transformation from using other assessments that maybe aren't quite as direct and helpful, and who is now seeing results with her students (I should let her tell the story) because of the change in the assessments she's using, that was just like a dream to me. So it was really fun to partner with Kate on this project.
Speaker 1:
[07:09] Kate, why now?
Speaker 3:
[07:11] I think for me, it started at the personal, professional level in my own classroom, as I made the shift and saw the impact on my instruction and, really, what we're looking for: the impact on student achievement. I just got so excited about it, I wanted to spread the word. I started small in my own networks, and then I actually ended up facilitating training on a universal screener for our K-6 educators, our whole board, and so that was exciting, but I wanted to spread the word even further. Part of it too was wanting to clarify misconceptions, because I wanted to get all that good information out there, but when you're as active on social media as I am, and you see all of the misunderstandings and things like that out there, you really want one spot where people can find the right information. I thought that I might have a lot to share, but I also knew I couldn't do it alone, and the only person I had in mind (so I'm really thrilled she said yes) was Stephanie, with her expertise and that systems background, and just to bring both of what we have to offer together in one book. And really, we keep hearing from other people, which is wonderful, that there's nothing out there like it. So we're just so glad to be able to get this into the hands of educators.
Speaker 1:
[08:24] Yeah, that's amazing. And I know I told you when we had our pre-call how much I appreciated this book because of its accessibility, right? Easy to read, easy to understand, really straightforward to implement. You didn't focus too much on the theory or, you know, talk too researchy, but you showed that bridge from research to practice. And so it really is an amazing partnership. And again, it's such a great book. We're going to hear your story in a minute, Kate. But before we get there, this seems kind of silly, but I think what I want to do is sort of set a baseline. Like, can you help us understand what assessment is, and maybe what the purpose of assessment is? And that maybe seems foundational or maybe too big. I'm not sure, somewhere in the middle, but I think it would be helpful to just get that out there and sort of level set before we move on.
Speaker 2:
[09:17] Well, for me, assessment is a tool for conversation. It's investigation; it's uncovering what is known. And there are multiple purposes. All assessments are constructed to answer questions, and it's the questions that drive the selection of the tools. So in the book, we framed the assessment purposes within a multi-tiered system of supports, because that's the tried-and-true way to improve outcomes for all students at scale. When you're implementing MTSS, you have to answer certain questions about your students, and that's what drives the purpose. When you want to know which students and which systems within your school are at risk or on track, it's a universal screening tool that answers that question. When you want to know exactly what you should teach tomorrow to any individual student, that might require a diagnostic assessment. So that's a completely different assessment purpose, and it will require a different tool. You can't use screening tools for the purpose of diagnostic assessment. You also want to know: is instruction working in real time? Progress monitoring assessment answers that. Should I keep teaching what I'm teaching? Should I make a change to my teaching? That's the progress monitoring purpose. And then you need a summative, looking-back assessment that answers the question: did my instruction work? Looking back at the middle or end of the year, was I effective with my instruction? That's outcome assessment.
Speaker 1:
[10:56] Yeah.
Speaker 2:
[10:56] So four different sets of questions, four different purposes of assessment.
Speaker 1:
[11:01] And you actually structured the table of contents that way. So you actually, the chapters sort of answer those questions and talk about the purpose of that. I love that so much. Kate, anything to add?
Speaker 3:
[11:12] No, I don't think so other than just the fact that by answering those questions, it tells you what to do. So it's not about, you know, I'm going to check the box and tell the district, yeah, I did my assessment. We can actually do something with the information when you're using good assessments and using them right.
Speaker 1:
[11:28] Yeah. But you have a great story; we heard a preview of it already. You used different assessments in the past than the assessments you're using now. Can you tell us a little bit of that story?
Speaker 3:
[11:41] Absolutely. So when I first started in grade five and then four, three, two, and worked my way down, we were using different running record kind of kits, and basically the child would read to you, which is a great thing. You do want to hear students read, absolutely. You would calculate their accuracy percentage, which is also a good thing; accuracy comes first. But really there would be these arbitrary levels. We would have these magical charts posted saying, beginning, middle, and end of the grade, here are the levels the child is supposed to be at. I always assumed that that came from somewhere scientific, that grade 3s needed to be at level 27 by the end of the year for a reason. That was my goal, to get them there. I always loved pulling students aside and having them read to me. It took forever to use the assessments in those kits. Sometimes the kids would go up levels, sometimes they wouldn't. I didn't know then that these levels were arbitrary. But then at the end of the day, I thought, okay, well, they're only at 15 and they're supposed to be at 27 by the end of the year. But what do I do about that? What do I do next? How do you go from 15 to 16? What's the next step to get them to 27? That information was never there. And then, of course, I've learned now in my experience as well that we weren't measuring automaticity. We need that rate information when students read to us orally to find out important things about the students and their reading skills. When I came to kindergarten using those kits, I didn't even realize at the beginning that the first couple of levels didn't actually even assess reading, because they were those predictable, leveled kind of books where the kids pick up on the pattern and they look at the picture and they just tell you which zoo animal it happens to be on that page, and you go from there.
And so kindergarten is when it really hit me because I thought, okay, I don't know how to teach reading from scratch. Nobody has ever taught me this and these kids are not making any gains and I don't understand what's going on here. So that's really what kind of sent me down the science of reading rabbit hole in the first place was trying to figure out, okay, I don't even know how to figure out what they know and then once I do know what they know, I don't know what to do about that. So that's why I really thought that teachers do need this information because once I got it, it was just life-changing.
Speaker 1:
[13:48] Amazing. You mentioned that you see great gains and you've had a real impact because of this. Can you tell our listeners a little bit about why you believe that you've been able to have an impact with those kindergarten students because you've shifted assessments?
Speaker 3:
[14:03] I think because, first of all, we talk in the book about essential skills. So we do know exactly what students need to know to be able to become good readers. And I know those things, like phonemic awareness and the alphabetic principle and phonics. So I know where they need to start and where to go. But when I use my universal screener, it tells me which students are meeting benchmark and which ones aren't, and then I know exactly what to work on with those students. I've had so many successful experiences of having a student below benchmark, targeting that, and being able to then watch the data climb and get them to the benchmark and know, okay, it's not just a level 27 that means nothing; these benchmarks actually mean something. So now I can breathe a little easier that this child is not at risk of future reading difficulty, and it's exactly because of the work that I've just done. And so that's where that really rewarding and affirming piece comes in.
Speaker 1:
[14:57] Yeah, that's great. And Stephanie, you mentioned a little bit about the differences of purpose between a universal screener and a diagnostic. Can you explain a little bit about that process then? Like, what's the process for universal screening? And at what point do you actually need to administer a diagnostic?
Speaker 2:
[15:17] Yeah. Well, they are two different types of tools designed to answer two different questions. Screening tells you which students and systems need support. Diagnostic assessments tell you exactly what skill to teach next, so they're more fine-grained. Screening tools really need to be constructed so that they are brief, reliable, valid, standardized indicators of those essential skills that Kate just mentioned, and, most importantly, so that they're predictive of important reading outcomes in the future. So they are often timed; that's part of the standardization. Kate mentioned the fluency. Diagnostic assessments don't share any of those characteristics. They are not standardized, so they are not about comparing student scores or predicting performance in the future. They are not brief; they're usually in-depth and take quite a while to give. And they work together. You need both pieces of information, usually starting with universal screening. And for students who need instructional support, to help shine a light on exactly what you should teach next, you might need to follow up with a diagnostic. In the book, we tried to point out scenarios where you could have all the information you need from the universal screening. So for example, in kindergarten, where you're screening phonemic awareness, basic letter sounds, and basic word reading, if students don't do well on those tasks, you might not need any additional assessment information, because you know that's where you need to start your instruction. But with older students, if they don't do well on a screening tool like oral reading fluency, you might know that they're not comprehending, that they're not fluent, and that they're not accurate, but you don't really know why. So the next step would be to dig a little deeper into why they are not reading accurately, and you would use a phonics diagnostic for that. 
Or if they are accurate and fluent, but they're still not comprehending, it's a less common occurrence, but it happens, then you would need a language comprehension diagnostic. So the two types or purposes of assessment work together in that sort of sequence.
Speaker 1:
[17:39] Yeah. And would it be accurate to say that, in terms of who we're administering this to, the universal screening process is administered to all students?
Speaker 2:
[17:52] Yes. And you don't necessarily give a diagnostic to all students. You could give a diagnostic to students who don't do well on screening. You could give a diagnostic to students who do exceptionally well on screening, because you want to accelerate their placement on the phonics scope and sequence, let's say, so it would tell you how far up you can advance their skills. But not everybody necessarily needs a diagnostic assessment. What makes this really confusing, I think, for educators is that the names of some of these assessments use terminology that doesn't match their purpose. Sometimes screening tools have the word diagnostic in them, and sometimes diagnostic assessments have the word screening in their title.
Speaker 1:
[18:42] Oh my gosh. Okay, so we have to follow up on that. How is a teacher to know, then? Do they have to read your book?
Speaker 2:
[18:52] Ha, good answer. Yeah, it's very confusing, not just for teachers, but for district-level administrators who are selecting these tools for teachers to use. It's very confusing. So it's about understanding what we lay out in the book, chapter by chapter: the characteristics of the tools, the questions they're designed to answer, and being really clear on what's the question you need to address about your students. If you don't have a question, you don't need to do more assessment.
Speaker 1:
[19:24] Say that again.
Speaker 2:
[19:26] If you don't have a question about your students, you don't need to do more assessment. This should not be a compliance activity.
Speaker 1:
[19:33] I love that. I love that because often we hear, I can't give one more assessment. I think you mentioned that at the beginning too, is just the amount of assessments.
Speaker 2:
[19:44] It's the lack of use, the lack of action that Kate mentioned at the beginning. I always use, I don't know why, a Weight Watchers analogy. Stepping on the scale is not what causes you to lose weight. The measurement is not the action. You still have to move more and eat less. Assessment is important, don't get me wrong, but it is not the whole story. You have to take action with the data. If you're drowning in too much information, or the information is not clearly matched to a question, or you don't know how to act on the information, simply having lots and lots of assessment data is not helpful. It can actually be counterproductive.
Speaker 1:
[20:27] Yeah, that's a good point. Well, we're going to shift a little bit to talk about progress monitoring because progress monitoring implies that we've taken some kind of action and now we're going to monitor or see how well that action has produced results. Is that right?
Speaker 2:
[20:48] Yes. It's like the GPS for educators. That's an analogy that we use in the book. It is the feedback to the educator so that you can make decisions with an ongoing series of data points on a graph, instead of making decisions at a single point in time. What you're looking for with progress monitoring is a trend. I think it's important for educators to understand it's not about the data point you just collected compared to the one last week; it's about visually looking at the student's overall trend of progress over time. So what's important for selecting progress monitoring tools is that every single measurement opportunity is at the same level of difficulty. This is what's behind the scenes in good progress monitoring tools that are curriculum-based measures: the authors have controlled for each one of those progress monitoring probes, if you will, being the same level of difficulty. So if you see a student's score go up on the graph, you can be confident it's because they're learning, they've acquired more skills, not because that probe was easier than the one that came before it. So it's a fairly formal technology. Teachers will often say, I'm monitoring this student's progress. So in the book we talk about this idea of what I would think of as lowercase progress monitoring. Monitoring a student's progress is not the same as capital P, capital M Progress Monitoring, which is a more formal act of checking in repeatedly with tasks at the same level of difficulty, so that you can see if the student is trending toward the goal that you set with the instruction and interventions that you've been providing. And it's not just about documenting failure. The whole idea behind progress monitoring is to trigger a change to your instruction. If the trend is not going up toward the goal that you set, then you should take some sort of action. You should make a change to instruction.
Speaker 1:
[23:02] I like how you framed that too, because thinking about if what I'm doing isn't working, if the student isn't increasing in whatever area I'm trying to do that, that I need to take the responsibility to sort of change my action.
Speaker 2:
[23:17] Yes, and that's a lot to put on a teacher.
Speaker 1:
[23:21] It's a lot, yeah.
Speaker 2:
[23:22] Yes. Teachers, I believe, should take responsibility for the learning in their classroom, but they shouldn't do it alone. That responsibility should not be only on the classroom teacher's shoulders. They have to be supported within their educational system to know what might change if they're not seeing progress with their students, to have the support with materials that will allow them to have alternative ways of teaching if the first thing they try doesn't work, to have people around them in teams who can help them with that kind of shift in instruction if that's necessary. So lots of support around the teacher to be able to make those kinds of real time adjustments to instruction that will accelerate progress for students.
Speaker 1:
[24:09] Yeah, that's a very good point. And Kate, I'm sure that you have seen implications of progress monitoring. What does this look like from your point of view, from an educator teacher point of view?
Speaker 3:
[24:24] Well, something I love about kindergarten is that I get to be the one who catches it right away, so we're not talking about monitoring progress for months and months and not getting them where they need to be. We're talking about short interventions that actually do the trick, which is great. And I think having the progress monitoring tool that aligns with your screener and being able to track those data points, there's just something about it. I know I used the word empowering before, but I keep coming back to it, because it gives you direction. And then, like Stephanie said, you see if it's working: keep doing what you're doing. If it's not working, there are things you can consider and change. It's not this big mystery about what to do. And there truly is kind of a high that you get when you see that data climbing, and then you kind of run to the teacher next door, like, look at this progress monitoring! And you let the parents or guardians know and that sort of thing. So it's a really important piece of the whole puzzle.
Speaker 1:
[25:20] What about this idea of being sort of part of a collective community whereby folks can support each other? And I mean, we're not talking about MTSS here, but obviously multi-tiered system of supports means system and support, and how important that is, to have that environment in which to work. Kate, do you feel like you're in that kind of community, that you have that ability?
Speaker 3:
[25:46] I am for sure. I know here in Ontario, we had the Right to Read Inquiry, and the report came out in 2022. So since then, a lot of things have changed within the province, within our districts, within our schools, which is wonderful. And my board has been one that has kind of led the way within the province. So I know that, you know, at the board office, we have leaders in place who are giving us the right information and resources and support. We're doing cool things like data days, where a reading coach comes and we all bring our data and, you know, talk about it and make plans. They helped teachers set up their progress monitoring graphs at the start of the year, for the ones that weren't quite sure how to do it. So all of that great stuff.
Speaker 1:
[26:23] And I would imagine it would make you a better reading teacher too as you're starting to see what works and see alternatives that you may have to provide to students. So I would imagine that accumulation of knowledge year after year after year helps you grow as a professional and be better at your craft.
Speaker 3:
[26:43] Well, it helps to have year-over-year data now, because the first few years in kindergarten, I had that running record data. And so people will ask me to compare, you know, how many kids I had reading then to how many I have reading now. And I have to say, well, I have no idea how many I had reading then. I can't use that data. But now that I've got several years of high-quality, you know, screener data and that sort of thing, I can also start to kind of look at student profiles and things like that. Like, okay, this student is here right now, because I have pre-K and kindergarten together. So this is where this student is now. But okay, those two were similar when they were there, and then I did this and this, and that got those kids where they needed to be. So I'll just do that again and try it. Right? You kind of build up that repertoire once you've done things a few times, to be able to look back at more general trends. And we talk about systems data too, right? Not just data for individual students. And again, I know what has worked whole group with my kids to get them where they need to be. So I know what I should continue doing and that sort of thing as well. So yeah, the more these years of experience add up, I think that does make a difference too.
Speaker 1:
[27:43] What about you, Stephanie? I know you work with schools all of the time and you work with educators all of the time. To implement these various assessments, answering these various questions, what do you find is the most difficult for people to really wrap their arms around? Or maybe it's everything. I don't know.
Speaker 2:
[28:03] Well, the fact that reading improvement is a cycle of continuous improvement, I think this is really hard for people to understand. There's not a set of procedures such that if you just do these 25 things, everybody is going to be a reader. Even with all the information we have in the body of knowledge called the Science of Reading, it doesn't mean that a particular evidence-based instructional routine or program is going to work in the context that you teach in or with the students who are in front of you. Having this collaborative improvement cycle, this data-based decision-making process, this circular process of asking questions, trying something that you think is reasonably going to work, monitoring to see if it does, and perhaps going back to redefining or further analyzing what's going on in that situation: that ongoing process of using data to make decisions in schools is the heart of what the business of reading improvement is all about. And I think people don't fully appreciate that, and think that if we just implement the things on this checklist, then it'll all be good. But even leaning into what's in the research, we're not always going to get things right the first time. So we have to have good ongoing assessments that we can use to guide us. And we have to be willing to go back in and roll up our sleeves and analyze why what we've been doing isn't working, and be willing to make a change. So I think that's what makes it really difficult.
Speaker 1:
[29:50] How do you answer this question then? And I'd be curious for both of you to provide some input here, for districts and schools that say, we just have so many assessments we have to do. And we've already said, yeah, you probably do, and if you're not using the data from that, don't do them. But say you're a teacher in the classroom: I have to, right? Like, I have to administer these assessments. How do you answer that question about there being just too many assessments?
Speaker 2:
[30:19] Well, we have a tool in the book that's available for download on the companion site, which is an assessment audit. We think it's important for people to take stock: What do they have? What are they asking teachers to do, and why? Do they have multiple screening tools? I encounter this over and over again. Are they lacking diagnostic assessments? Do they not have progress monitoring tools? Are there assessments that they're giving that don't get used? Are there things that are not mandated, but teachers are spending their time doing them anyway? Are there ways they could be more efficient? You know, some districts are now doing dyslexia screening, but if they understood that that is universal screening, they could make a wise choice about a tool that would allow them to answer both of those questions with one assessment, instead of having a long list of multiple tools.
Speaker 3:
[31:21] Yeah, I would say there's a lot to be said for the education and training piece, because I know here in Ontario, this is our second year with screeners being mandated, and I will still hear from some educators who say, yeah, yeah, I did that screener, but I'm still doing my running records because I find that much more valuable, to know what level they're at so I can pick their level of books and that sort of thing. So it's not even mandated for them to do that part anymore. They're adding this on because they're still in that learning journey of understanding that that data isn't actually as useful to them as maybe they think it is. So I think there's a lot of that still to come. And I think also, as an educator, use your voice. Like we say in the book, if you're mandated to do something, we don't want you to get fired. So yes, do what you need to do, but also try to use your voice. Do the mandated one even if it's not going to be so helpful, and then, kind of the opposite of what I just said, if you're not mandated to use a universal screener, try it anyway. See whether that gives you more useful information, whether it helps you inform instruction and where to go from there, and then start spreading the word. I mean, I am living proof that a lowly old classroom teacher can have a voice in their board and make a difference and change the way assessment is done. So you can start with your administrator in your school. And again, it's all about the approach, and it's all about being able to back up what you have to say. But when you have that classroom evidence of what that has done for student achievement, that will often perk up the ears of the right people.
Speaker 1:
[32:48] Interesting. And again, I'm going back to the way that you present this in the book. And listeners, you can't see this, but I'm holding up a little graphic. I love this little chart that you included all the way throughout the book, because it helps ground the reader: all right, what's the step that you're going through? What questions are you asking at this step? And what is the assessment purpose? So for district leaders, and for teachers in the classroom, it's a good way to go back and ground yourself in a very simple chart that reminds you of the question to answer and the purpose, in thinking about what's happening in terms of assessment within the classroom.
Speaker 2:
[33:30] Yeah, that's that data-based decision-making that's at the heart of large-scale school improvement change.
Speaker 1:
[33:38] Yeah, just love it. There's some misconceptions out there about assessments. So we're gonna hit some of these in a bit of a lightning round. We're gonna start with you, Stephanie. Let's talk about the misconception that computer adaptive assessments give us all the information we need. Explain why that's a misconception.
Speaker 2:
[34:00] Lightning round, do you mean quick? I don't know if I have a quick answer.
Speaker 1:
[34:04] How about as quick as you can do it? How's that?
Speaker 2:
[34:06] As quick as I can do it. Well, okay, computer adaptive assessments often will assess all the standards at a grade level in a way that leverages how other students performed to make the test more efficient. So if a student gets a certain item right, it will advance the student past a couple of items to a future item. And if they get an item wrong, it'll take them back to items that are perceived to be easier. And so every student has their own path through a computer adaptive assessment. It's individualized based on their performance. And it's designed to measure every single standard at the grade level. Some of those tools have good evidence for use in screening, in that they can predict future reading performance, like success on a third-grade state assessment in the US. But because of the way those tools are designed, they really aren't that useful for planning instruction. Sometimes the reports from computer adaptive assessments will say that the student passed out of a skill like phonemic awareness, but they actually never took any items about phonemic awareness.
Speaker 1:
[35:30] Right.
Speaker 2:
[35:31] Because that algorithm routed them past those items. So they're not all that useful for diagnostic assessment, instructional planning, or grouping of students for classroom instruction. And because every time a student takes a computer adaptive test their journey is different, it cannot be used for progress monitoring without doing conversions of scores that mere mortals do not understand. And you would be hard pressed to get the test vendor to reveal to you how those scores are determined. So screening, maybe, yes, some of them are pretty good at finding students who are likely to have difficulty or are currently having difficulty with reading, but they can't really be used for diagnostic assessment or for progress monitoring. Contrast that with curriculum-based measures: I mentioned oral reading fluency, and Kate talked about measuring phonemic awareness with her kindergartners, something like segmenting phonemes. These one-minute tasks really are the most efficient tools to use, because the same tasks that you're using for universal screening can also be used in alternate forms for progress monitoring. And there's a lot of information in screening and progress monitoring that can be used for instructional planning. So you get more out of the time that you spend, and it takes you less time to do CBM than a computer adaptive test. That wasn't short. Sorry.
Speaker 1:
[37:06] It's okay. And it feels like we could like dig into that in an entire episode. So we'll put that one aside, right?
Speaker 2:
[37:12] Yes.
Speaker 1:
[37:13] All right, Kate. Another misconception, when we're talking about universal screening, is that screening only looks for deficits and isn't asset-based. Why is that a misconception?
Speaker 3:
[37:24] Yeah, I hear that sometimes. And I know that that comes from a good-hearted place of people wanting to focus on the strengths of students. And as teachers, we do love each and every one of these unique little children. And in terms of a whole program, we do honor their strengths and the assets that they bring to the classroom, their gifts, their talents, all of that. That's true. It's also true that it's our job to teach them to read. Right? So if we know that an assessment like this is the tool that is going to help them learn to read, I think it's okay that we are maybe using terminology like below benchmark. If this is an essential skill, and they are not meeting the benchmark that tells us we can feel safe they're going to have reading success, that's important to know, because we want to get to them right away. We want to target their instruction. We want to get them to where they are at benchmark. This is not a lifetime label where we're focusing on something the child can't do. But I think it's doing the kids a disservice if we only want to focus on, you know, the good qualities. They can't segment phonemes, but let's not worry about that because they're great artists? Well, we know that there are certain things they need in order to read, right? So when that concern comes from a place of love, I think the messaging is: when we love the kids, we want to do the best for them, and that will include some assessments like this.
Speaker 1:
[38:49] And another misconception related to that is that teachers can't do the extra work that's necessary. A classroom teacher already has to do so much. Once they've identified a skill a student might need, how in the world can they do that extra work?
Speaker 3:
[39:05] I think people believe this is all more complicated than it actually is when you really start to drill it down. And I mean, I am a classroom teacher, I am a full-time teacher, teachers are my people. So I am not trying to dump anything extra on them, and I would never judge them.
Speaker 2:
[39:21] Mention how many children you have in your classroom.
Speaker 3:
[39:24] Oh yes, we have 32 students. Yes, sometimes I'll share data and people will reply, but how many students do you have? Well, we have 32. But I honestly don't think class size matters as much as, of course, the needs in your class. I do hear from some of my peers that have classes with very, very high needs that require more adult support and don't have it. So they certainly have my sympathy on that. But what I don't think teachers necessarily realize is the power they have in tier one. That first line of defense is just what you're giving the kids every day. And I have had great luck with whole-group, explicit and systematic phonics instruction. I know there is research and evidence to support the model of having differentiated tier one groups as well. You need more adults for that, right? I know Stephanie talks about that a lot. And when you are in a situation to do that, or if you can advocate for that, you can see great success there too. But what every kid is getting first is the most important thing to start with. And again, this is where districts need to provide their teachers with what they need. If you have that explicit and systematic phonics program, if we're focused on the word reading piece here, and you are using it effectively, you're going to have a lot of kids meeting benchmark from that alone. For example, I received a message just last week from a grade one teacher, and her beginning-of-year data only had 33% at benchmark. But she was in a school where explicit and systematic phonics was not a thing in kindergarten. So of course, in grade one, this is what she's starting with, right? And if that's what you're starting with, that is really, really hard. So I'm not trying to say, oh, it's easy, just go from there.
But once we kind of work out all these kinks and tier one is going right, then the whole point is that only a smaller number of students need that tier two. So in my class now, once I have done an official beginning-, middle-, or end-of-year screener, I usually have one or two kids who perhaps need a bit of targeted support. It's very, very rare there would be anybody well below benchmark, but there might be one or two who just didn't quite make that cutoff. Well, I have time for that. I can work with one or two kids. So we've got to get tier one right first. And then after that, there's the idea that small group doesn't even have to be as intensive as some people think, especially in the early years. In kindergarten, we can catch them so early. I shared data on Instagram a while back of students not at benchmark... sorry, it wasn't even not at benchmark; it was kind of in advance. I checked their phoneme segmentation fluency before we even got to the benchmark period. And there were four who could use a little bit of work. So after six sessions of a few minutes each, three of them shot above the benchmark, all good. One of them was still a little bit below, and so we kept working with that student. But that's six days. After six days! So early intervention is so important. Now, if you teach grade three and you have a student with severe dyslexia who has fallen through the cracks and has not gotten what they needed, six days of short spurts of support are not going to help. And so I also don't want to place all of this on teachers, that you're supposed to do your tier one, tier two, and tier three in your class. You certainly need to have systems within the school where there is intensive support for the students who need it. But people are still so used to that idea of small groups in general: I've got 25 kids, so five groups of five, and I'm going to ring a bell, and every few minutes we're going to move.
Instead, really, it's just the kids who need you should get you. And they should get you for as much time as you have to give them to get them what they need. And it can be so effective, and I think it's not until teachers actually try it that they can see that. But I do want to reiterate the messaging that they need the human support, the actual program resources support, they need all of that in their classrooms too to be able to do it.
Speaker 1:
[43:08] Great. Thank you for that. All right. Last lightning round. Maybe this is more like a thunderstorm or something like that. I don't know. I don't know what the analogy is here. Okay. Stephanie, another misconception. You know what? We only need to screen all students one time a year. In other words, after one sort of screening, some kids don't have to be screened anymore. Why is that a misconception?
Speaker 2:
[43:32] Well, I think there are three reasons. First of all, all students need to grow. Even your high-performing students need to grow across the year. If you're on track at the beginning of the year, great, but you should be growing your skills, and you should be performing higher at the middle of the year and higher still at the end of the year. The second reason it's a misconception is that in the primary grades, with curriculum-based measures for universal screening, the measures change over time. Kate just alluded to this: you don't even screen with the tool she uses on phoneme segmentation fluency until the middle of kindergarten. So if you didn't screen kindergartners at the middle of the year because they were on track at the beginning of the year, you wouldn't have a chance to assess them on that skill. The biggest misconception about screening, which you touched on, is the mindset that it's only about finding struggling students, that we do universal screening to sort students into who needs intervention and who doesn't, or sort them into tiers of support, that we're only doing screenings so we find the low-performing students. And yes, that's an essential task, because the earlier we find them and the earlier we intervene, the more we can impact the course of their lives. But there's a second reason we do universal screening, and that's to evaluate our instructional systems. If we don't include every student at the middle and end of the year, let's say we take all of the high-performing students out of the mix because they were on track at the beginning of the year, then we've skewed the sample, and we can't use that information to give us feedback about our adult systems of instructional support.
So I think that's the biggest misconception when I hear people talk about, can't we make this faster by just only testing the low-performing kids at the second or third screening time?
Speaker 1:
[45:43] Just a little bit of a personal aha, I remember when I was first a principal and in a building that was actually doing a really good job with universal screening. The aha for me was, oh my gosh, I should be using this data from middle of the year, end of year, well, beginning of year too, but middle year, end of year to figure out where I need to funnel resources.
Speaker 2:
[46:08] Exactly.
Speaker 1:
[46:10] Once I had that aha, it was like, oh, my teachers felt supported, right? We started to be able to serve the kids differently. It was an ongoing look at that data that let me share the love around and put the resources where they needed to go, as opposed to everybody gets a half hour of this person or something like that.
Speaker 2:
[46:32] Yeah. Back to the questions, assessment being driven by question. If we're only looking for the students who need support, then we're never asking how many students need support.
Speaker 1:
[46:43] Yeah.
Speaker 2:
[46:43] What's the big picture here? It could be an individual student issue, or it could be a curriculum and instruction issue. It could be an issue with the schedule. It could be an issue with the instructional materials and routines that we're using. If we never ask the question about how many students are on track or at risk, then we can't use that big picture information for getting the systems in place that will get every student to be an on-track reader.
Speaker 1:
[47:14] Yeah, such a good point. Again, I'm going to go back to that chart that you have throughout the book. It's a good one to ground yourself in and go back to every single time. The other thing that I think you've done really well in this book, and even a little bit today, is to talk about the relationship between assessment and instruction. That linkage is so critical. And I think you would agree that reading assessment done right actually has to have a link back to that instructional process.
Speaker 2:
[47:49] Absolutely.
Speaker 1:
[47:50] So, congratulations on an amazing accomplishment with this book. I wonder as we sort of wrap things up and bring things to a close, if either of you have any final tips or advice that you would like to leave with our educators about assessment, and maybe I'll give it to Kate first.
Speaker 3:
[48:08] Thank you. I would just like to say that, believe it or not, reading assessment can be so exciting. It can be so empowering. I mean, I'm a nerd, I know, but if you're listening to this podcast, you're probably kind of in that nerd zone too, right? So if you feel like assessment doesn't interest you, that it's dry and boring, that you put it in a drawer and it's never helped you: when it's done right, it can be amazing. Also, as nerds, we love hearing from people who read the book. So if anybody does read it, reach out. When people tag us and ask questions, that's just so incredible, because we really, really just want to spread the word to impact educators and then, of course, thereby to impact their students.
Speaker 1:
[48:49] Great. Stephanie, final thoughts?
Speaker 2:
[48:52] Yeah. Assessment, as I said earlier, is a tool for conversation. When teachers get together to talk about what their assessment results are telling them about where their students are, what they know, and what they need to learn next, there's so much power in that. And that's how we hope the book will support teachers: to view assessment as their best friend. So we invite all educators to become Assessment Nerds with Kate and me.
Speaker 1:
[49:20] Assessment Nerds that get excited about assessment and realize that assessment is their best friend. So how amazing. Thank you again, both of you, for joining. We will link listeners to all of the resources so that if they don't know who you are and the organizations that you have or the podcasts that you host, they can get access to that. Again, thank you so much for joining us.
Speaker 3:
[49:43] Thank you, Susan.
Speaker 2:
[49:44] Thanks, Susan.
Speaker 1:
[49:46] That was Dr. Stephanie Stollar and Kate Winn, authors of Reading Assessment Done Right, Tools and Techniques for Data-Driven Instruction. Stephanie Stollar is also the co-author of the book MTSS for Reading Improvement, Promoting Effective MTSS Implementation, and the creator of Reading Science Academy. Kate Winn is an elementary teacher with 25 years of classroom experience and a teacher educator who specializes in evidence-based literacy. Visit the show notes for a link to their book, as well as a link to Kate's podcast Reading Road Trip from IDA Ontario, and to Stephanie's website. Remember to download our Dyslexia Support Power Pack for a free bundle of resources, including a Dyslexia Toolkit, a Dyslexia Fact vs. Fiction eBook, a Dyslexia Infographic that summarizes key information about dyslexia, and more. There's a link in the show notes, or you can visit amplify.com/dyslexiapowerpack. We've got something fun planned for next time. Well, I am so excited, Reid Lyon, to have you back on Science of Reading The Podcast. It's always such a pleasure and an honor to speak with you. Thank you for joining us again.
Speaker 2:
[51:05] Well, yeah, it's great to be here. It's great to be talking with you.
Speaker 1:
[51:09] That's next time. Science of Reading The Podcast is brought to you by Amplify. I'm Susan Lambert. Thank you so much for listening. Hey, literacy educators. I want to tell you about a new professional learning resource featuring bite-sized actionable tips you can implement in your classroom right away. Check out Amplify's brand new YouTube channel, Advice for the Literacy Classroom.
Speaker 2:
[51:37] I'm going to share three tips you can use to make your read-alouds interactive and engaging.
Speaker 4:
[51:41] I just wanted to take you on a quick tour of my classroom and show you how I lay things out.
Speaker 2:
[51:47] Let me give you a second tip. Ask students about their language.
Speaker 1:
[51:50] Start learning now at amplify.com/literacyclassroomadvice.