I seem to be ‘stuck’ on a recurring idea or theme: relevance. In order for students to be engaged with information, it must be relevant to them. In one of my other classes, we read and wrote about Gloria Ladson-Billings’ The Dreamkeepers, in which she explores culturally relevant teaching in poor, urban, predominantly African-American schools. Teachers who can ‘reach’ their students with culturally relevant methods (coupled with high expectations, etc.) have students who generally exceed performance expectations and who can connect the skills and abilities emphasized in school curricula to their lives in meaningful ways. In almost every article I’ve read for two of my other courses, the same theme emerges again and again: in order to promote student understanding and knowledge acquisition, you must engage your students with relevant methods and/or content.
But what does this have to do with the Developmental Reading Assessment (DRA), which is used to assess student reading levels? And what does the DRA have to do with technology? Last week, I assisted in administering a DRA for a 5th grade student. Well, we started to administer it, but I had to get back to class before the assessment was completed. This was my first experience with a DRA – I don’t remember taking them as a student. It was evident, from the moment the teacher mentioned the DRA, that the student was resistant to the experience. I won’t conjecture as to the student’s reasons for her resistance, but what was obvious was that she was not in the least bit interested in the content of the material she was to read and on which the assessment was based. Perhaps this was because the content was irrelevant to her life and experience; perhaps she just wasn’t interested.

I wonder about the conclusions that would be drawn based on the results of the assessment. Ostensibly, the test is used to measure reading abilities and levels. But if the content of the materials used is irrelevant and the student is not engaged, how well would she do on the assessment? I understand that most kids would rather surrender cell phone and/or computer privileges than take yet another exam. I understand that assessments are standardized so that students can be compared to each other on a continuum. And I assume that the materials used in these assessments are designed by professionals who are knowledgeable in literacy acquisition. Still, I couldn’t help wondering whether this student was being assessed not so much on how well she could read the material as on how interested she was in it. At the very least, her disinterest and disengagement would have negative effects on the results.
So how do we accurately assess student reading levels in order to have a baseline for literacy instruction? Wouldn’t it be possible to use computer and internet technology to improve the relevance of the materials used in these assessments? Instead of one story that every student must read, wouldn’t it be possible (and worthwhile!) to allow for some options, some student selectivity? The assessment could be computer-based and present the student with a choice of stories or materials that would then serve as the basis for the assessment. If the student actually wants to read the material, the assessment should yield a more accurate picture of her reading abilities.
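Just to make the idea concrete: the selection step of such a computer-based assessment could be as simple as a menu of passages at the same reading level, from which the student picks the one that interests her. This is only a rough sketch of that flow; the passage titles, texts, and structure here are all invented for illustration, not drawn from the actual DRA materials.

```python
# A minimal, hypothetical sketch of letting a student choose among
# equivalently leveled passages before the assessment begins.
# Titles and texts are placeholders invented for illustration.

PASSAGES = {
    "1": ("Skateboarding Tricks", "A leveled passage about skateboarding..."),
    "2": ("Animal Rescues", "A leveled passage about rescued animals..."),
    "3": ("Exploring Space", "A leveled passage about space missions..."),
}

def choose_passage(choice: str) -> tuple[str, str]:
    """Return the (title, text) the student selected from the menu.

    Falls back to the first passage if the choice is unrecognized.
    """
    return PASSAGES.get(choice, PASSAGES["1"])

def start_assessment(choice: str) -> str:
    """Begin the assessment with the student's chosen passage.

    In a real system the student would then read the passage and
    answer comprehension questions; here we only report the choice.
    """
    title, _text = choose_passage(choice)
    return f"Assessing with: {title}"
```

The point of the sketch is only that the leveling (and thus the comparability of scores) stays fixed, while the topic becomes the student's choice.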