Measures of Content Reading Comprehension: Comparing the Accelerated Reader Measure and an Informal Measure

ABSTRACT

Measures of Content Reading Comprehension: Comparing the Accelerated Reader Measure and an Informal Measure

by Kate Gage Ginno
Master of Arts in Education, Reading/Language Arts Option
California State University, Chico
Summer 2009

This researcher noticed a common trend in incoming seventh grade students' reading comprehension abilities. Many of these students lacked the skills and strategies to process text effectively and to respond to high-level questions, both in writing and verbally. Many came from schools where the Accelerated Reader (AR) program was used as an instructional program and/or a school-wide supplemental reading program.

The purpose of this study was to determine whether students reading at specific comprehension levels, as determined by Accelerated Reader tests, exhibited comparable reading comprehension levels on an independent reading inventory. A secondary purpose was to determine whether AR testing and reading level placement procedures place students at levels that accurately reflect their capacity to be successful in content area reading.

This study investigated whether thirty fifth and sixth grade students reading at a grade level determined by the AR program comprehended at the same grade level placement on an independent measure of reading. The independent measure of reading comprehension was entitled The Seminar Instrument (SI). The SI contains questions that are passage dependent and span the five categories associated with reading comprehension: detail, vocabulary, sequence, main idea, and inference. Each student was given three passages: one at the student's AR reading level placement, one below, and one above that level. Students responded to the questions in writing, which is more closely aligned with what they will be expected to do when they enter secondary school.
The student responses to each SI passage were scored using Williams and Wright's analytic scoring procedure, which identifies the essential key elements of an ideal answer. The data were represented in tables and figures to examine three points: the level of comprehension evidenced by student performance at each student's independent reading level as determined by AR; performance on the different types of comprehension questions on the SI; and whether or not students scored between 75% and 90% comprehension on the SI.

Across the thirty students, reading level assessment scores on the SI averaged 56%. Observations of the students' responses show a pattern of students struggling with questions that require them to manipulate information in the passage to arrive at a logical conclusion that goes beyond a literal interpretation of passage content. Students struggled most with inference questions and questions related to vocabulary. Only three students met the 75% criterion for instructional reading level proficiency.

The results indicate that AR appears to overrate students' comprehension abilities, if one accepts the ability to respond to passage-dependent questions requiring these types of understanding as the comprehension necessary to succeed in school tasks. The results of the AR test may not be trusted to effectively inform teachers of students' instructional needs, nor does the program prepare students to meet the demands of secondary school.