Saturday, June 26, 2010

More about our NWEA Accountability System

In my last post, I wrote about our school district's use of NWEA testing results as a new system of accountability and instructional improvement. Because this new system plays a central role in the way in which the Board of Education intends to fulfill its responsibility to monitor educational results, I believe it is really important that the community understand how this new system works.

Teachers, principals, parents, the superintendent's executive team and the Board of Education are now all working from the same NWEA MAP test, which allows us to measure growth at the individual student level, at the classroom level, at the school level and across the district. You can find out more about how this nationally normed testing system works by clicking HERE. NWEA testing norms are based on a population of over one million students who take the tests. Parents who want to understand more about the NWEA testing system can download the "parent toolkit" at the main NWEA testing website. MAP tests are given three times a year, and that data is immediately provided to the teacher. Fall testing tells the teacher who is behind and who is ahead. Winter testing tells whether current strategies are working. (If they are not, remember, it can be the teaching strategy, the student's effort, the parent's support, or the curriculum, or all of the above that needs to change.) Spring testing then tells us where the student is at the end of the year, and each year, last year's data will be provided to the teacher in the fall.

The best way I can explain how the NWEA testing results serve as an accountability and instructional tool is to show you the key to the testing results that we see for schools and grade levels. There is a detailed guide to the reporting information on-line if you want more detail. Click Here


Let's say that we want to look at this year's results for mathematics at Madison Elementary School, for example. Students are assigned to four categories. In brown, we will see students who are in the G-P- group. These are students who tested below projected proficiency (they are performing below target proficiency) and whose educational growth was also below typical (median) growth. Some of these students may have major obstacles to learning. They may be recent immigrants who speak no English. They may have disabilities that significantly inhibit their learning. They may be students who could learn much more, much faster, but the teacher and school haven't figured out how to reach them. They may lack needed parental support. Or they may face a combination of all of these factors. School may not be working for them. Or possibly, a different teacher would do a better job of teaching them.

The color code doesn't answer the question why. It's a starting point for asking what is happening with this group of students in this particular classroom and school. In yellow, we see the students who are G+P-. These are students who started the year behind the target proficiency for their grade level. They need to catch up--to grow more than a year in a year's time--or they will never reach proficiency. When most of us were in school, kids who were behind usually stayed behind, or got even further behind, because being behind is always a handicap that makes it harder and harder to progress. So these G+P- kids are success stories; they are beating the odds that would otherwise cause them to fall further and further behind.

Coded in green are the G+P+ students. These are students who ended the year above expected proficiency and who grew faster than an average year's growth. Good schools don't ignore these kids just because they are ahead of average proficiency. Their goal is to challenge everyone, and so the G+P+ percentage is a very important success measure. Coded in orange are the G-P+ kids. These are kids who are ahead of necessary proficiency targets, but they have not continued to pull further ahead. They haven't grown as much as typical growth, but they are doing fine in terms of proficiency. Keep in mind, always, that "proficiency" is an artificial cut score. No Child Left Behind likes to pretend that the same score should be the goal for everyone. But that is, well, just plain baloney. Different students have different intellectual strengths, different emotional obstacles, different personal goals, different interests. The idea that everyone should wind up at the same point is super silly.
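The four color codes boil down to a simple two-by-two classification: did the student meet typical (median) growth, and did the student meet projected proficiency? Here is a rough sketch in Python of that logic; the function name, inputs, and numbers are my own illustration, not anything taken from the actual NWEA reports.

```python
def categorize(growth, typical_growth, score, projected_proficiency):
    """Return (code, color) for one student's year-end results.

    G+/G- : did the student's growth meet typical (median) growth?
    P+/P- : did the student's year-end score meet projected proficiency?
    All four inputs are illustrative numbers, not real NWEA data.
    """
    g = "G+" if growth >= typical_growth else "G-"
    p = "P+" if score >= projected_proficiency else "P-"
    colors = {
        ("G+", "P+"): "green",   # grew more than typical AND proficient
        ("G+", "P-"): "yellow",  # catching up, but not yet proficient
        ("G-", "P+"): "orange",  # proficient, but growth below typical
        ("G-", "P-"): "brown",   # below proficiency and below typical growth
    }
    return g + p, colors[(g, p)]

# Example: a student who grew 8 points against a typical growth of 6,
# but scored 195 against a projected proficiency of 200, is "catching up."
print(categorize(8, 6, 195, 200))  # → ('G+P-', 'yellow')
```

Note that the two judgments are independent, which is the whole point of the system: a proficient student with weak growth (orange) and a non-proficient student with strong growth (yellow) tell very different stories.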

So now let's look at how the color coding helps us illustrate performance in the school district and in a particular school. Here are the reading results for District 742 (Grades K-9) and for Madison Elementary. We have a similar chart for every school.

The chart tells us that just about 73 percent of Madison students exceeded the median growth for elementary students, based on national norms. In an average school, we would expect that 50% of students would exceed the expected growth (because expected growth, by definition, is the median growth). For the entire school district, 60 percent of students grew in reading faster than the expected growth rate. Keep in mind that not every student is in the system. We aren't doing the testing yet for grades 10-12 (this is a cost issue). Also, in order to have a growth measure, a student has to be with us at the beginning of the year and the end of the year, and so a bunch of students aren't measured. The information also tells us that at Madison, 69 percent of students reached projected proficiency performance, and that 68.1 percent in the district reached that level.
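To see how those summary percentages fall out of the individual color codes, here is a toy calculation over a hypothetical roster of 100 students. The roster is invented for illustration (I chose counts so the totals line up with the Madison figures quoted above); the arithmetic is just the two sums each percentage represents.

```python
from collections import Counter

# Invented roster of 100 students, one category code each.
students = ["G+P+"] * 45 + ["G+P-"] * 28 + ["G-P+"] * 24 + ["G-P-"] * 3

counts = Counter(students)
n = len(students)

# Exceeded typical growth = everyone with a G+ code (green + yellow).
pct_exceeding_growth = 100 * (counts["G+P+"] + counts["G+P-"]) / n
# Reached projected proficiency = everyone with a P+ code (green + orange).
pct_proficient = 100 * (counts["G+P+"] + counts["G-P+"]) / n

print(f"{pct_exceeding_growth:.0f}% exceeded typical growth")
print(f"{pct_proficient:.0f}% reached projected proficiency")
```

Because typical growth is the median, an "average" school would land near 50 percent on the first number; a school well above 50 is beating the national norm.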

Now the Board of Education has adopted long-term targets that seek to put more students in the green category--growing more and proficient. Teachers and principals across the district are working in teams to develop techniques to make that happen. Throughout the year, teachers and principals monitor their progress toward individual school and classroom goals and assess whether the techniques they are using are successful. Parents receive individual student reports that contain the same information: is the student at the target proficiency level for his or her grade in math and reading? Is the student progressing at the median rate, or higher or lower? Teachers, parents, principals, the superintendent and the public will all be receiving similar information.

As we do this, some classes, or some grades, or some schools are not going to do as well as others. The purpose of this data is not to humiliate anyone: it is part of a continuous-progress model, which says we cannot improve if we don't look at what we are doing. My teaching experience may be ancient, but it tells me that there are all sorts of reasons why a particular class doesn't do as well as we would like. Sometimes you get a great class of super-stars, kids who really want to learn, and who work really hard for you. Sometimes you get the class from you-know-where, and it seems as though almost nothing you do works. The goal here is to catch students and classes right away, and intervene immediately with the best possible techniques to promote success.

We are going to see some schools that seem to need improvement. We just have to remember that the first step to improvement is seeing where improvement is needed. It's going to take several years to see whether this new accountability system works--for students, for teachers, for schools and for the district. But one thing is for sure: we now have hard data that allows us to scrutinize where we need to do more work.

