Charles Ungerleider, Professor Emeritus of Education, The University of British Columbia
[permission granted to reproduce if authorship acknowledged]
Confronted by what seems to be a problem, educators often jump to a solution without knowing exactly where the problem lies. That is not surprising, but it is a distressingly common and costly situation in education, and it occurs at the classroom, school district, and provincial levels.
Imagine eight schools in which only 75% of students meet or exceed the "provincial standard" using a levels approach to educational measurement. By a levels approach I mean that the score a student earns on some assessment indicates that the student: has not begun learning or was excused from the assessment (level 0); has begun learning but has not made much progress (level 1); is progressing but is not quite at grade level (level 2); is performing firmly at grade level (level 3); or is exceeding the expected performance for students at the grade level (level 4). Although the meaning of the symbols may differ, the levels approach is the same as assigning grades (A, B, C, D, F).
On paper, the 75% figure for level 3 and 4 students may look good. But merely adding together the level 3 and 4 students tells decision-makers at the school, district, or provincial level nothing about the students who are not at or above the provincial standard.
Furthermore, in a school where 25% of the students have not met the provincial standard, there may be a relatively large proportion of students who have not even achieved level 1 or who have been excused from the assessment. Acknowledging the percentage of students at each level provides a more complete understanding of student performance.
Providing Additional Information
Weighted averages provide more information than can be conveyed in a simple percentage. Instead of treating all scores equally, a weighted average considers the proportional relevance of each score.
I have computed weighted averages for the performance of students at a set of fictional schools. I did that by multiplying the number of students at each level by the value of the level achieved (0, 1, 2, 3, or 4) and adding the products together. I then divided the sum of those products by the total number of students assessed, including the students below level 1 and the students excused from the assessment, to find a weighted average for each school. This approach provides a more nuanced look at differences among schools.
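The calculation just described can be sketched in a few lines of Python. The function name and the sample counts below are my own illustration, not from the article; the key detail is that excused and level-0 students add nothing to the numerator but still count in the denominator.

```python
def weighted_average(counts_by_level, excused=0):
    """Weighted average of assessment levels.

    counts_by_level maps each level (0-4) to the number of students
    at that level. Excused students contribute nothing to the sum of
    points (like level-0 students) but are counted among the students
    assessed, so they pull the average down.
    """
    points = sum(level * n for level, n in counts_by_level.items())
    students = sum(counts_by_level.values()) + excused
    return points / students

# A hypothetical school of 100 assessed students: 10 at level 1,
# 15 at level 2, 50 at level 3, 25 at level 4 (75% at or above standard).
print(weighted_average({1: 10, 2: 15, 3: 50, 4: 25}))  # 2.9
```

Note that moving even a few students between levels, or counting previously ignored excused students, changes the result, which is exactly what a bare "75% met the standard" figure hides.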
Here’s how it looks: an illustration
The table below contains the data from the fictional schools. In each of the schools, 75% of the students are at or above the provincial standard (levels 3 and 4), but there are some important differences among them that affect the weighted average of their scores. For example, the proportion of students at levels 3 and 4 is the same at all the schools, but at Bay View 50% are at level 3 and 25% at level 4, while at River View those proportions are reversed. The students at River View (weighted average 3.15) are outperforming the students at Bay View (weighted average 2.90). At Coleman the proportion of students at levels 3 and 4 is the same as at Bay View, but the proportions of students at levels 1 and 2 are exactly the opposite of those at Bay View (15% level 1 and 10% level 2 versus 10% level 1 and 15% level 2). That small shift is reflected in Coleman's lower weighted average (2.85).
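The arithmetic behind those three weighted averages can be checked directly. Here is a small sketch, assuming 100 students per school and, for River View, the same 10%/15% split across levels 1 and 2 as Bay View (the text states only that its level 3 and 4 proportions are reversed):

```python
def weighted_average(counts):
    # Sum of (level x number of students) divided by students assessed.
    return sum(level * n for level, n in counts.items()) / sum(counts.values())

schools = {
    "Bay View":   {1: 10, 2: 15, 3: 50, 4: 25},  # 50% level 3, 25% level 4
    "River View": {1: 10, 2: 15, 3: 25, 4: 50},  # levels 3 and 4 reversed
    "Coleman":    {1: 15, 2: 10, 3: 50, 4: 25},  # levels 1 and 2 reversed
}
for name, counts in schools.items():
    print(f"{name}: {weighted_average(counts):.2f}")
# Bay View: 2.90, River View: 3.15, Coleman: 2.85
```

All three schools report the same 75% at or above standard, yet the weighted averages separate them cleanly.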
By comparing the schools where the colours are the same (Bay View with Westbrook, River View with Sea View, Coleman with Oceanside, and Queen Anne with St. Lawrence), you can see that the weighted averages decline when students who have been excused and students who are below level 1 are included in the calculation. This provides a more accurate picture of the results obtained. It also has a positive impact on morale: teachers' efforts across the entire range of levels are recognized and acknowledged, rather than only the percentage of students who met or exceeded the provincial standard.
Schools in which the percentage of students meeting the provincial standard was identical can now be differentiated from one another. Examined more closely, the differences paint a more subtle and complete portrait of student performance. The numbers are just a starting point. Once you have them, it's time to dig deeper to diagnose what's going on. You cannot change something without understanding it.