Education Revolutionary – Is the Publishing of Education Accountability Results Helpful?

Ratings, rankings, and test results are all part of the education accountability frenzy sweeping Canada and the United States. Governments, NGOs, and the media all try to excite voters, sell copy, and stimulate the economy by assessing quality in education. But does publishing these rankings, ratings, and test results actually accomplish anything positive, or has it simply created entertainment, dishonesty, and new industries of measurement? The Canadian Psychological Association (CPA) and the Canadian School Psychologists Association (CSPA) are not pleased (http://www.cpa.ca/documents/joint_position.html) that provincially mandated school test results are being ranked by the press, with poor results blamed solely on the school rather than also on student deficiencies. They believe this reporting puts unhealthy pressure on the teachers, administrators, and students in low-performing schools. Others, including the Fraser Institute, believe that competition between schools is healthy and will result in higher test scores. They believe that putting the public spotlight on schools' performance will embarrass them into improvement, and will allow parents to make informed decisions about where to send their kids to school. The Fraser Institute has published its own rankings of schools.

The position paper from the CPA and the CSPA also gives examples of the press reporting year-over-year improvements in a school's performance and crediting them to better teaching methods, when the improvements could also have been due to other factors such as smaller class sizes, fewer ESL students, or changes in how the tests were scored. The paper also tells of Michigan real estate agents using school rankings as selling features for homes, and of American teachers and administrators fudging test results upward to gain good publicity and deflect punishment by governments. (Perhaps this upward fudging doesn't happen here in Ontario because the Toronto Star and the teachers' unions are titillated by low test results; they treat low scores as proof that the conservative government's new curriculum has failed students.)

Moving into accountability in postsecondary education, many of the provinces have copied from the Americans a nifty little number-crunching model called "Key Performance Indicators" that goes hand in hand with "Performance Funding." Institutions and governments devise the performance indicators on which institutions will be assessed, and the institutions then receive funding bonuses based on how well they score. In Ontario there are five performance indicators for colleges and two for universities: the colleges are assessed on graduate employment, graduate satisfaction, student satisfaction, student attrition, and employer satisfaction, while the universities are assessed on graduate employment and student attrition. There are more indicators for colleges because the colleges are Ontario crown corporations, so the government has more power to make demands of them, whereas the universities are more autonomous and better able to resist government intervention. The model has problems. The colleges have used the results selectively in their massive advertising campaigns, reporting only the numbers that make them look good and leaving out those that make them look bad, such as student attrition rates. Both the colleges' and universities' "graduate employment rates" include graduates working in "McJobs." Students have also reported college departments fudging the graduate employment numbers, and the universities report student attrition rates that do not count students until second year.
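To make the mechanics concrete, here is a minimal sketch of how a performance-funding scheme of this kind could translate indicator scores into funding bonuses. The article does not describe Ontario's actual formula; the indicator weights, the proportional-share rule, and the sample numbers below are all assumptions made purely for illustration.

```python
# Hypothetical sketch of a "performance funding" allocation.
# The weights and the proportional-share rule are assumptions for
# illustration only, not Ontario's actual formula.

# Ontario's five college indicators, with made-up weights.
WEIGHTS = {
    "graduate_employment": 0.25,
    "graduate_satisfaction": 0.20,
    "student_satisfaction": 0.20,
    "student_attrition": 0.15,   # lower attrition is better, so it is inverted below
    "employer_satisfaction": 0.20,
}

def composite_score(kpis: dict) -> float:
    """Weighted composite of KPI scores, each expressed as a 0-100 percentage."""
    score = 0.0
    for name, weight in WEIGHTS.items():
        value = kpis[name]
        if name == "student_attrition":
            value = 100 - value  # reward retention rather than attrition
        score += weight * value
    return score

def allocate_bonus(pool: float, colleges: dict) -> dict:
    """Split a funding pool among colleges in proportion to their composite scores."""
    scores = {name: composite_score(kpis) for name, kpis in colleges.items()}
    total = sum(scores.values())
    return {name: pool * s / total for name, s in scores.items()}

if __name__ == "__main__":
    # Entirely fictional colleges and numbers.
    colleges = {
        "College A": {"graduate_employment": 88, "graduate_satisfaction": 80,
                      "student_satisfaction": 75, "student_attrition": 14,
                      "employer_satisfaction": 90},
        "College B": {"graduate_employment": 81, "graduate_satisfaction": 77,
                      "student_satisfaction": 70, "student_attrition": 22,
                      "employer_satisfaction": 85},
    }
    for name, bonus in allocate_bonus(1_000_000, colleges).items():
        print(f"{name}: ${bonus:,.0f}")
```

Notice how easily such a scheme rewards whatever gets counted: an institution can raise its composite score by advertising only the flattering indicators or by redefining how attrition is measured, which is exactly the kind of gaming described above.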

Maclean’s magazine publishes a yearly ranking of Canadian universities that intelligent readers dismiss as “goofy journalism.” Perhaps people buy the issue for entertainment: to catch up on the controversies in postsecondary education, or to see whether someone is saying something bad about their choice of university. Are governments’ education accountability efforts going the same way as Maclean’s?