Assessment and evaluation conformity woes: a partial solution?

My stance on this subject is about as secret as it is neutral. I do not believe that teachers should have to use the same assessment and evaluation strategies simply because they are teaching different sections of the same course. To say that this practice ensures fairness may be accurate (although probably not) but this practice does not ensure equity. Here’s what often happens with this practice:

  • evaluations are determined well before students’ needs have ever been assessed
  • junior teachers are made to feel that they have to use the assessments of senior teachers because “they know better”
  • little to no differentiation based on student needs, strengths, or interests

In a perfect world, department members would find plenty of time to collaborate and constantly revise evaluations, but we all know how challenging it is to find this time.

So in an effort to please the powers that be who insist on uniformity across sections,* I’ve come up with a plan:

Using our computerized grade book program, “Markbook,” we can assign different mark sets. In the past, I created a Term mark set, a Final Exam mark set, and a Course Culminating Activity (ISP, CCA… etc. Choose your acronym) mark set. Each mark set was weighted according to the percentages we use to calculate the final mark.

  • Term: 70%
  • Final Exam: 15%
  • CCA: 15%

(These numbers are determined by our board)
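Markbook handles this weighting internally, but for clarity, here’s a minimal Python sketch (not Markbook’s actual code, and with hypothetical student marks) of how the three mark sets combine into a final mark:

```python
# Board-mandated weights for each mark set
weights = {"Term": 0.70, "Final Exam": 0.15, "CCA": 0.15}

# Hypothetical marks (out of 100) for one student
marks = {"Term": 80, "Final Exam": 70, "CCA": 90}

# Final mark is the weighted sum of the mark sets
final = sum(weights[k] * marks[k] for k in weights)
print(round(final, 1))  # 80.0
```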

So now, all I’ve done is add one more mark set. Ready for it?

Here we go!

  • Formative: 20%
  • Summative: 50%
  • Final Exam: 15%
  • CCA: 15%

See what I did there? It doesn’t solve all the problems, and of course we still need to strive for at least the “appearance” of uniformity, but… now it doesn’t matter if Teacher A records 15 different formative assessments and Teacher B records 4; the summative assessments will be worth the same because of their weighting.

See, this is where things were getting tricky in our department. We agreed that major assessments would be the same (well… I didn’t agree, but I don’t have a choice in the matter), but we also agreed that formative assessments could differ depending on the class (I did agree with this). But if Teacher A had 15 different formative assessments and Teacher B only had 4, then Teacher B’s summative assessments would be worth proportionately far more than Teacher A’s. Trying to get all the weightings to line up in Markbook is just ridiculous and doesn’t allow for much freedom in designing formative assessments UNLESS you do what I did.
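To make the tricky part concrete, here’s a hedged sketch of the arithmetic, assuming (for simplicity) that every recorded assessment carries equal weight inside a mark set, and assuming both teachers record 5 summatives:

```python
def summative_share_old(n_formative, n_summative, term_weight=0.70):
    """Share of the final mark carried by summatives when formatives
    and summatives are lumped together into one 70% Term mark set."""
    return term_weight * n_summative / (n_formative + n_summative)

def summative_share_new(summative_weight=0.50):
    """With a separate Summative mark set, the share is fixed by the
    mark set's weighting, regardless of how many formatives exist."""
    return summative_weight

# Teacher A records 15 formatives; Teacher B records only 4
print(round(summative_share_old(15, 5), 3))  # 0.175
print(round(summative_share_old(4, 5), 3))   # 0.389
print(summative_share_new())                 # 0.5 for both teachers
```

Under the old setup, Teacher B’s summatives end up worth more than double the share of Teacher A’s; under the new mark sets, both land at exactly 50%.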

So is it a perfect fix? No. But at least we can clearly show that regardless of the types and variety of formative assessments (or “rehearsals” if you like), the summative tasks (“performances”) are still worth the same percentage of the overall mark.

The only real challenge with this is that in the very early progress reports, the marks will be skewed (although they are anyway). So we might have to play around with the weighting of the mark sets in the early stages to give students and parents a more accurate understanding of their progress. By midterm, however, we should be able to use the actual weightings.

We’ll see how this goes! Let me know what you think of the plan, or if you’ve tried something similar.

*… for perfectly understandable reasons, I should add: Students and parents complain when there is a perception that one teacher is “marking differently” than another teacher. The perception is that students in one class are not receiving the same treatment as students in another class. Now, having students complete the same assessment doesn’t alleviate this problem; it just helps with the perception.