Student self-selection into elective courses tends to generate higher teacher course evaluation scores for electives than for required courses. Tenure and promotion committees often say they take this into account, but such mental adjustments are unscientific and often unconvincing. At the end of the day the numbers speak for themselves.
A better approach is to adjust the numbers statistically to account for factors beyond the professor’s control, such as gender bias, class size, and other qualitative and quantitative variables. This can be done using regression analysis. The whole point of regression analysis is to create a level playing field; after all, a regression estimates a conditional mean. It can produce a mean score that has been adjusted for potential biases and distortions.
For simplicity in creating indicator (qualitative) variables (also known as binary or dummy variables), it is convenient to code the category likely to have the higher score as a “1” and the one with the lower score as a “0”. An elective would be coded “1” while a required course would be coded “0”. A male professor might be coded “1” and a female professor “0” to account for gender bias.
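As a minimal sketch of this coding scheme, here is an ordinary least squares fit on a small, entirely hypothetical set of evaluation scores, with elective and male coded as 1/0 dummy variables as described above:

```python
import numpy as np

# Hypothetical evaluation data (scores on a 5-point scale).
# elective: 1 = elective course, 0 = required; male: 1 = male, 0 = female.
scores   = np.array([4.6, 4.2, 3.9, 4.4, 3.7, 4.1, 4.5, 3.8])
elective = np.array([1,   0,   0,   1,   0,   1,   1,   0])
male     = np.array([1,   1,   0,   0,   0,   1,   0,   1])

# Design matrix with an intercept column; ordinary least squares fit.
X = np.column_stack([np.ones_like(scores), elective, male])
coef, *_ = np.linalg.lstsq(X, scores, rcond=None)
intercept, b_elective, b_male = coef

# With the "higher score" category coded 1, each coefficient is the
# estimated premium for that category, holding the others constant.
print(f"intercept={intercept:.3f}, elective={b_elective:.3f}, male={b_male:.3f}")
```

Because the higher-scoring category is coded “1”, the dummy coefficients come out as positive premiums, which makes the adjustment arithmetic later in the article straightforward.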
A quantitative variable such as class size could be entered as the number of students for a straight-line regression. If students in very small classes are hesitant to criticize the professor because the professor might have a pretty good idea who they are, this will be a downward-sloping line.
On the other hand, it may sometimes be appropriate to include both the number of students and the number of students squared for a quadratic regression line. That approach might be useful when a very small class size generates higher scores but, somewhere around fifteen, twenty, or twenty-five students, the curve flattens out with no additional reduction for larger sizes. Other adjustments might be appropriate for very large auditorium classes, where students have lower expectations for student-teacher interaction.
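The quadratic shape described here can be captured by fitting both terms at once. The data below are made up for illustration, constructed so that scores fall steeply for small classes and level off past about twenty students:

```python
import numpy as np

# Hypothetical class sizes and average evaluation scores: small classes
# score higher, with the effect flattening out past roughly twenty students.
sizes  = np.array([5, 8, 10, 15, 20, 25, 30, 40, 60], dtype=float)
scores = np.array([4.8, 4.6, 4.4, 4.1, 4.0, 3.95, 3.9, 3.9, 3.85])

# Quadratic fit: score = b0 + b1*size + b2*size**2
b2, b1, b0 = np.polyfit(sizes, scores, deg=2)

def predict(n):
    """Predicted score for a class of n students under the quadratic fit."""
    return b0 + b1 * n + b2 * n ** 2

# A negative linear term with a positive squared term gives exactly the
# pattern in the text: a downward slope that flattens as size grows.
print(f"b1={b1:.4f}, b2={b2:.6f}, predict(10)={predict(10):.2f}")
```

The sign pattern is the point: the linear coefficient is negative (larger classes score lower) while the squared coefficient is positive (the penalty tapers off rather than growing without bound).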
To create a level playing field, each professor’s score could be adjusted to correspond to some baseline, such as the score a male professor teaching a forty-student elective would be expected to get. A professor teaching a required course could add the coefficient of the elective variable to his or her score, while a female professor could add the coefficient of the male variable to her score.
Finally, the difference between the predicted score for forty students and the predicted score for the actual number of students in the professor’s class could be added to the professor’s score. For a small actual class size this “added” number might be negative.
The result of all these calculations is a new score adjusted for the factors that are not under the professor’s control. If all appropriate factors have been taken into account, the new score will be free of the biases and distortions inherent in the original. The teaching performance of professors can then be evaluated on a level playing field with meaningful numerical comparisons.
Of course, it is important to mention that scores from the students should be just one of many considerations when evaluating a professor’s teaching performance. We all probably remember a course or two we took as students that was really entertaining but later proved to be less than insightful or meaningful. Yet some other course may have seemed a bit dull but turned out to provide the most educational value. Student input is important but does not by itself provide a complete picture. However, biased and distorted scores are never helpful.