As I was reflecting on what I should write about to end the quarter, I was also attending a workshop on reforming teaching evaluations. So, this post is not actually about how to get positive evaluations from students, but rather how to make the teaching evaluation experience more positive! Before I go further, I want to emphasize my belief that the changes Vice Provost O’Dowd and the Academic Senate Council on Academic Personnel have implemented in recent years have moved UCI in a very positive direction. After describing my thoughts on the general landscape, I will return to what I think we are doing at UCI that is so positive.

Before I explain my recent thoughts on evaluating teaching, it is worth reflecting on how we evaluate research. First, there is what I call the “formative” assessment phase of research. We present talks, have papers and books reviewed, have proposals reviewed, and discuss our research with colleagues and mentors. Throughout this process, we receive significant feedback on what goes well, what the challenges are, and perhaps even where we fall short. By the time we are putting together our materials for merit, promotion, and tenure, we highlight, include, and focus on the successes. We might discuss challenges, but always in the context of how we overcame or are overcoming them. The assessment of our research is fundamentally a strength-based, growth-oriented, and positive process. The entire review process is aimed at success. Of course, if an individual is falling short, we recognize it and hold them to standards, but the vast amount of formative assessment that occurs allows for the option of having that conversation outside the formal review process.

Now, for comparison, let’s consider the traditional view of evaluating teaching. This view is grounded in a few common experiences. First, most institutions have relied primarily on student evaluations. Though this is changing, student evaluations remain at the heart of our teaching evaluation culture. Many people use them for formative assessment and adapt their teaching accordingly, but this is a very individual activity that is not based on any formal process. When you change your teaching in response to student evaluations, no one really knows. After all, you are teaching a new set of students, so they often do not know what you changed, and because the change is not part of the formal review process, no one else knows either. This is very different from the communal nature of formative assessment in research.

Second, at a research university, teaching is a secondary consideration. As such, it almost always matters only if your teaching falls below a certain bar. Strong teaching generally does not have a meaningful impact. Therefore, even though you may be preparing your teaching evidence from a positive approach, the review process explicitly asks a negative question: are you below this bar? This is fundamentally a deficit-based culture focused on the idea that “bad teaching needs to be fixed or weeded out.”

When we compare the different assessment cultures, they prompt very different responses. I have rarely (in fact, possibly never) heard a faculty member claim that their research efforts should not be evaluated, scrutinized, and subjected to data-based study. However, I regularly hear concerns from faculty that the data being used in the teaching evaluation process might label them as “bad teachers.” Given the current underlying culture of teaching assessment, this is a rational concern. However, if we can flip the narrative and align the assessment of teaching with the assessment of research, and truly commit to this, I presume that we would embrace data and discussion about our teaching to the same degree that we do with our research! 

What would it be like to employ an assessment of teaching that mirrors our assessment of research? Enter the UCI approach (and many others that I could cite after attending the workshop on assessment of teaching—I point people to the TEval and TQF websites as starting points). Introducing a second piece of evidence, a reflective teaching statement, grounds the evaluation in the same strength-based, growth-oriented, and positive process that we are used to in assessing research.

The premise of a reflective statement is that in between review periods, a faculty member is engaging in formative assessment through a range of tools and experiences. For many of us, the only source of data is still the student evaluations. However, as we build out resources within the Division of Teaching Excellence and Innovation, we have growing spaces for faculty to access formative feedback and data on their teaching. This includes tools available through the Teaching and Learning Analytics dashboards (formerly UCI COMPASS) and various workshops, institutes, and learning communities. Faculty take this formative assessment and craft a presentation of their strengths (potentially including the challenges they have overcome or are currently overcoming) that forms the heart of a self-reflective statement. Additionally, well-implemented peer evaluation can play an important and effective role in this formative assessment.

The one piece that is probably still missing for some faculty is engagement with others in the community in the formative process between reviews. For our research, because the formative feedback always has a public element rather than remaining exclusively private, we are able to focus the merit and review processes in such an overwhelmingly positive way. Yes, we may still fall short of expectations, but there is rarely significant mystery about the process.

Engaging in a more public formative assessment of our teaching can have an equally positive impact on the merit and review process. Learning communities are a great start, and there are many other ways to do this. If colleagues in your department already know the formative feedback on your teaching, and you have been engaging with that feedback regularly, the reality of teaching assessment can match the vision of a true “three-bucket model” in which teaching has its own set of metrics, independent of research and grounded in a model of improvement. This does not change the relative time commitment (importance) of research and teaching for different faculty roles, but it ensures that teaching stands on equal footing with research from an evaluation perspective.

In other words, can we imagine a process that celebrates our research and teaching successes while maintaining standards in both, instead of one that celebrates our research successes and looks for our teaching failures? I would argue that we are doing just that at UCI—so let’s continue to do it and do it even better!