While there is a good deal of literature describing faculty development programs that have succeeded at individual institutions, there is scant literature on how to evaluate the success of faculty development itself. Many higher education institutions (both two- and four-year) appear simply to assume that traditional faculty development practices are effective. No business or profession would accept this state of affairs; each would insist on some form of evidence for the effectiveness of any professional development initiative.

There appear to be two primary reasons why colleges and universities pay little attention to evaluating faculty development efforts. First, the desired outcomes of faculty development are ambiguous. What are the goals of faculty development? Should we be attempting to develop the whole person, the disciplinary specialist, or the teacher? In the past, many schools have tried to do all three, but this simply dissipates their efforts and dilutes their outcomes. It is like trying to put out a house fire with a garden hose: you may save the hose, but you will lose the house. Second, it is difficult to determine the effectiveness of faculty development efforts. How does a faculty member’s attendance at a day-long teaching workshop or a disciplinary conference ultimately translate into better student learning?

These are difficult questions for community college leaders to answer, and each institution will need to grapple with them in its own way. However, at least part of the answer to both questions is for community college leaders to build a faculty development program around a coherent, cohesive mission and precise goals. Only then can they develop meaningful evaluation criteria: you must know what you want to happen in order to recognize it when it does. Once colleges have a clear destination in mind, they can develop evaluation procedures. When developing an evaluation plan, college leaders should allow for several types of evaluation. Each has strengths and weaknesses, but used in conjunction with one another they let community college leaders see the big picture more readily. Although most of the evaluation techniques that follow elicit subjective opinions, they can still provide useful data for faculty developers. A basic principle of adult learning is that adults tend to be motivated to learn when they believe their needs are being met; subjective forms of program evaluation can help faculty developers determine whether faculty members believe current programming is meeting their needs.

Types of Evaluation

Verbal feedback

Verbal feedback seeks the subjective opinions of participants; it does not reveal how an activity might change teaching behaviors or enhance student learning. It is the weakest form of evaluation because it depends on individuals willingly offering their opinions, and often only those with strong negative opinions do so freely. Nevertheless, since those with strong opinions can and do influence their colleagues, faculty developers should be responsive to any and all verbal feedback.

Open-ended written statements

Open-ended written statements are also a weak form of evaluation. They, too, elicit subjective opinions from participants and do not get at changes in teacher behaviors or student learning. They are, however, useful for soliciting honest and unanticipated responses that closed-ended questionnaires do not allow. Like verbal feedback, though, they tend to draw responses only from those with strong opinions.

Questionnaires

Questionnaires likewise elicit subjective opinions from participants and do not get at changes in teacher behaviors or student learning. One advantage of questionnaires is that they can be designed to gather specific information for improving particular aspects of a program.

Formal written reports

Formal written reports can be used when faculty participate in off-campus development activities, such as attending a conference. It is reasonable for an institution to ask those using college funds to submit some sort of report showing what they gained from participation. Nonetheless, formal written reports also convey subjective opinions and do not get at changes in teacher behaviors or student learning. Moreover, it can sometimes be difficult for a faculty member to express in writing the intangibles he or she gained from a conference.

Activity pre-test, post-test

A pre-test/post-test design allows faculty developers to judge whether participants gained knowledge and skills from a specific activity. However, it cannot tell us whether teachers will use their new skills and knowledge or whether these will actually impact student learning. There is also a danger that faculty members will resent having the results shared with others.

Testing student outcomes

Testing student outcomes may help us determine whether faculty development activities have an impact on student learning. Unfortunately, it can be very difficult to connect faculty development directly to improved student outcomes. To do so, we first need a clear and distinct statement of desired student outcomes. We must also keep in mind that we can rarely draw a straight line from participation in faculty development to improved outcomes: there can be (and usually are) numerous variables contributing to improvement, and teacher behavior may be only one of them.

Classroom observation

Classroom observation can allow us to see changes in teacher behaviors that might be attributable to participation in faculty development. Used alongside testing of student outcomes or some other means of assessing improved student learning, it can allow us to infer that participation in faculty development contributed to that improvement. Its disadvantages are that it is time-consuming and requires trained, objective observers.