EDUC 822
Simon Fraser University – Faculty of Education
Evaluation of Educational Programs
MEd Post-Secondary, VCC Cohort
Professor: Dr. Larry Johnson
Student: Kathryn Truant
June 5, 2020
True or false: “The best evaluations are conducted by those who know and care about the program and its effects on students and clients” (Wall, n.d., p. 2).
The above question is somewhat rhetorical because, for me, it raises personal and professional political issues. In the journal article, “Integrating participatory elements into an effectiveness evaluation,” Dr. Tanner LeBaron Wallace (2008) reports from the perspective of a “practicing evaluator” (p. 202) whose academic background is in social research methodology. The article is a case study of an effectiveness evaluation of an intervention. In it, Wallace explains the benefits of, and difficulties in, allowing internal stakeholders to participate actively in aspects of their own evaluation process.
For this reflection, I am focusing on the matter of trust and leadership within the participatory nature of the evaluation in Wallace’s article. The evaluation methodology prescribed is a “Participatory, Theory-Based Effectiveness Evaluation (PTBEE)” (p. 201), in which “stakeholder participation in the process of developing a theory of change that underlies a program provides a theory-based model of causal mechanisms and intervention effects” (p. 202). In other words, intervention and growth are based on theories projected by those inside the organization who supposedly know best. Trustworthiness is key. Elements from Wall (n.d.), Schweigert (2011), and Baron (2011) resonate here (and are likely to emerge throughout the remainder of this course; trustworthiness has certainly been an important theme in my reflections thus far in this MEd program). Why am I so cynical?
Participatory evaluation is truly collaborative only if the internal participants are free from self-serving interests and points of view and can approach the work with “a view from nowhere” (Nagel, 1986, as cited in Schweigert, 2011, p. 48). Otherwise, the evaluation becomes no more than a self-assessment. I believe that self-assessments are somewhat formative, especially when the stakeholders themselves are part of the evaluation process, but it is a slippery slope. Wallace argues that “stakeholders are a source of practical knowledge regarding a program, the community within which a program operates and range of diversity with program participants” (p. 201). It is the ‘range of diversity’ that concerns me the most. It is also important to note that,
“in this particular evaluation, stakeholders were not involved in determining the general evaluation question, but rather in collectively agreeing that exploring questions regarding effectiveness were worthwhile and also determining which aspects of the theoretical model were of the most practical and political relevance to the local community” (p. 202).
Data can undoubtedly be collected by an internal evaluation team, but having these same individuals “develop a theory of change” (p. 203) is risky. Wallace admits that “unfortunately, those in positions of greater power had stronger and more influential opinions” (p. 205), and that there were “problematic power struggles” (p. 205). She concludes that “one of the most significant challenges in involving stakeholders in substantive ways throughout the evaluation process was managing contributions from individual stakeholders” (p. 206).
Participatory stakeholders hold great responsibility. Wall (n.d.) is right in saying that the best evaluations are conducted by those who know, and care, and can see the effects of their actions; they cannot have a personal agenda or a preconceived trajectory for the outcome of the evaluation when effectiveness is at stake, unless they have unbiased data to substantiate their designs. A theory-driven evaluation model is employed to offer stakeholders autonomy, ensuring substantive participation that is not limited to basic reporting. These “specific approaches to evaluation were selected by the evaluator to encourage and facilitate the participation of stakeholders” (Wallace, 2008, p. 202). This brings me back to my concern with the trustworthiness of the stakeholders commissioned to participate in an evaluation. Perhaps it is not simply a matter of trust, but also a combination of lack of experience and perspective:
“Organizations’ exposure to evaluation may be limited depending on the background and experience of the employees and leadership of the organization, the priority of the evaluation within the goals of the organization, and the access to evaluation training and expertise” (Baron, 2011, p. 88).
Wallace also encountered issues delegating the collection of data to the internal stakeholders, for much the same reasons that they were unable to agree on a theory of change for the organization. Meetings were unproductive, and leadership within the organization was sorely lacking.
As a dental assistant, educator, and community volunteer, I have a great deal of experience with the dynamics of communication. I have participated in countless staff meetings, budget meetings, ad hoc committees, and round-table discussions. I have also been on the evaluee side of evaluation and accreditation at many different levels: clinically, in educational settings, and in my community. Good leadership, and especially trust in that leadership, is the key to effective communication, productivity, and positive change. Wallace’s article is an excellent cautionary tale. I understand that other factors contributed to the demise of the evaluation (geographical distance between the lead evaluator and the organization, physical restructuring, and financial limitations), and the article ends on an educational note, with Wallace acknowledging that much was learned from the evaluation’s shortcomings. She also continues to argue that the participatory aspect of evaluation, and the knowledge that internal stakeholders contribute, provide accountability within an organization, while placing great value on the contributions of individual stakeholders: “The potential for the improved validity of effectiveness evaluations via stakeholder involvement is a promising area for future research on evaluation” (p. 206).
My answer to the ‘true or false’ question is ‘true.’ I agree that the people most suited to carry out an effectiveness evaluation, and to reflect on what is best for an institution, are the people who have intimate knowledge of the workings of said institution (here comes my ‘but’); however, I also agree with Wall that “it is best to bring in a professionally trained individual or company to assist with the evaluation” (n.d., p. 2). In my opinion, Dr. Wallace did not have the experience or leadership capability to assist with this effectiveness evaluation; I have a feeling that she would agree.
References
Baron, M. E. (2011). Designing internal evaluation for a small organization with limited resources. In B. B. Volkov & M. E. Baron (Eds.), Internal evaluation in the 21st century. New Directions for Evaluation, 132, 87-89.
Nagel, T. (1986). The view from nowhere. New York: Oxford University Press.
Schweigert, F. J. (2011). Predicament and promise: The internal evaluator as ethical leader. In B. B. Volkov & M. E. Baron (Eds.), Internal evaluation in the 21st century. New Directions for Evaluation, 132, 43-56.
Wall, J. E. (n.d.). Program evaluation model 9-Step process. Retrieved from https://www.janetwall.net/attachments/File/9_Step_Evaluation_Model_Paper.pdf
Wallace, T. L. (2008). Integrating participatory elements into an effectiveness evaluation. Studies in Educational Evaluation, 34, 201-207. doi:10.1016/j.stueduc.2008.10.008