Kathryn Truant CDA MEd – a CDA's Role in Education and Oral & Maxillofacial Surgery

What is a supportive culture and how can it be cultivated and maintained within a small organization? – A case study

EDUC 822
Simon Fraser University – Faculty of Education
Evaluation of Educational Programs
MEd Post-Secondary, VCC Cohort
Professor: Dr. Larry Johnson
Student: Kathryn Truant
July 28, 2020


I am going to begin by editorializing and stating (based on decades of my own experience in dental practices) that organizational evaluations are not routinely done in the private sector unless the evaluation is solicited by the ‘owner’ of said organization. Small organizations are rarely formally evaluated because owners believe they can see all angles through their fisheye lens[1], but what they see can be distorted by the vantage point of ownership.

This case study is hypothetical, based on a new oral surgery practice that I am a member of, and one that I believe holds great potential, especially from an appreciative perspective. The owner will be referred to from this point forward as Dr. M. I am the lead evaluator in this case. I am new to the process of organizational evaluation, and my plan is to combine Janet Wall’s Program Evaluation Model 9-Step Process (n.d.) with “a contextually responsive approach to evaluating” (Manswell Butty, Reid & LaPoint, 2004, p. 41). Further, an action research method reflecting the Nominal Group Technique (NGT; Kiely, 2003), used to create a culturally focused approach to generating strategic and design dialogue (Banathy, 2008), will be well suited to an evaluation of this size, and will be instrumental in demonstrating how Dr. M (and his staff) define culture. From my perspective:

Becoming involved in the organizational culture allows the internal evaluator to build the evaluation capacity of the organization by helping members of the organization to see evaluation as a daily process, and to prevent or change bad habits and attitudes related to evaluation, such as ignoring stakeholders, performing cursory evaluations to satisfy compliance issues, and failing to prioritize evaluation in the organization (Baron, 2011, p. 91).


Dr. M recently started his own oral surgery practice after breaking away from a much larger group practice where his own philosophies on running a business were not welcome. This was partly because the large practice was well established before Dr. M joined, and mostly because the opinion of the other three surgeons was, ‘if it’s not broken, why fix it?’ The surgeons’ collective assumption that all was well within the organization did little to treat the staff as valued stakeholders. A culture existed in the large group practice where formative input from all staff members (including the newest surgeon) was not being considered.

So, Dr. M decided to go out on his own (and I made the decision to go with him). His goal is to create a supportive culture in his new practice – this is his vision. But how will Dr. M know he has achieved this culture without formal and formative evaluation methodologies put into place? How will he know if a model of praxis[2] is being realized? Remember that this is a new practice; I learned from our previous large group practice that assumptions and traditions lead to complacency. Wall defines evaluation: “an evaluation is a purposeful, systematic, and careful collection and analysis of information used for the purpose of documenting the effectiveness and impact of programs, establishing accountability and identifying areas needing change and improvement” (n.d., p. 1). Action research is a type of evaluation that focuses on improving practice, is based on values and experiences from an inside perspective, and is distinctly collaborative (Whitehead & McNiff, 2006).

Step 1: Purpose and Scope of the Evaluation

As the lead evaluator, I initiated this formative evaluation myself (and even though it is – at this stage – a hypothetical evaluation, I have Dr. M’s consent, with improvement as the goal). The stakeholders in question in this evaluation are the staff members. Dr. M believes that he has created a supportive culture for the staff within the organization. My objective is to create a formative, longitudinal evaluation system, which will take inventory from an appreciative standpoint and build from there. The most important part of this process will be to discover and incorporate meaningful dialogue (Kiely, 2003), and to formulate evaluation-building activities (Baron, 2011).

The Plan:

  1. Determine if the staff members believe that they are part of Dr. M’s supportive culture.
  2. Determine exactly what Dr. M means by supportive culture.
  3. Determine ways (activities) to maintain a supportive culture within the organization.

Step 2: What are the Evaluation Questions – What Do I Want to Know?

Approaching this evaluation from an appreciative perspective means that I need to avoid a deficit-based case study. An appreciative inquiry is an organizational intervention that “is premised on the belief that it is much faster and more straight forward to go through the front door of enthusiasm” (Ludema, Cooperrider & Barrett, 2001, p. 1). Dr. M’s ‘front door’ is his mission to create a supportive culture for his staff. I also believe that it is important to give an example (or two) of why I initiated this hypothetical evaluation in the first place. The first example reflects the exclusion of stakeholder input on a material level (Galperin, n.d.) – this account is entitled, We All Bear the Mark:

When Dr. M built the new clinic, the sink in the sterilization area was installed in the wrong place, and without consultation with those of us who use the sink – this may sound trivial, but a dental assistant stands over that sink scrubbing instruments for many hours each day. The sink is set into the counter too far, and there is a cupboard directly over the sink that we bump our heads on several times a day; this has become a bit of a joke now among my coworkers because our backs are sore, and we all have the same mark on our foreheads.

This resonated with me when I read An Educative, Values-Engaged Approach to Evaluating STEM Educational Programs (Greene, DeStefano, Burgon & Hall, 2006), which argues that the facility or physical structure of an institution is as important as the pedagogy and student diversity. Why were Dr. M’s staff not consulted on this important physical design aspect of the practice?

Another example, which brings verbal and mental actions into focus (Galperin, n.d.), is the monthly education meeting that Dr. M provides for his staff – this narrative is entitled, You Are Not Preaching to the Choir:

Once a month, Dr. M schedules a two-hour education session in which he expounds on the various procedures that we provide in our clinic. The problem is that he delivers the information far above the level of some of the newer (and less experienced) staff members, and he does not provide time for questions. I have also observed that my coworkers seem intimidated by the information, and do not want to interrupt by requesting clarification. Dr. M is lecturing at a very advanced level, so my coworkers often leave these sessions feeling confused.

Why does Dr. M feel that he is creating a supportive culture when he does not consider input from his staff? He has excellent intentions, but he is not sharing his vision. Both of these examples are reflective, and in no way contribute from an appreciative standpoint. It is my intention for the evaluative process from this point forward to be reflexive.

Step 3: Specify the Design

I have determined that Dr. M and his staff will be surveyed with a short questionnaire, and I plan to use a form of Kiely’s NGT (2003). Because Dr. M’s practice is new, I want to create a design that evolves formatively and longitudinally. Figure 1 best represents the evaluation model, incorporating action research, that I will present to Dr. M:

[Image: SADIM diagram]
Figure 1. SADIM (Taylor, 2013)

Step 4: Create the Plan

In What Works for You? A Group Discussion Approach to Programme Evaluation (2003), Kiely introduces a collaborative approach to program evaluation in which action research is directed by activities that advance improved understandings of a program. This evaluation begins with a survey. The questions will be posed to Dr. M and the staff, and will (hopefully) reflect the context in which Dr. M intends to define culture. At this stage, I will encourage the participants to be candid and honest. I believe that engaging stakeholders will be the trickiest part of this evaluation. Involving all stakeholders from the beginning of the evaluation (sharing the plan and including them in framing the questions) will show the participants that they are essential and play a critical role in the entire process (Manswell Butty et al., 2004). I believe that limiting the questions to three will maintain the focus of the evaluation and make it simple for the stakeholders to participate.

The Questions:

  1. In your own words, define supportive.
  2. In your own words, define culture.
  3. Can you suggest ways to contribute to a supportive culture?

Step 5: Collect the Responses

The NGT method (Kiely, 2003) instructs the evaluation participants to complete the questionnaires individually. For this evaluation, the next step will be to form small groups of two or three stakeholders (Dr. M included) to collectively share and amalgamate their responses. In Kiely’s model (2003), the lead evaluator does not interfere in this process; however, there is a risk in any group discussion that the stronger personalities will prevail. It is at this stage that, as the lead evaluator, I will intervene by providing a brief tutorial on How to Listen in Skillful Discussion (EDUC 822, n.d.). It is extremely important to establish ground rules to “generate collective meaning” (Banathy, 2008, p. 1).

Step 6: Document and Analyze the Responses

I will deviate from Wall’s process at this stage by combining the documentation and analysis phases; step 6 of this evaluation amplifies the collaborative aspect of the action research model. The documentation process involves each group appointing a spokesperson who will communicate the responses in an open forum, while I make lists in point form on the whiteboard in our staff room without comment. The analysis phase is the melding of collective ideas from each group. Responses will be clarified by the groups, and a cultural theme will (hopefully) emerge.

Step 7: Disseminate and Interpret Feedback

Manswell Butty et al. define culture “as the shared values, traditions, norms, customs, arts, history, folklore, and institutions of a group of people” (2004, p. 39). Continuing in the spirit of the collaborative aspect of action research, and because I am evaluating a new practice, propagating a culture (as described above by Manswell Butty et al.) will foster and illuminate Dr. M’s vision. At this stage in this hypothetical evaluation, I cannot predict exactly what that vision will be, and it is ultimately up to Dr. M to decide how to assimilate the results of the survey, and the subsequent collaborative recommendations.

Step 8: Rinse and Repeat

As the owner of the practice, Dr. M must decide how often an action research evaluation should be conducted. The survey will need to evolve and reflect the implementation of results from previous surveys. The process will begin again with the staff (and Dr. M) being solicited to form a new instrument. Also, due to the collaborative nature of this evaluation, a timeline will need to be agreed upon for the implementation of results, recommendations, and ongoing evaluation.


The design of this evaluation is dynamic because the intended results will evolve over time. Initially, the evaluation is being conducted to determine what Dr. M means by creating a supportive culture in his clinic. I believe that this collaborative, dialogue-generating evaluation alone signifies a supportive culture: the questionnaires and the formative dialogue that ensues will enact positive change driven by stakeholders.

I think it is also important to speak to the unintended results of this evaluation. I cannot speak for my coworkers, but personally, I like having a voice; it is empowering to know that my opinion is welcome and that I can be involved in enacting positive change. I also worry about strong personalities (and the quieter ones too) in this process, and it is important to address this early in the evaluation (thank you to my professor, Larry Johnson, for iterating and reiterating the importance of skillful discussion); I am learning that the evaluation process will improve my own communication skills! Another unintended consequence that I need to consider is the commitment required of Dr. M, and the time needed to perform an internal evaluation. Additionally, by initiating this evaluation from an appreciative perspective, I hope that Dr. M will not be offended, given his standpoint as owner. Finally, and in Dr. M’s defense, I am going to conclude with a note from Baron:

As people implement a new business idea, there are many steps to take into account. These steps may include establishing the business entity, writing a business plan, acquiring licenses, hiring personnel, purchasing equipment, promoting the business through networking and other marketing techniques, and a host of other tasks. Evaluation is often the last thing on their minds. Notwithstanding, establishing a new business is a key time to start instilling evaluation into individuals and the organization (2011, pp. 89-90).


[1] fisheye lens: a phrase meaning a distorted wide-angle view of an entire periphery

[2] praxis: a noun describing the enactment of theory in practice


Banathy, B. (2008). The conversation movement. In P. M. Jenlink and B. Banathy (Eds.), Dialogue as a collective means of design conversation (pp. 25-38). New York: Springer.

Baron, M. E. (2011). Designing internal evaluation for a small organization with limited resources. In B. B. Volkov & M. E. Baron (Eds.), Internal evaluation in the 21st century. New Directions for Evaluation, 132, 87-89.

EDUC 822. (n.d.). How to listen in skillful discussion (or any time) [handout]. Vancouver: Simon Fraser University, EDUC 822, MEd Program.

Galperin, P. (n.d.). Three basic levels of the action [handout]. Vancouver: Simon Fraser University, EDUC 822, MEd Program.

Greene, J. C., DeStefano, L., Burgon, H., & Hall, J. (2006). An educative, values-engaged approach to evaluating STEM educational programs. New Directions for Evaluation, 109, 53-71.

Kiely, R. (2003). What works for you? A group discussion approach to programme evaluation. Studies in Educational Evaluation, 29(4), 293-314.

Ludema, J. D., Cooperrider, D. L., & Barrett, F. J. (2001). Appreciative inquiry: The power of the unconditional positive question. Graduate School of Business & Public Policy (GSBPP).

Taylor, J. P. (2013, March 8). SADIM [infographic]. Retrieved from http://permaculturediploma.blogspot.com/2013/03/sadim.html

Whitehead, J., & McNiff, J. (2006). Action research living theory [handout]. Vancouver: Simon Fraser University, EDUC 822, MEd Program.
