By Caitlin Mapitsa
At the recent conference of the South African Monitoring and Evaluation Association (SAMEA), I attended a stream on lessons learned from evaluation case studies. At CLEAR, we often struggle with how to codify and synthesize our evaluation tools, approaches, and experiences, so I joined the stream hoping to learn from other organisations’ case studies.
My impressions from the stream are as follows:
- The range of methodological approaches, multidisciplinary collaboration, and possibility for innovation within the stream was impressive. Whereas in the past most evaluations came from the health and education sectors, this conference drew on a much wider range of sectors and approaches.
- Most of the methodological innovations taking place are contextual responses to a lack of data. Presenters are drawing long-term impact conclusions without long-term impact studies, and constructing counterfactuals without experimental designs. In essence, they are practising ‘real world’ evaluation methods with creativity and methodological rigour.
- Lastly, more of the emerging case studies are not only about conducting evaluations, but about building evaluation systems. While this is a welcome progression, it creates some disjuncture within the stream, because methodological case studies are usually presented by evaluation practitioners, while systems are usually built by commissioners and users.
CLEAR has been aware of this gap between users and producers of evaluations for some time, and has looked for ways to advocate for narrowing it. Based on what I learned at SAMEA, I think a case study approach offers an ideal opportunity to bring together different viewpoints across the evidence divide. Through case studies, it might also be possible to curate different perspectives on the same situation and see whether this moves the conversation forward.