Can we cut the Gordian knot once and for all and decide whether we need accreditation, credentialing, and certification?

A few weeks ago, I wrote a piece on professionalizing evaluation that attracted a great deal of attention. Many of you contributed interesting views, suggestions, and concerns, indicating that the subject is "alive and well".

But the debate has gone on a long time, spanning articles in the American Journal of Evaluation when I started out in evaluation 25 years ago, to recent features in the Canadian Journal of Program Evaluation and in New Directions for Evaluation. The longevity of the debate and the absence of any obvious consensus lead me to ask whether the subject is so fraught that no solution is likely, and whether there is little sense in expending more effort on it.

But, as Wilcox and King explain, professionalization has taken significant time to come about for many other professions, from carpenters to doctors, moving through what seems to be a common evolutionary process: the establishment of professional associations, standards, and systems for legitimizing practitioners.

And, as anyone practicing, researching, or teaching evaluation knows, evaluators are a diverse lot, representing strong and often divergent views of what is right or wrong. So maybe we're not doing too badly, given that our profession has been recognizable as such for less than 60 years, and maybe we shouldn't expect to have solved all of the problems just yet!

From my perspective, there is a range of pressures and factors at play that are nudging us, as a global evaluation community, ever closer towards professionalization.

These include pressures to improve the quality of evaluations.

  • Effectiveness of Evaluation. Users of evaluation are increasingly inclined to ask questions about what difference evaluation makes, about value for money, and about evaluation quality. These legitimate questions will intensify and push us to better demonstrate the effectiveness of our work and how we influence change – the competence of evaluators has a big role to play in this.
     
  • Delivering timely, high quality evaluations. For some time, commissioners of evaluation have struggled to get the right people for the right jobs. At least part of the drive towards professionalization is motivated by frustrations with the costs – not just in money, but also in time, reputation, and effort – associated with poor quality evaluations.

The risks to the users of evaluation findings – making ill-informed decisions and suffering the consequences – are real, and so are the risks to the evaluation profession. If evaluation cannot positively influence change – and evaluation quality is an important determinant of that – its own relevance might be questioned.

As evaluators, our aim needs to be to deliver commonly defined, higher-quality evaluations of a type that new entrants to the profession can train for, build upon, and develop further. And that brings me to what might be a turning point in the professionalization debate: the formal training of professional evaluators. LaVelle & Donaldson (NDE 2015) discuss the strengths and weaknesses of the current supply of evaluators. They observe that, while evaluation continues to be embedded in faculties like education or public sector management, the last decade has seen a remarkable expansion of degree programs (master's and doctorate).

The cadre of professionals graduating from these programs will shape the formation of the profession. In spite of the weaknesses in the current "supply chain" of evaluators, these graduates will have a lot more in common than, say, those who enter evaluation as part of a broader career in the field of international development. Many of them are keenly interested in international development and understand the potential of evaluation to effect positive change. They will become more and more demanding of their peers, and will more readily agree on shared concepts, understandings, and practices. I am also confident they will be mindful of the importance of recognizing diverse value systems and local context, and of codifying competencies and codes of conduct so that these dimensions of our work receive more systematic focus than they do at present.

In short: I believe the move towards professionalization is inevitable and, if managed wisely, can bring an incredible boost to the evaluation profession.

But do we need to wait for the new generation of evaluators, or is there something we can do to spur the process? For instance, LaVelle speaks about weaknesses in the formal education system for evaluators. A concerted effort from evaluators in academia, research, and practice could strengthen the offerings in degree courses for evaluation across faculties. Likewise, the Canadian experience shows the importance of broad-based consultation, which, although time-consuming and complicated, is essential to create ownership.

I will be discussing these questions at the CEval conference in Germany in a couple of days, where I will also present evaluation competencies and their use at the WBG. So, notwithstanding the longevity of the debate about professionalization, it appears to be far from over. Stay tuned!

Comments

Submitted by Susan Stout on Tue, 06/09/2015 - 07:00

Thank you for keeping us updated on thinking along these lines. I hope the discussions in CEval are constructive, and will be interested in any discussions/conclusions/suggested next steps that emerge concerning how to ensure skill and competency in helping developing country institutions learn about how to put monitoring as well as evaluation tools to use - beyond the unfortunate focus on feeding various donor beasts information about 'their' results.

Submitted by Caroline Heider on Wed, 06/10/2015 - 05:45

In reply to Susan Stout

Susan, thanks for drawing attention to something that is really important: evaluation is no longer a simple donor-driven exercise. Over the past 10 years or so, I have seen a marked increase in demand for evaluation from countries that borrow or receive grants from multilateral institutions, like the multilateral development banks or the UN. An impressive example was the recent panel I chaired at the African Development Bank's annual meetings, where we had many participants from African countries who all wanted to know how evaluation could inform governments and other stakeholders in their countries to ensure they get better results from investments.

Submitted by Benoît Gauthier on Tue, 06/09/2015 - 00:01

Thank you, Caroline, for another insightful analysis. Yes, common training is a key factor in building an esprit de corps and in ensuring a base level of knowledge – in the long run. Can we wait that long? I don't think so. That's why the Canadian Evaluation Society (CES) tackled this issue starting in 2006, based on survey data that showed that there was an appetite for a system that would be indicative of competence (for those who commission evaluations and hire evaluators, as well as for evaluators wanting to make a demonstration of their commitment). The initial reactions to the credentialing project were mixed, so CES put in place various types of consultation mechanisms that were mainly aimed at understanding the concerns so that the credentialing program could be built in a way that factored in these concerns. I think that, in large part, CES has succeeded in addressing these negative views, because the vast majority of CES members (and non-members) consider the program to be a positive addition. Yes to consultation, but I suggest that it should be seen as a tool to design a professionalization program that is sensitive to the needs and the particulars of its implementation environment.

Submitted by Caroline Heider on Wed, 06/10/2015 - 05:47

In reply to Benoît Gauthier

Benoit, many thanks for sharing your experience and adding to the discussion. You and CES have a lot to say about this process, and I appreciate your contributions here and as our guest blogger. The inclusive approach you have taken sets an excellent example of what we need to follow at the global level, even if it is hard to bring everyone into "one tent".

Submitted by Tessie Catsambas on Wed, 06/10/2015 - 06:46

Thank you, Caroline, for the encouraging thought of future evaluators emerging with more standardized skills, so that when hiring an evaluator, a client can be assured of a minimum skills package. What will we do in the intervening years? For one thing, we may continue to look for the impossible that CES seems to have achieved: exclude the unqualified, but include everyone, and discourage no one. Yet, there are still some really inadequate or harmful evaluations out there. The good news is that clients are getting smarter about what to require (beyond subject matter expertise). Parallel to the credentialing saga, let's have a discussion on what we consider excellent evaluation. The theme for the American Evaluation Association's annual conference in November in Chicago is "Exemplary Evaluation." To inform our discourse, we have solicited submissions of examples of exemplary evaluation from the AEA's more than 50 Topical Interest Groups. We are planning to share these in different ways to spark discussion. So, even as we continue to talk about credentialing, let's also think about how to educate clients (and each other) on the range of evaluations we consider to be excellent. There are several experienced evaluators among us who have done meta-evaluations and assessed evaluation quality for funding organizations. Might one of them publish a synthesis report to inform a discussion on what excellent evaluation is, and what is reasonable for clients to expect?

Submitted by Caroline Heider on Wed, 06/10/2015 - 08:16

In reply to Tessie Catsambas

Tessie, great contribution. And YES: having discerning "consumers" of evaluation is an essential part of driving high quality. That's why I think the focus in higher education should not only be on producing high-quality evaluators, but should also include courses on the appreciation and use of evaluation in other faculties, like governance, public sector management, and other subject-matter courses. I also like your idea of asking what "exemplary evaluations" look like, but would caution that we need to get to principles, to avoid practices being copied from one evaluation to another even when a particular design feature is not appropriate. Lots more to discuss and write about. Thanks for contributing to a stimulating discussion.
