When I first wrote about Value-for-Money in the Evaluation Business, a number of people commented on the importance of addressing the question of “value for whom,” and whether evaluation was generating value for a diverse set of stakeholders.

As I’ve written in the past, knowing your stakeholders matters to me as an evaluator. To be effective, we need to understand who our stakeholders are, what is at stake for them, and what they want to get out of the evaluation. So, what value are we creating for stakeholders, and can we please them all?

Yes and no.

We have a range of stakeholders that look to us for independent evaluation and evidence about what works and what needs fixing.

Within the World Bank Group our main stakeholder groups are:

  • The Bank Group’s shareholders, represented on the Board, who look to IEG, use our work to engage with senior management on strategic issues and policy directions, and rate our work highly for its usefulness;
     
  • Senior management, which derives the highest value from our major evaluations and from our Results and Performance report, which summarizes performance trends of the portfolio overall, also with the aim of raising strategic issues. Feedback from this group about the quality of our work, and hence its value added, has decreased, and we are working hard to improve our standing. Recent evaluations, and how they were received, indicate we are on the right track, but we will follow up with our client survey next month; and
     
  • Colleagues who work in the World Bank Group’s country offices and regions on country engagements, and in the new Global Practices and Cross-Cutting Solution Areas handling projects. Signals from this group are mixed. On the one hand, the client satisfaction survey tells us that operational staff value our work; on the other, we often get individual questions that show dissatisfaction.

Answering the right questions

The comments on the first value-for-money blog post raised other concerns, namely whether we were answering the right questions or had come too late in the process to make a difference. These are valid concerns, and they arise in all evaluation functions, not just in IEG.

Some of these issues come from different expectations of what evaluation can do. For instance, we are often asked why we cannot provide hands-on support to improve quality. The simple answer is that doing so would interfere with management’s responsibilities and undermine our independence.

When it comes to asking the right questions, we share the approach papers for our major evaluations and the concept notes for our major learning products with stakeholders and ask for feedback. At the end of the evaluation process, we have a similar interaction to ensure we get the facts right and that the recommendations make sense.

Clients and users

The users of our evaluations also go beyond the World Bank Group, even though we do not have direct client relationships with them and rely on Bank Group operational staff to follow through on lessons and recommendations.

These stakeholders include civil society organizations and other bilateral and multilateral organizations that rely on us as an arm’s-length institution that is close enough to see what is going on, yet removed enough to have an independent, evidence-based view.

Does this mean we serve every stakeholder and answer every question? Certainly not!

Reflecting on the value-for-money question, there are three ways in which we as evaluators can push ourselves further:

  • We can make sure that the questions we ask in our evaluations home in on specifics that deepen the understanding of results and past experience;
     
  • We can ask ourselves tougher questions about what difference our recommendations will make once implemented, and what added value they will create; and
     
  • We can capitalize on our work by making it more accessible to different user groups, creating greater value for them, at relatively little cost to us.

Comments

Submitted by Anonymous on Wed, 10/29/2014 - 06:04

For me the evaluation question is: If we can produce the same output with two different activities having different costs, why should we spend more?

Submitted by Caroline Heider on Thu, 10/30/2014 - 04:05

In reply to Anonymous

I agree with you, but would go even further: if we can produce the same outcome with two sets of interventions or activities, we should aim to use the most efficient one. In a simple case this is straightforward: purchasing the same book from different (online) bookstores, for instance, might get you the identical product at a lower price. In development, there are many combinations of interventions that contribute to outcomes, and likewise for evaluation.

Submitted by Karel on Wed, 10/29/2014 - 21:15

As evaluators, we all understand the meaning of what we call indicators. I have learned to spot a few during my career: Stakeholders who value your work will increase, or at least maintain, your financial and personnel means. They will ask for an evaluation before they take a decision, and not ask you to validate their decision with an evaluation. They will at least accept conclusions written by highly qualified specialists instead of challenging them. They will understand that good evaluation work cannot be delivered in one day. If stakeholders do not value your work in any of the ways mentioned above, then maybe it is time to close our evaluation office and spare the costs. The question "is this money well spent?", meaning "will this benefit the (or even one single) poor?", can also be applied to many other activities in development circles. I am thinking, for example, of international gatherings, high-level meetings, etc.

Submitted by Caroline Heider on Fri, 10/31/2014 - 04:51

In reply to Karel

Karel, yes, the clients who value evaluations and use them for decision-making are optimal. But, at least in my experience, there is no single stakeholder, and views on value for money differ depending on what is at stake. For instance, the executive board values an evaluation for different reasons than management might. And sometimes the least welcome evaluations are those that are most needed and will be most influential, even though they have to overcome a lot of hurdles. And, of course, VfM is not a question that concerns evaluation alone; it needs to be applied to many other development interventions and activities, as you mention.

Submitted by Thilo Hatzius on Wed, 10/29/2014 - 23:06

By listing the three stakeholder groups and indicating that (1) rates your work highly for its usefulness (a fig leaf!), (2) reports that the quality of your work, and hence its value added, has decreased, and (3) individually often asks questions that show dissatisfaction, you implicitly give the answer to the question of the usefulness of your work... don't you?

Submitted by Caroline Heider on Fri, 10/31/2014 - 04:57

In reply to Thilo Hatzius

Thilo, interesting comment, but I would rather see it as a discussion of the areas we need to take into account when thinking about VfM, including how we can increase value while decreasing cost. The other point I was trying to make is that different stakeholder groups have different appreciations of evaluation work, in part depending on what is at stake for them. In estimating VfM we will have to think about how to take these different views into account. For instance, if all stakeholder groups consider something to be of high (or low, for that matter) value, the aggregation is easy. The challenge arises when you have different perspectives and have to attach weights to the views of different stakeholder groups.
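
To make the aggregation point concrete, here is a minimal sketch, in Python, of what such a weighted roll-up could look like. The stakeholder groups, ratings, and weights below are purely illustrative assumptions, not IEG data or an IEG method:

    # Illustrative sketch only: groups, ratings (1-5), and weights are made up.
    ratings = {
        "board": 4.5,               # each group's view of an evaluation's value
        "senior_management": 3.0,
        "operational_staff": 3.5,
    }
    weights = {
        "board": 0.4,               # relative weight given to each group's view
        "senior_management": 0.3,
        "operational_staff": 0.3,
    }
    assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights should sum to one

    overall = sum(weights[g] * ratings[g] for g in ratings)
    print(f"Weighted overall value rating: {overall:.2f}")  # 3.75

If all groups gave the same rating, the weights would not change the result; the real difficulty, as noted above, is justifying the weights when perspectives diverge.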

Submitted by Julian King on Fri, 10/31/2014 - 08:02

Are our evaluations providing value for money? What an important question, and one that is seldom asked. I think this is tied to: i) the quality of the evaluation, for example the validity, utility, credibility, and ethicality of our evaluations (as per Scriven's KEC); ii) the specific question of whether the evaluation is provided at an appropriate cost; and iii) critically, the relationship between the resources invested and the quality of the evaluation.

As an example of (iii), if a small additional investment in an evaluation would significantly enhance its quality, we might get better VFM by increasing the investment. Conversely, if we could provide an evaluation that was just as valid, useful, credible, and ethical by doing it another way, at lower cost, we could derive better VFM by doing so.

At the American Evaluation Association Conference in Denver this year, I presented a theoretical foundation for the evaluation of value for investment, integrating economic and evaluative thinking. My e-book is available at www.julianking.co.nz/downloads/

It is only by understanding the resources invested ('what did we put in?'), the value derived from the investment ('what did we get out?'), and by having a robust basis for reconciling the two and reaching a clear, well-reasoned conclusion ('was it worth it?') that we can reach a comprehensive assessment of value for investment. We can do this with or without economic methods, depending on context. Either way, good evaluation of value for investment is underpinned by evaluation-specific methodology, including evaluation logic, context-appropriate valuing, and program evaluation standards. The current separation of evaluation and economics is sub-optimal. If we can get smarter about combining them, we will get better at understanding the value of our investments.

Submitted by Caroline Heider on Fri, 10/31/2014 - 06:59

Julian, many thanks for your well-reasoned contribution. Patricia Rogers had pointed me to your presentation: thanks for tapping into the blog and for sharing both your thoughts and the link to your e-book. Very helpful. We will be running another piece on cost, where I would be interested in your views. Hope to see you back on our blog.

Submitted by Meg on Wed, 11/05/2014 - 21:46

Julian's point is spot on. It's part of the problem of defining and measuring the numerator, "value." Sometimes funding for development programs does not prioritize getting a good handle on value, because doing so can be complicated and costly. Those proposing the program have less of an incentive to emphasize it as well, since they may be given a funding ceiling, and putting more resources toward monitoring and evaluation reduces program funds. Later, we find that the validity and credibility of the estimated value are subject to much debate. Putting the concept of "value for money" into the wrong hands can mean "penny wise and pound foolish".

Submitted by Caroline Heider on Mon, 11/10/2014 - 06:24

In reply to Meg

Thank you, Meg, for making this important point that sometimes money is saved at the wrong time and in the wrong places. The concept and tools are not wrong in themselves; it is how we use them. That is a great responsibility for every one of us.

Submitted by S Chakravarty on Wed, 10/29/2014 - 22:46

This assumes that the prices at which the costs are calculated are unique, deriving from a process that commands respect, and impound all relevant information. Do we believe that?

Submitted by Caroline Heider on Fri, 10/31/2014 - 04:47

Chakravarty, your comment illustrates why it is even more complex in development to estimate VfM, whether for development interventions or for their evaluation. But I would venture that, even with all the imperfections we might encounter, it is worth trying, learning, and improving upon it.
