The following pointers are a summary of the suggestions provided by members of Evaltalk, the American Evaluation Association Discussion List in June 2002. These suggestions are based on the assumptions that the primary purpose of evaluation is to improve programs or initiatives, and that people are more open to learning and change when they are not feeling threatened.
1. Use a participatory approach from the start.
Engage stakeholders in describing program logic, defining evaluation questions,
identifying indicators of success, and selecting appropriate data collection
methods and tools. When these are defined by stakeholders, evaluation results are
more likely to be in line with their expectations.
2. Discuss possible negative results in the early contracting and design stages.
Encourage clients or stakeholders to articulate, early on, their concerns and
expectations about what the evaluation might reveal, and plan with them how best
to handle negative results if they do occur.
3. Inform clients immediately and often - a 'no surprises' approach.
The worst way for people to learn about negative results is in the evaluation
report or in a near-final presentation. As soon as any negative results begin to
emerge, gently inform the client through a phone call or a meeting. Continue to
communicate this both formally and informally as the evaluation progresses. This
approach provides time for people to come to grips with negative findings, to
decide how to handle them, and to question the methods or data while there is still
time to make adjustments.
4. Build in time for course correction.
Recognize from the start that negative findings may occur, and build time into the
evaluation plan for clients to initiate action to address them before the
evaluation is complete. The final report can then tell the positive story of how a
problem was identified and has been corrected.
5. Question the evaluation plan.
In cases where evaluation questions, indicators or data collection tools have been
imposed on the program, question whether they are appropriate. If not, develop
alternative criteria and tools, and tell both stories: how the imposed methods show
no progress but locally relevant methods do.
6. Emphasize the positives.
Every initiative will have some positive results, even if they are not very
relevant to the funders' priorities. Make sure that your evaluation captures all
positive outcomes, and highlight these. Begin and end reports and presentations
with the positives, sandwiching the negative findings in the middle.
7. Tell the truth.
Ethically, negative findings must be fully reported. Most stakeholders will
already be aware of the problems and will appreciate that these have been brought
into the open and can now be addressed.
8. Present results in terms of lessons learned.
Identify what is working, what might need tweaking, and what needs to go back to
the drawing board.
9. Provide suggestions for addressing deficiencies.
Provide clients with concrete suggestions for addressing the issues, drawing on
your own experience and the research literature. Refer to best practices and to
how others have successfully handled similar issues. When available, provide
contacts who have agreed to speak with them about how they dealt with these issues.
10. Involve stakeholders in identifying obstacles and ways to overcome them.
There are often many good reasons why work has not been carried out as planned or
objectives have not been achieved. Use a participatory process such as a force
field analysis to engage stakeholders in identifying what internal and external
forces were working against them, and describe these in your report. Involve
stakeholders in identifying ways to overcome these hindering forces and to
strengthen the forces that support their work.
Catherine Bingle Barrie, Ontario CANADA
Bill Collins
Susan Lilley
Joyce Morris Oklahoma City, Oklahoma USA
Burt Perrin
Eileen Stryker