Contents
- FAQ Template [Insert Topic here]
- Introduction
- Keywords
- Detailed Description
- KM4Dev Discussions
- Examples in Application
- Related FAQs
- Further Information
- Original Author and Subsequent Contributors of this FAQ
- Dates of First Creation and Further Revisions
- FAQ KM4Dev Source Materials
FAQ Template [Insert Topic here]
Introduction
[A general introduction to the topic – no more than 1-2 paragraphs]
Keywords
Monitoring, evaluation, impact assessment, indicators
Detailed Description
[the meat of the topic – a clearly, crisply communicated summary of the topic. Where relevant, a brief story – no more than 1-2 paragraphs – of how this topic has been turned into practice, ideally from the KM4Dev archives. If the example is long, move it into a separate subsection]
KM4Dev Discussions
Three distinct levels are apparent in the various discussions about monitoring and evaluation that have taken place on the KM4Dev platform.
The first level focuses on the monitoring and evaluation within KM activities. These are largely numerical summaries of outputs. For example, one member asked for ways of monitoring electronic discussions and online chats, and collated the following as useful indicators:
1- For electronic discussions:
- user transaction reports (to see when members visit the CoP site, etc.)
- number of comments/posts etc.
- lurk to post ratio
- document creation and upload to libraries/archives/lessons learnt/databases etc.
- qualitative results from surveys
2- For online chats:
- Unique Chat Hosts
- Unique Chat Users
- Unique Active Chat Users
- Unique Chat User Duration
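To illustrate, the discussion-level indicators above (post counts, lurk-to-post ratio) could be derived from a platform's activity log. This is a minimal sketch; the record layout, member names, and field names are invented for the example, since a real CoP platform would expose its own reporting data.

```python
from collections import Counter

# Hypothetical activity log: one record per post, plus the member roster.
posts = [
    {"member": "amina"}, {"member": "amina"}, {"member": "joao"},
]
members = ["amina", "joao", "kofi", "lin"]  # all registered CoP members

posts_per_member = Counter(p["member"] for p in posts)
posters = set(posts_per_member)                      # members who posted at least once
lurkers = [m for m in members if m not in posters]   # members who only read

total_posts = len(posts)
lurk_to_post_ratio = len(lurkers) / len(posters)     # here: 2 lurkers / 2 posters = 1.0

print(f"posts: {total_posts}, lurkers: {len(lurkers)}, "
      f"lurk-to-post ratio: {lurk_to_post_ratio:.1f}")
```

A falling lurk-to-post ratio over successive reporting periods would suggest the community is becoming more participatory, which is the kind of trend these numerical summaries are meant to surface.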
Other indicators were geared more towards specific knowledge strategies, focusing on ways of tracking the utilisation of knowledge and learning tools, for example:
- Use of the intranet, tracking measures such as hits
- Application of After Action Reviews (AARs) and Peer Assists in projects
- Number of informal knowledge sharing sessions
- Number of exit interviews and handovers
All of the above might usefully be tracked as percentages, e.g. the % of projects with AARs.
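The percentage framing suggested above could be computed from simple project records. The project names and practice flags below are hypothetical, chosen only to show the calculation.

```python
# Hypothetical project records flagging which KM practices each project applied.
projects = [
    {"name": "water",  "aar": True,  "peer_assist": False, "exit_interview": True},
    {"name": "health", "aar": True,  "peer_assist": True,  "exit_interview": False},
    {"name": "roads",  "aar": False, "peer_assist": False, "exit_interview": False},
    {"name": "energy", "aar": True,  "peer_assist": False, "exit_interview": True},
]

def pct_with(practice: str) -> float:
    """Percentage of projects that applied the given KM practice."""
    return 100.0 * sum(p[practice] for p in projects) / len(projects)

for practice in ("aar", "peer_assist", "exit_interview"):
    print(f"% of projects with {practice}: {pct_with(practice):.0f}%")
```

Reporting these as percentages rather than raw counts makes them comparable across organisations or time periods of different sizes.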
The second level of the discussions focused on the difficulties of monitoring and evaluating the impact of KM strategies. Here the discussion focused less on solutions and more on how difficult this kind of monitoring actually proves to be in practice.
There were obvious practical issues – for example, how do you track the utilisation of a single AAR, let alone a library of AARs? There were also conceptual issues, concerning the nature of knowledge as an inherently unmeasurable aspect of organisational life.
Managerial difficulties were also highlighted. Specifically, there is a need to align knowledge and learning with overall organisational objectives, yet knowledge and learning often call for business strategy to be scrutinised and questioned from multiple angles – a tension which is not easily resolved. Finally, these activities carry time and cost implications for the collection and storage of information related to knowledge and learning. It is frequently hard enough to get users to apply KM approaches at all; asking for careful and consistent monitoring on top is seen by some as asking too much. One way of resolving this was to focus on information with genuine management impact, but this too proved problematic. As one contributor put it: “knowledge is in the eye of the beholder, and different people have different needs in terms of information quality, quantity and timeliness.”
Finally, the third level relates to the potential application of knowledge and learning to monitoring and evaluating the overall impact of development. Here the key question was whether KM strategies should focus less on the performance of KM programs, or on the impact of KM on organisations, and more on the impact of knowledge on developmental results. This raised the issue of the cultural and intellectual pillars of evaluation, and how these have a fundamental impact on what knowledge is seen as “appropriate” or “useful” for development.
Specifically, it was suggested that the explicit or implicit premise of knowledge management is that judgments and decisions based on certain kinds of evidence, information, and analysis are superior to others. However, in the development environment, there is seldom a solid, irrefutable case for any particular decision – this is true of public policy generally, but is heightened in the international aid world.
Despite this difficulty: “[development practitioners] value plans and budgets and proposals that are grounded in facts, credible evidence, well argued analogies from similar situations, building from credible theories with convincing success stories, pilot projects that were successful and merit "scaling up" with or without modifications, etc. We frown upon proposals to the extent that they lack these attractive qualities even if they are attractive plausible visions of charismatic leaders who are able to inspire a group to action based on other factors such as religious faith, political expediency, cronyism, and desperation for lack of alternatives.”
As hinted at earlier, the issue is not specific to development but universal, and can be summarised as finding an effective bridge between the social sciences and management decision making. KM might be of most help in initiatives such as following up the Monterrey Statement, by collating common approaches and lessons learned on how donor organizations and partners track and monitor development results and impact. In this context, it was thought that KM should point to the importance of diversity and robustness in approaches to monitoring and evaluation. Specifically, there should not be an attempt to reduce all approaches to a standard mechanism determined by those with the largest funds to allocate for developmental purposes.
It was suggested, in light of the debate, that the potential of KM is not simply to support the impact of development, but also to question how development is undertaken. Without the latter, such an approach runs the risk of “trying to improve the quality of a car's gauges without repairing the engine.”
Examples in Application
[One or a few practical examples and references that illustrate the topic or show how it is done in practice]
Related FAQs
[Insert links to related FAQs]
Further Information
Original Author and Subsequent Contributors of this FAQ
Dates of First Creation and Further Revisions
FAQ KM4Dev Source Materials
[Raw text of email discussions on which the FAQ is based]