Lessons Learned - The Loch Ness monster of KM

From KM4Dev Wiki

Original Message

From: Johannes Schunter, posted on 2013/07/18

Hi all,

"Whenever I talk to managers and executives, often the most important thing they want to get out of Knowledge Management is "capturing lessons learned". This often goes along with a request to "create a database of lessons learned" where experiences can be systematically archived and searched to inform future initiatives. However, since I have become a member of KM4Dev 7 years ago, I have yet to come across an example of an organization where this was actually done successfully in a systematic way. Where lessons got collected, aggregated, shared and - most importantly - applied. It always felt to me like lessons learned were a phantom that everyone was chasing after, but nobody has ever seen.

The reason for this, I am convinced, is that knowledge sharing only works where experience is shared within a specific context, where the experience can be attributed to the person who has learned the lesson, and where the knowledge sharing happens just in time (when the input is needed), not just in case. This happens in Communities of Practice and social networks, but not in databases of documents.

My problem is that I often have a hard time coming up with evidence for this. Would you know of any good paper, research article or other evidence that looks at this in a comprehensive way and provides some insight into whether collecting lessons learned documents in a database is (or is not) smart KM?

Of course, if you happen to have evidence of where this is actually working, I'd be happy to hear about that too. No one would be happier than me to be shown that this can actually work.

Looking forward to your insights!"

Contributors

All replies in full are available in the discussion page. Contributions received with thanks from:

Johannes Schunter
Sophie Treinen
Eric Mullerbeck
Nadia von Holzen
Josef Hofer-Alfeis
Jaap Pels
Charles Dhewa
Davide Piga
Ian Thorpe
Eva Schiffer
Pete Cranston
Robin van Kippersluis
Neil Pakenham-Walsh
Paul Mundy
Stephen Bounds
Rinko Kinoshita
Matt Moore
Ewen Le Borgne
Philipp Grunewald
Nancy White

Related Discussions


Summary of Contributions

  • Sophie Treinen stated that FAO has set up several repositories to archive different types of documents. For her, learning lessons is part of the process of good KM, but we need to know what the process will lead to. If the process leads to the documentation, sharing and appropriation of good practices, the final products (good practice fact sheet, video, audio programme, theatre play, etc.) need to be archived somewhere. What is crucial is to use good metadata so that you can use and reuse the same document and have it appear on several websites.
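
As a minimal illustration of Sophie's point about metadata, here is a sketch of what one such record could look like, in Python. The field names loosely follow Dublin Core and are assumptions for illustration, not an FAO standard.

    # Hypothetical metadata record for one good-practice product.
    # Keys loosely follow Dublin Core; they are illustrative only.
    record = {
        "title": "Community seed banks in dryland farming",
        "type": "good practice fact sheet",  # or video, audio programme, ...
        "creator": "Country office field team",
        "date": "2013-05-01",
        "language": "en",
        "subject": ["agriculture", "seeds", "resilience"],  # controlled keywords
        "identifier": "https://example.org/repository/gp-0042",  # canonical URL
    }

    # Because the metadata travels with the record, several websites can
    # query the same repository and surface the same document.
    def matches(record, keyword):
        return keyword in record["subject"]
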
  • Nadia von Holzen referred to a previous KM4Dev discussion on Knowledge banks and emphasized that "we cannot treat knowledge or insights as a commodity we can put in a bank. Knowledge is mysterious. A special stuff, knowledge is fluid, gas-like or steam-like: you can't arrange it successfully in a bank". Trying to make knowledge storable would mean losing its characteristics, its quality.
  • Josef Hofer-Alfeis agreed that LL sharing is seldom done effectively. He referred to one good example from the Continental Corporation in Germany: a LL database accompanied by many complementary organizational, cultural and other KM measures. He linked to a graphic illustration of their success factors, showing that the database is only a minor part of their approach.
  • Charles Dhewa stated his conviction that KM is more about behaviour change and answering questions than about publishing ideas. Working at the interface of formal and informal agriculture in Zimbabwe, he sees, for example, traders becoming active digital sharecroppers. But since these interactions are contextual, he doubts whether trapping them in a paper, article or video would illuminate anything. In his view there are too many case studies that can't be replicated anywhere else.
  • Jaap Pels suggested thinking of development as "consciously developing, stimulating and maintaining capacities of people (perhaps next to interfering context, infrastructure and technology, which in itself would be useless unless done by capacitated people)". He therefore proposed understanding a LL database as information on people capacitated by the organisation. This would then actually be a network, which is best kept up to date by all the 'people in it' themselves. So instead of focusing on LL databases of documents, an organisation should cherish the networks around its staff.
  • Davide Piga shared his case study of a recent LL project for which he sought advice from the KM4Dev community in 2011 (see KM4Dev discussion: "How to collect and present Lessons Learnt"). In this project the MDG Achievement Fund activated about 130 UN Joint Programmes (JPs) on various thematic areas related to the MDGs, with the objective to "accelerate the progress on the MDGs"; one component of the initiative was collecting LLs. The 130 JPs were organized in thematic windows. UNEP was the convener for the "Environment and Climate Change Window", comprising 17 JPs. The KM project involved setting up a Community of Practice and, among other things, collecting LLs. Davide, as the facilitator of that community, prepared a handout on how to write good lessons learned, and a template, which helped him collect about 50 LL documents. Once lessons are collected, they need to be made available so that other people can use them, which led to a database of lessons learned from the 17 MDG-F JPs on Environment and Climate Change. The problem in his view isn't the database itself, but feeding the lessons to the people who could use them. This requires good metadata, but also promoting the LLs in a number of ways, in all the places where the target audience has a presence. They could then also be used for future planning and inform mandatory steps in new programmes and projects. A selection of the projects' LLs has been compiled in a booklet available in English, French and Spanish and distributed at a number of events.
  • Eva Schiffer considered that managers usually fall into the trap of thinking that a technical tool like a LL database might be the solution to fix the complex problem of organizational learning. A LL database is something tangible that we can budget for, whose use we can measure, and that gives us a product to show for our work. However, for Eva such a repository of stories is just a hammer, not the house we want to build with it. The actual learning and knowledge flow happens between people, in discussions, coffee breaks, random chats, with struggles, learning from failure, experimentation, etc. For Eva, KM in an organization is a trade-off between, on one side, giving management a product that is easy to understand and to show (e.g. a LL repository) and, on the other, taking the freedom to spend the rest of our time doing the far messier, less codified, non-technical work of facilitation, bringing people together and making knowledge sharing happen between them.
  • Pete Cranston suggested that perhaps we need a collection of 'personal stories on success and failure' rather than a 'lessons learnt database', along the lines of the case studies Davide and Ian described above. Pete felt inspired by the HBS Institutional Memory site 'Community Narratives', a collection of stories from people about HBS. Looking at Wikipedia's definition of a database as "an organised collection of data", he acknowledged that it is difficult to capture all the associated, unstructured data that make up part of a story, or to make it all available with smart search. What people want is "a dynamically searchable aggregation of personal narratives, quantitative data, multimedia content and social content", but this requires curation skills around the gathering and linking of relevant data on the one hand and, on the other, the knowledge and skills to construct relatively intelligent searches. At the same time, people can't simply be consumers of stored knowledge, but would have to learn how to actively mine resources.
  • Robin van Kippersluis also found that general experiences with LL databases are mixed, but it depends on what you want to do with the lessons and why you generate them. He pointed to a good example from SNV Netherlands, which in cooperation with UNDP produced a LL publication on Capacity Development. As elements of success he listed a) a very specific focus; b) inspired field people who had a chance to "showcase" their good work in a recognized publication; c) a good mix of internal and external lessons from different organizations; and d) a clear process, timeline and budget to bring it all together. Regarding LLs in general he added three points:
    • Lessons are learnt at different levels (individual, group/team, organisational, societal), and organisations need to think through how to stimulate learning and capture and share lessons at all these levels and between them. A database can be a part of that, but is not a goal in itself.
    • Lessons are generated for different reasons: often to 'prove' (to donors) or to 'improve' (within the organization). The 'prove' approach is often obligatory and can create a top-down dynamic which is not inspirational at all. The 'improve' approach is different and often relies on community exchange, group discussions or training. A database may not always be the best option here.
    • Pragmatic types of contextualized learning should be balanced with a new rigor on the side of evidence-based, longer-term and strategic research, for instance through randomized controlled trials in certain fields of development work. A recent example in UNICEF was the call for "Best of UNICEF Research" in Nov 2012, the outcome of which included a) a booklet with ten pre-selected examples from all over the world; b) a few highlighted examples that received praise from an external review panel; c) a site that will stimulate exchange around and learning from such examples; and d) a detailed analysis of all submissions, including trends and patterns observed, stimulating further discussions on quality assurance for research processes.
  • Paul Mundy sees two issues with many KM techniques: they are ephemeral (not recorded for posterity, like a song only performed once), and they have a limited audience (e.g. a small group at a single event). The idea of LL databases is to overcome this by identifying something worthwhile (a "lesson" or a "case"), recording it somehow (shooting a video, recording an interview, or writing it down), and classifying it and putting it somewhere so that people looking for it can find it. Each of these steps has weaknesses:
    • It's hard to identify worthwhile lessons. Many are either too general to be useful in specific situations, or too specific to be generalizable.
    • Recording it is difficult. Many people find it hard to tell a good story in a succinct, easily understandable way.
    • Classification is hard. Lessons out of context are fairly useless. You have to give enough context to know what the problem was all about, then enough information to show how it was solved.
    • Getting people to access your database is difficult.

Paul suggested the following solutions:

    • Choose the lessons carefully. Select an area and find the best (and worst) examples within it. Don't just ask everyone what they did; find examples of where something really worked well (or failed spectacularly), then dig for the details.
    • Get professional help with writing stories or making recordings: someone with journalistic skills who is familiar with the field, who can spot what's important, and who can put it in words that others understand.
    • Do some proper analysis. Impose a framework on the stories so you can analyse them according to a set of criteria. Development practice is full of such frameworks - gender analysis, marketing theory, value-chains approach, project life-cycle steps, etc. If one of these doesn't fit, invent your own. Generalize from specific stories, and show how general points are relevant to specific situations.
    • Publish the material as a book or collection of videos. Edit it properly and present it attractively. Make it easy for people to find, read or view. That solves the "ephemeral" problem.
    • After doing all this work, make as much use of the results as possible. Adapt it for different audiences (staff, policymakers, farmers, etc.) and publish it in multiple formats (books, brochures, information sheets, posters, webpages, radio, etc.). Make it widely available; that solves the "small audience" problem.
  • Stephen Bounds considered LLs to be a form of (internal) reporting. He referenced Jay Rosen's work, which shows that when people try to write LLs as dry, objective, a "view from nowhere", they become less trustworthy and less compelling. Stephen argued that having a strong, personal viewpoint is essential for making lessons learned useful, particularly when there are not just simple, factual reasons that explain success or failure (e.g. "we thought water boiled at 80°C, not 100°C"). He also advised taking into account the biases and prejudices (neutrally understood) of the person reporting, to give readers a sense of the thinking process involved. This provides more context and makes for a stronger narrative.
  • Rinko Kinoshita agreed with others that a LL database is not the end product, but just one of many KM tools. For her the challenge is not to let the content of the database become static, but instead to 'make the database dynamic to actually attract people to come and visit'. She suggested linking the database with an active CoP so that a new entry can be discussed directly by the community. Or (as was done in UNICEF) have people suggest one theme with a distinct knowledge gap and then run a 'campaign' to collect and improve lessons learned around this theme. Distributing the results through newsletters (with teasers linking to the database) increased access to the database. She also reminded us not to focus only on sharing and applying lessons, but also to consider the positive effect on the people who document the LLs or case studies, which enables them to analyse and reflect on their own experiences. These people also receive personal recognition by being cited in the database or publication, which further motivates them to document, share and promote other experiences.
  • Matt Moore pointed out that a lesson is only learned if it is applied in the future. A lot of effort goes into documenting lessons rather than applying them. However, he thought LL databases can be useful if lessons are the output of a reflective activity after an event (e.g. after action review). Equally important though is the step when we start an activity and ask "has anyone done this before?" This step does not happen often enough, but when it does, the LL database becomes a good resource to answer that question. Matt indicated that NASA has an excellent LL database and also recommended Nick Milton's book on LLs as a resource.
  • Ian Thorpe suggested that communities and LL databases should be seen as complementary approaches that can reinforce each other. However, as with communities, it's not always easy to do them successfully. He shared UNICEF's past experience, where for a time it managed a database of LLs that were either submitted by countries in their annual reports or actively sought out on specific themes. The database had several hundred LLs of varying quality, and the better ones were used in donor reporting, external publications and thematic analyses. These were "case study" LLs that gave specific country programme examples rather than generic procedural lessons. Ian reflected that:
    • A lessons learned database needs to be part of a broader KM strategy which includes other tools, such as knowledge sharing events and communities of practice. The database itself can never be sufficiently detailed, and every lesson needs to be adapted to context. Amended with people's contact information, the database should be seen as a conversation starter rather than an end in itself.
    • Getting people to actually use knowledge that is already available is a challenge. Effort needs to be put into making the database user friendly and relevant, promoting it, and linking into people's everyday work.
    • Management also tends to like LL databases because i) they can help provide examples for donor reporting and advocacy publications, ii) they can be a useful information resource for thematic analyses and planning, iii) the database is something tangible that can be shown off, and iv) case study LLs can be useful tools for internal and external advocacy around new programming approaches, as they illustrate them in practical terms.

Regarding the role of research versus lessons learned, Ian reflected that research tools such as RCTs give their greatest insight in the area of programme design, i.e. whether a particular approach works and how to optimize it. LLs based more on self-reflection are better at looking at some of the tacit factors that can affect the success of programmes, such as team dynamics, relationships with local partners, and how threats or changes in a situation were handled. Self-reflection is also quicker and less expensive than carrying out rigorous research, and can be applied when a full evaluation or research study isn't feasible. Ideally a good LL case study combines both.

  • Neil Pakenham-Walsh shared the experience of HIFA Voices. The HIFA2015 forum has 6,300 professionals exploring how to improve the availability and use of healthcare information in low- and middle-income countries. At HIFA they found that summaries of discussion threads were a lot of work, carried a risk of misinterpretation, and were forgotten a few months afterwards. Their new approach (see presentation about HIFA) involves selecting short passages from HIFA forum messages according to pre-defined criteria, then tagging them and adding them to an organically evolving relational database. So far they have identified about 500 HIFA Voices, which have already been used successfully to help inform a new WHO policy guideline on task shifting for maternal and newborn health. They are planning to expand the work to cover the whole archive of HIFA2015 and CHILD2015, which they estimate contains at least 5,000 HIFA Voices (with about 10 more messages per day, generating one or two HIFA Voices per day). Neil was also inspired by Eric's recommendation to add RSS feeds in the future so that users can request to receive new HIFA Voices that match their selected interests.
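
As a rough illustration of this kind of tagged, relational store of short passages, here is a minimal sketch in Python with SQLite. The table and column names are assumptions for illustration, not HIFA's actual schema.

    import sqlite3

    # Illustrative schema only: one table of selected passages ("voices"),
    # one table of tags, so a passage can carry several themes.
    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE voice (
            id      INTEGER PRIMARY KEY,
            author  TEXT,   -- attribution keeps the lesson tied to a person
            source  TEXT,   -- pointer back to the original forum message
            passage TEXT    -- the short excerpt selected against criteria
        );
        CREATE TABLE tag (
            voice_id INTEGER REFERENCES voice(id),
            label    TEXT   -- e.g. 'task shifting', 'maternal health'
        );
    """)
    con.execute("INSERT INTO voice VALUES (1, 'A. Member', 'msg-123', "
                "'Training community health workers cut referral delays.')")
    con.execute("INSERT INTO tag VALUES (1, 'task shifting')")

    # Pull every passage tagged with a theme, e.g. to inform a guideline:
    for passage, author, source in con.execute("""
            SELECT v.passage, v.author, v.source
            FROM voice v JOIN tag t ON t.voice_id = v.id
            WHERE t.label = 'task shifting'"""):
        print(author, source, passage)
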
  • Ewen Le Borgne highlighted the role that the Cynefin framework plays in this. Lessons about rather straightforward processes can easily be harvested in a LL database. However, for more complex tasks it becomes very difficult to 'capture' the richness of a lesson and communicate it to others. He pondered whether there's a future in LL databases combined with the semantic web and social networks, to help identify the complexity level of the task at hand and, depending on that, help us find either a) a direct answer in a LL database for a straightforward issue (e.g. how to paint a cupboard most effectively), b) a case study or story describing a process that's a bit more complicated (e.g. what are the most up-to-date materials and techniques for painting a cupboard nowadays), or c) direct contact with the people involved or CoPs related to the issue, to discuss lessons learnt and specific past experiences. Ewen also agreed with Ian that LLs might be used just as conversation starters, considering that we all have a human tendency not to naturally look to the past for lessons.
  • Johannes Schunter, following Ewen's input, reflected further on the role of the Cynefin framework as a key to understanding why experiences with LLs are so mixed. If LLs belong (together with "good practices") in the "Complicated" domain of the Cynefin framework, then it is only for simple and complicated challenges that systematic LLs are relevant and useful. The further we go into the domains of complex and even chaotic development challenges, the less helpful pre-canned solutions or lessons will be, and we have to allow emerging and completely novel solutions to develop. We therefore need to ask which of our organizations' challenges are inherently complex in nature (where we have to learn via ad-hoc exchange while we do something), and which are merely complicated (where we can learn systematically from what worked in the past). It is only for the latter that the systematic collection of LLs would be worth the effort. The difficulty, of course, is then finding out which domain your particular work or challenge belongs to.
  • Eric Mullerbeck has not heard of any working Lessons Learned databases outside UNICEF. However, he pointed out that if such databases are set up, we should ensure they are not 'pull only', depending on people going to the database to hunt down relevant lessons. Instead they should have 'push' features, such as RSS feeds linked to specific and well-defined topics, that automatically push LL documents to the people interested in those topics. Also, given that UNICEF's LL database was discontinued after the team managing it left, he suggested that the people promoting and supporting the use of a LL database in an active network might be as important as the database itself. Referring to a comment by Neil, Eric was also skeptical about summaries of discussion threads as a LL mechanism, as he suspects they are forgotten shortly after production, and he found them difficult to read unless he had been a participant in the particular discussion. Finally, Eric suggested that in the context of the Cynefin framework there might be considerable use for LLs within the "Complex" domain, and not only the "Complicated" domain. Scenario planning is a recognized approach for the Complex domain, and it can be helped by access to a broad variety of LLs from similar and dissimilar contexts, ideally all with good contextualizing information. Access to the authors of and participants in the lessons would be of great added value.
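
A minimal sketch of the 'push' idea Eric describes: new lessons are matched against subscribers' declared topics and routed to them, for example as items in a per-topic RSS feed or an email digest. The names and data structures are assumptions for illustration.

    # Hypothetical topic subscriptions: who wants to hear about what.
    subscriptions = {
        "field.officer@example.org": {"nutrition", "community health"},
        "km.team@example.org": {"education", "emergencies"},
    }

    def route_new_lesson(lesson_topics, subscriptions):
        """Return subscribers whose interests overlap the new lesson's topics."""
        topics = set(lesson_topics)
        return [who for who, interests in subscriptions.items()
                if interests & topics]

    # A lesson tagged 'emergencies' is pushed to the KM team, not the field
    # officer; in practice the match would feed an RSS feed or email digest.
    print(route_new_lesson({"emergencies", "logistics"}, subscriptions))
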
  • Philipp Grunewald also found the Cynefin framework to be a helpful starting point; however, he felt that categorizing problems like this could be counter-productive. For him it is not about categorizing problems, but about finding a custom-made, best-fit solution for the particular problem at hand. It is through understanding the problem and its context that solutions organically suggest themselves.


Recommended Resources