Measuring knowledge sharing

From KM4Dev Wiki

Title

Measuring knowledge sharing (is it even possible?)

Introduction

Issues related to the monitoring, evaluation and measurement of knowledge, knowledge sharing and KM in general regularly spark heated debates about the why, what and how. When Stefano Barale proposed his approach on the KM4Dev list on a quiet day in early August 2010, he did not know he was about to provoke the most vibrant discussion on the list in 2010 (77 individual messages under the thread): a highly valuable exchange on methods and approaches, but also on epistemological considerations and the status of science. The thread is worth a read, not least for the wealth of resources it features.

Keywords

Knowledge sharing, M&E, monitoring, metrics, science, scientific, indicators, narrative.

Detailed Description

[the meat of the topic – clearly, crisply communicated summary of the topic. Where relevant, a brief story – no more than 1-2 paragraphs - of how this topic has been turned into practice, ideally from the KM4Dev archives? If the example is long, separate into a separate subsection]

KM4Dev Discussions

Examples in Application

[One or a few practical examples and references that illustrate the topic or show how it is done in practice]

Related FAQs

[Insert links to related FAQs]

Further Information

Original Author and Subsequent Contributors of this FAQ

Author of the original message: Stefano Barale.

Subsequent contributors: Atanu Garai; Abdou Fall; Alejandro Balanzo; Amina Singh; Arthur Shelley; Ben Ramalingam; Brad Hinton; Charles Dhewa; Cher Devey; Chris Burman; Christina Merl; Damas Ogwe; Denise Senmartin; Eric Mullerbeck; Hannah Beardon; Jacqueline Nnam; Jennifer Nelson; John Smith; Jim Tarrant; khushamadu@yahoo.co.uk; Maarten Samson; Margarita Salas; Matt Moore; Md Santo; Nhamo Samasuwo; Paul Mundy; Peter Chomley; Sebastiao Ferreira; Stacey Young; Tony Pryor; Ueli Scheuermeier; Valerie Brown; W COWIE.

FAQ author: Ewen Le Borgne

Dates of First Creation and Further Revisions

FAQ KM4Dev Source Materials

The idea here is quite simple and relates to the assessment phase of introducing KM in any organization, as far as I understand it. So far I have seen various tools used by most KM practitioners to answer the very basic question of a KM assessment exercise: "where does this organization stand in terms of KMS?" The tools I've seen are:

  • questionnaires (manual, to be distributed to a wide statistical sample)
  • questionnaires (web-based, such as the IBM-Inquira tool, with automatic statistics displayed at the end, to be filled in by as many employees as possible)
  • knowledge expeditions
  • interviews

These tools are surely good, but I was looking for something as "scientific" as possible; something that could help us define KS the way speed, mass and position (over time) define the motion of a body in classical mechanics. Some indexes capable of answering the question: "what makes a knowledge organization different from the others?" These indicators (indexes) should tell me whether the organization is actually sharing knowledge... or not. Here is my tentative list:

  • number of co-authored documents (indicating good collaboration) compared with total documents produced, in particular if the authors come from different departments of the organization (indicating good cross-departmental collaboration);
  • frequency of updates to documents present in the knowledge base of the organization (indicating good learning after, i.e. knowledge capture);
  • frequency of accesses to the organization knowledge base (indicating good learning before);
  • number of references (links) per document to other documents saved in the organization's knowledge base (again indicating a good level of collaboration in terms of learning from experience).
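
A brief illustrative sketch, not part of the original message: one way such indices might be computed from document metadata, assuming a hypothetical Document record and an observation period. The field names, the per-period counters and the ratio definitions are assumptions made for illustration, not anything the thread prescribes.

  # Sketch: computing the four proposed indices from hypothetical document metadata.
  from dataclasses import dataclass, field
  from typing import Dict, List

  @dataclass
  class Document:
      # Hypothetical metadata record for one item in the knowledge base (assumed fields).
      doc_id: str
      author_departments: List[str]                             # one entry per co-author
      update_count: int = 0                                     # revisions during the period
      access_count: int = 0                                     # times opened during the period
      internal_links: List[str] = field(default_factory=list)  # links to other docs in the base

  def knowledge_sharing_indices(docs: List[Document], period_days: int) -> Dict[str, float]:
      # Rough first-order indices over one observation period.
      total = len(docs)
      if total == 0 or period_days <= 0:
          return {}
      co_authored = sum(1 for d in docs if len(d.author_departments) > 1)
      cross_dept = sum(1 for d in docs if len(set(d.author_departments)) > 1)
      return {
          "co_authored_ratio": co_authored / total,
          "cross_department_ratio": cross_dept / total,
          "updates_per_doc_per_day": sum(d.update_count for d in docs) / total / period_days,
          "accesses_per_doc_per_day": sum(d.access_count for d in docs) / total / period_days,
          "internal_links_per_doc": sum(len(d.internal_links) for d in docs) / total,
      }

Computed over successive periods (for example, per quarter), figures of this kind would show whether the proposed indices are trending up or down, which is the evolution over time that the message goes on to describe.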

I know this may sound a bit simplistic as an approach (after all, classical mechanics is quite simplistic compared to quantum theory), but I think the good thing about it is that studying the evolution of these indexes may give a very good picture of how the organization's KS activity is evolving over time, even if only as a rough first-order approximation. Do you feel there is anything missing, or that any of the assumptions could be improved? Over to you. (Stefano Barale) ---

Wow. You’ve hit the nail on the head. The problem for me, however, is that your list describes either the vehicles within which knowledge is packaged, or the frequency with which those objects are picked up, opened and later referenced. But none of them tell you about the two most important questions:

  1. Did the person who has that new knowledge change any decision, change any action, or rethink any conceptual framework or approach BASED on that new knowledge? And if so,
  2. Did it make a difference in terms of an end DEVELOPMENT result?

The problem I find with trying to place a value on KM is that, in order to be rigorous, we sometimes revert to thinking of knowledge in IT terms. So it becomes “let us assume that knowledge can be viewed as data, then…” The problem is that, for me, knowledge sharing is as much about behavior change and conceptual change as it is about anything. The temptation with equating KM with packaged data is that “rigor” usually leads to quantitative indicators. And that would be fine, if the indicators were about the final subject matter, NOT about knowledge objects per se.

Another way to think of it, let’s say a health officer walks into your KM team room attached to an AIDS program and says “prove to me that spending money for your KM effort actually improves my ability to meet my AIDS-defined objectives”, what do you say? To my mind, THAT’s what needs the rigor, not trying to understand the velocity and “stickiness” of knowledge.

What I do like about your questions is that they indirectly get at questions of trust and understanding of shared knowledge. If, after five years, the material we have passed around shows up throughout the literature, it shows that we have indeed had one type of impact. Whether the world is a better place because of it may simply be too hard to state (although one CAN state that the presence of our ideas elsewhere is not necessarily good). But maybe it’s the best one can do. Like trying to understand the culture and society of a long-lost community when the only things that survive, and can be counted, are gold trinkets and silver sacrificial knives. They tell a story, but... (Tony Pryor) ---

Dear Tony, thank you very much for your message. Let me start by admitting that I posed a "naive" question on purpose. In fact, I personally think that the answer to the question at the top of my message is "Obviously NO!" :-) Still, as you point out, there are some indicators, some indexes, that may help us understand what is happening under the surface of the deep waters of organizational knowledge. Maybe we will have to accept that we will never know the position of our "knowledge item" while it travels the space of common knowledge, but this does not mean that knowing its "probability function" may not be interesting for our purposes. And, indeed, I think it is. Let me then restate the problem in a less naive way. The aim of establishing these indexes is not really to measure the level of knowledge sharing but, less ambitiously, to see whether the organization has the right environment for knowledge sharing to happen. Stated like that, I hope you will agree with me that an organization that scores zero on all of the following tentative indexes:

  • number of co-authored documents compared with total documents produced, in particular if the authors come from different departments of the organization;
  • frequency of updates to documents present in the knowledge base of the organization;
  • frequency of accesses to the organization knowledge base;
  • number of references (links) per document to other documents saved in the organization's knowledge base

will probably have a very KS-unfriendly environment. And this makes us reasonably sure that no one in that organization will ever be able to change any decision based on previous knowledge.

There may be (at least) one exception. The organization in question may rely much more on oral than on written communication. In that case the lessons learned may not be captured in a knowledge base... well, at least not a traditional one (1).

I hope this may be a good starting point to reply to your first observation: "The problem is that for me knowledge sharing is as much about behavior change, and conceptual change, as it is about anything. The temptation with equating KM with packaged data is that “rigor” usually leads to quantitative indicators. And that would be fine, if the indicators were about the final subject matter, NOT about knowledge objects per se." The indexes in question, followed over time, may give you a good idea of whether a behavioral change is happening... or not. They are quantitative, but may also be used to draw some rough (first-order approximation) conclusions about quality.

And this leads me to the silent assumption behind my previous message. Everything I wrote was based on the idea that the organization in question already has, or wants to build, some form of knowledge base. I think accepting this assumption does not necessarily mean thinking of knowledge in IT terms, even if the proposed indexes would require an enormous amount of time to manage without computers.

But let me conclude this message by attempting to address the issue that I liked most in your message: "Another way to think of it, let’s say a health officer walks into your KM team room attached to an AIDS program and says “prove to me that spending money for your KM effort actually improves my ability to meet my AIDS-defined objectives”, what do you say? To my mind, THAT’s what needs the rigor, not trying to understand the velocity and “stickiness” of knowledge."

Well, this raises a completely new issue, as I see it. That is: Does working in a KS-friendly environment really affect the outcome of a program, project or organization? How?

In this case I would present the officer in question with a solid case for KM, based on previous experiences, and then show him that the indicators in question may help us make sure that the organization is really making progress in sharing knowledge.

Once a KS-friendly environment has been established, we would need some strategy (method) to help the people inside the organization really make good use of the knowledge they can now access and share.

What I have learned so far in this sense is: first, people in knowledge organizations need information that is useful to them, just in time. This means customized information they can use almost immediately. Second, they need training that can be delivered where they are. Third, they can learn much from people who are, or have been, in the same sort of situation they are in now. This means, in my opinion, collaborative learning.

If one could provide them with the skills and tools (including on-line ones) to learn collaboratively, these people would be learning as they work, being brought into a process that immediately serves their needs and excites them to pass on what they are learning to others.

Then the only missing step would be to demonstrate that KS+OCL really helped the organization in question to meet its goals. And this is probably the most complicated part of the game...

I hope this makes more sense.

Thanks for sharing your thoughts!

(1) In fact, this was the case with some African organizations that my department worked with. For this reason, back in 2008 we conceived an asynchronous conferencing system for collective knowledge building using audio messages. One could think of building a knowledge base in a similar way: making more use of audio and video resources instead of written materials (even if I'm not convinced that multimedia is always the best solution for knowledge sharing; there are some cases in which good old paper has great advantages over modern tools). (Stefano Barale) ---

Good morning.

The two approaches put forward by Stefano and Tony relate to reporting on outputs and outcomes. Outputs are generally "preferred" because they are easier to determine. Outcomes are usually what we want and strive for, but they are much more difficult to "measure" because outcomes are a combination of complex arrangements within a complex space. Output reporting generally shows a clearer picture of causality than outcome reporting, and this too makes output reporting more popular, but not necessarily the "best".

If I were to generalise by using the Cynefin framework, outputs have most meaning in the simple space, where causality is easier to determine - doing A leads to B. Outputs may be meaningful in the complicated space because here we are adding expertise to a problem and we may be able to show causality - A+B lead to C. But outputs are far more difficult to determine in the complex space because there is usually no way of determining causality, let alone determining and measuring all the complex factors at work. In the complex space we need to continually test and respond to feedback to gauge how effective an action or intervention is. The output-outcome debate is relevant both to knowledge management and to international development. regards, (Brad Hinton) ---

PRACTICAL GUIDE TO MEASURING KNOWLEDGE SHARING IN AN ORGANIZATION. From MOBEE KNOWLEDGE (http://mobeeknowledge.ning.com) KM Framework, Metrics and Maturity Model: http://www.scribd.com/doc/35628527/MOBEE-KNOWLEDGE-http-mobeeknowledge-ning-com-KM-Framework-Metrics-and-Maturity-Model Take a look at:

1. HUMAN SYSTEM BIOLOGY-BASED KM™ MODEL

Establish a taxonomy of cross-functional business processes for your organization, derived from the Process Classification Framework (PCF) template of the American Productivity and Quality Center (APQC).

The Categories, Process Groups and Processes items are classified as Competencies and can be treated as outputs. The Activities items are classified as Capabilities and can be treated as outcomes. Both Competencies and Capabilities are the sources from which we develop Key Performance Indicators (KPIs).

Then, using your own judgment, identify the Processes and Activities items that are most significantly related to the variable of Knowledge Sharing.

The Knowledge Sharing related items, whether generated from Competencies or Activities, are then classified further into KM Tools, KM Process Framework and KM Standards (Culture and Value) for Competencies items, and into Maturity columns for Activities items.

2. MOBEE KNOWLEDGE COMPETENCY AND CAPABILITY MATURITY MODEL (MKCCM™)

Measure each KM component in terms of KM Tools (weighted score 1), KM Process Framework (weighted score 3) and KM Standards, Culture and Value (weighted score 5) for the achievement of processes in Knowledge Sharing competency.

Also measure the achievement of activities in Knowledge Sharing capability, using a five-level scoring system (Initial – Aware – Established – Quantitatively Managed – Optimizing). (Md Santo) ---
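
A hedged sketch, not taken from the MOBEE material itself, of how the weighted scoring described above might be tallied: the 0-1 achievement scores, the simple weighted sum, and the mapping onto the five maturity levels are illustrative assumptions only.

  # Weights as stated above: KM Tools = 1, KM Process Framework = 3,
  # KM Standards (Culture and Value) = 5.
  COMPETENCY_WEIGHTS = {"km_tools": 1, "km_process_framework": 3, "km_standards": 5}

  # Five-level maturity scale for activities (capabilities).
  MATURITY_LEVELS = ["Initial", "Aware", "Established", "Quantitatively Managed", "Optimizing"]

  def competency_score(achievement):
      # 'achievement' maps each component to an assessed value between 0 and 1 (assumed scale);
      # returns the weighted average across the three components.
      total_weight = sum(COMPETENCY_WEIGHTS.values())
      weighted = sum(COMPETENCY_WEIGHTS[k] * achievement.get(k, 0.0) for k in COMPETENCY_WEIGHTS)
      return weighted / total_weight

  def capability_level(score):
      # Map a 0-1 capability achievement score onto the five maturity levels.
      index = min(int(score * len(MATURITY_LEVELS)), len(MATURITY_LEVELS) - 1)
      return MATURITY_LEVELS[index]

  # Example: a unit strong on tools but weaker on culture and values.
  print(competency_score({"km_tools": 0.9, "km_process_framework": 0.6, "km_standards": 0.3}))
  print(capability_level(0.55))  # -> 'Established'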

Brad -

Is this not where Outcome Mapping comes in? I'm aware of OM but have never applied it, and my understanding is that it looks at behaviour change as a way of interrogating 'outcomes'. Any thoughts / clarifications, anyone? With kind regards (Chris Burman) ---

Stefano, to attempt to add knowledge to your question, I’d like to understand the context more.

What do you understand by "knowledge sharing" (compared to knowledge transfer and learning)? Your (tentative) indices below imply to me that you are focusing on assessing the individual as the unit of measure rather than the group or organisation. Anthony DiBella, in his book "Learning Practices – Assessment and Action for Organizational Improvement", talks about understanding "learning use" and "learning impact" as measures to compare different initiatives in this area (and cautions against comparing initiatives in different areas). My other question is what your sharing is focusing on: are you trying to train people (i.e. focusing on compliance or task competency – the know-how – the ordered domain), or do you want to build knowledge (know-why – the unordered domain) so people can use the knowledge to probe or explore a situation before responding with an approach (which may have to be further modified as it is applied)?

Brad mentioned Dave Snowden’s Cynefin framework, which is a great framework for analysing a situation. I agree with Tony that “knowledge sharing is as much about behavior change, and conceptual change” – technology is a good “necessary business tool” to build upon, but that will depend on what infrastructure your audience can support (or needs). Understanding the culture (national AND organisational) of your target is key. Regards (Peter Chomley) ---

Hi Chris,

Just to follow up on the Outcome Mapping (OM) and KM issue, I gave a presentation on OM and KM at the joint American Evaluation Society / Canadian Evaluation Society meeting in 2005 - the file can be downloaded from the IDRC website: http://www.idrc.ca/uploads/user-S/11335475941OMandKM_Toronto.pps

Discussions will certainly have moved on since then - it may be worth linking up to the Outcome Mapping Learning Community www.outcomemapping.ca to find out more...

All best, (Ben Ramalingam) ---

Dear Brad, thanks a lot for your message. I wasn't thinking of it in project-management terms, but this is definitely a good idea! I agree that outputs are easier to measure than outcomes, but - as I tried to clarify in my second message - I think that by using the former smartly one could "indirectly" measure the latter, or at least make this task somewhat more rigorous (no outcome of KM can be measured in an environment where there is zero KS). I just have one doubt: what do you mean by the Cynefin framework? (Stefano Barale) ---