Talk:Measuring Knowledge Sharing

From KM4Dev Wiki



See the original thread of this E-Discussion on D-Groups

Stefano Barale, 2010/8/10

The idea here is quite simple and related to the assessment phase of KM introduction inside any organization, as far as I understand it. So far I've seen various tools used by most KM practitioners to answer the very basic question of any KM assessment exercise: "where does this organization stand in terms of KMS?" The tools I've seen are:

  • questionnaires (manual, to be distributed to a wide statistical sample)
  • questionnaires (web-based such as the IBM-Inquira tool, with automatic stats displayed at the end, to be filled by the largest number of employees)
  • knowledge expeditions
  • interviews

These tools are surely good, but I was looking for something as "scientific" as possible; something that could help us define KS the way speed, mass and position (over time) define the motion of a body in classical mechanics. Some indexes capable of answering the question: "what makes a knowledge organization different from the others?" These indicators (indexes) should tell me whether the organization is actually sharing knowledge... or not. Here you find my tentative list:

  • number of co-authored documents (indicating good collaboration) compared with total documents produced, in particular if the authors come from different departments of the organization (indicating good cross-departmental collaboration);
  • frequency of updates to documents present in the knowledge base of the organization (indicating good learning after, i.e. knowledge capture);
  • frequency of accesses to the organization knowledge base (indicating good learning before);
  • number of references (links) to other documents that are saved in the organization knowledge-base, per document (indicating again good level of collaboration in terms of learning from experience).

I know it may sound a bit simplistic as an approach (after all, classical mechanics is quite simplistic compared to quantum theory), but I think the good thing about it is that studying the evolution of these indexes over time may lead to a very good picture of how the organization is evolving its KS activity, even if only as a rough first-order approximation. Do you feel there's anything missing, or that any of the assumptions may be improved? Over to you. Stefano
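As a rough sketch of how these indexes might be computed in practice - the repository metadata and figures below are hypothetical, not drawn from any particular system:

 # Hypothetical repository metadata: each document records its authors
 # (with their departments), update count, access count and links to
 # other documents in the knowledge base.
 documents = [
     {"authors": {"anna": "health", "ben": "ICT"},
      "updates": 4, "accesses": 120, "internal_links": 3},
     {"authors": {"carla": "health"},
      "updates": 1, "accesses": 10, "internal_links": 0},
 ]

 total = len(documents)

 # Index 1: share of co-authored and of cross-departmental documents.
 coauthored = sum(len(d["authors"]) > 1 for d in documents) / total
 cross_dept = sum(len(set(d["authors"].values())) > 1 for d in documents) / total

 # Indexes 2 and 3: average updates ("learning after", i.e. knowledge
 # capture) and average accesses ("learning before") per document.
 avg_updates = sum(d["updates"] for d in documents) / total
 avg_accesses = sum(d["accesses"] for d in documents) / total

 # Index 4: average number of links to other knowledge-base documents.
 avg_links = sum(d["internal_links"] for d in documents) / total

 print(coauthored, cross_dept, avg_updates, avg_accesses, avg_links)

Computed per quarter, the same figures would give the evolution over time that the approach relies on.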

Tony Pryor, 2010/8/10

Wow. You’ve hit the nail on the head. The problem for me however is that your list defines either the vehicles within which knowledge is packaged, or the frequency with which the objects are picked up and opened, and then later referenced. But none of them tell you about the two most important questions:

  1. Did the person who has that new knowledge change any decision, change any action, or rethink any conceptual framework or approach BASED on that new knowledge? And if so,
  2. Did it make a difference in terms of an end DEVELOPMENT result?

I find the problem that arises with trying to place value on KM is that to be rigorous sometimes we revert to thinking of knowledge in IT terms. So it becomes “let us assume that knowledge can be viewed as data, then…” The problem is that for me knowledge sharing is as much about behavior change, and conceptual change, as it is about anything. The temptation with equating KM with packaged data is that “rigor” usually leads to quantitative indicators. And that would be fine, if the indicators were about the final subject matter, NOT about knowledge objects per se.

Another way to think of it, let’s say a health officer walks into your KM team room attached to an AIDS program and says “prove to me that spending money for your KM effort actually improves my ability to meet my AIDS-defined objectives”, what do you say? To my mind, THAT’s what needs the rigor, not trying to understand the velocity and “stickiness” of knowledge.

What I do like about your questions is that they indirectly get at questions of trust and understanding of shared knowledge. If, after five years, the material we have passed around shows up throughout the literature, it shows that we indeed have had one type of impact. Whether the world is a better place because of it may simply be too hard to state (although one CAN state that the presence of our ideas elsewhere is not necessarily good). But maybe it's the best one can do. Like trying to understand the culture and society of a long lost community when the only things that survive, and can be counted, are gold trinkets and silver sacrificial knives. They tell a story, but...

Stefano Barale, 2010/8/10

Dear Tony, thank you very much for your message. Let me start by admitting that I posed a "naive" question on purpose. In fact, I personally think that the reply to the question on top of my message is "Obviously NO!". :-)

Still, as you point out, there are some indicators, some indexes, that may help us understand what's happening under the surface of the deep waters of organizational knowledge.

Maybe we'll have to accept that we'll never know the position of our "knowledge item" while it travels the space of common knowledge, but this doesn't mean that knowing its "probability function" may not be interesting for our purposes. And, indeed, I think it is. Let me then restate the problem in a less naive way. The aim of establishing these indexes is not really to measure the level of knowledge sharing but, less ambitiously, to see whether the organization has the right environment for knowledge sharing to happen. Stated like that, I hope you will agree with me that an organization that scores zero on all of the following tentative indexes:

  • number of co-authored documents compared with total documents produced, in particular if the authors come from different departments of the organization;
  • frequency of updates to documents present in the knowledge base of the organization;
  • frequency of accesses to the organization knowledge base;
  • number of references (links) to other documents that are saved in the organization knowledge-base, per document

will probably have a very KS-unfriendly environment. And this makes us reasonably sure that no one in that organization will ever be able to change any decision based on previous knowledge.

There may be (at least) one exception. The organization in question may be very much based on oral communication more than written communication. In that case the lessons learned may not be captured in a knowledge base... well, at least not a traditional one (1)

I hope this may be a good starting point to reply to your first observation:

The problem is that for me knowledge sharing is as much about behavior change, and conceptual change, as it is about anything. The temptation with equating KM with packaged data is that “rigor” usually leads to quantitative indicators. And that would be fine, if the indicators were about the final subject matter, NOT about knowledge objects per se

The indexes in question, over time, may give you a good idea about a behavioral change happening... or not. They are quantitative, but may be used to draw some rough (first-order approximation) conclusions on quality, as well.

And this leads me to the silent assumption behind my previous message. Everything I wrote was based on the idea that the organization under consideration already has, or wants to build, some form of knowledge base. I think accepting this assumption does not necessarily mean thinking of knowledge in IT terms, even if the indexes proposed would require an enormous amount of time to manage without computers.

But let me conclude this message by attempting to address the issue that I liked most in your message:

Another way to think of it, let’s say a health officer walks into your KM team room attached to an AIDS program and says “prove to me that spending money for your KM effort actually improves my ability to meet my AIDS-defined objectives”, what do you say? To my mind, THAT’s what needs the rigor, not trying to understand the velocity and “stickiness” of knowledge.

Well, this raises a completely new issue, as I see it. That is:

Does working in a KS-friendly environment really affect the outcome of a program, project or organization? How?

In this case I would present the officer in question with a solid case for KM, based on previous experiences, and then show him that the indicators in question may help us assure ourselves that the organization is really progressing in sharing knowledge.

Once a KS-friendly environment is established, we will need some strategy (method) to help the people inside the organization really make good use of the knowledge they can now access and share.

What I have learned so far in this sense is: first, people in knowledge organizations need information useful to them, just in time. This means customized information they can use almost immediately. Secondly, they need training that can be delivered where they are. Third, they can learn much from people who are, or have been, in the same sort of situation that they are in now. This means, in my opinion, collaborative learning.

If one could provide them with the skills and tools (including on-line ones) to learn collaboratively, these people would be learning as they work, being brought into a process that immediately serves their needs and excites them to pass on what they are learning to others.

Then, the only missing step would be to demonstrate that KS+OCL really helped the organization in question to meet its goals. And this is probably the most complicated part of the game...

I hope this makes more sense.

Thanks for sharing your thoughts! Stefano

(1) In fact, this was the case with some African organizations that my department worked with. For this reason, back in 2008 we conceived an asynchronous conferencing system for collective knowledge building using audio messages. One could think of building a knowledge base in a similar way: making more use of audio and video resources instead of written materials (even if I'm not convinced that multimedia is always the best solution for knowledge sharing; there are some cases in which good old paper has great advantages over modern tools).

Brad Hinton, 2010/8/10

Good morning.

The two approaches put forward by Stefano and Tony relate to reporting on outputs and outcomes. Outputs are generally "preferred" because they are easier to determine. Outcomes are usually what we want and strive for, but they are much more difficult to "measure" because outcomes are a combination of complex arrangements within a complex space. Output reporting generally shows a clearer picture of causality than outcome reporting, and this too makes output reporting more popular, but not necessarily the "best".

If I were to generalise by using the Cynefin framework, outputs have most meaning in the simple space, where causality is easier to determine - doing A leads to B. Outputs may be meaningful in the complicated space because here we are adding expertise to a problem and we may be able to show causality - A+B lead to C. But outputs are far more difficult to determine in the complex space because there is usually no way of determining causality, let alone determining and measuring all the complex factors at work. In the complex space we need to continually test and respond to feedback to gauge how effective an action or intervention is.

The output-outcome debate is relevant to both knowledge management and to international development.

regards, Brad Hinton AusAID Canberra, Australia

Md Santo, 2010/8/11

PRACTICAL GUIDE TO MEASURING KNOWLEDGE SHARING IN ORGANIZATIONS. From the MOBEE KNOWLEDGE ( http://mobeeknowledge.ning.com ) KM Framework, Metrics and Maturity Model - http://www.scribd.com/doc/35628527/MOBEE-KNOWLEDGE-http-mobeeknowledge-ning-com-KM-Framework-Metrics-and-Maturity-Model - take a look at:

1. HUMAN SYSTEM BIOLOGY-BASED KM™ MODEL

Establish a taxonomy of cross-functional business processes for your organization, derived from the Process Classification Framework (PCF) template of the American Productivity and Quality Center (APQC).

The Categories, Process Groups and Processes items are classified as Competencies and can be treated as outputs. The Activities items are classified as Capabilities and can be treated as outcomes. Both sets of items, Competencies as well as Capabilities, are the sources from which we develop Key Performance Indicators (KPIs).

Further, identify by your own judgment the Processes and Activities items that are most significantly related to the variable of knowledge sharing.

The knowledge-sharing-related items, whether generated from Competencies or Activities, are then classified further into KM Tools, KM Process Framework and KM Standards (Culture and Value) for Competencies items, and into Maturity columns for Activities items.

2. MOBEE KNOWLEDGE COMPETENCY AND CAPABILITY MATURITY MODEL (MKCCM™)

Measure each KM component in terms of KM Tools (weighted score 1), KM Process Framework (weighted score 3) and KM Standards, Culture and Value (weighted score 5), as well as the achievement of processes in knowledge-sharing competency.

Also measure the achievement of activities in knowledge-sharing capability, within a five-level scoring system (Initial – Aware – Established – Quantitatively Managed – Optimizing).
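A minimal sketch, in Python, of how such a weighted score might be rolled up; the weights and level names come from the model as described above, while the component scores and the numeric cut-offs are purely illustrative assumptions:

 # Weights per the description above: KM Tools (1), KM Process
 # Framework (3), KM Standards / Culture and Value (5).
 WEIGHTS = {"km_tools": 1, "km_process_framework": 3, "km_standards": 5}

 # Hypothetical assessed scores for one organization, on a 0-1 scale.
 scores = {"km_tools": 0.8, "km_process_framework": 0.5, "km_standards": 0.3}

 weighted = sum(WEIGHTS[k] * scores[k] for k in WEIGHTS) / sum(WEIGHTS.values())

 # Five-level maturity scale from the model; the equal-width numeric
 # bands are an illustrative assumption, not part of the published model.
 LEVELS = ["Initial", "Aware", "Established",
           "Quantitatively Managed", "Optimizing"]
 level = LEVELS[min(int(weighted * len(LEVELS)), len(LEVELS) - 1)]

 print(round(weighted, 2), level)   # 0.42 Established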

Md Santo

Chris Burman, 2010/8/11

Brad -

Is this not where Outcome Mapping comes in? I'm aware of OM but have never applied it, and my understanding is that it looks at behaviour change as a way of interrogating 'outcomes'. Any thoughts / clarifications, anyone?

With kind regards

Chris

Dr C.J. Burman The Development Facilitation and Training Institute University of Limpopo

Peter Chomley, 2010/8/11

Stefano, to attempt to add knowledge to your question, I’d like to understand the context more.

What do you understand by "knowledge sharing" (compared to knowledge transfer and learning)?

Your (tentative) indices below imply to me that you are focussing on assessing the individual as the unit of measure rather than the group or organisation.

Anthony DiBella in his book "Learning Practices – Assessment and Action for Organizational Improvement" talks about understanding "learning use" and "learning impact" as measures to compare different initiatives in this area (and cautions about comparing initiatives in different areas).

My other question is what your sharing is focussing on: are you trying to train people (ie focussing on compliance or task competency - the know-how - the ordered domain), or do you want to build knowledge (know-why - the unordered domain) so people can use the knowledge to probe or explore a situation before responding with an approach (which may have to be further modified as it is applied)?

Brad mentioned Dave Snowden's Cynefin framework, which is excellent for analysing a situation.

I agree with Tony in that "knowledge sharing is as much about behavior change, and conceptual change" - technology is a good "necessary business tool" to build upon, but that will depend on what infrastructure your audience can support (or needs). Understanding the culture (national AND organisational) of your target is key.

Regards

Peter

Ben Ramalingam, 2010/8/10

Hi Chris,

Just to follow up on the Outcome Mapping (OM) and KM issue, I gave a presentation on OM and KM at the joint American Evaluation Society / Canadian Evaluation Society meeting in 2005 - [http://www.idrc.ca/uploads/user-S/11335475941OMandKM_Toronto.pps the file can be downloaded from the IDRC website].

Discussions will certainly have moved on since then - it may be worth linking up to the Outcome Mapping Learning Community www.outcomemapping.ca to find out more...

All best,

B.

Stefano Barale, 2010/8/13

Dear Brad, thanks a lot for your message. I wasn't thinking of it in project management terms, but this is definitely a good idea! I agree that outputs are easier to measure than outcomes, but - as I tried to clarify in my second message - I think that by using the former smartly one could "indirectly" measure the latter, or at least make this task somewhat more rigorous (no outcome of KM can be measured in an environment where there is zero KS). I just have one doubt: what do you mean by the Cynefin framework? Stefano

Stefano Barale, 2010/8/14

Dear Ben, thanks for sharing your presentation on Outcome Mapping. Looks very interesting to me. I would like to know more. Do you have any books or longer documents on the topic to share with us? Stef

Matt Moore, 2010/8/14

Stefano,

I'd like to challenge your use of the word "rigorous". It seems to mean "numerically precise". In most organizations, measuring usage of a document library will only tell you how many people are using that document library - not how effectively staff in that organization are sharing their knowledge to do their jobs better. This is because most knowledge sharing does not occur through the uploading and downloading of documents. Forcing people to do so would probably harm organizational effectiveness whilst apparently "improving" some of the metrics that you mention.

It is possible to be rigorously wrong.

Cheers,

Matt Moore

Brad Hinton, 2010/8/15

Stefano,

The Cynefin framework was established by Dave Snowden. More info from here: http://en.wikipedia.org/wiki/Cynefin and here: http://www.cognitive-edge.com/articledetails.php?articleid=14

Also, I believe Dave may have a video about the Cynefin Framework on YouTube.

regards,

Brad

Ben Ramalingam, 2010/8/15

Dear Stef,

The best way to learn more about Outcome Mapping would be to check out the Outcome Mapping Learning Community (www.outcomemapping.ca) and the related resources section, which includes the OM Guide, key templates, discussion summaries and project information, as well as information about the global members.

All best,

Ben

Amina Singh, 2010/8/16

I have to say - I am loving this discussion thread and every day eagerly open my hotmail in anticipation of what new interesting stuff I get to read about :-) Thanks Brad for the links on the Cynefin Framework - I am so glad I came across this framework right now :-)

While searching for resources on it, I came across these two articles, often cited in many of the resources:

Snowden, D.J. & Boone, M. (2007). A Leader's Framework for Decision Making. Harvard Business Review, November 2007, pp. 69-76. Found at http://www.mpiweb.org/CMS/uploadedFiles/Article%20for%20Marketing%20-%20Mary%20Boone.pdf

Kurtz, C. F. & Snowden, D. J. (2003). The new dynamics of strategy: Sense-making in a complex and complicated world. IBM Systems Journal, 42 (3), p. 462. Found at http://alumni.media.mit.edu/~brooks/storybiz/kurtz.pdf

regards, amina

Brad Hinton, 2010/8/16

Amina,

Glad the framework is of interest.

The article that you cite by Snowden and Boone in Harvard Business Review (2007) is a great article - thanks for reminding me!

regards, Brad

Ben Ramalingam, 2010/8/16

This debate is a fascinating one, and to my mind, it really hinges on your underlying theory of change and the implications for knowledge.

If you think the world is predictable, simple, linear, then measurement of knowledge sharing will be entirely possible. Look no further than Henry Ford and scientific management for the appropriate tools, based on metrics and targets.

If you think the world is complex, interconnected, dynamic and emergent, then measurement of knowledge may be akin to using a machine gun to catch a butterfly. For sure, there can be retrospective learning, narratives, sensemaking...

This face-off is discussed in a recent Aid on the Edge post on how advances in the understanding of the brain are challenging some of our longstanding theories of organisational learning (see here for more details: http://aidontheedge.info/2010/08/13/what-brain-scientists-can-tell-us-about-learning-in-aid-agencies/)

As that post suggests, many of us working on knowledge and learning have long understood that the social and emergent side of learning is the most powerful in terms of changing how organisations do things.

However, paradoxically, the majority of our effort still goes on over-designed, top-down systems which have an overly mechanistic view of human beings, how they interact and how they learn.

The critical issue may be how to move the aid sector beyond the linear and mechanical approaches to which it is wedded in so many different settings...

Hannah Beardon, 2010/8/16

Thanks - a much more eloquent and better researched version of the answer I wanted to give to another post about rigorous methods for extracting lessons learned, in response to the search for ways to minimise subjectivity and promote structure and systematic-ness... (I did say you were more eloquent!). Juliana notes that "as humans, we all have our own perceptions and biases" and is looking for ways to minimise those and extract lessons from an intervention that can become valid/solid evidence to inform a new program or project. While I agree with learning and using that learning, I worry about minimising subjectivity, because all you can do is minimise the recognition that it is there, and I think part of the problem is that we don't recognise subjectivity enough.

Although in science experiments can result in evidence, I have found this principle difficult to apply in development. How to deal with the many subjective, divergent and incoherent views and perspectives that different stakeholders have of a situation is something which underpins our approach to development. You can use your power (perhaps as donor or editor) to decide which view is valid, you could apply selective hearing to select evidence, but I have yet to see any rigorous scientific approach to determining what is a lesson or a valid point of view. As part of the IKM Emergent work on the ripples of participatory processes in INGOs, I have been reflecting on the process whereby people's views, stories, perspectives etc. are turned into evidence and data, and how a blind eye is turned to manipulation and exploitation within INGOs - for example, where 'voices of the poor' are selected to support a campaign message or tactic, rather than the campaign really being built on, and dealing transparently with, the complexity which is inevitably reflected in the divergent voices or intricate stories that come out of grassroots participatory processes.

This might sound a bit abstract or irrelevant to planning for improved lessons learned, and the need to build in systematic capturing and sharing of lessons learned is very real (I have recently used reporting templates to support this, but of course it depends on the context). But instead of trying to minimise subjectivity, I would like to see more appreciation and recognition of it, to encourage people to think critically about the lessons, how they emerged and what can be learned from them for other contexts and projects - if possible encouraging more than one interpretation of lessons from a project or experience. Because lessons, like stories of change and definitions of a better society, are subjective, and may tell as much about the teller as about the experience they describe.

Hannah

Abdou Fall, 2010/8/17

Bonjour. Thanks Brad for your useful clarification of output versus outcome! I totally agree with you. Yes Chris, I think that OM could really come in here. Measuring knowledge sharing has much to do with attitude and behaviour, and those are somehow strongly linked to the acquisition of new knowledge. OM in this context can help to plan for people's and the organisation's knowledge-sharing behaviour, and to put in place a follow-up process to measure progress.

Abdou Fall Responsable de programme, FRAO/Program Officer, WARF Coordonnateur Fidafrique AOC/ Fidafrique Coordinator WCA

Tony Pryor, 2010/8/17

Brad:

I am attracted to this model, but even after reading more on it, I am confused re the categorization of practices. What it SEEMS to be saying is that best practices can really only be defined for relationships where cause and effect are clear and "simple". As systems or problems get more complex, the ability to say "ah, THAT's the right path to success" diminishes, since 1) most likely there are multiple paths to the same objective and 2) the definition of success is more nuanced. Is that (vaguely) right?

It seems, though, that these are not quite the same thing. In the first instance, where the result is clear but the choices may be varied, the concept of "fuzzy logic" comes into play; I don't much care exactly how much water or how much rice I put into my rice cooker, but I know when it's done. In the world of development, this is somewhat akin to having a performance-based contract; I can't and won't tell you exactly how to get across that river - that is up to you to propose - but I CAN tell you that I DO want to get across the river somehow. The second seems to be more complex and nuanced because the end result itself is evolving, or may be viewed differently by different people (you may want to go across that river, but I am happy here, thank you very much). Aren't these two types of complexity quite different?

Now getting down from the esoteric, what fascinated me about Cynefin was its impact on the rest of us in how "practice" is defined. If indeed one buys into this model, then I would bet that most of what is called "best practice" is nothing of the sort. Not because we did a sloppy job, or didn't dig deep enough, but because the problem is inherently complex. In that case, I'd say for many of the developmental problems outside of the very specific technical fix (building a bridge, handing out vaccines), "best practice" per se should be pretty rare in our world. I find this to be an exceedingly useful way of reconsidering what "best practice" means. I had a bunch of reasons why I have always hated the "best practice" term, but this gives me even more fodder!

Below are some of my other grumblings over "best practice", but before I list them I had one more concern over an unintended consequence of Cynefin: doesn't this lead some people to want to aim for the simple and clear-cut - OR (worse still) to pretend things are simple and clear-cut - so that one has more control, can define the "right answer", and then package it as a best practice? I would bet that for many of us on this listserv the concept of complexity is bracing, even sought after, while for many staff within development agencies and within the "results reporting" world, complexity is something intensely to be avoided. For many of the KM issues discussed on km4dev to be seen as things to be embraced rather than problems that need to be simplified, I think we need to figure out a way to give those who hate more than one answer - and more than one path to that answer - some anchor to windward, some hope that something concrete will eventually result. Sooooo, what do I tell them?

And now for my pet peeves re "best practice":

  • Both words are seldom defined: what is a "practice"? And is the thing called a practice easy to hold, or in fact so complex that it's hard to see where it stops and starts (Cynefin helps on this point);
  • What makes something "best"? I think we like the term because it gives us confidence consciously or unconsciously that someone somewhere reviewed one practice against another based on the input of experts, and reviewed them against some yardstick of quality. And not only was some sort of yardstick used, and used wisely, but that it led to a "winner".
  • In most instances, what makes the practice not necessarily the "best" is not that it is complex (as Cynefin defines it) but because there was no yardstick used, and no "expert" process involved in carrying out the comparison.
  • Often what is seen as a best practice is more like a "practice that I heard about which seems to be pretty good". Now that isn't a BAD thing, and in fact is probably as good as one can get to quickly. The problem is that by calling it a "best practice" we tend to strip from that practice any contextual concerns or issues which in fact probably made the practice of interest to begin with.
  • And last but not least, we often study a practice without having an understanding of its life cycle. If you're promoting the planting of on-farm trees with the hope of increasing household income from fruit or other produce, calling something a "best practice" in year 3 seems a tad premature. This is compounded by the tendency to track, evaluate and comment on interventions while we are funding them, whereas many impacts occur after (sometimes long after) external support ends. The ex post evaluations and reviews that look at final impact and THEN declare something a best practice often get overwhelmed by a deep-seated concern over tracking how our money is getting used, or misused.

Tony Pryor

James Tarrant, 2010/8/17

Performance-based contracts are based not just on results but also specific standards or criteria related to those results. So, yes, you want to get across that river but you also want to be alive (as opposed to drowned), perhaps also dry, perhaps also within one hour, etc. The more specific the performance standards and criteria, the narrower the range of possible routes to a desired result.

Originally, the concept of "best practice" came from the engineering world and was usually codified in a "code of practice". The code, in turn, was usually based on a collective assessment of peers' experiences over a long period of time, and occasionally even enshrined in regulation. The problem is that this kind of standardization can work well in engineering situations - bridge building, building construction, etc. - but breaks down pretty quickly in most social science kinds of situations, e.g. anthropology, economics, sociology, where even the desired result may be subject to dispute or at least different interpretation. However, I would certainly agree that neither "best" nor "practice" is often defined precisely. So what often happens is that the only best practices on which agreement can be reached are the most tangible or readily measured.

James J. Tarrant, Phd Senior Manager International Resources Group (IRG)

Eric Mullerbeck, 2010/8/17

Matt,

If one takes it as given that "most knowledge sharing does not occur through the uploading and downloading of documents", then clearly there's no value, and some potential harm, in measuring these activities. However I would hope that this assumption was arrived at through some kind of empirical process; if it's simply an unexamined assumption, it leaves unanswered the question of whether measurement of KS can be accomplished by measuring these activities.

One approach would be to carry out measurement of both outputs and outcomes in the context of a KM project, such as improvements in a document repository. This would involve measuring the activities (which could include document up/downloads, or any other arguably relevant and measurable activities) during the course of the project, assessing desired outcomes at the end of the project (perhaps through less rigorous, subjective approaches such as gathering opinions of participants and beneficiaries), and determining what degree of correlation exists between them. It would be interesting to hear from any managers of KM projects who have conducted this kind of measurement.
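As a sketch of that last step - correlating an activity metric with an outcome proxy across, say, teams or sites - where all names and numbers are invented:

 from statistics import correlation  # Python 3.10+

 # Invented per-team figures from a hypothetical repository project:
 # an activity output (downloads per person) and an outcome proxy
 # (mean satisfaction score from an end-of-project survey, 1-5 scale).
 downloads_per_person = [2.1, 5.4, 3.3, 8.0, 1.2, 6.7]
 satisfaction = [2.8, 3.9, 3.1, 4.4, 2.5, 4.0]

 # Pearson's r between the activity metric and the outcome proxy.
 print(correlation(downloads_per_person, satisfaction))

A strong correlation would still not prove causation, but a near-zero one would warn against treating the activity metric as an outcome proxy.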

regards, Eric -- Eric Mullerbeck

Matt Moore, 2010/8/19

Eric,

The research that I have seen (I may need to dig around in my bookcase to find the actual references) indicates that most information gathering, knowledge sharing and learning takes place through informal channels. Only a minority occurs through officially sanctioned knowledge databases. If you have evidence to the contrary then please serve it up.

Now this does not mean that knowledge databases and information repositories are unnecessary nor does it mean that activity levels related to these repositories should not be measured. However it does mean that measuring such activity levels will probably not give you an accurate picture of actual knowledge sharing, information gathering and learning across the organization.

There is a well-known fable of a man who drops his keys in the dark part of the street but searches under a streetlight because that is where the light is. If we want to examine the impact of KM programmes we can't just focus on the things that are easy to measure. We have to venture into the dark. Some of my own thoughts can be found here: http://innotecture.wordpress.com/2008/11/03/justifying-your-knowledge-management-programme/

Cheers,

Matt

Maarten Samson, 2010/8/19

Hello,

I think there are two ways to interpret your observation that "most information gathering, knowledge sharing and learning takes place through informal channels. Only a minority occurs through officially sanctioned knowledge databases."

  1. Knowledge and information need informal ways to be shared... and Nonaka's concept of 'tacit knowledge' will confirm and, I would say, embed us in this way.
  2. Professionals and researchers don't know how to manage or regulate knowledge and information sharing any better than when it is not managed, except by organising some informal spaces (in reaction to organisations focused on individual task performance), like brown-bag lunches. This second interpretation can find arguments in the problematological approach and in the idea that questions and problems have been forgotten in a lot of knowledge theories and practices (education, knowledge management, etc.). For this way, cf. Michel Meyer (who founded the concepts of problematology) and Michel Fabre (philosopher of education and knowledge transfer); they refer back to Dewey (reread...) and Bachelard.

Regards Maarten

Damas Ogwe, 2010/8/20

Measuring knowledge management ideally has more to do with impact. It is more concerned with how lives have been changed or transformed by the knowledge shared. Thus in measuring the success of knowledge sharing, we look at what gains have been made, whether positive or negative, after the sharing of knowledge. We should compare where the recipient and/or giver of knowledge is at the moment with where they have come from. We should also consider how well the resources available to the recipient/giver of knowledge are being utilized after knowledge sharing compared with before.

The main concern in knowledge sharing when measuring impact could be in assessing some of the following:

  • Improvement in quality of life
  • The extent of adoption of new technologies and how they improve productivity and efficiency.
  • Innovations and adapting to different situations and circumstances
  • In the case of agriculture, farm productivity and food security/insecurity levels could come in handy as a gauge.
  • Change in perceptions, thoughts and ideas

The measurement could be expressed in the following equation: PI + CI = (K2 + R2) – (K1 + R1)

Where:
  PI = Personal Impact after sharing
  CI = Community Impact after sharing
  K1 = Knowledge before sharing
  K2 = Knowledge after sharing
  R1 = Resource utilization before sharing
  R2 = Resource utilization after sharing

Thus the difference between the sum of knowledge (K2) and resource use (R2) after knowledge sharing on one hand, and the sum of knowledge (K1) and resource use (R1) before knowledge sharing on the other, can help us understand both personal impact (PI) and community impact (CI). This equation may not be conclusive but just provides some food for thought.
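A toy illustration of the equation in Python, with knowledge and resource use rated on an arbitrary 0-10 scale (all numbers invented):

 # Toy illustration of PI + CI = (K2 + R2) - (K1 + R1).
 k1, r1 = 4, 3   # knowledge and resource utilization before sharing
 k2, r2 = 7, 6   # knowledge and resource utilization after sharing

 combined_impact = (k2 + r2) - (k1 + r1)   # = PI + CI
 print(combined_impact)   # 6, i.e. a positive combined impact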

It is difficult at times to measure, but of course what is difficult is not outright impossible.

Damas Ogwe Content Facilitator Kenya Telecentres Link

Atanu Garai, 2010/8/20

"Rigorous scientific" evaluation is ideally conducted to measure if an intervention is effective or not, especially in a small scale. Once it is proven effective and the intervention has been scaled up, rigorous scientific evaluation of effectiveness is generally not required - as such evaluation will take considerable amount of resources. Sometimes and most times, such evaluation is also not required for large scale and often commercial initiatives that have often contributed to development but less talked about in development sector. The most used method of knowledge sharing (using in the sense information is retrieved from electronic or human sources) universally is the google search engine and we do not need evidence that use of google has led to social and economic development.

Among research designs, the experimental design is considered the gold standard for effectiveness evaluation. E-learning can be considered a valid method of knowledge sharing. Searching PubMed, some studies appear that used a controlled trial design to evaluate the efficacy of e-learning for capacity building, as compared to traditional classroom training. One such example is the study by Hugenholtz et al. (2008).

Hugenholtz, N. I., de Croon, E. M., Smits, P. B., van Dijk, F. J., & Nieuwenhuijsen, K. (2008). Effectiveness of e-learning in continuing medical education for occupational physicians. Occup Med (Lond), 58(5), 370-372.

Abstract:

Background: Within a clinical context e-learning is comparable to traditional approaches of continuing medical education (CME). However, the occupational health context differs and until now the effect of postgraduate e-learning among occupational physicians (OPs) has not been evaluated.

Aim: To evaluate the effect of e-learning on knowledge on mental health issues as compared to lecture-based learning in a CME programme for OPs.

Methods: Within the context of a postgraduate meeting for 74 OPs, a randomized controlled trial was conducted. Test assessments of knowledge were made before and immediately after an educational session with either e-learning or lecture-based learning.

Results: In both groups, a significant gain in knowledge on mental health care was found (P < 0.05). However, there was no significant difference between the two educational approaches.

Conclusion: The effect of e-learning on OPs' mental health care knowledge is comparable to a lecture-based approach. Therefore, e-learning can be beneficial for the CME of OPs.
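As a sketch of the kind of pre/post analysis such a trial implies (the scores below are simulated; the study's actual data and methods may differ):

 import numpy as np
 from scipy import stats

 rng = np.random.default_rng(0)

 # Simulated pre/post knowledge test scores for two groups of physicians.
 pre_e = rng.normal(60, 10, 37)            # e-learning group, before
 post_e = pre_e + rng.normal(8, 5, 37)     # after the session
 pre_l = rng.normal(60, 10, 37)            # lecture group, before
 post_l = pre_l + rng.normal(8, 5, 37)     # after the session

 # Within-group gain in knowledge: paired t-test (P < 0.05 in the study).
 print(stats.ttest_rel(post_e, pre_e))

 # Between-group comparison of the gains: independent t-test
 # (no significant difference between approaches in the study).
 print(stats.ttest_ind(post_e - pre_e, post_l - pre_l))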

Christina Merl, 2010/8/23

Dear Ogwe,

I think that's an interesting equation that you present here. I am wondering about the "-" in it .. what if this "-" were a "+" ?

Christina

Arthur Shelley, 2010/8/23

Christina and Damas Ogwe,

I understand what you are getting at here. Most people measure what can be measured, but that is not where the real value is. The value is in the intangibles generated as a result of the interactions. I call these intangibles the outcomes (as opposed to the tangible outputs). They tend to be things like increased levels of knowledge, a greater capacity to innovate, the capability to create more ideas, relationships, trust and active networks (like this one - why do you contribute to this forum? Can you measure it?)

See more at this blog post: http://organizationalzoo.blogspot.com/2010/01/conversations-that-matter.html

By discussing intangible outcomes through reflective practice we can make a huge difference to the behavioural environment and if done long enough this starts to generate tangible outputs (such as measurable performance improvements). As Einstein stated: Not everything that can be counted counts, and not everything that counts can be counted.

Regards, Arthur Shelley Founder: Intelligent Answers & Organizational Zoo Ambassadors Network Author: The Organizational Zoo & Being a Successful Knowledge Leader

Christina Merl, 2010/8/23

Dear Arthur,

My "+" thought was trying to be provocative.. though I find Damas Ogwe's equation an interesting attempt.

I really do like Einstein's quote - a nice one for management! Thanks for sharing this.

Damas Ogwe, 2010/8/24

Christina and Arthur,

Thanks for your comments. Knowledge sharing is difficult to measure successfully using scientific or mathematical formulae. However, I propose that we look at it this way:

  1. A pastor is able to know how successful or unsuccessful he is by the number of converts he receives in his church in comparison to the defectors.
  2. A newspaper publisher is able to know the impact of the content of his publication through circulation numbers and advertising revenue.
  3. A political party which sells its manifesto to the public knows how it fares when the election results are out.

The list may be endless.

But how do we measure the impact of knowledge sharing in a community? It is all about looking at the deviation in behaviour and/or the adoption of new practices and knowledge, and what impact this has had, or NOT had, on the individual's or community's life.

I may agree that when we measure this impact, the "-" in my earlier equation could be a "+". This may only be so if the new knowledge is being used in addition to previous knowledge and resources. However, where new knowledge sharing requires that certain past actions and traditions be discarded, due to repugnance or new scientific discoveries, then we could possibly retain the "-".

However, this is not the point. Though many theses have been advanced on measuring KS, I for one would be skeptical about adopting any single formula as the standard through which to measure the success of KS quantitatively. KS is one ingredient that can spur real development, especially at the grassroots, where resources are plentiful but unutilised or misused, especially in the developing world.

Thus when we try to measure KS, we may look at key specific anticipations from the sharing. The following are four examples of information/knowledge sharing (though they may appear out of context) and their expected outcomes:

  1. KS on malaria prevention/treatment - fewer malaria-related deaths, recording of the number of malaria cases in a given area, or uptake in the use of mosquito nets.
  2. KS on HIV/AIDS prevention - look at the increase in the use of condoms, reduction in pre-marital/youth pregnancies, increase in the number of persons wanting to know their HIV status, etc.
  3. KS on fighting the striga weed - increased maize/corn yields, increased use of striga-resistant maize seed varieties, etc.
  4. KS on girl child rights - reduction in child labour, increased girl child enrolments in schools, etc.

Thus whenever information is shared, there is a purpose. "Have these purposes been achieved?" is the key question that we have to address a few months or years down the road. We may not be able to clearly measure the exactness or specificity of success or failure, but we might just be able to gauge whether we have succeeded through KS or not.

Damas Ogwe Content Facilitator Kenya Telecentres Link

Ben Ramalingam, 2010/8/24

Thanks Arthur, also really like that Einstein quote.

Last year ALNAP produced a compendium of performance management approaches, with a title inspired by the quote.

Counting what counts: performance and effectiveness in the humanitarian sector - http://www.alnap.org/pool/files/8rhach1.pdf

Ben

Arthur Shelley, 2010/8/24

Ben,

Very nice piece of work. Good combination of qualitative and quantitative.

I like what you have done. Wrote about similar things in the book below. Hard to get the right mix that displayed the true impact of what had been achieved (or in some instances destroyed).

Imagine the difference in the world if the money spent on destructive technologies (war and the like) was invested in development! The real measures of worthy activities should be based on what we leave behind, not the size of our budget and the number of people reporting to us. Humanitarians can count the number of children removed from malnutrition, or educated, per dollar spent. Politicians should start counting numbers killed as a KPI; then we might start to get some shifted perspectives. I know it is not that simple, but humans have a very bad habit of making basics far too complicated - and there is a vested interest in maintaining "haves" and "have nots". We can only hope more people read your documents and fewer read propaganda of hate and divisive literature. Maybe we can rebalance the world a little.

Regards,

Arthur Shelley Founder: Intelligent Answers & Organizational Zoo Ambassadors Network Author: The Organizational Zoo & Being a Successful Knowledge Leader

James Tarrant, 2010/8/24

Not to belabor the + vs. the –, but, in fact, knowledge is not really a fixed quantum (or sum) that can be drawn down (along with resources, by which I assume is meant the means for accruing knowledge, i.e. capital or funding, materials, other people's time, etc.) in order to create new knowledge. Even some resources, if managed "sustainably", can be treated as flows rather than fixed stocks.

Regarding what can or should be measured, I would slightly modify the apparent either/or nature of what counts and doesn't count to say that some tangibles are important to measure and count - they may be the primary reason that an activity was implemented in the first place. It is just that the intangibles - while much harder to measure - are so much more valuable for what they contribute to the distillation of experience into understanding, lessons, etc.

Christina Merl, 2010/8/26

The +/- seems to reflect prevalent management approaches and ways of thinking. Shouldn't we take one step back, look at the overall situation, and ask ourselves a few questions? For example, is it useful to produce more and more of the same under different headlines? What makes sense? Isn't "doing" what counts at the end of the day? What makes "doing" possible? Is it practical experience, teamwork, motivation, peer exchange.. what else? And what hampers "doing" and thus minimizes outputs and outcomes? Phenomena like bullying, lack of teamwork, lack of motivation, lack of trust, vanities, coopetition...? Which environments are necessary to achieve "doing"? Environments of trust, mutual respect, loyalty, freedom of thought, innovative spaces, also guidelines or even restrictions...? And how can we achieve such cultures/environments in our different contexts? ...

Basically, we are all aware of the above, but do we act accordingly? Maybe a context-specific matrix of qualitative and quantitative parameters has to be worked out by individual teams, organisations and cultures, which could then be applied in individual contexts and extended according to need. Maybe a more creative, out-of-the-box, qualitative approach will be needed to cope with the huge challenges of "measuring knowledge sharing".

Maybe this is the end of traditional +/- equations and the beginning of a more sensitive, humane, creative, qualitative, innovative approach towards measuring and management -- a "not everything that counts can be counted" kind of approach in which each microcosm counts and has to establish its own knowledge-sharing culture, as well as a flexible matrix that allows us to "measure" the positive and less positive outputs, outcomes and practices...?

What do we even want to achieve by "measuring knowledge sharing"?

Ueli Scheuermeier, 2010/8/26

Christina,

"What do we even want to achieve by "measuring knowledge sharing"?"

Hm.... I couldn't resist that one. My answer is: Behavioural change.

No, that's wrong. Sharing knowledge must lead to behavioural change, or in my opinion there is no point in sharing at all. We can keep on jabbering and babbling along with each other, but if it doesn't make a difference in the real world out there, I couldn't care less. So the measure of knowledge sharing is the change in behaviour.

Let's go and check how behaviour has changed in the real world because of the shared knowledge.

Ueli

Christina Merl, 2010/8/26

:) I am glad you responded to that one, Ueli! I bet you would!!

BTW, sorry for sending two messages -- I fully agree... though I'd love to know how you would define behavioural change. Thanks.

Ueli Scheuermeier, 2010/8/26

Christina,

How would I define behavioural change? It's in the "doing" you refer to in your thread. If I can observe people doing something differently than before, that is behavioural change. And I mean real doing, i.e. things I could film and publish on YouTube or record on an audiotape. Just changing our philosophy and mindset etc., i.e. stuff in our heads, doesn't count. It's the real doing that counts, that makes real things happen in reality. Show me that (it's easy: just film it or stick a microphone into the changed situation), and show me it's because of knowledge sharing, and I'll believe you.

That's my take.

Ueli

Charles Dhewa, 2010/8/26

It's disheartening to see the bad side of behavior change stemming from knowledge sharing. Here in Africa, as soon as some people become more educated they tend to stay aloof and don't socialise the way they did before they were highly educated. They grew up herding cattle in the forests but now can't even remember the names of local trees. Their behavior has changed completely; they have become alienated from issues that affect ordinary people.

Maybe we need to look at both sides of behavior change and put more effort into how knowledge sharing can reinforce positive behavior change. Some of the worst decisions are made by people who have acquired a lot of knowledge and know how to manipulate it. I am not sure if such knowledge is valuable.

Regards,

Charles

Ben Ramalingam, 2010/8/26

Charles, thank you for a great, thought-provoking comment.

You focus on Africa, but I would argue that this applies to the development sector as a whole, with the counter-productive bureaucracy that goes with it, and the knowledge we have to maintain in order to 'feed the beast'.

Our capacity for knowledge is not infinite, and I see many good, passionate people who - almost by bureaucratic necessity - have also "become alienated from issues that affect ordinary people".

How many of us dreamt of filling out logframes when we were young?

Jennifer Nelson, 2010/8/26

Hi Charles,

Just wanted to thank you for these words -- it was great to walk into work this morning and read this gentle reminder that we should remember our roots, stay humble, and always think about both the positive and negative consequences of our respective quests to make the world a better place. Thank you!

Jenny Nelson

William Cowie, 2010/8/26

It must be remembered that many people move not to remember but to forget. They want to put their past behind them and never look back. North America was populated by such people.

The economist John Kenneth Galbraith said, "I grew up on a small farm in Chatham, and I lived there long enough to know that I did not want to spend the rest of my life on a small farm in Chatham".

Cornell International Institute for Food, Agriculture and Development

Eric Mullerbeck, 2010/8/26

This is one of the most interesting discussions I've seen on this list.

Quite a few respondents seem to feel that the intangible benefits of knowledge sharing are more valuable than the tangible benefits. But in order for the intangibles to have real value, they must eventually translate into tangible benefits as well (unless we are interested in supporting knowledge for knowledge's sake). It may be very difficult to measure the "ripple effect" of long-term benefits derived from these intangibles, but in order for KM to be properly valued and supported by decision makers, surely we must try to do so, rather than simply saying, "Knowledge sharing is mostly about intangible benefits" and leaving it at that. Maybe this is why so many KM groups have trouble getting resources within their organizations?

With respect to the significance of behavioural change as an indicator of improved knowledge, I agree with Ueli. There are systematic approaches to measuring behavioural change, including ethnographic research and activity metrics. The latter may be easy to gather, but they are NOT easy to use well; they must be selected and interpreted with care according to the context. It's certainly not enough to count downloads or page views.

And responding to Matt Moore's post on KS and repositories: My evidence of the value of repositories/document storage for knowledge sharing is from evaluations of projects to create improved repositories and better document sharing systems. System analytics were selected and compared with subjective satisfaction metrics from surveys and reported benefits from user interviews. The particular projects I'm discussing were in organizations that are geographically very widely dispersed, with high turnover and relatively standard operating procedures in many locations. The results tended to confirm that improvements in document repositories were correlated with large subjective satisfaction improvements. I suppose that much of this improvement was due to the nature of the organization; organizations with different operating conditions might see less improvement. I would still be interested in the research you mentioned on knowledge sharing taking place mostly through informal channels.
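To make the shape of that comparison concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the office names, the figures, and the choice of Pearson correlation are illustrative assumptions, not the actual evaluations discussed above. The point is only that a raw activity count becomes more informative once it is normalised per user and set against a subjective measure.

  # Minimal sketch: set repository activity metrics against survey
  # satisfaction scores. All names and numbers are hypothetical and
  # only illustrate the shape of the comparison.
  from statistics import correlation, mean  # correlation needs Python 3.10+

  # Hypothetical per-office measurements after a repository upgrade:
  # (downloads per active user per month, mean satisfaction score on 1-5)
  offices = {
      "office_a": (12.4, 4.1),
      "office_b": (3.2, 2.9),
      "office_c": (8.7, 3.8),
      "office_d": (15.1, 4.4),
  }

  usage = [u for u, _ in offices.values()]
  satisfaction = [s for _, s in offices.values()]

  # A raw download count says little on its own; pairing the normalised
  # usage figure with a subjective measure at least shows whether the
  # two move together.
  print("mean usage per active user:", mean(usage))
  print("usage/satisfaction correlation:", round(correlation(usage, satisfaction), 2))

A correlation like this does not prove impact, of course; it is only a first check on whether usage and perceived benefit move together, with the metrics chosen per context as noted above.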

Your (Matt's) excellent article on 'justifying KM' has a number of very useful recommendations on measurement of KM. But from your posting to this list it sounds as though you consider measurement mainly as a useful tool for justification and not for impact assessment...?

Eric Mullerbeck, Knowledge Management Specialist - Taxonomy, Documents & Records, Information and Knowledge Management Unit, Division of Policy and Practice, UNICEF

Matt Moore, 2010/8/27

Eric,

"My evidence of the value of repositories/document storage for knowledge sharing is from evaluations of projects to create improved repositories and better document sharing systems. System analytics were selected and compared with subjective satisfaction metrics from surveys and reported benefits from user interviews."

I was careful to state that knowledge repositories can have a beneficial role. What I disputed was the claim that they are the primary means of knowledge sharing for most workers. The evidence you quote doesn't really challenge my counterclaim.

Evidence to the contrary:

  • Chapter 6 of Tom Davenport's "Thinking for a Living" (based on IWPC studies). Users mostly used email and the phone for their information work rather than corporate websites and databases.
  • CIPD's "Who learns at work?" survey indicates that learners prefer to learn via practice, coaching or their colleagues rather than from internet-based tools (backed up by research from NIACE in 2007).
  • Marketing research that indicates people rely on word of mouth and peer recommendations as their most trusted resource in purchasing situations (Nielsen Online Global Consumer Study 2007). Not directly relevant but highly suggestive of human behaviour patterns in general.

"Your (Matt's) excellent article on 'justifying KM' has a number of very useful recommendations on measurement of KM. But from your posting to this list it sounds as though you consider measurement mainly as a useful tool for justification and not for impact assessment...?"

  • I believe that measurement is important for many reasons and that justification is only one of those.
  • I believe that impact assessment is an important activity; it is not identical to justification, but it can overlap with it.

Cheers,

Matt

Brad Hinton, 2010/8/27

Matt,

Further to your comments on repositories, let me add that my work experience supports the general thrust of your observations.

At one professional services firm, there was an extensive repository of information about client work, lessons, and industries. Most people thought that the information in the repository was important.

However, the "most important information" was the name or names of the people who worked on a project/assignment/engagement so that people could be identified and contacted for direct voice or email communication. The upshot was that making visible the knowledge held by people was the "most important" element of the repository, not the documents themselves.

As to measurement, I could easily determine how many documents were viewed or downloaded from the repository. However, this didn't tell me anything about how the documents or the repository were being used, or what impact they had. I only knew that by talking with people - the people who did or didn't use the repository.
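As an illustration of just how easy that easy part is, here is a hypothetical sketch in Python (the log format, file names and user names are assumptions made up for the example, not the firm's actual system). The counting takes a few lines; the interpretation is what the counting cannot supply.

  # Minimal sketch: counting document downloads from a hypothetical
  # access log. Each entry is assumed to be a "user,document" pair.
  from collections import Counter

  log_entries = [
      "anna,proposal_2009.doc",
      "ben,proposal_2009.doc",
      "anna,client_brief.doc",
      "anna,proposal_2009.doc",
  ]

  downloads = Counter(entry.split(",")[1] for entry in log_entries)
  for doc, n in downloads.most_common():
      print(doc, n)

  # This shows proposal_2009.doc was fetched three times. It says nothing
  # about whether anyone reused it, adapted it, or just skimmed it for the
  # author's name - for that, you have to talk to people, as noted above.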

regards, Brad

William Cowie, 2010/8/27

Knowledge is lost as well as gained, and always has been. As people move to new forms of livelihood and as new technologies emerge, they lose the knowledge of the old livelihoods and the old technologies. That knowledge is no longer relevant to their individual lives. Can they be blamed for this? I do not believe so.

Our political processes must give a voice and forums to the different constituencies whose knowledge and understandings are very different, and who very often do not understand or appreciate each other very much at all, so that they can come to accommodations and not be victims of tyranny by majorities, or by unrepresentative minorities. Rights-based liberal democratic societies of a representative nature have done this best, with social peace and overall stability being maintained despite wide differences in personal and group situations.

America right now is going through a sense of 'what we all knew is being lost': the assumptions about what America is are being challenged by a generation that is no longer so uniformly white and by countries fast moving up the economic chain. For the moment much of it is being expressed as anger, but the 'loss' will not be reversed.