The Miracle of the Spider


Question at Origin

In January 2008, a discussion took place on the KM4Dev mailing list about some effects observed in the application and analysis of the Learning NGO (LNGO) Questionnaire. The initial input was:


From: marc.steinlin@i-p-k.ch

Subject: [km4dev-l] The miracle of the spider

Date: 04 January 2008 2:29:59 PM


Dear all,

First of all: all my best wishes for 2008, lots of prosperity, joy and success! (and KM!)

I am working on the analysis of a KM survey in a large international organisation. I have - once again - used the KM Benchmarking Tool (Learning NGO Questionnaire) of Bruce Britton (those among you who were at the KM4Dev Meeting in Zeist may remember that we ran a workshop on the subject: http://www.km4dev.org/wiki/index.php/KM_Benchmarking )

Now, as I have been analysing the data, I came across a strange phenomenon. The questionnaire also contained a question about how much people know about KM, with 5 answer categories: don't know anything - have a rough idea what it is - have heard/read more about KM - have experienced KM, but not in this IGO - have experienced KM in this organisation itself. I then disaggregated the data (i.e. divided my sample into 5 sub-samples) and drew a radar diagram for each of them. Now the interesting thing is: the more experienced people are, the worse they rate KM in their organisation (cf. diagram in the attachment).

At first thought one might expect the opposite: the more they have seen, the better their impression of their organisation should be - no? But apparently that's wrong. I have observed a similar effect with other sub-samples...

Now my questions: a) what, in your imagination, could explain this? b) has anybody who has worked with the same tool observed the same effect?

File:Spidermiracle.gif

(Just to explain the colours: pink line: people not knowing what KM is; blue: having a rough idea what KM is; green: have heard/read about KM; yellow: have experienced KM somewhere; orange: have experienced KM in the institution in question)
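The disaggregation Marc describes - splitting respondents by the five experience categories and drawing one radar ("spider") line per sub-sample - can be sketched roughly as follows. This is a minimal illustration with invented column names and random placeholder data, not the actual questionnaire or analysis:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Invented stand-in data: 200 respondents, 8 Learning NGO dimensions
# scored 1-5, plus a 0-4 self-rated KM-experience category.
dimensions = ["dim_%d" % i for i in range(1, 9)]
rng = np.random.default_rng(42)
df = pd.DataFrame(rng.integers(1, 6, size=(200, 8)), columns=dimensions)
df["km_experience"] = rng.integers(0, 5, size=200)

# One spoke per dimension; repeat the first angle/value to close each polygon.
angles = np.linspace(0, 2 * np.pi, len(dimensions), endpoint=False)
angles = np.concatenate([angles, angles[:1]])

ax = plt.subplot(polar=True)
for level, group in df.groupby("km_experience"):
    means = group[dimensions].mean().to_numpy()
    ax.plot(angles, np.concatenate([means, means[:1]]),
            label="experience level %d" % level)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(dimensions)
ax.set_ylim(0, 5)
ax.legend(loc="upper right", bbox_to_anchor=(1.35, 1.1))
plt.show()
```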



Answers / Contributions

The question triggered a whole series of answers.


Irene Guijt <iguijt@learningbydesign.org>

I'm not familiar with your KM tool but have seen the phenomenon elsewhere. When I did a survey among managers in the early 90s at my then policy research institute and asked them how much they felt they needed management training, the ones that everyone knew were by far the worst managers indicated a much lower need for training than others. A tool/survey/question like this assumes that everyone has the same definition of 'management' in my case or 'KM' in your case. However, the more experienced people are, the more refined and perhaps more critical they are about the concept they are being asked to reflect on. Hence you have people with more experience referring implicitly to a more sophisticated concept, and those with less experience to a more basic one - hence the first group having a more critical perspective and the second group feeling satisfied more quickly.

Not sure if this resonates for you.

Greetings, irene


Johannes Schunter <Johannes.Schunter@unvolunteers.org>

Interesting observation. Perhaps one possible explanation can be found in the typology of (un)conscious (in)competence laid out in the well-known 'Learning to Fly' book (see page attached). If you know nothing about KM, you're probably bad at it, but you're not aware of it. That's unconscious incompetence. If you are asked about KM in your organisation at that point, you may rate it average or even good, because you really don't have the capacity to judge it thoroughly. That changes, however, when you receive training or even start working a little bit on KM. Then you suddenly realise all the things you (or your organisation) have done wrong in the past and the many areas where you could improve. You have moved to conscious incompetence. If you're asked about your or your organisation's performance now, you would probably rate it badly, because you understand how much better it could and should be.

You will probably only start getting good scores in the questionnaire once the organisation moves from conscious incompetence to conscious competence - which for most organisations is probably still a long way off.

Actually, I remember that during the benchmarking session in Zeist, we also discussed the possibility that the ratings in a yearly Learning NGO questionnaire could worsen over time, even if the organisation actually improves - just because, as people learn more, they become more aware of what still needs to be improved, while taking past achievements for granted.

Best,

Johannes


Jim Tarrant <JTarrant@irgltd.com>

I have not worked with the tool, but it is quite possible that the apparently counter-intuitive pattern of responses you are seeing reflects different levels of understanding of "KM" among the respondents. Respondents with little KM experience may have a rather diffuse notion of KM and hence may apply it to a much wider set of situations than more experienced KM users, who may perhaps be IT wonks and only view sophisticated Internet-based tools as KM. Just speculating here, but that is one possibility.

James J. Tarrant, PhD


Joitske Hulsebosch <joitske@gmail.com>

I had a similar experience to yours, not with the Bruce Britton assessment, but with a capacity assessment tool in Ethiopia. The tool was used to assess the organisational and programme capacities of local NGOs. The NGOs that were much more advanced would typically rate themselves lower; they would be much more critical and see how they could improve. The less capacitated NGOs would rate themselves higher, and would have the impression that if only they had money, they would be the best NGOs in Ethiopia.

I liked the explanation of Johannes (a theory I learned much later and did not link to this experience). If people want to read more there is a model online at: http://www.businessballs.com/consciouscompetencelearningmodel.htm

At the time, I thought a lower rating was almost an indication of higher capacity. I think the trickiness comes when you want to use such tools for benchmarking (as we wanted to do). You may rather use them for measuring progress in one organisation (though even there, ratings may first go down). Our conclusion was that self-rating is hard to use for benchmarking, unless you use external assessors. We continued to use it to increase awareness in organisations of what to work on.

Greetings, Joitske Hulsebosch


Urs Egger <urs.egger@skat.ch>

I agree with Johannes' interpretation of the survey results. It also confirms the paradox that the more we know, the more we realise how little we know (or how badly an organisation performs...)

Just two examples from another context: I remember a radio broadcast in which music experts analysed classical compositions, and as I listened to them and to the different versions of the music, I suddenly started to hear differences I hadn't been aware of before. The same happened to me at art exhibitions.

Cheers, Urs


Monjurul Kabir <monjurul.kabir@undp.org>

I tend to agree with James, based on my UNDP experience: in a number of small-scale mapping exercises and surveys, we often received quite positive feedback on KM from practitioners with only a modest KM orientation. I have not worked with the tool, though.

Best wishes-

Monjurul Kabir


Jaap Pels <jaap.pels@gmail.com>

Funny thread. My takes:

- The question is strange to me: 'how much do people know about KM?' could have many answers, among which the 5 given categories are only a few.
- Then 'the more experienced people are, the worse they estimate KM in their organisation'. Is that experienced as in years as a 'KM' practitioner? And the estimation - is it in the same five categories?
- Hi Joitske, great link! Calling learning 'reflective competence'!!!!!
- Intriguing statement: 'self-rating is hard to use for benchmarking, unless you use some external assessors'. Let go of the benchmark as an absolute point; even with external assessors, everything is relative or fluid.

So, on the issue: a check on numbers - is it statistically sound to divide the sample into groups along the 'experience' measure?

Cheers, Jaap
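Jaap's check can be made concrete: before reading anything into the five sub-sample spiders, look at how many respondents each category actually contains, and test whether the groups' ratings differ beyond chance. A rough sketch, using the same invented data layout as the earlier radar example (all names and data are placeholders):

```python
import numpy as np
import pandas as pd
from scipy import stats

# Invented layout: 8 dimension scores (1-5) and a 0-4 experience
# category per respondent.
dimensions = ["dim_%d" % i for i in range(1, 9)]
rng = np.random.default_rng(7)
df = pd.DataFrame(rng.integers(1, 6, size=(200, 8)), columns=dimensions)
df["km_experience"] = rng.integers(0, 5, size=200)

# Sub-sample sizes first: a spider drawn from a handful of respondents
# is mostly noise.
print(df.groupby("km_experience").size())

# Kruskal-Wallis: a non-parametric test for rating differences across the
# five groups (ordinal 1-5 data, so plain ANOVA assumptions are shaky).
overall = df[dimensions].mean(axis=1)
groups = [overall[df["km_experience"] == k] for k in range(5)]
h, p = stats.kruskal(*groups)
print("H = %.2f, p = %.3f" % (h, p))
```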


Nadejda Loumbeva <nadejda_loumbeva@yahoo.co.uk>

I completely agree with you, Jim. I also very much agree with Johannes and Jaap.

So, in order to really understand why the analyses are showing what they are showing, it would be important to look into both of the following, as you, Johannes and Jaap have already said:

1. Does the survey have any weaknesses from the point of view of the validity of what it measures - i.e., does it measure what it intends to measure? What do the sets of questions grouped under the different dimensions actually measure, and does this accord with what they are expected to measure? Are some questions ambiguous? How are they understood by the people who answer them? (Levels of statistical significance are also important.) [A sketch of one such check follows this message.]

2. The unconscious/conscious incompetence factor that Johannes has already explained - which means that the respondents' understanding of the essence of KM interacts with their appraisal of the state of KM in the organisation and, I guess, skews it in (some sort of) a systematic way. ...

Nadia
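Nadia's first point - do the questions grouped under each dimension hang together and measure one thing? - has a standard first-pass check: Cronbach's alpha over a dimension's questions. A minimal sketch; the data and the 0.7 rule of thumb are illustrative, not part of the original tool:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Invented example: 6 respondents answering the 4 questions of one dimension.
scores = np.array([
    [4, 4, 5, 4],
    [2, 3, 2, 2],
    [5, 4, 4, 5],
    [3, 3, 3, 2],
    [1, 2, 1, 2],
    [4, 5, 4, 4],
])
# Values below roughly 0.7 suggest the questions may not be measuring
# the same underlying construct.
print("alpha = %.2f" % cronbach_alpha(scores))
```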


Urs Egger <urs.egger@skat.ch>

The points Nadia and Jaap raise remind me of some basics of surveys:

- Surveys are only a means; they never show the truth, but might in the best case give some hints about where to dig deeper ("never trust statistics you have not faked yourself"). What you discovered, Marc, might be a question you want to explore with the staff of the organisation, and what really matters is this exploration process among staff.
- The way questions are asked matters a lot. Studies have shown that the results can be completely different if the questions are asked in a (slightly) different way.
- Do people really understand the questions? I once suggested to a client to use the Learning NGO Questionnaire, and they rejected the survey because they found the questions too complicated and unclear.
- Not only how the questions are asked, but also the order of the questions matters, and certainly also the context of the survey.
- The samples of many surveys are too small to get robust results, i.e. an apparent pattern may be just a coincidence. [A small arithmetic sketch of this point follows below.]
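Urs's last point is easy to quantify: the uncertainty in a mean rating shrinks only with the square root of the sample size, so small sub-samples carry very wide error bands. A back-of-envelope illustration; the spread of 1.0 rating points is an assumed, typical value, not taken from any actual survey:

```python
import math

sd = 1.0  # assumed standard deviation of individual 1-5 ratings
for n in (5, 10, 30, 100):
    half_width = 1.96 * sd / math.sqrt(n)  # approx. 95% CI half-width
    print("n = %3d: mean rating known only to about +/- %.2f points"
          % (n, half_width))
```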

One possibility to overcome these limits of surveys is methodological triangulation, i.e. using several methods in a KM/KS process, such as electronic surveys, interviews, workshops, document analysis, etc. Through the combination of these methods, a clearer picture can gradually be developed together with the organisation.

Best, Urs


Jaap Pels <jaap.pels@gmail.com>

Hi Marc, I assume the survey was done to choose a KM intervention. Maybe you should bounce the results back to the organisation. I am sure they did not expect 'the figures'. See what kind of meaning the staff read into the survey results - I would like to know :-).. And better: what to do next ..... Best Jaap

PS / Joitske: One CAN assess results in a participatory way: Google for "(the Quantified Participatory Assessment: QPA)"


Dr C Burman <burmanc@edupark.ac.za>

This doesn’t surprise me. There is a Chinese Proverb that goes something like this:

There are three types of people in this world: 1. People that know they know; 2. People that know they don’t know; 3. People that don’t know that they don’t know.

So - maybe the Spider Web gives us a very good example of the challenges involved in working with people [and power] who 'don't know they don't know'.

With kind regards

Chris


Khan Alim <KhanA@who.int>

Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments, 1999 article by Justin Kruger and David Dunning in the Journal of Personality and Social Psychology, Vol. 77, No. 6, 1121-1134

and more recently: Skilled or Unskilled, but Still Unaware of It: How Perceptions of Difficulty Drive Miscalibration in Relative Comparisons 2006 article by Burson, Larrick and Klayman in the Journal of Personality and Social Psychology, Vol. 90, No. 1, 60–77

Now if I could only recall how and why I have these two articles in my collection :)

All the best for the new year, and for the survey analysis! Alim


Steff Deprez <steffdeprez@veco-indonesia.net>

I had exactly the same results emerge from the Learning NGO questionnaire in our NGO. The Learning and Information Management Section (LIMS) rated each of the 8 functions on average lower than any other section in the organisation (the same pattern held for management team members, though less outspoken). We discussed this with the team and came to the same conclusion: knowing what KM is (or assuming that you know :)), having an idea of what is in place and knowing what could be done better... seems to make the LIMS staff more critical of the actual situation. On the use of the questionnaire (I have used this KM tool twice): I also experienced that the questions are indeed not always clear to everybody; they are ambiguous and pretty much written in KM language. This also came out when we had to translate it into Bahasa Indonesia. However, as mentioned before - also in Zeist - the strength is that it gives a nice visual representation which can lead to very interesting and powerful discussions and analyses, which in our case were more important and useful than the actual scores for each of the functions.

Cheers Steff


Maarten Boers <Maarten.Boers@icco.nl>

This has become an interesting thread. Although much has already been said, I would like to share my experience with the scan, which we applied within the ICCO-Alliance a few months ago.

Indeed we had much the same experience as you and many others describe. Just a few thoughts about the results:

- Perhaps the greatest worth of applying the scan is not in the first place the result itself (the spider web). The mere fact of doing the scan stimulates the discussion about the learning and KS activities within the organisation. And that in itself is already an important result.
- When discussing the results I always stress that the "notes" do not reflect reality; they are only (good?) indications of which elements should or could need improvement. (In our case clearly all eight elements ;-)
- A great risk in comparing the results of, for example, two departments is that this could be seen as a competition. As many of the reactions to your mail indicate, it is very well possible that a department which in fact is "better" at learning and KS will score lower than another department which is not "as good". Therefore I always warn colleagues not to use the results in this "competitive" way.
- The results of the scan are only a starting point for further investigation of how to develop a "plan of action" for improving the learning capacities of the organisation. What to do, and how, can only be defined on the basis of more in-depth interviews and discussions with colleagues.
- We do intend to use the scan as a monitoring tool, so we plan to do the scan again in two years' time. However, I had the opportunity to discuss the results with Bruce Britton himself, and he gave me a warning very much in line with the comments in this thread. He told me that the experience in other organisations is that a second scan will most probably give disappointing results (lower scores than the first time). In his opinion that is because once organisations start to work on their learning capacities, everybody gets a better understanding of what learning and KS in fact are, and therefore tends to answer the questions more critically (and thus scores lower). To get a better insight into how colleagues are valuing progress in the learning capacities, Bruce suggests asking a twofold question in the (second) scan: for every question, one would answer how he/she thinks the score would have been one or two years ago, and also what the score would be at this moment. In that way the results will show how everybody is interpreting the process of improvement (or not) of the learning capacities. [A small illustrative sketch of this twofold scoring follows below.] I must say I am already curious to see the results of the second scan of the ICCO-Alliance in 2009.
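The twofold question Bruce Britton suggests can be scored very simply: because both the "then" and the "now" answers are given at the same moment, with the respondent's current understanding of KM, their difference captures perceived progress without being confounded by rising standards. A hypothetical sketch, with all names and numbers invented:

```python
import pandas as pd

# Each respondent answers every question twice in the second scan:
# "what would the score have been two years ago?" and "what is it now?"
answers = pd.DataFrame({
    "question":   ["Q1", "Q1", "Q2", "Q2"],
    "respondent": [1, 2, 1, 2],
    "score_then": [3, 2, 4, 3],
    "score_now":  [4, 3, 4, 4],
})

# Both answers reflect today's (more critical) understanding of KM, so the
# difference isolates perceived progress even if absolute scores drift down.
answers["perceived_change"] = answers["score_now"] - answers["score_then"]
print(answers.groupby("question")["perceived_change"].mean())
```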

About a month ago I prepared a poster presentation of the (results of the) scan for an event here in the Netherlands. If you are interested you can find it here.

And by the way, if you are interested to know a little more about how we are trying to improve the learning and KS capacities within the ICCO-Alliance, you can find some more information on the "learning wiki" of the alliance: http://iacdrc.pbwiki.com Of course we are still working on it and it certainly is not yet as we want it to be, so any comments or suggestions are very welcome!

Kind regards,

Maarten Boers