Talk:Understanding the role that trust plays in knowledge transfer and our over-reliance on technology
- 1 Daan Boom, 2018/04/03
- 2 Johannes Schunter, 2018/04/04
- 3 Michael Hill, 2018/04/04
- 4 Chris Zielinski, 2018/04/04
- 5 Julie Senga, 2018/04/05
- 6 Peter J. Bury, 2018/04/09
- 7 John Bordeaux, 2018/04/09
- 8 Peter J. Bury, 2018/04/09
- 9 John Bordeaux, 2018/04/09
- 10 Elizabeth Maloba, 2018/04/10
- 11 John Bordeaux, 2018/04/10
- 12 Chris Zielinski, 2018/04/10
- 13 Arthur Shelley, 2018/04/11
- 14 Matt Moore, 2018/04/11
- 15 Elizabeth Maloba, 2018/04/11
- 16 Mark Trexler, 2018/04/11
- 17 John Bordeaux, 2018/04/11
- 18 Mark Trexler, 2018/04/11
- 19 Robert Dalton, 2018/04/11
- 20 Brad Hinton, 2018/04/12
- 21 Elizabeth Maloba, 2018/04/12
Daan Boom, 2018/04/03
Gloria Origgi is an Italian philosopher and a tenured senior researcher at CNRS (the French National Centre for Scientific Research) in Paris. Her latest book is Reputation: What It Is and Why It Matters (2017), translated into English by Stephen Holmes and Noga Arikha. It's an interesting read.
There is an underappreciated paradox of knowledge that plays a pivotal role in our advanced hyper-connected liberal democracies: the greater the amount of information that circulates, the more we rely on so-called reputational devices to evaluate it. What makes this paradoxical is that the vastly increased access to information and knowledge we have today does not empower us or make us more cognitively autonomous. Rather, it renders us more dependent on other people’s judgments and evaluations of the information with which we are faced.
We are experiencing a fundamental paradigm shift in our relationship to knowledge. From the ‘information age’, we are moving towards the ‘reputation age’, in which information will have value only if it is already filtered, evaluated and commented upon by others. Seen in this light, reputation has become a central pillar of collective intelligence today. It is the gatekeeper to knowledge, and the keys to the gate are held by others. The way in which the authority of knowledge is now constructed makes us reliant on what are the inevitably biased judgments of other people, most of whom we do not know.
Let me give some examples of this paradox. If you are asked why you believe that big changes in the climate are occurring and can dramatically harm future life on Earth, the most reasonable answer you’re likely to provide is that you trust the reputation of the sources of information to which you usually turn for acquiring information about the state of the planet. In the best-case scenario, you trust the reputation of scientific research and believe that peer-review is a reasonable way of sifting out ‘truths’ from false hypotheses and complete ‘bullshit’ about nature. In the average-case scenario, you trust newspapers, magazines or TV channels that endorse a political view which supports scientific research to summarise its findings for you. In this latter case, you are twice-removed from the sources: you trust other people’s trust in reputable science.
The paradigm shift from the age of information to the age of reputation must be taken into account when we try to defend ourselves from ‘fake news’ and other misinformation and disinformation techniques that are proliferating through contemporary societies. What a mature citizen of the digital age should be competent at is not spotting and confirming the veracity of the news. Rather, she should be competent at reconstructing the reputational path of the piece of information in question, evaluating the intentions of those who circulated it, and figuring out the agendas of those authorities that leant it credibility.
Whenever we are at the point of accepting or rejecting new information, we should ask ourselves: Where does it come from? Does the source have a good reputation? Who are the authorities who believe it? What are my reasons for deferring to these authorities? Such questions will help us to get a better grip on reality than trying to check directly the reliability of the information at issue. In a hyper-specialised system of the production of knowledge, it makes no sense to try to investigate on our own, for example, the possible correlation between vaccines and autism. It would be a waste of time, and probably our conclusions would not be accurate. In the reputation age, our critical appraisals should be directed not at the content of information but rather at the social network of relations that has shaped that content and given it a certain deserved or undeserved ‘rank’ in our system of knowledge.
These new competences constitute a sort of second-order epistemology. They prepare us to question and assess the reputation of an information source, something that philosophers and teachers should be crafting for future generations.
According to Friedrich Hayek's book Law, Legislation and Liberty (1973), 'civilisation rests on the fact that we all benefit from knowledge which we do not possess'. A civilised cyber-world will be one where people know how to assess critically the reputation of information sources, and can empower their knowledge by learning how to gauge appropriately the social 'rank' of each bit of information that enters their cognitive field.
Attachment: 2017 Origgi Chapter 1 Reputation.pdf
Johannes Schunter, 2018/04/04
Thanks much Daan for sharing!
Without having read the book, and judging only from the paras above, I have to say that this is in my view an utterly short-sighted view of the information overload conundrum. Reputation via statements from peers has always been at the heart of people's evaluation of given information, literally since forever. The only thing that has changed is the massively larger amount of information we have to evaluate, and the massively larger number of peers we're using to do it. We can go back to every single century in human history and we will find that reputation was the only way people could make sense of external information received (since no one could practically confirm for themselves whether e.g. the earth was indeed round, whether Abraham Lincoln indeed got the most votes, or whether the Titanic really hit an iceberg). In fact, reputation mattered even more in history the smaller the community of peers was that was validating it, from nations to cities, down to villages, tribes and families. To speak of a "reputation age" as something new shows an astonishingly ahistoric view of the world.
Personally, I believe that instead it is Artificial Intelligence that will really change how we make sense of all the information we cannot compute personally anymore. Rather than relying on selected (and always biased) information sources based on reputation, which always gives us the illusion of comprehensiveness (when in reality what went into our evaluation was only ever the tip of the iceberg), AI programmes will digest ALL information available and present summaries and conclusions to us. We will accept or reject those summaries and conclusions still based on reputation - though no longer the reputation of the sources or peers who delivered the information to us, but the reputation of the algorithm we trust to process all existing information sources for us in the best way (IBM Watson vs. Google vs. ???). This will create its own problems of course. But it has nothing to do with entering a new "reputation age". And in fact, in contrast to what the above author thinks, the skill of evaluating select information sources ourselves will become less and less important going forward, as we will rely more and more on software to do that for us.
Just my two cents. I'm sure there will be differing viewpoints, glad to hear and learn from those! ;)
Michael Hill, 2018/04/04
For a good historical read about 'reputation' and 'gatekeepers of knowledge', read Longitude: The True Story of a Lone Genius Who Solved the Greatest Scientific Problem of His Time. Regarding algorithms, for a while I used the attached in our Knowledge Management class as one of our homework readings. With my own background I'd become familiar with looking for 'knees' resulting from multiple runs of models, and with searching for points where small changes produced large results, because while sometimes those were valid, sometimes they merely illustrated artefacts of the model or algorithm itself. As a simple illustration: if I have 25 cents and want to buy a canned drink from the machine for a dollar, the model might aggregate multiple numbers of us with 25 cents each and conclude that for twenty people, 5 cans will be bought - but in truth, none of us has a dollar. A simple model of behavior says none will be bought. A more complex one might realize that 18 people will pass the machine by once they see they don't have a dollar, but two may be so thirsty they'll stop and beg or borrow money to buy. Depending on the model you will have different 'knees'. In recent elections we've seen this, where the experts and their models got it wrong. I think we are also seeing a backlash against AI - with AI being another way to say 'an algorithm we don't necessarily understand'. I think we're in for a dangerous ride if we start to believe 'past performance indicates future performance' simply because the new 'elite' is some unemotional AI.
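The aggregation pitfall above can be sketched in a few lines. This is a hypothetical toy model - the vending machine, the price, and the wallet amounts are all invented for illustration, not anyone's actual model:

```python
# Toy illustration: a model that pools resources across agents "predicts"
# purchases that no individual agent can actually make.

PRICE = 100  # price of one can, in cents (assumed for the example)

def aggregate_model(wallets):
    """Naive model: pool everyone's money and divide by the price."""
    return sum(wallets) // PRICE

def per_agent_model(wallets):
    """Agent-based model: only individuals who can afford a can buy one."""
    return sum(1 for w in wallets if w >= PRICE)

wallets = [25] * 20  # twenty people, each holding 25 cents

print(aggregate_model(wallets))  # 5 cans "sold" - an artefact of the model
print(per_agent_model(wallets))  # 0 cans - nobody has a full dollar
```

The same inputs produce completely different 'knees' depending on which behavioural assumptions are baked into the model - which is exactly why small modelling choices can produce large swings in the output.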
When AI can make things simpler to understand, and tell a better story (if facts changed behavior no one would smoke and we'd all floss - you don't beat a story with facts, you beat it with a better story), then I'll start to buy that AI is all Asimov dreamed it to be. Until then it will be useful, rather than the magic wand some claim it to be.
- Attachment: 7 A model world_R1
Chris Zielinski, 2018/04/04
Thanks to Daan Boom for the excellent thought piece.
I have always thought that ours was a culture destroyed by notions of fame, rather than animated by fame's smaller brother, reputation. Fame is reputation in a tuxedo - and I fear we left the reputation age some time ago.
Friedrich Hayek’s comment about civilization resting "on the fact that we all benefit from knowledge which we do not possess" reads uneasily in an age of false information being turned into false knowledge by its utterance through various gassy mouthpieces. Instead of benefiting from knowledge we don't possess, we are suffering from dross that is passed off as knowledge.
The problem is the apparent global disaffection with expertise. People are coming home from their admirable learning organizations and voting Trump. It should be no surprise that the loudest talkers have taken over the conversation. Will we be able to restore our more intimate relationship to knowledge again?
Julie Senga, 2018/04/05
This is quite an interesting observation. The world is in motion from real to unreal, and most likely it will stop at real!
Counterfeit replaces authentic.
Peter J. Bury, 2018/04/09
Hi all, I very much agree with Johannes' thoughts.
What I am curious about in the context of A.I. and blockchain (or better, hashgraph) is: can and will A.I., through for example an application based on hashgraph technology, help us distinguish reliable news and information?
John Bordeaux, 2018/04/09
Peter, Thank you for writing what I was thinking. Trusting in the algorithm versus the reputation still means we inherit bias. The infamous case of Google’s image search results when asked to find examples of ‘unprofessional hair’ comes to mind. The algorithms embed existing bias just as did the Church or the folks at the barbershop. I think the answer to your question begins with:
Who defines reliable? We mark the “Age of Reason” by when we turned from the Church to science to understand our world (crudely overstated, I suppose). But the example of climate change below is a great one regarding the retreat - in some countries - from Reason. We now openly question science, and ask instead about agendas and funding sources. Perhaps a healthy turn, but what replaces systematic inquiry?
In this age, again: Who defines reliable? We should determine that, because algorithms are being written all around us without a shared agreement to the answer.
That could be of help in this age of information overload.
Peter J. Bury, 2018/04/09
I agree with you, and in fact in the first instance I was tempted to react with: "In the end reputation and perceived reputation (not sure that exists) is 'manual' work, at times cumbersome, troubled by doubts due to contradictory or at least not homogeneous information. Tools, artificial or other, may help here and there, but as problems with Bitcoin and the like show, most ways are not fool- and/or criminal-proof. So we have to accept errors and slip-ups. History is full of amazing cheats. Is triangulated information and common sense in the end the only way to go, now and in the future?"
So who defines 'reliable'? Well, probably no-one should; we can only go by approximation, trial and error and permanent adjustments - as we have done ever since 'Homo sapiens with probably up to 40% Neanderthal' has walked around on this tiny planet. And yes, maybe more environmentally friendly alternatives to blockchain - hashgraphs? - may help, but they cannot guarantee reliability.
As for religion (I avoid churches as organizations)... well, tja (as we say in Dutch), many humans seem to need it. On a more pessimistic note - though I'm reading Pinker - some evidence seems to exist that humans may long for and need (?) wars every once in a while... cycles... waves...? Can KM4Dev help?
John Bordeaux, 2018/04/09
I was aware that we can observe waves in history, but had not considered whether these cycles were needed/desired. I think I was aghast after reading The Sixth Extinction and resolved to avoid macro observations about our current age for a while. (It's a tad stressful over here in the New World these days.) Whether it's a series of extinctions, or 'adjustment' events like pandemics, or a human need to burn things - I think KM4Dev can help (pretentious for a lurker to suggest, I know) in developing resilience strategies to augment risk management strategies. Plan for inevitable failure, rather than harden processes against disruption.
Something that may be more needed in the developed world, where we like to pretend to have insulated ourselves from catastrophe…
Elizabeth Maloba, 2018/04/10
It takes a lurker to draw out another lurker. I have been thinking along exactly the same lines – resilience, preparedness for unpleasant surprises, with the understanding that it is impossible to be 100% inoculated against them. How does KM contribute to preparing systems and/or organisations to respond when they find themselves in such a situation?
Would be interesting to see how this discussion evolves.
The trouble with lurking, is that you find the tip of a conversation and it piques your interest – and then you find a moment to go through the entire thread and find you need to say a bit more. I do hope I am forgiven a double post.
In the words of one of my mentors, I have a lot of wood to burn in this discussion. Let me try to address just one issue - Relying on algorithms – I recently had a conversation about how modern recruitment is heavily reliant on algorithms and how that meant that organisations were not benefiting from the best talent they could access because people who were not adept at writing their resumes for the algorithm (or did not fit a given profile that was written into the algorithm) were locked out even though their skill set would have been ideal for the business. Recruitment algorithms have been found to contain bias against geography, race and gender – I refer back to my computer programming lessons – garbage in, garbage out. Cathy O’Neil in her book ‘Weapons of Math Destruction’ argues that in a variety of fields, from insurance to advertising to education and policing, big data and algorithms can lead to decisions that harm the poor, reinforcing racism and amplifying inequality. In other words algorithms are not neutral.
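A minimal sketch of the recruitment-screening problem Elizabeth describes, with entirely invented data: a toy scorer "trained" on historical hiring decisions faithfully reproduces whatever bias those decisions contained - garbage in, garbage out:

```python
# Hypothetical illustration: a keyword scorer learns P(hired | keyword)
# from past hiring records. Any bias in those records is reproduced,
# not corrected. All keywords and records below are invented.

from collections import defaultdict

def train_scorer(history):
    """Learn hire rates per keyword from (resume_keywords, hired) records."""
    seen = defaultdict(int)
    hired = defaultdict(int)
    for keywords, was_hired in history:
        for kw in keywords:
            seen[kw] += 1
            if was_hired:
                hired[kw] += 1
    return {kw: hired[kw] / seen[kw] for kw in seen}

def score(weights, keywords):
    """Average the learned keyword weights; unknown keywords score zero."""
    return sum(weights.get(kw, 0.0) for kw in keywords) / len(keywords)

# Biased history: identical skill keyword, but one group was never hired.
history = [
    (["python", "region_a"], True), (["python", "region_a"], True),
    (["python", "region_b"], False), (["python", "region_b"], False),
]
weights = train_scorer(history)

# Two equally skilled candidates now get very different scores.
print(score(weights, ["python", "region_a"]))  # 0.75
print(score(weights, ["python", "region_b"]))  # 0.25
```

Nothing in the algorithm is malicious; the discrimination lives entirely in the training data, which is why such systems cannot be called neutral.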
I did say I had a lot of wood. I don’t really have concrete suggestions for a way forward. I hope my contribution has helped to move the dialogue in some way and I look forward to hearing different and diverging thoughts.
John Bordeaux, 2018/04/10
I am reminded of the great phrase crafted by the Monty Python troupe - I believe we are at the ‘nub of the gist.’
By their nature, algorithms build on what is known. And we are a world that does not embrace the poor or people of a different color from our own. It should be no surprise that the code we write to substitute for winnowing reflects our discrimination. (This is a crude representation; we should take some note of the AI engines that have apparently developed a new language to communicate among one another.)
One superb way to fail at resilience is to envision only that which has come before. That is, believing past is prologue is how surprise devastates.
One might say, therefore, that a world rich with AI algorithms substituting for human information processing is a brittle world indeed. Understanding new information based only on what we already know is how brains make sense of the world around us (Metaphors We Live By, among others). It takes effort and intent to make sense of the utterly new. There are organizational muscle movements designed to overcome this natural move towards the comfortable. Kahneman recently said he believed there was some promise in organizations overcoming bias through forced behaviors, but little hope individuals could ever change.
If KM is about making better decisions, it needs to be - in part - about these forced behaviors. We are tasked with improving organizational as well as individual decision-making. Perhaps it’s time to add AI design to this dyad. How can what we know about cognitive barriers to adaptation, surprise, resilience, overcoming bias, etc., aid in the design of AI that makes the world more pliable than brittle?
Chris Zielinski, 2018/04/10
Note that there is a split in artificial intelligence between what I term explicit algorithms, where humans articulate and code rules for the machine to follow (with the risk of encoding human biases), and implicit algorithms, where humans set the loose boundaries of the problem, dump a vast load of data on the machine and let it write its own implicit, inarticulate rules, deriving them from the data (with the risk - noted in Weapons of Math Destruction - that the results become incoherent and opaque to humans).
For example, when Deep Blue beat Garry Kasparov at chess, it was using explicit algorithms and very rapid calculations; when AlphaGo beat Lee Sedol at Go, it was using two deep-learning implicit algorithms and terabytes of data. But then, there are only about 10^47 possible chess positions, while there are some 10^170 possible Go positions.
This leads to the unhappy conclusion that the more complex a human system, the less transparent the AI used to address it will be.
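The distinction between the two styles can be made concrete with a toy example. Everything here is invented for illustration - the "suspicious transaction" task, the thresholds, and the data are all hypothetical:

```python
# Explicit algorithm: a human articulates the rule, so it is inspectable -
# and so is any human bias encoded in it.
def explicit_flag(amount, hour):
    """Flag a transaction by a hand-written rule (thresholds are invented)."""
    return amount > 1000 or hour < 5

# Implicit algorithm: the rule is derived from labelled data. This trivial
# "learner" just picks the amount threshold that best separates the examples;
# real systems derive millions of such parameters from the data, which is
# why their behaviour can become opaque to humans.
def fit_threshold(examples):
    """Pick the candidate threshold with the fewest misclassifications."""
    best, best_err = None, float("inf")
    for amount, _, _ in examples:
        err = sum((a > amount) != label for a, _, label in examples)
        if err < best_err:
            best, best_err = amount, err
    return best

# Invented labelled data: (amount, hour, is_suspicious)
data = [(50, 14, False), (80, 9, False), (2000, 3, True), (5000, 22, True)]
threshold = fit_threshold(data)  # learned from data, not hand-written

def implicit_flag(amount, hour):
    return amount > threshold
```

In the explicit version, the bias (if any) sits in a rule a human can read and argue with; in the implicit version, it sits in the training data and in whatever parameters the fitting procedure happened to derive from it.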
Arthur Shelley, 2018/04/11
Great conversation thanks. A quick dump of some thoughts whilst on a plane… Chapter 1 perhaps on the options for KM in future.
Knowledge flow between humans was traditionally a social phenomenon completely dependent on trust. In the past, we shared information about ourselves only with those with whom we had a relationship (I mean a real one – person to person), or with those with whom we chose to try to build a relationship.
Trust is the fuel of knowledge flow - you don’t tell people you don’t trust anything.
Knowledge flow (and the actions we choose to take to apply this knowledge or choices NOT act on it) is the fuel of reputation.
One significant issue in the information age is that we started to share all sorts of things with people whom we assumed were trustworthy. "They" (the nameless organisations gathering and sharing our information for profit in real time) build reputation by appealing to your ego. They call us friends – implying that they can be trusted. They make very clever choices of engaging words and invest heavily in marketing, backed by free applications, and simply ask for the information. Suddenly we are drawn in and our information is being shared without our knowledge or consent.
We are our own worst enemies. Too many of the "masses" are just sharing to get attention, seeking fame or fortune, and not being critical about what this is doing to, or for, them. What is the real price and ROI?
Human decisions are driven by ego: we "love" to be "liked", and this is being leveraged by many. All this liking of "people" who may be no more than an avatar (or at the very best, how people WANT to be publicly perceived) is not relationship! It is artificial reputation based on subjective information and biased algorithms.
Our information has been inappropriately used for a long time, and the message is clear that this is accelerating. Personal marketing is not the worst of it – what about inappropriate use or deliberate misinformation? This is not a new thing; it is already happening and being used in the highest places. It is not hard for someone with the right skills and means to adjust one's public profile. Did Julian really rape someone in Sweden? I don't know, but the media has been fed this story in order for our "trusted authorities" to achieve their aims. Is there a "truth"? Certainly perceptions are being cooked up and fed to the masses by the information chefs of all walks of life. We would all benefit from being more sceptical and choosy about what we choose to read, and from challenging its evidence, rather than accepting plausible possibilities with hidden agendas being shared by the gigabyte daily.
Hacking of accounts is not unusual, and nor (just think of the recent Facebook – Cambridge Analytica scandal) is the sharing of all that information you have been giving away for free by accepting free apps and cookies.
Unfortunately for the KM profession, this misuse draws on the very techniques we aspire to apply to help people (monitoring the flow of knowledge, understanding the patterns and gaps in knowledge, and using them to strategically drive value creation - sound familiar?). Any powerful approach can be used for or against you, and this is why reputation is so important.
However, who CAN we trust?
This is the BIG question now… it depends on who we want to build a relationship and reputation with!
Answering this is one of the most important KM questions ever asked and will become increasingly critical as we move ahead.
I suggest watching a few of these (sure, they are basic and not great entertainment, but they contain some interesting perspectives to reflect on about where we are going):
- You are soaking in it
- The circle
Perhaps the world could do with a little more deep reflection and less shallow "liking"? Last year the OECD (or was it the WEF?) highlighted that the most valuable future skill is critical thinking. We are not doing enough of it, and society has become too shallow and distracted. For example, someone from Hollywood breaks a nail and the world gasps, whilst child molestation, family violence and racial inequality get skimmed over.
There is HUGE scope for improvement in how we manage our knowledge and how we act on what we come to know… The trouble is that there is a huge NEED for what we KMers do at the highest levels, but little demand from those with the means to support better decisions and actions.
This is why we need to form a stronger and recognisable international knowledge profession. So that we can rebalance the conversation and build a reputation as a voice to be included. Currently we are largely preaching to the converted and not influencing those in charge.
Let’s change this…
Matt Moore, 2018/04/11
So I'm glad that there have been references to books like "Weapons of Math Destruction". In the last 12 months there has been a lot of focus on this topic - although it's unclear how much of this has actually filtered down to data scientists (software engineers tend to view ethics as even more "girly" than UX).
What I would note is that human systems are hardly free from bias. That’s part of the problem - we are the training data set and we are sexist and racist (humans suck). The question is whether machines will be worse than human beings. We have set a particularly high bar of awfulness as a species.
Elizabeth Maloba, 2018/04/11
Besides reading Cathy O'Neil, I have recently read Steven Pinker's 'Better Angels...' and Yuval Noah Harari's 'Sapiens' (I also have to admit to having optimistic tendencies). I lean towards expecting better for our future compared to our past - despite some of our catastrophic choices with disastrous consequences, we seem to lean towards improvement - some practices that were acceptable as recently as the 60s (I am a black woman - I know how far that double minority has moved) are completely unacceptable now.
I however agree with the view that 'we preach to the converted' and 'speak in our echo chamber' and recognize the need for us to speak meaningfully with people outside our core group. I would like to know: what does speaking outside our chamber look like? What can we do together to engage decision makers, leaders, managers, better?
I am a roll-up-my-sleeves-and-shovel kind of girl, so I am open to suggestions on where we can work together and co-create an approach to addressing this need.
Mark Trexler, 2018/04/11
Elizabeth, following up on your "preaching to the converted" query: it was about seven years ago that I heard the authors of "Influencer: The Power to Change Anything" speak about their research into human decision-making and the importance of delivering the "right message" if you want to influence decision-making - the right message being information that will be considered actionable by the person/audience you're dealing with, as opposed to simply delivering a message you might consider actionable. It's clear in my field of climate change that:
1. 95% of information delivered is repeating prior information, and really just serves as fodder for confirmation bias
2. 4.95% of information delivered is relevant but misses the mark for various reasons
3. 0.05% of information delivered might actually serve as actionable knowledge for a specific decision-maker
Not very encouraging, even though these are not intended to be scientific numbers. There is of course a well-established literature on how to communicate about climate change with specific audiences, including people who might totally disagree with you: https://webbrain.com/u/1AwI . And there are many other literatures surrounding all kinds of specific climate communications strategies (a bunch can be seen here: https://webbrain.com/u/1AwJ ). But as long as the experts' incentive structures are focused almost entirely on simply generating more information, it's hard to see getting off the hamster wheel. I have not read the book that started off this thread, but to suggest that reputation is a substitute for information seems mistaken to me. Obviously reputation is an important element in making information actionable, but it's just one element. The real problem is that we're focused so little on getting the right information to the right person at the right time, which is where I see KM's potential in the climate change space.
John Bordeaux, 2018/04/11
Ah, a trigger phrase for me! (I have several, fair warning)
"The real problem is that we're focused so little on getting the right information to the right person at the right time, which is where I see KM's potential in the climate change space."
This is a laudable goal and has been cited as the goal for KM, as well as IM, for some time now. What I lament about this phrase is that it appears as a push function entirely. I have worked with CIOs who believe their mission is this phrase, and yet are perplexed by the idea of understanding edge-user context, co-design workshops, etc. To me, this is an impossible task, because the end user sets the context. What is the "right person," the "right time," etc.? I don't want to deconstruct the entire phrase, and I think my bleating here has me largely agreeing with the intent of Mark's note. It's just that the catch phrase fails to capture the challenge we face. As one example of what I'm saying:
If the end user, the decision-maker, only uses information from trusted sources (thank you Arthur); then how does a push function determine the ‘right information?’
We should focus on this as a goal, but we need to be explicit that success in this phrase is a marriage of the moment, with many layers to the exchange of information. (Information-quality characteristics form one of those layers, trust networks another, etc.) This is covered much more eloquently in, among other works, Hagel, Brown and Davison's The Power of Pull.
Mark Trexler, 2018/04/11
JB, thanks for that note, and I think I agree with your point. I'm not suggesting a push function - or at least I'm suggesting a push that is specifically informed by "the pull"! I've attached 2 slides from a long presentation about actionable climate knowledge that identifies why someone might want to know about climate risk, but then points out that the information they will be receptive to will depend on all kinds of other variables that you would like to know. You can never know it all, but our fallback position of simply assuming "one size fits all" information has not served us well. As Carla O'Dell said, "if only we knew what we know." Since I'm convinced that we know plenty when it comes to radically altering our decision-making course on a problem like climate change, is there a way to get the information that would matter to the person it would matter to (given everything going into their own decision-making calculus)? Because we're not doing that today.
Robert Dalton, 2018/04/11
You can’t force knowledge and experience transfer. You can only set up the correct environment and conditions in order to allow that to happen between those who need that knowledge and experience from those that have it, and are willing to share it.
Experience has shown that you need two major things for knowledge and experience to be transferred between individuals. These are:
- An appropriate place to do it, such as a community of practice, or some face-to-face event.
- A certain degree of trust. The more trust that exists, the greater the chance for needed knowledge and experience to be transferred.
In my experience trust building is one of the most under-appreciated factors by most knowledge managers and is often a major reason that their KM initiatives fail.
Too many of today's KM initiatives rely solely on technology, which is a big mistake. It is VERY hard indeed to build trust through technology and a LOT easier to do it face-to-face, although sometimes that is not practical.
Brad Hinton, 2018/04/12
You are absolutely right when you talk about setting up the necessary conditions. And CoPs are useful in getting the right information to the right people at the right time (well, mostly) if they are effective CoPs and are well facilitated.
When I was managing over a dozen communities of practice at a previous workplace, knowledge sharing and trust were built up over time. Contributors brought both their knowledge and their identity to the CoP (the organisation was decentralised, with many officers all over Australia and New Zealand). The CoPs allowed individuals to become known through their participation and knowledge sharing within the CoP. As a result, the knowledge quality and helpfulness (or otherwise) of an individual were made visible, and others could determine over time whether an individual was providing quality assistance. I should also add that this build-up of trust and knowledge identification spilled over to those times when individuals met in person at a particular event. Despite members not having previously met each other in person, the CoP enabled quicker and deeper interpersonal relationships at the one-to-one level, sometimes even overcoming some internal divisional politics at the same time!
In addition, some CoPs included banter at the margins of some of the discussion topics. This banter had no direct commercial content. However, this banter was useful in deepening the online relationships of CoP members and helped accelerate trust.
Knowledge and trust were made visible within a CoP, and this enhanced interpersonal relationships throughout the network, which in turn led to other forms of communication. For example, when a particular problem was very specific or required greater contextual understanding, an individual might telephone a person (in a different location) they had identified via the CoP as being knowledgeable about that problem. While the exchange and relationship were no longer visible on the CoP, the interaction and knowledge exchange still occurred and further cemented trust. One of my jobs was to encourage these "offline" exchanges to be summarised and posted back to the CoP. I was not always successful, but when these summaries did appear, they also enhanced the reputation and trust of the two individuals, and further discussion would often take place.
Elizabeth Maloba, 2018/04/12
I am increasingly aware that I am a product of the post-modern era, and therefore I am drawn to being skeptical. I sometimes wonder how much (if at all) the models developed for data collection and analysis take into account the plurality that arises from post-modernism's rejection of meta-narratives and ideologies; whether these models recognise that in real life we are calling into question universalist notions of objective reality, morality, truth, human nature, reason, language and social progress. In other words, whenever the argument of real vs counterfeit comes up, I want to know the context. And I wonder how much this contributes to the failure of models to reliably predict behaviour, e.g. the failure of polling to predict election outcomes.
I do also agree that models fail because they claim to achieve in the virtual world what would not happen in the real world. I like Mike's analogy with smoking (facts do not change behaviour); I believe Cathy O'Neil uses weight loss as her example. In other words, if a model claims that it will make us all make healthy decisions and take care of our long-term interests (something we do not do naturally in real life), then we are in 'too good to be true' territory.
Having said that, I would argue that knowledge can be co-created and shared virtually. Technology is able to provide us with one ingredient, PLACE, and we have to bring to it methods that build TRUST so this can happen. My experience with communities of practice, both global and regional, has been that the role of facilitation cannot be overemphasized. Communication over technologically mediated platforms is by definition reduced (it lacks physical contact and visual cues), and therefore misunderstanding, already inherent whenever two people communicate, is greatly amplified. The facilitation role is crucial in ensuring a safe space is created where trust can be built and different perspectives can be explored without fear of harassment and marginalization.
This brings me to the discussion Mike and I started to have: how do we build this feature of safety (and facilitation) into platforms where 'communication' is already taking place? (I can think of climate change, financing the SDGs, and enforcing the Rome Statute as examples of places where there is a lot of talking going on without clarity on how much communication there is.) How do we have conversations with decision makers that enable them to see that there are better ways than 'talking at each other', 'talking across each other', etc.? How do we frame the KM conversation so that leaders are not only aware of its existence but are bought into the value it brings to their communication, and are actively looking to build it in so as to embrace diverse perspectives and make informed decisions? And with whom should we be having this conversation, other than among ourselves?