Talk:Metrics for the success of implementation of knowledge platform or Web 2.0 tools
See the original thread of this E-Discussion on D-Groups
Johannes Schunter, 2010/04/07
Would any of you have suggestions for good metrics and indicators for the success of a knowledge platform implementation, or the implementation of Web 2.0 tools, in a (development) organization? Something that goes beyond simple quantitative numbers, of course: metrics which help evaluate the impact on organizational efficiency, communication, culture and strategic results?
Thanks and greetings from a sunny NYC at 32° C!
UNDP United States of America
Alain Berger, 2010/04/07
So, you want a kind of thermometer for your organization, to see if it works better after than before?! During our last Ardans User's Group Meeting in Paris last week, some speakers talked about KPIs and others about /conviction/! This will not help you... but in fact it's quite difficult to give a good answer when we don't know your managers, and when we don't share your organization's biotope! Following the buzz on the platform doesn't indicate anything about the quality of the content inside your Kbases; on the other hand, looking at the traffic can help measure the users' interest in the topic. But as we say in France, "you have only one occasion to make a good first impression", and if it's not well prepared, the cost of ownership will increase a lot.
ARDANS SAS France
Joitske Hulsebosch, 2010/04/07
Here's a wiki with resources about social media metrics. Maybe you can find something there? http://socialmediametrics.wikispaces.com/Links
By the way: here's our slideshare presentation about monitoring knowledge management strategies, showing that it is about more than just metrics. Combine metrics with narrative to understand what's happening. Here's the link: http://www.slideshare.net/joitske/monitoring-and-evaluation-knowledge-management-strategies-2009
Alfonso Acuna, 2010/04/08
I guess numbers can give more than enough qualitative understanding of how your tools are being used. Visits, frequency and logon time are important for knowing whether people access the resources or not. The number of threads in a forum can be very helpful in showing levels of engagement and participation. Word analysis can give you some indication of the content of what they are discussing. Monitoring blog entries (how often people write, who posts comments, who reads the entries, etc.) can help you understand whether you have a flourishing garden or a cemetery.
The health of the resources gives you an indication of the organisation's acceptance of these tools, but it doesn't help you assess changes in your organisation. For this you will have to trace where the information exchange and communication took place, and whether these are embedded in the new tools; here too, numbers can be helpful.
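To sketch what such word analysis might look like in practice, a simple word-frequency count over forum posts can surface what a community is discussing. The sample posts and stopword list below are invented purely for illustration:

```python
from collections import Counter
import re

# Hypothetical forum posts -- invented sample data for this sketch.
posts = [
    "How do we measure impact of the new platform?",
    "The platform helped our team share impact stories.",
    "Metrics alone do not capture impact on culture.",
]

# A minimal stopword list; a real analysis would use a fuller one.
STOPWORDS = {"the", "of", "do", "we", "our", "on", "a", "not", "how", "to", "new", "alone"}

def top_terms(posts, n=3):
    """Return the n most frequent non-stopword terms across all posts."""
    words = re.findall(r"[a-z']+", " ".join(posts).lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return counts.most_common(n)

print(top_terms(posts, 2))  # the most-discussed terms, with their counts
```

Even a crude frequency count like this, run monthly, would show whether discussion topics are drifting toward or away from the themes the community was set up for.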
Eric Mullerbeck, 2010/04/14
At UNICEF we are using a web 2.0 platform to support Communities of Practice. We've developed a set of proposed metrics for monitoring and measuring results. Below is an outline. Note that this is still a work in progress and is mostly still at the discussion stage. The only metrics we're actively looking at are membership and visits, but we do plan to put measurement into practice over the coming months. (With thanks to Ian Thorpe, Paola Storchi and Johannes Lammers who helped evolve this to its current stage).
1. We are looking mostly to monitor specific communities, rather than the platform as a whole (also noting that while we have developed a platform for communities of practice, we recognize that not all communities will use the platform, and they won't all use it the same way). For the platform as a whole we might consider looking at metrics such as the number of members, the number of communities created and levels of active use, as well as user feedback on the usefulness and usability of the platform.
2. The goals of each community are different and so the monitoring indicators need to reflect this (i.e. be based on priorities of users), although there are some things that you might want all communities to monitor (see below).
3. For any community we are thinking of monitoring the following output measures:
- # of members
- % increase in membership over time
- # of content contributions per month
- # of total content contributions
We might also look at the number of unique visitors per month and the number of visits.
4. If a community's goals include creating specific knowledge products we could add:
- # of knowledge products created through community processes
- # of product development milestones
5. If a community's goals include knowledge sharing and collaboration across organizational boundaries (within the organization or with external partners), then we could also add:
- # of Divisions/Offices (Field, Regional, HQ) represented in the CoP (relevant for cross-fertilization of knowledge)
- # of non-UNICEF staff who are members
6. As for what we are calling outcome measures, this really depends on the goals of each community and needs to be developed with the specific community managers. That said, there are a few general questions we have been discussing, such as:
- overall user satisfaction with the community
- perception as to whether the community is useful for their work
- whether participants have learned anything useful from the community
- usage metrics for knowledge products developed through the community (see #4 above; product usage metrics will vary depending on the product, e.g. website, document/publication, etc.)
- success stories, perhaps gathered through an approach like MSC (Most Significant Change)
7. Strategic impacts are the most difficult to measure. Web 2.0-enabled communities of practice could be successful as judged by their members, but what evidence is there that the communities have actually helped the organization achieve its strategic goals? One approach might be through comparing the results achieved by different parts of the organization -- some that adopt and use web 2.0 tools and others that don't. If there are demonstrable differences in success at achieving strategic goals, then there is a possibility that the web 2.0 tools have contributed (or not!) to the achievement of these goals. Further analysis could confirm this.
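To make the output measures in item 3 concrete, here is a minimal sketch of how they might be computed from a membership roster and a contribution log. The member names, dates and helper functions are all invented for illustration; they are not part of any actual UNICEF implementation:

```python
from datetime import date

# Hypothetical sample data: (member, join_date) and (member, contribution_date).
members = [("ana", date(2010, 1, 10)), ("ben", date(2010, 2, 3)),
           ("carla", date(2010, 2, 20)), ("dev", date(2010, 3, 5))]
contributions = [("ana", date(2010, 2, 1)), ("ben", date(2010, 2, 14)),
                 ("ana", date(2010, 3, 2)), ("dev", date(2010, 3, 9)),
                 ("carla", date(2010, 3, 30))]

def membership_growth(members, month_start, month_end):
    """% increase in membership between two dates (item 3, second bullet)."""
    before = sum(1 for _, d in members if d < month_start)
    after = sum(1 for _, d in members if d < month_end)
    return 100.0 * (after - before) / before if before else float("inf")

def contributions_in_month(contributions, year, month):
    """# of content contributions in a given month (item 3, third bullet)."""
    return sum(1 for _, d in contributions if d.year == year and d.month == month)

total_members = len(members)        # '# of members'
growth = membership_growth(members, date(2010, 2, 1), date(2010, 3, 1))
feb_posts = contributions_in_month(contributions, 2010, 2)
total_posts = len(contributions)    # '# of total content contributions'
```

The same log-derived counts could feed a monthly dashboard, so the output measures stay cheap to collect once the logging is in place.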
We would love to hear your feedback on this and also to learn from what you are doing.
UNICEF United States of America
Jessica Robbins, 2010/04/14
Thank you so much for sharing these metrics. It was very timely as I was just about to start working on an M&E strategy for a community today. I think your quantitative indicators look comprehensive and easy enough to keep track of.
In relation to measuring the outcomes and strategic impacts, I think this is where it becomes really interesting... and more complex. I would assume that for any good M&E you need a good baseline to be established in order to then show progress. What would be good methods to establish qualitative (and quantitative) baseline information? Also, are there some key early identifiers or behaviors that a community facilitator could look out for as leading towards outcome or impact? (e.g. formation of action groups as a result of community discussions, comments being incorporated into policy or planning, etc.)
Looking forward to hearing more experiences and ideas on this.
Eric Mullerbeck, 2010/04/14
Jessica, concerning baselining, I'll address some of the quantitative aspects. One approach would be to find projects similar to yours in goals, approaches and environment, and set baselines from those metrics. The difficulty here will be finding projects that really are similar. There are many variables impacting project success including organizational culture, previous experience, resources etc, so projects that appear superficially similar may not actually be so -- and so the baselines should perhaps not be the same.
Another way is simply to let the project find its own baseline over time. One would want to ensure progress by doing self-to-self comparisons and ensuring that change is positive (i.e. compare key metrics for successive months to confirm that membership is growing, content contributions are increasing, etc.). Once a project stabilizes and achieves some degree of success, its metrics can then be used for baselining other efforts within the organization.
These approaches could be useful for metrics that are relatively easy to capture. However, in my view the really key question is confirming that the project has a positive impact on strategic organizational goals (which for most or all of us will mean impacts for target populations). This is a broader issue than simply setting baselines for outputs (e.g. number of users, frequency of contributions) or even for project outcomes (e.g. quality of knowledge products).
Concerning early identifiers of success (or early warning signs): the metrics we're considering as 'outputs' are probably a good place to start, because they are concrete and are available in the early stages of an implementation. These are the ones I identified in item 3, e.g. growth in membership over time, etc. The things you mention (formation of action groups, incorporation of comments into policies) seem more likely to happen once the project is already fairly well established. If membership does not grow consistently, or if members are not contributing, you might never see them...
Jessica Robbins, 2010/04/15
Thanks Eric. Your advice is really useful. I think it will be useful to combine the two approaches. To share a little more background, I am currently working on setting up a community called the Pacific Peace Community and have established a good relationship with a community in Jamaica called Jamaica Partners for Peace (http://www.jamaicapartnersforpeace.org/). This relationship has thus far really helped us set what I hope are realistic expectations for our community, and should also help in defining the baseline.
If anyone is involved in peace-building communities, particularly in the South or in SIDS, please do get in contact with me, as I would be keen to have a chat.