How do you measure impact of your knowledge products and publications?

From KM4Dev Wiki
Revision as of 14:33, 5 May 2015 by Johannes Schunter (Talk | contribs)


Original Message

From: Johannes Schunter, posted on 2015/04/19

"Dear all,

I would be glad to hear about any experiences of how you measure the impact of knowledge products and publications produced by your organization.

Obviously, mere downloads don't tell us much, when what we actually want to measure is our partners' use of knowledge generated by the organization, and the extent to which that knowledge is supporting and influencing policies and decision making by partners.

Citations (e.g. via Google Scholar) don't help much either, as our publications are not really meant to be academically cited, but rather support governments and partners.

Thankful for any ideas!

Johannes"


Contributors

All replies in full are available in the discussion page. Contributions received with thanks from:

Johannes Schunter
Daan Boom
Kishor Pradhan
Christian Kreutz
Nancy White
Jaap Pels
Pete Cranston
Serafin Talisayon
Sophia Treinen
Ewen Le Borgne
Riff Fullan
Eva Schiffer
Simon Hearn
Ian Thorpe

Related Discussions


Summary of Contributions

  • Daan Boom referred to the World Bank, which in its 2014 study ”Which World Bank Reports Are Widely Read?” uses citation counts via Google Scholar, since Google Scholar covers not only journal articles, conference proceedings, and other academic reports but also books, working papers, business and government reports, and non-scholarly citations of articles. However, the World Bank acknowledges that using Google Scholar is not without problems: World Bank documents are not necessarily meant to be academically cited, Google Scholar sometimes produces false positives or double-counts citations, its coverage of older, pre-digital publications is limited, and some publications may simply not be included in its database. Still, the World Bank considers Google Scholar citations one useful component for measuring the performance of publications. Daan also pointed to the Asian Development Bank, which in its Special Evaluation Study “Knowledge Products and Services: Building a Stronger Knowledge Institution” explains that, in addition to tracking visitors and downloads of publications on its website, the ADB included citation counts from the Social Sciences Citation Index among its quantitative measures, and even compared them to citation benchmarks of other organizations.
  • Kishor Pradhan (Development Knowledge Management and Innovation Services) mentioned a case in which they conducted a partner survey and a series of bilateral interviews for a client organization to understand the usefulness of the different types of the organization's knowledge products (rather than individual publications) in terms of readability, content, relevance and accessibility. The survey also included a question on how users used these publications (e.g. for research, study, writing, presentations, training, teaching, or use in the field), which provides further insight into impact.
  • Christian Kreutz (Crisscrossed.de) suggested requiring users to leave their email address before downloading a document, so that one can (a) compare page views to the sign-up rate, to see overall interest versus the number of people willing to give an email address, and (b) email these people after a few weeks to ask about the publication, assessing its usefulness and whether it had an impact. Additionally, he suggested searching for the document on social media and counting shares and reviews as another indicator.
  • Nancy White (Full Circle Associates) highlighted that authoring business units need to think about indicators of use before/when writing publications, to be clear about who are the intended audiences (so then we'd follow up with some of them), and what are the hoped for actions and impacts.
  • Jaap Pels stated that impact of knowledge products and publications is about use, which means we have to look for citations, look for recommendations (e.g. on Twitter) and visit projects and people after one, five and ten years.
  • Pete Cranston (Euforic Services) proposed not measuring every single product and publication an organization produces (of which there are always too many anyway), but rather creating aggregation services that highlight important content to specific audiences. As an example he presented a curated WASH update, in which people filter and provide a monthly curated selection of items they think are worth reading that month within that sector, for that audience. Measuring the uptake and impact of such a targeted service (rather than of each publication) would be interesting and also easier. Regarding measuring uptake at the moment users access or download a document, his experience is that busy people tend not to comment and rate much, and a gamification approach might be needed. Potential approaches could be putting a request for feedback in each article and providing a link for those who read online, or making a feedback form or ratings system the default home page for any IP address that has visited your site before, asking for feedback on the last visit and/or download. Another approach could be to feature star users, those who rate and respond (see this example). Regarding the question of what to measure, Pete is convinced by the Outcome Mapping principle that we can only influence people in our direct circle. So maybe it is enough to focus on those people and their behavior: if they download, share, and rate, maybe that is all we can be expected to measure. It then becomes more important to invest in identifying those people who are influencers or prolific sharers, and to measure the impact on them. He finds a lot of the downstream impact attribution unconvincing, because change happens from a multitude of influences and relationships, and each one attracts a lot of organizations all claiming agency. If a specific K-product is downloaded or shared by someone involved in change processes, that should be enough.
  • Serafin Talisayon from CCLFI pointed to the following principle for measuring actionable results, based on his work with ADB:
    • 1. The essence of knowledge, in contrast to other forms of information/assets, is that it enables effective action.
    • 2. Value is created when knowledge is used in effectively performing an action or decision.
    • 3. The benefit from KM can be better assured if it is linked to action or performance; it is measured through performance or productivity metrics.
    • 4. The essence of a knowledge product (KP) is that it is intended to enable a specific action/decision by a specific stakeholder. KP can be produced in small manageable knowledge units, namely, replicable exemplary practices or REP.
    • 5. The benefit from a KP can be better assured if stakeholder demand is present, KP is written in actionable format, and KP delivery is integrated with KP production.
    • 6. Knowledge translation (KT) is the process of converting a less actionable KP to a more actionable KP. Effectiveness of a KP is judged by the user/stakeholder; it can be better assured if they participate in the processes of KT or KP production and KP delivery.
    • 7. Thematic/sector knowledge is critical in KP production while local/country knowledge and operations knowledge are critical in KP delivery.
    • 8. Benefit from KP is realized and measured after the KP is delivered to the user/stakeholder and it enables effective action/decision by the user/stakeholder.
  • Sophia Treinen from FAO argued that after counting how many times a document has been distributed, downloaded, uploaded or transferred to other platforms, we may want to know whether and how the information was used. This should be done via a survey or a series of interviews a few months after the product has been disseminated.
  • Ewen Le Borgne at ILRI suggested focusing on metrics that relate to:
    • 1. Interaction/engagement: Comments, likes, votes etc. that seem to suggest that people have actually read the publications and liked them
    • 2. Change/influence:
      • a. Citation analysis: even if you’re not targeting an academic group, people still refer to other people’s work
      • b. Referrals, links of all kinds to your publication, embeds and references etc. that show that not only people have read but actually liked so much that they suggest it to others
    • 3. Impact:
      • a. Testimonies and stories of change: In focus group discussions, through surveys, you can target specific publications and ask your colleagues, partners, a random sample of people whether reading publications has helped them change their views, discourse, behavior and/or actions.
      • b. Collecting feedback about how the publication has led their readers to change their views, discourse, behavior, actions.
      • c. Specifically target some changes you would like to see, e.g. around discourse, by using specific terms that you think will be easier to track in other people’s/organisations’ discourse.

However, he acknowledges that this will always remain a very murky monitoring exercise. For tracking scholarly impact, ILRI uses ISI Thomson Web of Science as well as the free options RePEc, Google Scholar, and Harzing’s Publish or Perish (which uses Google Scholar behind the scenes) for citation counts. Ewen recommends not relying solely on Google Scholar without cleaning the data, as the Google Scholar database often includes flyers or newsletters that merely contain an announcement of the publication but are not considered academic citations. Also, Google citations go up and down like the stock market. ILRI also complements this information with download counts, Google Books, Mendeley, SSRN and others. Of late, there is a whole new arsenal of tools that are very useful for providing citations through a better bibliometric approach: altmetrics, the h-index, journal impact factors, institutional ranks, etc. (http://crln.acrl.org/content/73/10/596.full).
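The cleaning step described above can be sketched in a few lines. This is a minimal illustration only, assuming a hypothetical record format with 'title' and 'source' fields (not any specific tool's export schema): drop duplicate entries and entries whose source looks like a flyer or newsletter before counting citations.

```python
# Illustrative sketch of cleaning raw citation records before counting,
# in the spirit of the advice above. The record format is an assumption.

NON_ACADEMIC_KEYWORDS = ("flyer", "newsletter", "announcement")

def clean_citation_count(records):
    """Count citations after removing duplicates and non-academic sources.

    `records` is a list of dicts with 'title' and 'source' keys --
    an assumed format, not any real tool's export schema.
    """
    seen = set()
    count = 0
    for rec in records:
        key = rec["title"].strip().lower()
        if key in seen:
            continue  # skip double-counted entry
        seen.add(key)
        source = rec.get("source", "").lower()
        if any(word in source for word in NON_ACADEMIC_KEYWORDS):
            continue  # skip flyer/newsletter that merely announces the publication
        count += 1
    return count

raw = [
    {"title": "Measuring KM Impact", "source": "Journal of Development Studies"},
    {"title": "Measuring KM Impact", "source": "Journal of Development Studies"},  # duplicate
    {"title": "New report out now!", "source": "Agency newsletter"},               # not academic
]
print(clean_citation_count(raw))  # prints 1
```

In practice the filtering rules would need tuning per organization, but the principle is the one Ewen states: dedupe and remove announcement-type hits before treating the remainder as a citation count.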

  • Riff Fullan from HELVETAS suggested that one way to increase the likelihood of a K output having a bigger influence is to embed it in an ongoing process rather than thinking of the publication itself as the end goal. What is most important is not the output itself, but the knowledge the output is meant to illustrate. Examples of this kind of thing are manuals or guides, which often provide very useful step-by-step illustrations of how to go about a certain set of activities, but he is convinced that the potential of many of these outputs is generally not met. If they are part of an ongoing training or capacity-building program, however, the odds increase dramatically that those outputs will be used in practical ways, and they become much richer as references because people experience learning by doing in a real context and can then go back to the K output to remind them of things they have already experienced. If we imagine the change(s) we expect or would love to see, we have a better chance of seeing how to create processes that support the appropriation of the knowledge we hope to describe in the K outputs we produce. This could complement efforts to identify and use indicators of K-output use. The working paper “Monitoring and evaluating development as a knowledge ecology: ideas for new collective practices” points in this direction by conceptualizing monitoring and evaluation (M&E) as a collective inquiry with a learning focus on knowledge management for development (KM4D).
  • Eva Schiffer shared an experience of applying participatory network analysis (Net-Map) in a project where the International Food Policy Research Institute wanted to understand how its research influenced policy making, by asking people familiar with the policy processes in the country about the different actors who influenced the decision and about the flow of research products and other things in the network. They found that while research products (reports, papers) and their content didn't have a strong impact on policy making, they provided legitimacy to their authors and put them in a position where they were hired as consultants to draft policy. And as they were drafting policy, all the knowledge they had gathered for the more abstract process of writing peer-reviewed papers was used to inform policy. The lesson for them was that we can approach impact evaluation in a product-driven way (look at your product and see how often it is downloaded, cited, etc.) and/or we can look at the results we are interested in and then do a system diagnostic to understand how these kinds of results are achieved in this system and what different things and actors influence them.
  • Simon Hearn from ODI shared ODI’s ROMA guide to policy engagement and influence, which has a chapter on monitoring and learning and applies the approach of Outcome Mapping (featuring the ‘expect to see, like to see, love to see’ nomenclature which comes from OM). It focuses on the people involved and the behaviors which will tell us things are changing. More on Outcome Mapping can be found here.
  • Ian Thorpe from UNICEF shared the approach they are using to measure the impact of knowledge exchange, which looks at:
    • 1. Business unit results: uptake and use by staff (measured via downloads, shares, etc.) as well as feedback/satisfaction (workshop evaluations, periodic surveys on products and services, etc.).
    • 2. Results in the organization at large: how knowledge exchange has impacted the organization's work and the achievement of results.

In order to do this, UNICEF is adapting the measurement framework developed by Wenger-Trayner, collecting examples of how the products and services had an impact on the work of the organization at several levels:

    • i) Did people consult the products (were people aware of them, did they look at them, and if not, why not)?
    • ii) Did people change what they did in their work as a result of the KE process or product? If so, how?
    • iii) Did the application of the product achieve better (or different) results? If so, how? (e.g. saved time, saved money, improved quality, better solution, etc.)
    • iv) Did the organization or sector change its guidelines or standard way of doing things (or of defining success) as a result of the KE process or product?

Since this cannot be measured systematically, UNICEF is planning to collect qualitative use-example stories that illustrate how knowledge was applied in practice.


Recommended Resources