Thursday, March 05, 2015

R4D dashboard: Visualize web access to DFID funded research

Collecting traffic and usage statistics for a website or portal can be a very time-consuming and tedious task. And in most cases you end up compiling monthly or quarterly reports for managers and donors that will be shared as email attachments - and at best skimmed, since there is so much information. But there are smarter ways to handle this process and bring your data to life, as I explained in my previous blog post.

Our case study is the R4D portal, a free-access online portal containing the latest information about research funded by DFID, including details of current and past research in over 40,000 project and document records. Until 2013 we were part of the team supporting and managing the site.

As part of our work packages, we developed an online, interactive visualization of web traffic and usage of the R4D portal and its social media channels. The R4D dashboard, built using Tableau Public, is still updated and in use. However, it hasn't been iterated or improved since our support contract ended in 2014.

This post presents the process we followed to develop the dashboard, the tools we used and the lessons learned in what was very much a learning-by-doing journey.

 Why develop the R4D dashboard? 

The collection of usage and traffic data for R4D used to be pretty much standard: a series of Excel files updated monthly to generate charts and graphs. These were then put together in a PDF report and shared with project leads at DFID. The idea of developing an online, public dashboard of R4D web traffic and usage instead was inspired by the excellent work of Nick Scott and ODI, which he shared with us during a Peer Exchange session we organized back in 2012.

Donor organisations such as DFID collect a lot of statistics and indicators, but these are often kept within projects and programmes and not made available to all staff, as was the case for R4D. So the reason behind the R4D dashboard was primarily to open up our stats and make them more accessible to anybody interested in them, not just the people who had sign-off on the project.

Also, by encouraging a more open approach to web stats, the idea was to have more points of comparison: it is difficult to evaluate how well your website is doing if you can only compare against yourself. Being able to see how much traffic similar websites generate helps you assess your own effort and performance.


So what did we do?

Process-wise, we pretty much followed the steps outlined in my previous blog post. With the primary audience well in mind, we started to select the metrics to include in the dashboard:
  • Website stats: Visits and visitors; referring sites; visitors by country; PDF downloads and top pages. 
  • RSS feed subscribers, Twitter clickthroughs and Facebook Insights data (later removed)
  • Number of outputs added to the R4D database (by type, for example open access articles, peer-reviewed articles, etc…) 
We decided that it was feasible to collect this data monthly as XLS or CSV files exported from the site(s) and saved into a shared Dropbox folder. This was the most effective way, as data collection was decentralized, with different people working on different platforms. With our limited budget it was not possible to automate the data collection process, so this was entirely manual.
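To give a flavour of the kind of merge step this workflow implies, here is a minimal Python sketch that consolidates monthly CSV exports from a shared folder into a single file that Tableau Public can connect to. The folder path, file naming and the derived 'month' column are illustrative assumptions, not the actual R4D setup.

```python
import csv
import glob
import os

# Assumption: one export per month in the shared Dropbox folder,
# e.g. ~/Dropbox/r4d-stats/2015-02-website.csv, all with the same header row.
SHARED_FOLDER = os.path.expanduser("~/Dropbox/r4d-stats")
MASTER_FILE = os.path.join(SHARED_FOLDER, "master.csv")

def consolidate():
    monthly_files = sorted(glob.glob(os.path.join(SHARED_FOLDER, "*-website.csv")))
    with open(MASTER_FILE, "w", newline="") as out:
        writer = None
        for path in monthly_files:
            with open(path, newline="") as f:
                reader = csv.DictReader(f)
                if writer is None:
                    # Add a 'month' column derived from the file name,
                    # so the dashboard can filter and trend by month.
                    fieldnames = ["month"] + list(reader.fieldnames)
                    writer = csv.DictWriter(out, fieldnames=fieldnames)
                    writer.writeheader()
                month = os.path.basename(path).rsplit("-", 1)[0]  # e.g. "2015-02"
                for row in reader:
                    row["month"] = month
                    writer.writerow(row)

if __name__ == "__main__":
    consolidate()
```

Pointing Tableau at the single master.csv means the dashboard only ever has one data connection to refresh, however many people contribute exports.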

Software platform selection took quite some time in the initial phase of the process. We selected Tableau Public as our dashboard platform, and then had to invest more time in learning its features and functionality. But it was totally worth it!


Why Tableau? 

Tableau Public is free software that allows anyone to connect to a spreadsheet or file and create interactive data visualizations for the web. There are many tutorials out there if you just Google for it, so I'm not going to explain here how it works in detail. But here are my top reasons for using Tableau Public:
  • It's free! Well, that's a good reason already if you don't have resources to invest in business intelligence or visualization software - and normally the costs for these are steep and way outside the budget of the organizations we work with; 
  • It's intuitive. You don't need to be an expert to use the tool. The interface is very simple (drag and drop) and you can easily find your way around. 
  • It's rich and deep. There are so many charts you can choose from, and you can play around with different visualizations until you are happy with the result. It also goes much deeper than Excel in analysis and interactions.


What did we learn? 

Besides learning how to use Tableau Public itself, here are the main things I learned along the way while developing the R4D dashboard:
  • Google Analytics is the industry standard - but it tends to under-count your traffic.
    We ran two different website analytics packages on the main R4D portal - Google Analytics (GA) and SmarterStats - and noticed a huge difference in the results, with GA massively under-counting visits and visitors. So it's always worth installing another tracker to be on the safe side. 
  • Updating Tableau is quick - but getting the data manually isn't
    Once your dashboard is set up, the process of updating it with new data is rather quick: just a few clicks and you are done. However, data collection from the various sources was in our case mostly manual, and it can be time-consuming (and not much fun either!). If I were still working on the project, I'd look into ways to automate data collection as much as possible (a minimal sketch of one such approach follows this list) - while also looking at what additional (useful) data I could collect in an automated way. 
  • Build it once - and then you *must* iterate
    When you're done building your dashboard, you're actually not done. We had a couple of iterations before arriving at the product that is now online. And I'm sure it would be different again now, had the project continued. This is because you have to evaluate whether the visualizations in the dashboard are actually useful and provide actionable insights that can inform your strategy. Or simply because the software keeps evolving and can offer new possibilities that were not there before.
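On the automation point above, here is a sketch of what automated collection could look like, pulling monthly sessions and users from the Google Analytics Core Reporting API (v3, the current version at the time of writing) with the google-api-python-client library. The profile ID and credentials file are placeholders, and this illustrates the general approach rather than anything we ran on R4D.

```python
from oauth2client.service_account import ServiceAccountCredentials
from googleapiclient.discovery import build

# Placeholders: a service account key file and a hypothetical GA view (profile) ID.
KEY_FILE = "service-account.json"
PROFILE_ID = "ga:12345678"
SCOPES = ["https://www.googleapis.com/auth/analytics.readonly"]

def monthly_traffic(start_date, end_date):
    credentials = ServiceAccountCredentials.from_json_keyfile_name(KEY_FILE, SCOPES)
    analytics = build("analytics", "v3", credentials=credentials)
    # Core Reporting API query: sessions and users, broken down by month.
    result = analytics.data().ga().get(
        ids=PROFILE_ID,
        start_date=start_date,
        end_date=end_date,
        metrics="ga:sessions,ga:users",
        dimensions="ga:yearMonth").execute()
    return result.get("rows", [])

# Each row comes back as [yearMonth, sessions, users], all as strings.
for year_month, sessions, users in monthly_traffic("2014-01-01", "2014-12-31"):
    print(year_month, sessions, users)
```

A script like this could run on a schedule and append its output to the same shared folder the manual exports go into, so the dashboard update process stays unchanged.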

In the next post in this series I'll present a different approach to developing an M&E dashboard, this time using a combination of Google Forms, Sheets and Charts, together with Google Apps Script and Google Docs add-ons.

In the meantime, if you have experience with Tableau or use other tools to create interactive dashboards, why not share it in the comments here?

Thursday, February 26, 2015

How to create a monitoring and evaluation dashboard

So you have a website, a blog, the usual social media channels on Twitter/Facebook/YouTube, maybe a series of RSS feeds. On top of this, your organization or research programme also publishes original content, or indexes content produced by others into an online portal. And you also organize events and workshops and maybe offer grants and awards.

With all these online spaces, outputs and products that you produce, how are you going to collect and aggregate this data as part of your monitoring and evaluation activities? And how are you going to display and present it in an effective way that can be easily understood by your co-workers, managers and donors?

For the past couple of years, I’ve been experimenting with tools to display data and information in online dashboards. This post presents a short introduction to the topic. It’s the first of a series of posts that will look into online tools for data collection, storage and display.

What is a dashboard? 

According to Stephen Few’s definition “A dashboard is a visual display of the most important information needed to achieve one or more objectives; consolidated and arranged on a single screen so the information can be monitored at a glance.”

More generally, a dashboard is a visualization of data related to the operation and performance of an organization, program or service. Dashboards are helpful tools to provide you with a quick overview to see whether you’re on track to reach the objectives stated in your logframe or Theory of Change.

Note that information about a wide range of channels can crowd one screen. So it's important to be flexible and keep users in mind - keeping scrolling *very* limited and using features like tabbed navigation to view different sets of metrics and indicators.

What are the steps to follow to build a dashboard? 

The Idealware report Data at a Foundation's Fingertips: Creating and Building Dashboards presents an excellent and detailed step-by-step description of the process of designing effective dashboards for non-profits. Ultimately the process boils down to four main phases:

  1. Define your audience
    Of course this is absolutely critical: it determines the way you design the dashboard, the graphs you include, and their order and sequence. The dashboards I've developed in the past were mainly designed for managers and executives, to tell them about the progress of a program or service at a quick glance.
  2. Identify the metrics to display - and how you collect them
    With the tons of metrics that you could collect, and the space limitations of a dashboard, it is important to agree upfront which ones will be displayed. So it requires a bit of negotiation to agree upon what's in and what's out. Of course, the metrics should be useful in terms of monitoring progress towards the objectives in your logframe and theory of change. In this phase it is also important to discuss collection methods, frequency and access. 
    • Are there any processes that you can automate? 
    • What is only possible instead through manual data collection?
    • And is it realistic to collect this data monthly - for example, if the properties are high-traffic or include active campaigns - or is quarterly more realistic?
    • Where are you going to store the raw data and who should have access to it?
  3. Identify your dashboard platform
    This is a maturing market, so there are a lot of possible solutions - from expensive business intelligence software to low-cost or free tools. Generally the decision is defined by the resources available, as well as the time you and your users have to invest in learning new tools. Note that while you can potentially build a dashboard in Excel, investing some time in learning a powerful and flexible dashboarding tool such as Tableau Public can enable you to design more complete and effective dashboards.
  4. Sketch, prototype and roll out
    In the design of the dashboard you need to find a good balance between the amount of information you want to display and the limited space available. So you have to carefully decide what graphs and charts you will use, what explanatory text you should include, which colours to use where... This will take a lot of testing and iterating to find the optimal design. Bottom line, your final product should: 
    • Be simple, intuitive, easy to read and understand; 
    • Present data together from different sources in an uncluttered way and following a logical sequence or order; 
    • Offer a quick overview of the key metrics and indicators to assess progress towards the objectives of your program/organization/service. 
In the following posts, I'll be presenting two case studies about work we did recently on visualizing monitoring and evaluation data in online, interactive dashboards. I will look specifically at the tools used to put these dashboards together, as well as the tools used to collect and store the individual indicators and metrics. 

For the more techie readers, I'll also share the details of what I've learned recently using Google Apps Script to automate some data collection and storage processes, as well as tips and tricks to monitor activities and engagement around Twitter, which I've been experimenting a lot with lately. So stay tuned!

Friday, February 20, 2015

Facilitating emergent conversations – variants on Samoan Circles and Fishbowls

Trying to ensure that the brains and experience of all participants are brought into the room is one of the more enjoyable challenges of facilitation. It's mainly a question of finding the right balance of different approaches, since there are so many formats that provide opportunities for different combinations of people to share knowledge and questions. Our knowledge of those formats comes from ideas and stories freely shared by other facilitators, in person or via resource bases like the KS Toolkit. We do a lot of event facilitation using those ideas, so to give back to the Commons we're sharing some recent workshop experience here.

Samoan circle discussion during 2015 BDS convening 
One element of a good balance is deliberately mixing up 'leaderless moments', where natural leaders or burning discussion topics can fill the space, with more structured processes, such as those that promote particular people as conversation guides, or even gurus, around whose ideas and presentation discussion flows. Samoan circles and Fishbowl formats can lend themselves to most points along that spectrum, and we happily experimented with two variants at the recent annual workshop of a large Sanitation programme.

At its core it's a simple method: a small group of people have a conversation amongst a wider group of participants. The difference with panels, for example, is that the small group sit in a circle surrounded by the participants. Samoan circles are possibly the purest form of the approach[1]. In this format the central group begin discussing the topic. People in the outer circle cannot speak unless they replace one of the speakers in the centre: if somebody wants to participate, she taps one of the current speakers on the shoulder to signal that she wishes to take their place in the circle. The conversation continues until the time is up or the conversation dies.

The democratising nature of the format generates a particular energy that drives people into the inner circle, in an active and engaged way. And crucially, people are able to intervene at precisely the point in the conversation which engages them, rather than having to wait and ask questions later that take people backwards to an earlier point. As a consequence, conversation tends to flow organically - assuming of course that the chosen topic is interesting to the participants and that they are comfortable with and trusting of each other. It's not a tool to use very early in a workshop.

The Feldman variation

The excellent Liberating Structures group propose a variant in which the outer circle ask questions, but not randomly: at a given point the conversation in the middle stops and the outer circle talk among themselves, agreeing questions, which they then put to the speakers. Peter Feldman, one of the main organisers of our recent sanitation workshop, proposed a variant in which there were two spare chairs in the central circle. The central speakers stayed in the ring, and other participants could join the conversation by sitting in one of the empty chairs, or by following the tapping convention - though only those in the extra chairs could be replaced.

We used the Samoan circle and the Feldman variation in the workshop, in two sessions, one focusing on Sanitation financing and the other on Behaviour change. The choice of topics meant that there were many people with ideas and opinions to contribute but it was interesting to see how the two formats operated. We used the Feldman variation for the Financing discussion, partly because we believed there was a great divergence of experience amongst participants, so having a group more familiar with different approaches operating as an expert panel seemed appropriate. The format engaged more participants in the conversation than would be normal in a traditional panel discussion, partly because the conversation didn’t always return to the experts but followed on from ideas introduced by the ‘outer circle’. However, having one group of people always present meant that the conversation was anchored by their experience and confidence in speaking about the topic.

For the Behaviour Change conversation we used the Samoan circle format. The topic and the format generated a lot of debate, lasting a full 90 minutes - at the end of a long day, and the fourth day of the workshop at that. However, the conversation ranged around the interests and opinions of participants and wasn't anchored in the same way. Our conclusion was that in this format someone, either the facilitators or a participant, needs to step forward pro-actively, intervening to summarise, reflect back opinions so far and point out questions that hadn’t been properly answered or addressed.

The workshop was organised around a wide range of activities, including two straightforward presentation and discussion sessions, world cafes, 1-2-4-All, field visit and feedback sessions, spectrogram exercises, group discussions - and the emergent conversations above. That variety scratches all the itches, allowing participants time to listen, reflect and engage actively, both individually and collectively. It's probably one of the reasons why participants were so positive about the Fishbowl exercises - and they really were. Organising opportunities for participants to stretch both their legs and brains in stimulating conversations about issues that matter to them is a great way to earn a living!

[1] And apologies to co-facilitators and participants at the 2015 BDS convening, I was calling this a fishbowl!

Monday, February 16, 2015

How do we know we’re learning?

“We will live or die by our critical reflection and ability to internalize learning”, said Darren Saywell, WASH Director, Plan International, in a recent online Q&A on sanitation. That “there is an over-emphasis on Knowledge products and outputs and not enough emphasis on the reflection and learning processes that produce sustainable change within projects and organisations” is something we've long argued.

And in the KM work we’re doing with a large sanitation program, we explicitly built in activities that foster a self-consciousness about learning, believing that in this way the process of learning is enriched and has a better chance of becoming embedded in how people work and interact (and thereby increasing the likelihood of sustainable change). But it’s precisely this kind of critical reflection that is so often squeezed out of operationally demanding jobs. One programme grantee illustrated the point by recounting how he’d hardly noticed an important innovation when it passed by in an email. It took a visit to the site in question to engage his attention and jog his memory about the email.

We've engaged the inimitable Nancy White to work with us on this Learning about Learning process. While talking about preparations for the recent annual convening of programme grantees, Nancy suggested that we, the organisers, be “on the watch for those moments when reflection and learning is visible and to note when it's happening, in what context, why and as part of what process”, suggesting that “understanding these things may help us better architect time/space/structure for learning”.

Learning Leaders

The portfolio manager, Jan Willem Rosenboom, rose to the challenge wonderfully, agreeing to lead group conversations and reflections about learning. Senior staff agreeing to lead and model the process is all too rare, and his stepping forward set the tone for the event. Jan Willem introduced an intriguing approach to the process, known as art-form conversations[1], developed by Brian Stanfield of the Institute of Cultural Affairs (ICA), with whom he’d worked in Kenya and Europe.

At the end of Day One Jan Willem held up one of the flowers sitting in the middle of the tables in the room and asked people to contemplate it, describe what they saw - list its attributes. You can imagine the looks in the room, but people began contributing. We were then asked to think of how it related to other flowers that we'd seen, to compare it. The group (nearly 50 people) was getting restive, a bit ribald, but answers kept coming. Next, what name would we give it: guffaws and some gently mocking answers, including the 'Rosenbloom'. And finally a question about what difference this might make to how we use flowers in the future, at events or at home. There was less reaction; people were acknowledging the process underway, which was reinforced in the next question: “so what did you learn from that process?”

The group had been taken through an aid to reflection, developed by Stanfield, summarised below:

  • OBJECTIVE (Facts), e.g. What can you see?
  • REFLECTIVE (Reaction), e.g. Where have you seen something like this before?
  • INTERPRETIVE (Implications), e.g. What does this mean to you?
  • DECISIONAL (Actions), e.g. How might this principle be used?
The process was instructive in itself but, more importantly, triggered a reflective conversation about learning, with participants noting things like the fact that knowledge is contextual, that our previous experience defines what we see, that we all have different reactions to the same thing, and so on. Jan Willem closed with a request for participants to reflect on their learning on that first day, first alone, maybe noting some things down, and then chatting to another person. The whole process worked well with the group; people were quiet and reflective by the end of the session.

And once the tone was set the process continued throughout the workshop. The second day was taken up with field visits, which were discussed in a feedback session at the beginning of Day Three. At the end of that session, just before coffee, Jan Willem asked people:
  • Give me a word or phrase that you remember from the presentations?
  • What surprised you?
  • What would you like to learn more about?
  • What are we learning?
  • Where do you see this influencing your work back home?
Again, the simple process encouraged people to reflect on both the activity and on their own learning processes, which triggered the reflection from one participant that it is very “difficult to be influenced outside our expectations and learning frameworks”.

Small reminders continued: one lunchtime there was encouragement to think about a question that had been triggered in the sessions and to share it with one or more people. Another lunchtime, participants were encouraged to think about who in particular would be a good person to have a conversation with about the issues of the day.

And what difference did it make?

The workshop was designed to maximise opportunities for exchange, conversation, discussion and story-telling. Overall feedback has been very positive, people appreciating the opportunities to dig deeper into issues, share experience, exchange ideas and build relationships. And while we don't have objective evidence - it's not something that would come out of an evaluation survey - my own experience of facilitating and participating was that there was a richness of texture to the exchanges, a greater criss-crossing of exchange than in many workshops. And the high-profile leadership meant that learning about learning was explicitly on the agenda, an issue that we'll follow up in other blogs.


[1] There is a nice description of its genesis in the book “The art of focused conversation: 100 ways to access group wisdom in the workplace” by Brian Stanfield.


Thursday, February 12, 2015

Collaboration for teams and projects using Google Apps

Online collaboration is a fact of life in International Development programmes. Teams and individuals are scattered across organizations, across countries and regions, as well as time zones. Sadly, we hear more grumbling about platforms and processes than success stories. So working with organisations to improve how they work together online takes up a lot of our time.

The problems vary with the size of the organisation, to some extent. Large organizations normally have intranets or private networks, but these aren't always open to project partners from other organisations. And their rich features often make them difficult to access from international locations, especially where Internet access is still expensive and/or unreliable. However, there is also a downside to the use of free tools, which we at Euforic Services have often promoted. That option normally means using several different platforms - wikis, blogs, project management systems, etc. - that need to be brought together somehow and may require multiple sign-ins and entry points for users.

So sometimes it’s worth investing a little in developing an integrated platform. But money is always an issue, so how easy is it to customise a collaboration platform on a tight budget?

We had the chance to learn and experiment last year. With our friends Antonella Pastore and Tania Jordan, we were contracted by IDRC to configure and deploy an online platform that would support knowledge management and communication in a new collaborative research programme.
Homepage of the KM platform

The platform was to be based on the Google Apps infrastructure, previously selected by IDRC during an earlier phase of the project.

At Euforic Services we've been using Google Apps for several years now, in our own work and for external projects and consultancies, in relatively small teams. But this was a new challenge: how do you deliver a fully-fledged collaboration platform and get over 200 people on board? What does it take to configure and deploy a platform that is functional and actually used by teams and individual users? And what did we learn in the process?

Our project plan and implementation followed a rather standard approach and sequence - from configuration and set-up, to pilot phase and iteration, to soft launch and go-live. But for each of these phases there are some specific elements that are important to consider for a successful deployment of Google Apps - as for any other online collaboration solution.

Decide what is in and what is out - and think 'around' the box

From the initial scope of the work and during the inception phase, we agreed with the IDRC team on which Google Apps would form the core of the collaboration platform. These included: Sites, Drive, Calendar, Google+ and Hangouts, and Picasa Web Albums.

If you are familiar with Google Apps, you may have noticed that Gmail was not included in the configuration of the platform. Indeed, each of the potential end users already had an institutional mailbox, and having to manage a secondary email address was simply not negotiable for them.

So how do you get around that, when Gmail is central to Google Apps? The solution we found was to set up each new user account with an automatic forward, so that all emails sent to the new Gmail address would be delivered to the user's primary mailbox. As a consequence, we also created a Site as the central entry point for the whole platform (see image above) and worked with Google Groups to define and manage the sharing and access rights to documents on Google Drive.
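For the technically curious, the sketch below shows one way such a forwarding rule could be scripted today, using the Gmail API's auto-forwarding settings with a domain-wide delegated service account. The addresses and key file are placeholders, and this is a hypothetical reconstruction of the idea - at the time we worked largely through the Google Apps admin console rather than an API.

```python
from oauth2client.service_account import ServiceAccountCredentials
from googleapiclient.discovery import build

# Placeholders: a service account with domain-wide delegation,
# allowed to act on behalf of each platform user.
KEY_FILE = "service-account.json"
SCOPES = ["https://www.googleapis.com/auth/gmail.settings.sharing"]

def enable_forwarding(platform_address, primary_address):
    credentials = ServiceAccountCredentials.from_json_keyfile_name(KEY_FILE, SCOPES)
    delegated = credentials.create_delegated(platform_address)
    gmail = build("gmail", "v1", credentials=delegated)
    # Register the user's primary institutional mailbox as a forwarding address...
    gmail.users().settings().forwardingAddresses().create(
        userId="me", body={"forwardingAddress": primary_address}).execute()
    # ...then switch auto-forwarding on, keeping a copy in the Gmail inbox.
    gmail.users().settings().updateAutoForwarding(
        userId="me",
        body={"enabled": True,
              "emailAddress": primary_address,
              "disposition": "leaveInInbox"}).execute()

# Hypothetical addresses for illustration only.
enable_forwarding("j.doe@platform-domain.org", "j.doe@home-institution.org")
```

Scripting the rule per account (rather than asking 200 users to configure it themselves) keeps the forwarding behaviour consistent across the whole platform.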


Walk the talk - use the tools to support project management 

From the very start of the project, during the inception phase, we used the exact set of tools that were to be deployed in the new platform. While this was business as usual for us (the combination of Google Docs, Calendar, Hangouts etc. is our standard modus operandi), it also allowed the project team at IDRC to improve their own skills and knowledge of the tools in a learning-by-doing approach. As a result, not only did they learn the ins and outs of the Apps, but they could later act as super-users and platform champions.


Make it real - pilot and soft launching 

When defining the pilot, we wanted users to go through a set of test cases. These were documented in a step-by-step pilot guide, developed using a combination of Google Sites and Docs. As the pilot users were all from the same working group, we defined the pilot using real content and for a real purpose: finalizing the organization of a working group meeting. Working with this real, live case and real content was instrumental in getting pilot users on board, as they saw the results of their participation in the pilot coming together and being used in the actual face-to-face meeting.
KM Platform training guide


Get your users on board - training, demos, helpdesk 

As we opted for a managed roll-out of the platform, we were able to consolidate the training and support documentation, as well as the platform navigation, with the feedback received from the first pilot users. When opening the platform to a larger number of users (around 100), we ran two online demo sessions (using Hangouts - again, learning by doing), which were also recorded for later playback. Additional webinar sessions were planned after the third round of account creation. A helpdesk was (and still is) also in place for troubleshooting and additional user support, with a guaranteed response time within 24 hours.


Give users solutions - not tools 

With so much focus on the tools and technical solutions available out there, we should not forget that what users actually need are effective (and possibly simple) solutions that enable their work. For example, working in geographically dispersed teams, one primary need of platform users was to hold online meetings - so Hangouts is the solution. Adopting a functional perspective (instead of a tool-focused perspective) helps users understand how to work more efficiently and effectively - and to buy into a new online platform.

With a new batch of user accounts created recently, the platform now has over 200 active users. While it's still too soon to evaluate users' engagement and activity, the usage statistics look promising so far. However, it's clear that the success of the platform will now depend on getting most of the users working this way - or at least a large enough critical mass of them doing so.

Tuesday, November 25, 2014

Payment by Results - the conversation continues

The Vietnamese Women's Union (VWU) is a key partner in the sanitation program run by Thrive Networks (ex East Meets West). Keen to understand how the program operates on the ground, and especially to understand more about the Payment by Results approach under which the program operates, we travelled to Ninh Binh province to meet those involved in the programme. We first met members of a Commune People's Committee, who briefed us on the progress in their locality. After visiting project sites, we spent several hours with officials from the provincial office of VWU in Ninh Binh.

As part of the conversation I asked Ms Tinh from the VWU, who was briefing us thoroughly about the programme, just how significant the OBA approach had been. Ms Tinh, a qualified doctor, had already explained how pressured the work had been. Ambitious targets had been set by the National Office, which had initially concerned Ms Tinh, especially given their limited resources. However, the targets were met, and Ms Tinh believes the OBA approach played a significant role in that success.

However, OBA continues to polarise opinion within the development sector. And when someone as widely respected as Robert Chambers lambasts UK DFID for promoting the approach, it's obviously important to address his and others' critiques. In this extract, Minh Chau Nguyen of Thrive Networks comments on concerns about the impact of PbR or OBA on the motivation of those working in Development projects, on their reporting from those schemes, and on the narrowness of OBA/PbR as an instrument to address complexity.

My own reflections are based on very limited exposure: a project visit and conversations with Thrive Networks staff in Vietnam. Initially uneasy, I came away very impressed by what the CHOBA project has achieved. The staff and partners have been under enormous pressure, as described in the previous blog interview, and have succeeded in scaling up the program to an impressive level. To some extent that has to be attributed to the passion, commitment, experience, skill and learning of the project team and partners. Like most successful projects, at the core is a phenomenal team, well led and committed to achieving change. And it's important too, I think, to acknowledge the impact of working in a country like Vietnam, with a fully functioning local Government apparatus reaching down to village level, supported by effective institutions like the Women's Union. All that being said, it's clear to me that the PbR approach shaped the programme and the institutions quite fundamentally and was an integral part of the complex mix that has delivered success. I would attribute part of that success to the way the project was led by a team knowledgeable about and convinced of the value of the approach - Minh Chau Nguyen worked for many years at the World Bank, during which time OBA evolved as an approach to funding Development. I remain uneasy about whether it can work as effectively in other contexts, especially in situations where it may be presented as the only funding option.



Sunday, November 16, 2014

Learning in CHOBA – an Output Based sanitation programme in Vietnam

Output Based Aid (OBA) - or Payment by Results (PbR) - is a controversial approach to the funding of Development programmes. It's radically different to what we might call the trust-based approaches that have been dominant in Development, and it is shaping increasing amounts of funding from important donors. There is a strong current of criticism and concern about OBA/PbR within Development, since the approach challenges many fundamental principles and assumptions, including about what motivates people, about how learning happens and about how to achieve at least a semblance of an equal relationship across the financial nexus. A useful, balanced review from Bond, the UK network of Development NGOs, is summarised in a guest post on Duncan Green's blog.

Minh Chau Nguyen is the Vice President, Sanitation Programs, for Thrive Networks, the organisation formerly known as East Meets West (EMW). While Vietnam Country Director, Minh Chau introduced the OBA approach to EMW's work in water and sanitation, the latter growing into a large-scale, national programme. In this interview, Minh Chau tells the story of the CHOBA programme. This first part covers the background and development of the project, including how EMW managed the enormous risks faced by small or medium-sized organisations that accept the challenge of OBA, given that they have to finance the activities up front with no guarantee of receiving the funds if the targets aren't met. And as Minh Chau explains, halfway through the project they indeed faced a crisis.

To be honest, my own natural home in Development is with the body of ideas and people that is sceptical about, or opposed to, OBA. But I knew that the CHOBA program was making a lot of progress so I was keen to meet and learn from the project team and some of the project partners. And from a KM point of view, I was particularly interested in understanding more about the learning processes within such a project, concerned that the logic of OBA would be inimical to the kind of reflective stance we’ve argued is essential for effective learning and change. In this second part of the interview Minh Chau describes how EMW responded to the crisis in the project, caused by the slow progress towards the project’s ambitious target, and how in fact the pressure triggered rapid learning and change.


A follow-up blog about CHOBA will include Minh Chau's responses to some of the common criticisms of OBA/PbR and reflections on what we learnt from visiting the project and talking to EMW partners and team members.