
Tuesday, March 24, 2015

Learning - sharing what we know we know

Here's a terrible story whose details we'll hide. It's from an excellent, well-regarded development agency where, around 1998, a smart, experienced project manager learned in a country programme that a particular approach didn't work: it upset people, damaged their lives and wasted money. As s/he recounted the experience another equally smart, experienced person stood up and said s/he'd learnt the same lesson working in the same organisation, in another country, around 1989. And I later spoke to someone who works for the same organisation who was too embarrassed to admit in plenary that s/he had learnt the same lesson, in the same organisation, in another country in 2004!


Stories like this are dismayingly common, and not just in international development cooperation. DfID's learning efforts, to take just one example, scored Amber/Red[1] in a 2014 assessment by the UK's Independent Commission for Aid Impact (ICAI). So what can organisations do to learn better? This perennial question is at the centre of a review we're doing with WaterAid UK on Knowledge Sharing and Learning. It overlaps with the other sanitation work we're doing, KM in the Building Demand for Sanitation (BDS) programme. Three sub-questions are interesting to both projects:
  • Are we too cautious about saying what we know we know?
  • How do we record what we know and have learned in ways that people will pay attention to? 
  • How do organisations develop cultures where it is “socially unacceptable not to learn”, as one grantee put it recently? 

Known knowns

On the first point: the first smart, experienced person, who told the terrible story, groaned as he learnt that the same error had been repeated in the same organisation. He suggested we don't declare loudly and clearly enough what it is we know we know. We are often too tentative and vague, delivering high-level bullet-point recommendations or simply not sharing our conclusions. As part of the BDS KM programme we're supporting a Learning Exchange where he is going to sit down with two others from two organisations and try to write down what it is they have all learnt, what they know they know (about Sanitation Marketing, in this instance). We're encouraging them to tell the story using a range of media, to try and make their ideas sing and dance.

We'll also be encouraging the group to produce content that makes people think. If there is a document that tells you how to do something, and doesn’t require you to think, then it's probably only a technical fix: important for sure, in specific contexts, but not necessarily generalisable nor stimulating to other people's learning. Meaningful outputs that might enable people to learn across contexts are those that require people to talk together, question and reflect on the basis of what they read/hear/see in the documentation - to learn socially.

But it's not easy to pronounce on what we know we know. It's quite a bold thing to do. It's much easier to ask questions, to be tentative. I tried in a long, excellent conversation about knowledge and doledge on the KM4Dev discussion list, and I still feel uneasy about being so definite. A better example is a great blog, "Do we learn enough and does learning lead to improved sector performance?" The authors are two more smart, experienced WASH specialists and the blog reflects on learning from the recent BDS annual convening meeting in Hanoi. The authors described elsewhere how, when they first re-read what they had come up with, they were startled at how obvious a lot of it seemed. But the blog has been well received, possibly because by stating the obvious, statements about which they were confident, the authors are providing navigational markers by which other people can steer.

But it takes time – and a learning culture - to mainstream that kind of reflection and recording. To quote from the ICAI report on DfID: “DFID is not sufficiently integrating opportunities for continuous learning within day-to-day tasks. In particular, staff do not have enough time to build learning into their core tasks. DFID is not fully ensuring that the lessons from each stage of the delivery chain are captured, particularly in relation to locally employed staff, delivery agents and, most crucially, the beneficiaries. Heads of office do not consistently define a positive culture of learning".

We'll be addressing culture in the next blog.


[1] "Programme performs relatively poorly overall against ICAI's criteria for effectiveness and value for money. Significant improvements should be made."

Friday, March 20, 2015

Power up your Google Sheets with Apps Scripts


Google Sheets and Docs are very powerful, flexible tools for data collection and analysis. But did you know that there's a lot more you can do with both Sheets and Docs, using free tools or just a bit of extra coding, even if you are not a programmer? Did you know you can:
  • Enable users to edit responses they have made in Google Forms? 
  • Automatically copy (part of) data from one Sheet into another one? 
  • Simultaneously collect various metrics for your Google Analytics, YouTube and Twitter accounts?
  • Automatically track Twitter posts around a Twitter handle, hashtag or search term?
  • Automatically count the number of Twitter followers of various accounts and add them dynamically into a Google Sheet? 

In this post and the next ones, I'm presenting a few different options I've used to 'extend' Google Sheets, and how I used them in the development of a program M&E system and dashboard for IDRC.

Today I'll look specifically at two possible uses of Google Apps Scripts for Google Sheets.

Google Apps Script editor

About Google Apps Scripts 

Google Apps Script "is a JavaScript cloud scripting language that provides easy ways to automate tasks across Google products and third party services and build web applications."

With Apps Script there's quite a lot that you can do, such as writing custom functions and creating macros and menus for Google Sheets. Google itself provides quite a lot of guidance on how to work with Apps Script, though this may not be easy for a total beginner.
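To give a flavour of what a custom function looks like, here is a minimal, purely illustrative sketch (the function name and behaviour are my own example, not part of any of the scripts discussed below). Once saved in the Script Editor, it can be used in any cell just like a built-in formula, e.g. =SESSIONS_IN_THOUSANDS(A2).

```javascript
// Illustrative custom function for Google Sheets (Apps Script):
// turns a raw number of web sessions into a "thousands" figure
// rounded to one decimal place, e.g. 12345 -> 12.3.
function SESSIONS_IN_THOUSANDS(sessions) {
  return Math.round((sessions / 1000) * 10) / 10;
}
```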

Luckily, there are plenty of kind (and clever!) people out there who have developed Apps Scripts and made them available to others online. And those you can just use!

Use Apps Scripts to collect Forms "edit response links" 

In the M&E system and dashboard developed for the IDRC program, as we saw, part of the data collection is manual, with users inputting data for research outputs and pilots through a series of Google Forms. So what if users want to update or modify an existing entry?

If you are familiar with Google Forms, you probably know that responses can be collected into a Sheet. You may also know that you can set up your Form so that, after an entry is submitted, it sends an email to the person who contributed that submission. The email contains a link that the person can click in order to modify/edit the entry.

Well, this is certainly nice and useful!

But wouldn't it be better if the edit response links were also added to the Sheet where the responses are collected, nicely ordered in line with the relevant form entry?

You can do this with a Google Apps Script I found while browsing online.

To use this Apps Script, what you have to do is the following:
  • Click on Tools >> Script Editor in your destination Sheet (as in the image on the right);
  • In the Script Editor, copy in this piece of code here;
  • Change the parameters as indicated in the code; 
  • Save the script and run it; 
  • Click on Resources >> Current project's triggers and set the script to trigger at every new Form entry; 
  • Check that the edit response links are added in the right column on the destination Sheet.
Done! You set it up once and the script will continue to run and collect edit response links every time new responses are added via the form.
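For readers who just want the gist of what the linked script does, here is a minimal sketch of the same idea (not the exact code I used): the form ID, sheet name and column number are placeholders you would replace with your own values, and the script is assumed to be bound to the destination Sheet.

```javascript
// Sketch: write the "edit response" URL of every Form submission into a
// column of the destination Sheet. Set it to run on form submit via
// Resources >> Current project's triggers.
function collectEditResponseUrls() {
  var FORM_ID = 'your-form-id';            // placeholder: ID of your Google Form
  var SHEET_NAME = 'Form Responses 1';     // default name of the responses tab
  var EDIT_URL_COLUMN = 10;                // placeholder: column for the links

  var form = FormApp.openById(FORM_ID);
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName(SHEET_NAME);
  var responses = form.getResponses();

  // Responses land in the sheet in submission order, starting on row 2
  // (row 1 holds the headers), so response i maps to row i + 2.
  for (var i = 0; i < responses.length; i++) {
    sheet.getRange(i + 2, EDIT_URL_COLUMN)
         .setValue(responses[i].getEditResponseUrl());
  }
}
```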

Importing data from a different spreadsheet using scripts

While this Apps Script is very specific to when you use Forms, there are a few others that can come in handy on more occasions. For example, to automatically copy (part of the) data from one Sheet into another.

While you can also do this using in-cell functions, as nicely explained in this post, I've found that approach not very reliable: it doesn't always update automatically. So I would recommend taking the slightly more technical route and using Apps Scripts. You can find the link to the code and the explanation of how to insert this script.
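As a rough illustration of the approach (again, not the exact script linked above), a sketch like the following copies a block of values from a source spreadsheet into a target one; the spreadsheet IDs, sheet names and range are placeholders, and you would set the function on a time-driven trigger so it runs at regular intervals.

```javascript
// Sketch: copy plain values from a range in one spreadsheet into another.
// Run it on a time-driven trigger (e.g. hourly) so the target stays in sync.
function copyRangeBetweenSpreadsheets() {
  var SOURCE_ID = 'source-spreadsheet-id';   // placeholder
  var TARGET_ID = 'target-spreadsheet-id';   // placeholder

  var sourceRange = SpreadsheetApp.openById(SOURCE_ID)
      .getSheetByName('Log')                 // placeholder sheet name
      .getRange('A2:F100');                  // placeholder range
  var targetSheet = SpreadsheetApp.openById(TARGET_ID)
      .getSheetByName('Dashboard data');     // placeholder sheet name

  // Copy values only, so formulas in the source are not carried across.
  targetSheet
      .getRange(2, 1, sourceRange.getNumRows(), sourceRange.getNumColumns())
      .setValues(sourceRange.getValues());
}
```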

Give it a try and see for yourself how it works. And let me know in the comments here if you are using other useful Apps Scripts that are worth sharing.

Thursday, March 12, 2015

How to create an M&E dashboard using Google Apps

Last year we did a fair amount of work with IDRC to set up a KM platform for a new collaborative research program. As a follow-up to that project, we developed an M&E system for the program, using the same technology infrastructure used to build the platform itself - Google Apps for Business.

After last week's case study on building the R4D dashboard with Tableau Public, in this post I'm presenting how to set up an M&E system and dashboard using a combination of various Google Apps and free third party tools. This post is very much an overview of the process and the final product we delivered. In the next blog posts in this series I'll look at the specific tools used from a more technical perspective.
M&E Dashboard

Who needs a dashboard, and why? 


This IDRC program is made up of four research consortia plus the IDRC program team in Canada. Each consortium works on a specific issue related to climate change and adaptation, bringing together geographically dispersed organizations. Since collaboration is the basis of the program, the M&E system had to follow the same principle. Our brief was to "design platform-based, collaborative tools to collect monitoring data on up to eight key indicators in the Monitoring Framework."

Ultimately, these data had to be brought together into an M&E dashboard that could be easily shared with donors and program leads as a link, or quickly printed to PDF at regular intervals. Like the R4D dashboard, this dashboard had to provide a "snapshot of progress against key indicators in the program's monitoring framework using data entered by consortia and the IDRC Team."

So what are these indicators?

What to measure? Theory of change and monitoring framework 

The program M&E working group had already produced a solid Theory of Change with three clear objectives; for each they had defined the dimensions and potential metrics to be included in the M&E system. This made our job easier, as it was clear from the outset what had to be measured and for what purposes. We just had to help the team unpack the various metrics and dimensions a bit, and define the exact indicators and values to be tracked in the system:
  • Research outputs and pilots, including indication of type of outputs, authorship (gender and country), quality of outputs and their accessibility (whether peer-reviewed and/or openly accessible on the web), etc... 
  • Web traffic, social media and engagement data, such as web sessions and downloads, Twitter followers and number of conversations and Tweeps around specific accounts and Hashtags, media tracking, number of events and participants rating, etc. 
  • Grants and awards distributed, including gender and location of recipients.
M&E Dashboard

When and where? Data collection process and storage 

While the system (and resulting dashboard) was planned to be updated quarterly, we agreed on the principle that data collection would be automated when possible, and manual when other solutions were not at hand. As a result:
  • Data around web traffic and social media are collected automatically or semi-automatically, using a series of third party tools and applications (I'll talk about this specifically in the next blog post) 
  • Data around research outputs, pilots, grants and awards are entered via users’ submission forms, using Google Forms. While forms can (potentially) be submitted by anyone who has a user account on the KM platform, in reality specific users for each consortium are responsible for this process, while others are responsible for quality control, to ensure that entries are complete and there are no duplicates. 
Regardless of how the data are collected - manually, automatically or semi-automatically - they all feed into one of the three separate log files set up for the three objectives in the Theory of Change. Google Spreadsheets are used for these log files, with the appropriate sharing and editing permissions in place.

How to display the data? Platform, design, prototype and production 

On the basis of an initial sketch of the dashboard produced by the IDRC team, we populated the log files with dummy data and produced two different prototypes: one using Tableau Public and one using Google Charts published into a Google Site. We agreed to use Google tools to avoid adding another layer of complexity to the system and to keep it all inside Google Apps. Additionally, as the charts are generated from the log files, when the log files are updated so are the charts on the live site, which is a great short-cut, cutting down the work of updating the dashboard.
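As an aside for the technically minded: charts bound to a range can also be created programmatically with Apps Script, rather than through the Sheets interface. The following is a hypothetical sketch (the sheet name and range are placeholders, and this is not how this particular dashboard was built), just to show how little code it takes:

```javascript
// Sketch: build a column chart from a range of a log file and embed it
// in the sheet. The chart updates whenever the underlying range changes.
function addChartToLogFile() {
  var sheet = SpreadsheetApp.getActiveSpreadsheet()
      .getSheetByName('Objective 1 log');          // placeholder sheet name
  var chart = sheet.newChart()
      .setChartType(Charts.ChartType.COLUMN)
      .addRange(sheet.getRange('A1:B13'))          // placeholder data range
      .setPosition(2, 4, 0, 0)                     // anchor row/column and pixel offsets
      .build();
  sheet.insertChart(chart);
}
```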

Similarly to the R4D dashboard, this program dashboard presents a tabbed navigation at the top, with one tab for each of the objectives monitored in the framework. This way we could present objective-specific charts, tables and figures in a clean, uncluttered interface.

Additionally, the main tab of the dashboard presents what we called ‘curated content’, such as a selection of recent publications, blog posts or key events that are hand picked by the dashboard administrators to highlight specific information.
M&E Dashboard

What next? Possible platform iteration and next blog posts 

This dashboard went live at the beginning of 2015 and its second update is planned for the end of this quarter, so it's too soon to evaluate it and think about possible iterations. However, feedback received from users has been positive so far and the system delivers the required information to the different target users.

In my opinion, a possible way to improve it would be to add filters and controls to the charts currently published on the dashboard, so that users can interact with them, browse specific periods of time, make comparisons, and get more out of this visual representation of the data.

Doing this requires working with Google Apps Scripts, a JavaScript cloud scripting language that provides easy ways to automate tasks across Google products and third party services. I'm not a programmer, but I like learning new things and finding solutions that others have already implemented. Even in the current version of this dashboard, I've made use of Google Apps Scripts to collect data and to copy them from one spreadsheet into another.

If you are interested in which Apps Scripts I've been using and what they can do for you, subscribe to the blog and sit back until you get my next post in this series - or share your experience in the comments below.

Thursday, March 05, 2015

R4D dashboard: Visualize web access to DFID funded research

Collecting traffic and usage statistics for a website or portal can be a very time consuming and tedious task. And in most cases you end up compiling monthly or quarterly reports for managers and donors that will be shared as email attachments - and at best skimmed, since there is so much information. But there are smarter ways to handle this process and bring your data to life, as I explained in my previous blog post.

Our case study is the R4D portal, a free access on-line portal containing the latest information about research funded by DFID, including details of current and past research in over 40,000 project and document records. Until 2013 we were part of the team supporting and managing the site.

As part of our work packages, we developed an online, interactive visualization of web traffic and usage of the R4D portal and its social media channels. The R4D dashboard, built using Tableau Public, is still updated and in use. However, following the termination of our support contract, it hasn't been iterated or improved since 2014.

This post presents the process we followed to develop the dashboard, the tools used, and the lessons learned in what was very much a learning-by-doing journey.

 Why develop the R4D dashboard? 

The collection of usage and traffic data for R4D used to be pretty much standard: a series of Excel files updated monthly to generate charts and graphs. These were then put together in a PDF report and shared with project leads at DFID. The idea of developing an online, public dashboard of R4D web traffic and usage instead was inspired by the excellent work of Nick Scott and ODI, which he shared with us during a Peer Exchange session we organized back in 2012.

Donor organisations such as DFID collect a lot of statistics and indicators, but these are often kept within projects and programmes and not made available to all staff, as was the case for R4D. So the reason behind the R4D dashboard was primarily to open up our stats and make them more accessible to anybody interested, not just the people who had sign-off on the project.

Also, by encouraging a more open approach to web stats, the idea was to have more points of comparison: it is difficult to evaluate how well your website is doing if you can only compare against yourself. Being able to see how much traffic similar websites are generating helps you assess your own efforts and performance.


So what did we do?

Process wise, we pretty much followed the steps indicated in my previous blog posts. With the primary audience well in mind, we started to select the metrics to include in the dashboard:
  • Website stats: Visits and visitors; referring sites; visitors by country; PDF downloads and top pages. 
  • RSS feed subscribers, Twitter clickthroughs and Facebook Insights data (later removed)
  • Number of outputs added to the R4D database (by type, for example open access articles, peer-reviewed articles, etc.) 
We decided that it was feasible to collect this data monthly as xls or csv files exported from the site(s) and saved into a shared Dropbox folder. This was the most effective way, as data collection was decentralized, with different people working on different platforms. With our limited budget it was not possible to automate the data collection process, so it was entirely manual.

Software platform selection took quite some time in the initial phase of the process. We selected Tableau Public as our dashboard platform, and then had to invest more time in learning its features and functionality. But it was totally worth it!


Why Tableau? 

Tableau Public is free software that allows anyone to connect to a spreadsheet or file and create interactive data visualizations for the web. There are many tutorials out there if you just Google for them, so I'm not going to explain here how it works in detail. But here are my top reasons for using Tableau Public:
  • It's free! Well, that's a good reason already if you don't have resources to invest in business intelligence or visualization software - and normally the costs for these are steep and way outside the budget of the organizations we work with; 
  • It's intuitive. You don't need to be an expert to use the tool. The interface is very simple (drag and drop) and you can easily find your way around. 
  • It's rich and deep. There are so many charts you can choose from, and you can play around with different visualizations until you are happy with the result. It also goes much deeper than Excel in analysis and interactions.


What did we learn? 

Besides learning how to use Tableau Public itself, here are the main things I learned along and around the process of developing the R4D dashboard:
  • Google Analytics is the industry standard - but it tends to under-count your traffic.
    We ran two different website analytics packages on the main R4D portal - Google Analytics (GA) and SmarterStats - and noticed a huge difference in the results, with GA massively under-counting visits and visitors. So it's always worth installing another tracker to be on the safe side. 
  • Updating Tableau is quick - but getting the data manually isn't
    Once your dashboard is set up, the process of updating it with new data is rather quick: just a few clicks and you are done. However, data collection from the various sources was in our case mostly manual, and it can be time consuming (and not much fun either!). If I were still working on the project, I'd look into ways to automate data collection as much as possible - while also looking at what additional (useful) data I could collect in an automated way. 
  • Build it once - and then you *must* iterate
    When you're done building your dashboard, you're actually not done. We had a couple of iterations before arriving at the product that is now online, and I'm sure this would be different now had the project continued. This is because you have to evaluate whether the visualizations in the dashboard are actually useful and provide you with actionable insights that can inform your strategy. Or simply because the software keeps evolving and can give you new possibilities that were not there before.

In the next post in this series I'll present a different approach to developing an M&E dashboard, this time using a combination of Google Forms, Sheets and Charts, together with Google Apps Scripts and Google Docs Add-ons.

In the meantime, if you have experience with Tableau or use other tools to create interactive dashboards, why not share it in the comments here?

Thursday, February 26, 2015

How to create a monitoring and evaluation dashboard

So you have a website, a blog, the usual social media channels on Twitter/Facebook/YouTube, maybe a series of RSS feeds. On top of this, your organization or research programme also publishes original content, or indexes content produced by others into an online portal. And you also organize events and workshops and maybe offer grants and awards.

With all these online spaces, outputs and products that you produce, how are you going to collect and aggregate this data as part of your monitoring and evaluation activities? And how are you going to display and present it in an effective way that can be easily understood by your co-workers, managers and donors?

For the past couple of years, I’ve been experimenting with tools to display data and information in online dashboards. This post presents a short introduction to the topic. It’s the first of a series of posts that will look into online tools for data collection, storage and display.

What is a dashboard? 

According to Stephen Few's definition, "A dashboard is a visual display of the most important information needed to achieve one or more objectives; consolidated and arranged on a single screen so the information can be monitored at a glance."

More generally, a dashboard is a visualization of data related to the operation and performance of an organization, program or service. Dashboards are helpful tools to provide you with a quick overview to see whether you’re on track to reach the objectives stated in your logframe or Theory of Change.

Note that information about a wide range of channels can crowd one screen. So it's important to be flexible and keep users in mind - keeping scrolling *very* limited and using features like tabbed navigation to view different sets of metrics and indicators.

What are the steps to follow to build a dashboard? 

The Idealware report Data at a Foundation's Fingertips: Creating and Building Dashboards presents an excellent and detailed step-by-step description of the process of designing effective dashboards for nonprofits. Ultimately the process boils down to four main phases:

  1. Define your audience
    Of course this is absolutely critical, determining the way you design it, the graphs you include, their order and sequence. The dashboards I’ve developed in the past were mainly designed for managers and executives, to tell them about the progress of a program or service at a quick glance.
  2. Identify the metrics to display - and how you collect them
    With the tons of metrics that you could collect, and the space limitations of a dashboard it is important to agree upfront which ones will be displayed in the dashboard. So it requires a bit of negotiation to agree upon what’s in and what’s out. Of course, the metrics should be useful in terms of monitoring progress towards the objectives in your logframe and theory of change. In this phase it is also important to discuss collection methods, frequency and access. 
    • Are there any processes that you can automate? 
    • What is only possible instead through manual data collection?
    • And is it realistic to collect this data monthly (if the properties are high-traffic or include active campaigns, for example), or is quarterly more realistic?
    • Where are you going to store the raw data and who should have access to it?
  3. Identify your dashboard platform
    This is a maturing market so there are a lot of possible solutions - from expensive business intelligence software to low cost or free tools. Generally the decision is defined by the resources available as well as the time you and your users have to invest in learning new tools. Note that while potentially you can build a dashboard in Excel, investing some time in learning how to use a powerful and flexible dashboarding tool such as Tableau Public can enable you to design more complete and effective dashboards.
  4. Sketch, prototype and roll out
    In the design of the dashboard you need to find a good balance between the amount of information you want to display and the limited space available. So you have to carefully decide which graphs and charts you will use, what explanatory text you should include, which colours to use when... This will take a lot of testing and iterating to find the optimal design. Bottom line, your final product should: 
    • Be simple, intuitive, easy to read and understand; 
    • Present data together from different sources in an uncluttered way and following a logical sequence or order; 
    • Offer a quick overview of the key metrics and indicators to assess progress towards the objectives of your program/organization/service. 
In the following posts, I’ll be presenting two case studies about work we did recently on visualizing monitoring and evaluation data into online, interactive dashboards. I will look specifically at the tools  used to put these dashboards together, as well as the individual tools used to collect and store individual indicators and metrics. 

For the more techie readers, I’ll also share the details of what I’ve learned recently using Google Apps script to automate some data collection and storage processes, as well as tips and tricks to monitor activities and engagement around Twitter, which I’ve been experimenting a lot with lately. So stay tuned!

Friday, February 20, 2015

Facilitating emergent conversations – variants on Samoan Circles and Fishbowls

Trying to ensure that the brains and experience of all participants are brought into the room is one of the more enjoyable challenges of facilitation. It’s mainly a question of finding the right balance of different approaches since there are so many formats that provide opportunities for different combinations of people to share knowledge and questions. Our knowledge of those formats comes from ideas and stories freely shared by other facilitators, in person or via resource bases like the KS Toolkit. We do a lot of event facilitation using those ideas so to give back to the Commons we’re sharing here some recent workshop experience.

Samoan circle discussion during 2015 BDS convening 
One element of a good balance has to do with deliberately mixing up 'leaderless moments', where natural leaders or burning discussion topics can fill the space, with more structured processes, such as those that promote particular people as conversation guides, or even gurus, around whose ideas and presentation discussion flows. Samoan circle and Fishbowl formats can lend themselves to most points along that spectrum of options, and we happily experimented with two variants at the recent annual workshop of a large Sanitation programme.

At its core it's a simple method: a small group of people have a conversation amongst a wider group of participants. The difference from panels, for example, is that the small group sit in a circle surrounded by the participants. Samoan circles are possibly the purest form of the approach[1]. In this format the central group begin discussing the topic. People in the outer circle cannot speak unless they replace one of the speakers in the centre: somebody who wants to participate taps one of the current speakers on the shoulder as a sign that she wishes to replace them in the circle. The conversation continues until the time is up or the conversation dies.

The democratising nature of the format generates a particular energy that drives people into the inner circle in an active and engaged way. And crucially, people are able to intervene at precisely the point in the conversation which engages them, rather than having to wait and ask questions later, which then take people backwards to an earlier point. As a consequence conversation tends to flow organically - assuming of course that the chosen topic is interesting to the participants and that they are comfortable with and trusting of each other. It's not a tool to use very early in a workshop.

The Feldman variation

The Feldman Variation
The excellent Liberating Structures group propose a variant in which the outer circle ask questions, but not randomly: at a given point the conversation in the middle stops and the outer circle talk among themselves, agreeing questions which they then put to the speakers. Peter Feldman, one of the main organisers of our recent sanitation workshop, proposed a variant in which there were two spare chairs in the central circle. The central speakers stayed in the ring, and other participants could join the conversation by sitting in one of the empty chairs, or by following the tapping convention - but only to replace those in the extra chairs.

We used the Samoan circle and the Feldman variation in the workshop, in two sessions, one focusing on Sanitation financing and the other on Behaviour change. The choice of topics meant that there were many people with ideas and opinions to contribute but it was interesting to see how the two formats operated. We used the Feldman variation for the Financing discussion, partly because we believed there was a great divergence of experience amongst participants, so having a group more familiar with different approaches operating as an expert panel seemed appropriate. The format engaged more participants in the conversation than would be normal in a traditional panel discussion, partly because the conversation didn’t always return to the experts but followed on from ideas introduced by the ‘outer circle’. However, having one group of people always present meant that the conversation was anchored by their experience and confidence in speaking about the topic.

For the Behaviour Change conversation we used the Samoan circle format. The topic and the format generated a lot of debate, lasting a full 90 minutes - at the end of a long day, and the fourth day of the workshop at that. However, the conversation ranged around the interests and opinions of participants and wasn't anchored in the same way. Our conclusion was that in this format someone, either the facilitators or a participant, needs to step forward pro-actively, intervening to summarise, reflect back opinions so far and point out questions that hadn’t been properly answered or addressed.

The workshop was organised around a wide range of activities, including two straightforward presentation and discussion sessions, world cafés, 1-2-4-All, field visit and feedback sessions, spectrogram exercises, group discussions - and the emergent conversations above. That variety scratches all the itches, allowing participants time to listen, reflect and engage participatively, both individually and collectively. It's probably one of the reasons why participants were so positive about the Fishbowl exercises - and they were. Organising opportunities for participants to stretch both their legs and brains in stimulating conversations about issues that matter to them is a great way to earn a living!











[1] And apologies to co-facilitators and participants at the 2015 BDS convening, I was calling this a fishbowl!

Monday, February 16, 2015

How do we know we’re learning?

“We will live or die by our critical reflection and ability to internalize learning”, said Darren Saywell, WASH Director, Plan International, in a recent online Q&A on sanitation. That “there is an over-emphasis on Knowledge products and outputs and not enough emphasis on the reflection and learning processes that produce sustainable change within projects and organisations” is something we’ve long argued.

And in the KM work we’re doing with a large sanitation program, we explicitly built in activities that foster a self-consciousness about learning, believing that in this way the process of learning is enriched and has a better chance of becoming embedded in how people work and interact (and thereby increasing the likelihood of sustainable change). But it’s precisely this kind of critical reflection that is so often squeezed out of operationally demanding jobs. One programme grantee illustrated the point by recounting how he’d hardly noticed an important innovation when it passed by in an email. It took a visit to the site in question to engage his attention and jog his memory about the email.

We've engaged the inimitable Nancy White to work with us on this Learning about Learning process. While talking about preparations for the recent annual convening of programme grantees, Nancy suggested we, the organisers, be "on the watch for those moments when reflection and learning is visible and to note when it's happening, in what context, why and as part of what process", suggesting that "understanding these things may help us better architect time/space/structure for learning".

Learning Leaders

The portfolio manager, Jan Willem Rosenboom, rose to the challenge wonderfully, agreeing to lead group conversations and reflections about learning. Senior staff agreeing to lead and model the process is all too rare, and his stepping forward set the tone for the event. Jan Willem introduced an intriguing approach to the process, known as art-form conversations[1], developed by Brian Stanfield of the Institute of Cultural Affairs (ICA), with whom he’d worked in Kenya and Europe.

At the end of Day One Jan Willem held up one of the flowers sitting in the middle of the tables in the room and asked people to contemplate it, describe what they saw, list its attributes. You can imagine the looks in the room, but people began contributing. We were then asked to think of how it related to other flowers that we'd seen, to compare it. The group (nearly 50 people) was getting restive, a bit ribald, but answers kept coming. Next, what name would we give it: guffaws and some gently mocking answers, including the 'Rosenbloom'. And finally a question about what difference this might make to how we use flowers in the future, at events or at home. There was less reaction; people were acknowledging the process underway, which was reinforced in the next question, "so what did you learn from that process?"

The group had been taken through an aid to reflection, developed by Stanfield, summarised below:

  • OBJECTIVE (Facts): e.g. What can you see?
  • REFLECTIVE (Reaction): e.g. Where have you seen something like this before?
  • INTERPRETIVE (Implications): e.g. What does this mean to you?
  • DECISIONAL (Actions): e.g. How might this principle be used?

The process was instructive in itself but, more importantly, triggered a reflective conversation about learning, with participants noting things like the fact that knowledge is contextual, that our previous experience defines what we see, that we all have different reactions to the same thing, and so on. Jan Willem closed with a request for participants to reflect on their learning on that first day, first alone, maybe noting some things down, and then chatting to another person. The whole process worked well with the group; people were quiet and reflective by the end of the session.

And once the tone was set the process continued throughout the workshop. The second day was taken up with field visits, which were discussed in a feedback session at the beginning of Day Three. At the end of that session, just before coffee, Jan Willem asked people:
  • Give me a word or phrase that you remember from the presentations.
  • What surprised you?
  • What would you like to learn more about?
  • What are we learning?
  • Where do you see that this can influence your work back home?
Again, the simple process encouraged people to reflect on both the activity and on their own learning processes, which triggered the reflection from one participant that it is very “difficult to be influenced outside our expectations and learning frameworks”.

Small reminders continued: one lunchtime there was encouragement to think about a question that had been triggered in the sessions and to share it with one or more people. Another lunchtime, participants were encouraged to think about who in particular would be a good person to have a conversation with about the issues of the day.

And what difference did it make?

The workshop was designed to maximise opportunities for exchange, conversation, discussion, story-telling. Overall feedback has been very positive, people appreciating the opportunities to dig deeper into issues, share experience, exchange ideas and build relationships. And while we don’t have objective evidence – it’s not something that would come out from an evaluation survey - my own experience of facilitating and participating was that there was a richness of texture to the exchanges, a greater criss-crossing of exchange than in many workshops. And the high profile leadership meant that learning about learning was explicitly on the agenda, an issue that we’ll follow up in other blogs. 


[1] There is a nice description of its genesis in the book "The art of focused conversation: 100 ways to access group wisdom in the workplace" by Brian Stanfield.