
Friday, April 17, 2015

Learning & KS - "prod, shove, jolt, or nudge"?


“An organisation or programme’s development depends on the quality of interchange and group reflection going on among the staff” (EIU, 1996)

We know what works. The best Development practice has always centred on learning collectively how to work more effectively, adapting operations and exchanging lessons with colleagues to build in continuous improvement and scale. Yet successfully embedding that kind of good practice into institutional norms and processes remains challenging, especially over distance and between layers of stakeholders engaged in combating poverty and inequity.

But it ought to be so easy. Whenever I’ve interviewed people for the kind of Knowledge Management review we’re currently working on with WaterAid, they always say that they really want to learn, to share, to document; it’s just that operational pressures are so intense and relentless that it’s very hard to carve out the time. There are some well-worn responses to that conundrum, some of which we’ll rehearse in another blog, but we’ve become curious about what we can learn from all the experience and resources on Behaviour Change.

"Socially unacceptable not to learn"

Because, while I often say I want to clean the cooker properly (all right, sometimes say), somehow there’s always something else to do. Yet when we had tiny babies and children in the house we became hugely more concerned about food hygiene and we changed the way we behaved. And being honest, one element of the motivation for change was the reactions of friends with babies. We prioritised differently, and were influenced by the realisation that it had become socially unacceptable amongst our group of friends with kids to be as relaxed as we had been about kitchen hygiene.

Taking it back to Learning and Knowledge Sharing, how does a culture develop where, ‘it’s socially unacceptable not to learn’, as Aidan Cronin of UNICEF memorably asked? At its heart it’s about behaviour change, and it’s a problem shared by all those working for changes in areas like sanitation and hygiene, or vaccination, or climate change, that rely on how people act in their daily lives.

Nudge or shove

And since it’s not a new problem, people have long been aware of it and researching ways to influence it. Fashionable at the moment are theories grouped under the term ‘nudge’. It's odd that the MS Word thesaurus cites "prod, push, shove, jolt" as synonyms, since the underlying theory opposes any kind of forcing of behaviour in favour of gentler and more positive approaches. The ideas are rooted in Behavioural Economics. Most sources cite Kahneman and his collaborators as the most influential and insightful researchers, accessibly presented in his 2011 book, ‘Thinking, Fast and Slow’. And recent research demonstrates that interest in Nudge and similar theories isn’t just a modish response in the wealthy world; it’s a global phenomenon.

Here is a nice introduction to nudge in the context of WASH, from a blog by Stephanie Tam:
"As fickle as our behaviors may be in relation to our conscious intentions, they respond systematically to certain environmental conditions, i.e. they have predictable biases. Through different combinations of biases, behavioral economists test out how to make certain behaviors easier to perform by subtly changing the context. These changes influence what neuroscientists call the reflexive cognitive system: a knee-jerk reaction that outstrips the slower reflective system we commonly call consciousness or analytical thought (Lieberman 2002). Instead of shoving people into sanitation and hygiene practices, we can create an environment that enables people to perform behaviors that they themselves have chosen."
There’s another excellent introduction in a resource from the Business Balls site, complete with a useful toolkit:
“Nudge theory is mainly concerned with the design of choices, which influences the decisions we make. Nudge theory proposes that the designing of choices should be based on how people actually think and decide (instinctively and rather irrationally), rather than how leaders and authorities traditionally (and typically incorrectly) believe people think and decide (logically and rationally).”

“The use of Nudge theory is based on indirect encouragement and enablement. It avoids direct instruction or enforcement.”

“Fundamentally (and properly, according to its origins) Nudge theory operates by designing choices for people which encourage positive helpful decisions; for the people choosing, and ideally for the wider interests of society and environment, etc.”

“Nudge theory seeks to minimize resistance and confrontation, which commonly arise from more forceful 'directing' and autocratic methods of 'changing' people/behaviour.”
With the team in WaterAid we’re going to be reviewing what we can learn from Behaviour Change work in general and Nudge in particular, asking ourselves questions like:
  • What nudges you to take time to reflect, to share ideas, to document in some form or another? 
  • Is it a specific trigger, or a stimulus that is more environmental, part of the landscape?
  • What is the role of leaders and managers - how can they create that environment, or is it more of a personal style or posture issue? 
  • One of the commonest ways to kill something is to systematise it, to put it in a manual. But how then can an organisation take on such an approach? 
What examples do you know of, in your own work or from other projects, of using these kinds of ideas?

Thursday, April 02, 2015

Five things I learned about running online webinars

Webinars have become a popular online format, especially for training and learning events. But have you ever participated in one of those sessions where you just sit in front of your screen while a presenter goes through the slides, with no idea of who else is attending or what questions have been typed in the chat? I personally feel very lonely in these types of webinars!

Knowing how I did not want to organize a webinar, last week I co-facilitated a webinar on “Using Dgroups in all its features” (see presentation below). This was the first of a pilot series of three webinars co-organized with Dgroups and ECDPM to support Dgroups users in learning the ins and outs of the Dgroups platform, the basics of online community building, and the tips and tricks that are at the heart of online facilitation.


I have to admit I’m very lucky to be working with good friends and colleagues Lucie Lamoureux and Ivan Kulis on the design and delivery of the overall Dgroups webinar series. I guess that knowing each other from the KM4Dev community - and sharing similar ideas in terms of effective knowledge sharing and facilitation - definitely helps in planning and running these online sessions.

But as this was not the first webinar I’ve designed and facilitated in recent months, I thought I’d share a list of five things I’ve learned about how best to conduct effective and engaging online webinars.

1. Plan in detail 

Even more than with the facilitation of face-to-face events, I think preparation is key to running online webinars. You don’t want to lose time fiddling with technology, or not knowing what should happen when. For me, this means:

  1. Developing a session design and storyboard document
    Before the session, we used Google Docs for the session design and storyboard, so we had all our links and references at hand and a clear plan of what should happen, when. It’s so easy to run long in a presentation, or to allow too long for Q&A, and find yourself having to catch up. The storyboard was our reference document to check where to speed up and where to pause, where to allow for more interaction and questions, and where to refer users to post-webinar interactions.
  2. Timing your presentations - and adding presenters notes
    I had to practice a couple of times and time myself to make sure I could fit my slides in the slot allocated for each part of the presentation. I also used the presenter’s notes to write down my script, to avoid losing my train of thought while presenting but also as a contingency measure: had my connection failed, one of the other co-facilitators could have continued delivering the presentation, by reading through the script on each slide.
  3. Preparing your room layouts and materials
    We used Adobe Connect as our webinar platform. One of the (many) great features in Adobe Connect is that you can create different layouts for the different parts of your session. So we had separate layouts for the presentation and discussion parts, for example with a larger chat window in the latter. All materials were already loaded in the room before the session, and ready to be displayed for each presentation segment.

2. Build interaction into the session design 

The session was designed to last 90 minutes - and I was aware it was content heavy. So we designed the webinar to alternate presentation segments (of max 15 minutes) with discussion sessions (of 10 to 12 minutes). But we also asked participants to use the chat and write down their questions as they emerged during the presentations. By using an open chat window, participants became presenters themselves, as they complemented the content of the slides with comments and additional tips or suggestions. Besides opening up the possibility of peer learning, this is also a great way to keep participants’ attention and engagement.

3. You need a facilitation team 

You cannot run an interactive webinar on your own. In our case, I was the main host and presenter, while Lucie managed the chat and the Q&A sessions and Ivan was the technical host, helping participants who experienced problems, with audio for example (very few, in reality). I believe this is the minimum you can think of in terms of roles and task division. Sure, for the next webinars we need to improve our teamwork, for example by making smoother transitions between team members, or from one part of the webinar to the next, but that comes with practice and better use of the back channels.

4. Not all back channels are equal… 

They are definitely not! In Adobe Connect, you have a presenter area on the screen which is visible only to meeting hosts and presenters. So as a back channel we used a note pod (as the various content areas are called in Adobe) placed in the presenter area. However, this was a bit fiddly - we ended up writing over each other or having to wait for one person to stop writing before another could start. Even more problematic was the moment Lucie lost her connection to the meeting room. We had not planned to also have a Skype chat open as a backup back channel in case something went wrong with Adobe. So thinking through all the possible options will help us identify better solutions next time.

5. Some participants will not come… 

We had a limit of 25 seats in the Adobe Connect room, so we kept participants’ registrations to that limit and had a few interested Dgroups users on a waiting list. But as always happens with a free online webinar, some people just didn’t show up, and it was difficult to bring in people from the waiting list after the session had started. So what we’ll probably do next time is not set any seat limit for registration, so anyone can register, while only the first 25 registered participants who actually join the room will be able to attend. This will prevent ending up with ‘empty’ seats in the room - and hopefully it will also be an incentive for participants to connect a few minutes before the start of the webinar, so it can actually start on time!

What are your top tips to organize effective and participatory online webinars? Let me know in the comments! 

Friday, March 27, 2015

Supermetrics: how to easily collect Analytics and YouTube data

Collecting and charting data from your YouTube and Google Analytics accounts can be a time-consuming process, and not a very enjoyable one, as I’m sure you are aware... But there’s some good news here. And it’s called Supermetrics.

What is Supermetrics? 

Supermetrics is a Google Drive add-on - an extension that you can add to your Google Sheets and Docs. Add-ons are similar to Google Apps Scripts but with some differences. If you’re curious to know more about this, have a look at this blog post, where the two are compared in detail and where you can also find a good overview of some useful add-ons.


With Supermetrics you can create queries to (semi-)automate your reporting from sources such as Google Analytics, YouTube, Twitter, Facebook, Google AdWords, etc. Data from these sources are added to a Spreadsheet (or Google Doc). You can then create live charts from these data, which are automatically updated once you refresh your Supermetrics queries. Supermetrics comes in free and pro versions.

The two main differences are the amount of data you can get from each query and whether you can schedule automatic refreshes of your queries, which is available only in the pro version (49 USD/month).

In the CARIAA program M&E dashboard, we used the free version of Supermetrics.

How to activate and use Supermetrics?


To use this tool, follow these simple steps:
  1. Activate Supermetrics - Navigate to the Add-ons button in your Google Sheets and browse the gallery to select Supermetrics.
  2. Set up your queries by 
    1. Connecting your data source and selecting your profiles - Launch the Supermetrics sidebar and connect the various sources you want to monitor. In our case, we only connected the Google Analytics and YouTube profiles that were shared with the Google Account we used.
    2. Selecting the date range you want to monitor and the metrics you are interested in - In our case, we set up queries for a specific time frame, but you can also have a year-to-date query, last month, etc. Out of the many Google Analytics and YouTube metrics you can collect, we limited the query to web sessions, users and pageviews for Analytics, and views for YouTube.
  3. Launch your queries and see your data - Once you’re all set, decide how you want to get your data, in a table or in the various default charts. Data will be downloaded and added to your spreadsheet.
  4. Refresh your queries - From the Supermetrics menu under your Add-ons tab, you can refresh all your queries with a click and update your results and charts.
  5. Modify and edit your queries - Launch Supermetrics from your Add-ons menu or, alternatively, click on Manage queries; this will open a worksheet where all your queries are listed. From here you can manage and modify them, for example changing the date range of a query.

Using Supermetrics has proved to be a very effective and efficient way to collect usage stats and traffic data for multiple Analytics profiles and YouTube channels at the same time, without having to waste time navigating in and out of these various accounts to get the individual data.

Note that, while I mentioned at the beginning of this post that this tool can also be used to collect data from Twitter, in our case we decided not to use Supermetrics for Twitter.

In the next post, I’ll explain why we took this decision, and what we used instead. In the meantime, if you haven’t tried it yet, give Supermetrics a shot and see for yourself how it works.

Tuesday, March 24, 2015

Learning - sharing what we know we know

Here's a terrible story whose details we'll hide. It’s from an excellent, well-regarded development agency where, around 1998, a smart, experienced project manager learned in a country programme that a particular approach didn’t work: it upset people and their lives and was a waste of money. As s/he recounted the experience, another equally smart, experienced person stood up and said s/he’d learnt the same lesson working in the same organisation in another country around 1989. And I later spoke to someone who works for the same organisation who was too embarrassed to admit in plenary that s/he had learnt the same lesson for the same organisation in another country in 2004!


Stories like this are dismayingly common, and not just in international development cooperation. DFID’s learning efforts, to take just one example, scored Amber/Red 1 in a 2014 assessment by the UK’s Independent Commission for Aid Impact (ICAI). So what can organisations do to learn better? This perennial question is at the centre of a review we’re doing with WaterAid UK on Knowledge Sharing and Learning. It overlaps with the other sanitation work we’re doing, KM in the Building Demand for Sanitation (BDS) programme. Three sub-questions are interesting to both projects:
  • Are we too cautious about saying what we know we know? 
  • How do we record what we know and have learned in ways that people will pay attention to? 
  • How do organisations develop cultures where it is “socially unacceptable not to learn”, as one grantee put it recently? 

Known knowns

On the first point, a conclusion from the terrible story is that the first smart, experienced person, who told the story, groaned as he learnt that the same error had been repeated in the same organisation. He suggested we don’t declare loudly and clearly enough what it is we know we know. We are often too tentative and vague, delivering high-level bullet point recommendations or simply not sharing our conclusions. As part of the BDS KM programme we're supporting a Learning Exchange where he is going to sit down with two others from two organisations and try to write down what it is they have all learnt, what they know they know (about Sanitation Marketing, in this instance). We're encouraging them to tell the story using a range of media, to try and make their ideas sing and dance.

We'll also be encouraging the group to produce content that makes people think. If there is a document that tells you how to do something, and doesn’t require you to think, then it's probably only a technical fix: important for sure, in specific contexts, but not necessarily generalisable nor stimulating to other people's learning. Meaningful outputs that might enable people to learn across contexts are those that require people to talk together, question and reflect on the basis of what they read/hear/see in the documentation - to learn socially.

But it’s not easy to pronounce on what we know we know. It’s quite a bold thing to do. It’s much easier to ask questions, be tentative. I tried in a long, excellent conversation about knowledge and doledge on the KM4Dev discussion list, and I still feel uneasy about being so definite. A better example is a great blog, "Do we learn enough and does learning lead to improved sector performance?" The authors are two more smart, experienced WASH specialists, and the blog reflects on learning from the recent BDS annual convening meeting in Hanoi. The authors described elsewhere how, when they first re-read what they had come up with, they were startled at how obvious a lot of it seemed. But the blog has been well received, possibly because by stating the obvious - statements about which they were confident - the authors are providing navigational markers by which other people can steer.

But it takes time - and a learning culture - to mainstream that kind of reflection and recording. To quote from the ICAI report on DFID: “DFID is not sufficiently integrating opportunities for continuous learning within day-to-day tasks. In particular, staff do not have enough time to build learning into their core tasks. DFID is not fully ensuring that the lessons from each stage of the delivery chain are captured, particularly in relation to locally employed staff, delivery agents and, most crucially, the beneficiaries. Heads of office do not consistently define a positive culture of learning".

We'll be addressing culture in the next blog.


1. [programme performs relatively poorly overall against ICAI’s criteria for effectiveness and value for money. Significant improvements should be made]

Friday, March 20, 2015

Power up your Google Sheets with Apps Scripts


Google Sheets and Docs are very powerful, flexible tools for data collection and analysis. But did you know that there's a lot more you can do with both Sheets and Docs, using free tools or just a bit of extra coding - even if you are not a programmer? Did you know you can:
  • Enable users to edit responses they have made in Google forms? 
  • Automatically copy (part of) data from one Sheet into another one? 
  • Simultaneously collect various metrics for your Google Analytics, YouTube and Twitter accounts?
  • Automatically track Twitter posts around a Twitter handle, hashtag or search term?
  • Automatically count the number of Twitter followers of various accounts and add them dynamically into a Google Sheet? 

In this post and the next ones, I'm presenting a few different options I’ve used to 'extend' Google Sheets, and how I used them in the development of a program M&E system and dashboard for IDRC.

Today I'll look specifically at two possible uses of Google Apps Script for Google Sheets.

Google Apps Script editor

About Google Apps Scripts 

Google Apps Script "is a JavaScript cloud scripting language that provides easy ways to automate tasks across Google products and third party services and build web applications."

With Apps Script there's quite a lot you can do, such as writing custom functions and creating macros and menus for Google Sheets. Google itself provides quite a lot of guidance on how to work with Apps Script, though this may not be easy for a total beginner.
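To give a flavour of those first two possibilities, here is a minimal sketch of my own (not taken from any of the scripts mentioned in this post): a custom function you could call from any cell as =WORDCOUNT(A1), and an onOpen() menu entry pointing at a hypothetical refreshData function that you would define yourself.

function WORDCOUNT(text) {
  // Custom function: can be used in any cell as =WORDCOUNT(A1).
  if (!text) return 0;
  return String(text).trim().split(/\s+/).length;
}

function onOpen() {
  // Adds a custom menu to the spreadsheet when it is opened.
  // 'refreshData' is a placeholder for a function you write yourself.
  SpreadsheetApp.getUi()
      .createMenu('M&E tools')
      .addItem('Refresh data', 'refreshData')
      .addToUi();
}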

Luckily, there are plenty of kind (and clever!) people out there who have developed Apps Scripts and made them available to others online. And those you can just use!

Use Apps Scripts to collect Forms "edit response links" 

In the M&E system and dashboard developed for the IDRC program, as we saw, part of the data collection is manual, with users inputting data for research outputs and pilots through a series of Google Forms. So what if users want to update or modify an existing entry?

If you are familiar with Google Forms, you probably know that responses can be collected into a Sheet. You may also know that you can set up your Form so that, after an entry is submitted, it sends an email to the person who contributed that submission. The email contains a link that the person can click in order to modify/edit the entry.

Well this is sure nice and useful!

But wouldn't it be better if the edit response links were also added to the Sheet where the responses are collected, nicely ordered in line with the relevant form entry?

You can do this with a Google Apps Script I've found browsing online.

To use this Apps Script, what you have to do is the following:
  • Click on Tools >> Script Editor in your destination Sheet (as in the image on the right);
  • In the Script Editor, copy this piece of code here
  • Change the parameters as indicated in the code; 
  • Save the script and run it; 
  • Click on Resources >> Current project's triggers and set the script to trigger at every new Form entry; 
  • Check that the edit response links are added in the right column on the destination Sheet.
Done! You set it up once and the script will continue to run and collect edit response links every time new responses are added via the form.
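For reference, here is a minimal sketch of what such a script can look like - it is not the exact script linked above, and the FORM_ID, SHEET_NAME and EDIT_URL_COLUMN values are placeholders you would need to adjust. It walks through all the form responses and writes each one's edit URL into the destination Sheet, assuming the sheet rows are in the same order as the responses.

var FORM_ID = 'YOUR_FORM_ID';           // ID of the Google Form
var SHEET_NAME = 'Form Responses 1';    // destination sheet collecting the responses
var EDIT_URL_COLUMN = 10;               // column where the edit links should go

function addEditResponseUrls() {
  var form = FormApp.openById(FORM_ID);
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName(SHEET_NAME);
  var responses = form.getResponses();
  // Row 1 holds the headers; responses are assumed to be in submission order,
  // matching the rows written by the Form.
  for (var i = 0; i < responses.length; i++) {
    sheet.getRange(i + 2, EDIT_URL_COLUMN).setValue(responses[i].getEditResponseUrl());
  }
}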

Importing data from a different spreadsheet using scripts

While this Apps Script is very specific to when you use Forms, there are a few others that can come in handy on more occasions. For example, to automatically copy (part of the) data from one Sheet into another one.

While you can actually do this using in-cell functions too, as nicely explained in this post, I've found that approach not very reliable and not always updating automatically. So I would recommend taking the slightly more technical route and using Apps Scripts. You can find the link to the code and the explanation of how to insert this script.
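As an illustration only (not the script linked above), this is roughly what such a copy script can look like; the 'SOURCE_SPREADSHEET_ID', the 'Log' and 'Dashboard data' sheet names, and the header-row assumption are all mine, and you would typically attach a time-driven trigger to keep the copy fresh.

function copyDataBetweenSheets() {
  // Source: another spreadsheet, identified by its ID (the long string in its URL).
  var source = SpreadsheetApp.openById('SOURCE_SPREADSHEET_ID').getSheetByName('Log');
  // Target: a sheet in the spreadsheet this script is bound to.
  var target = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Dashboard data');

  if (source.getLastRow() < 2) return; // nothing below the header row to copy

  // Read everything below the header row of the source sheet...
  var values = source.getRange(2, 1, source.getLastRow() - 1, source.getLastColumn()).getValues();

  // ...then clear the old copy and write the fresh values into the target sheet.
  target.getRange(2, 1, target.getMaxRows() - 1, target.getMaxColumns()).clearContent();
  target.getRange(2, 1, values.length, values[0].length).setValues(values);
}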

Give it a try and see for yourself how it works. And let me know in the comments here if you are using other useful Apps Scripts that are worth sharing.

Thursday, March 12, 2015

How to create an M&E dashboard using Google Apps

Last year we did a fair amount of work with IDRC to set up a KM platform for a new collaborative research program. As a follow-up to that project, we developed an M&E system for the program, using the same technology infrastructure used to build the platform itself - Google Apps for Business.

After last week’s case study on building the R4D dashboard with Tableau Public, in this post I’m presenting how to set up an M&E system and dashboard using a combination of various Google Apps and free third party tools. This post is very much an overview of the process and the final product we delivered. In the next blog posts in this series I’ll look at the specific tools used from a more technical perspective.
M&E Dashboard

Who needs a dashboard, and why? 


This IDRC program is made up of 4 different research consortia plus the IDRC program team in Canada. Each consortium works on a specific issue related to climate change and adaptation and, in doing so, brings together geographically dispersed organizations. Since collaboration is the basis of the program, the M&E system had to follow the same principle. Our brief was to “design platform-based, collaborative tools to collect monitoring data on up to eight key indicators in the Monitoring Framework.”

Ultimately, these data had to be brought together into an M&E dashboard that could be easily shared with donors and program leads as a link, or quickly printed to PDF at regular intervals. As with the R4D dashboard, this dashboard too had to provide a “snapshot of progress against key indicators in the program's monitoring framework using data entered by consortia and the IDRC Team.”

So what are these indicators?

What to measure? Theory of change and monitoring framework 

The program M&E working group had already produced a solid Theory of Change with three clear objectives; for each, they had defined the dimensions and potential metrics to be included in the M&E system. This made our job easier, as it was clear from the outset what had to be measured and for what purposes. So we just had to help the team unpack the various metrics and dimensions a bit, and define the exact indicators and values to be tracked in the system:
  • Research outputs and pilots, including indication of type of outputs, authorship (gender and country), quality of outputs and their accessibility (whether peer-reviewed and/or openly accessible on the web), etc... 
  • Web traffic, social media and engagement data, such as web sessions and downloads, Twitter followers and number of conversations and Tweeps around specific accounts and Hashtags, media tracking, number of events and participants rating, etc. 
  • Grants and awards distributed, including gender and location of recipients.
M&E Dashboard

When and where? Data collection process and storage 

While the system (and resulting dashboard) was planned to be updated quarterly, we agreed on the principle that data collection would be automated when possible, and manual when other solutions were not at hand. As a result:
  • Data around web traffic and social media are collected automatically or semi-automatically, using a series of third party tools and applications (I’ll talk about this specifically in the next blog post) 
  • Data around research outputs, pilots, grants and awards are entered via users’ submission forms, using Google Forms. While forms can (potentially) be submitted by anyone who has a user account on the KM platform, in reality specific users for each consortium are responsible for this process, while others are responsible for quality control, to ensure that entries are complete and there are no duplicates. 
Regardless of how the data are collected - manually, automatically or semi-automatically - they all feed into one of the 3 separate log files set up for the three objectives in the Theory of Change. Google Spreadsheets are used for these log files, and the appropriate sharing and editing permissions are in place.

How to display the data? Platform, design, prototype and production 

On the basis of an initial sketch of the dashboard produced by the IDRC team, we populated the log files with dummy data and produced two different prototypes, one using Tableau Public and one using Google Charts, publishing them in a Google Site. We agreed to use Google tools to avoid adding another layer of complexity to the system and to keep it all inside Google Apps. Additionally, as the charts are generated from the log files, when the log files are updated so are the charts on the live site, which is a great short-cut, cutting down the work of updating the dashboard.
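As an aside, if you ever want to script that charting step rather than create charts by hand in Sheets, here is a sketch of how an embedded chart could be generated from a log file with Apps Script. It is only an illustration: the 'Web traffic log' sheet name and the A1:C13 range are assumptions of mine.

function buildTrafficChart() {
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Web traffic log');
  var chart = sheet.newChart()
      .setChartType(Charts.ChartType.LINE)
      .addRange(sheet.getRange('A1:C13'))   // e.g. month, web sessions, users
      .setPosition(2, 5, 0, 0)              // anchor the chart at row 2, column E
      .setOption('title', 'Web sessions and users per month')
      .build();
  sheet.insertChart(chart);
}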

Similarly to the R4D dashboard, this program dashboard has tabbed navigation at the top, with one tab for each of the objectives monitored in the framework. This way we could present objective-specific charts, tables and figures in a clean, uncluttered interface.

Additionally, the main tab of the dashboard presents what we called ‘curated content’, such as a selection of recent publications, blog posts or key events that are hand picked by the dashboard administrators to highlight specific information.
M&E Dashboard

What next? Possible platform iteration and next blog posts 

This dashboard went live at the beginning of 2015 and its second update is planned for the end of this quarter, so it’s too soon to evaluate it and think about possible iterations. However, feedback received from users has been positive so far and the system delivers the required information to the different target users.

In my opinion, a possible way to improve it would be to add filters and controls to the charts currently published on the dashboard, so that users can interact with them, browse specific periods of time, make comparisons, and get more out of this visual representation of the data.

Doing this requires working with Google Apps Script, a JavaScript cloud scripting language that provides easy ways to automate tasks across Google products and third party services. I’m not a programmer, but I like learning new things and finding solutions that others have already implemented. So in the current version of this dashboard, too, I’ve made use of Google Apps Scripts to collect data and to copy them from one spreadsheet into another.

If you are interested in which Apps Scripts I’ve been using and what they can do for you, subscribe to the blog and sit back till you get my next post in this series - or share your experience in the comments below.

Thursday, March 05, 2015

R4D dashboard: Visualize web access to DFID funded research

Collecting traffic and usage statistics for a website or portal can be a very time consuming and tedious task. And in most cases you end up compiling monthly or quarterly reports for managers and donors that will be shared as email attachments - and at best skimmed, since there is so much information. But there are smarter ways to handle this process and bring your data to life, as I explained in my previous blog.

Our case study is the R4D portal, a free access on-line portal containing the latest information about research funded by DFID, including details of current and past research in over 40,000 project and document records. Until 2013 we were part of the team supporting and managing the site.

As part of our work packages, we developed an online, interactive visualization of web traffic and usage of the R4D portal and its social media channels. The R4D dashboard, built using Tableau Public, is still updated and in use. However, following the termination of our support contract, it hasn't been iterated or improved since 2014.

This post presents the process we followed to develop the dashboard, the tools used and the lessons learned in what was very much a learning-by-doing journey.

 Why develop the R4D dashboard? 

The collection of usage and traffic data for R4D used to be pretty standard: a series of Excel files updated monthly to generate charts and graphs. These were then put together in a PDF report and shared with project leads at DFID. The idea of developing an online, public dashboard of R4D web traffic and usage instead was inspired by the excellent work of Nick Scott and ODI, which he shared with us during a Peer Exchange session we organized back in 2012.

Donor organisations such as DFID collect a lot of statistics and indicators, but these are often kept within projects and programmes and not made available to all staff, as was the case for R4D. So the reason behind the R4D dashboard was primarily to open up our stats and make them more accessible to anybody interested in them, not just the people who had sign-off on the project.

Also, by encouraging a more open approach to web stats, the idea was to have more points of comparison: it is difficult to evaluate how well your website is doing if you can only compare against yourself. Being able to see how much traffic similar websites are generating helps you assess your own effort and performance.


So what did we do?

Process-wise, we pretty much followed the steps outlined in my previous blog posts. With the primary audience well in mind, we started to select the metrics to include in the dashboard:
  • Website stats: Visits and visitors; referring sites; visitors by country; PDF downloads and top pages. 
  • RSS feed subscribers; Twitter clickthroughs and Facebook Insights data (later removed)
  • Number of outputs added to the R4D database (by type, for example open access articles, peer-reviewed articles, etc.) 
We decided that it was feasible to collect this data monthly as xls or csv files exported from the site(s) and saved into a shared Dropbox folder. This was the most effective way, as data collection was decentralized, with different people working on different platforms. With our limited budget, it was not possible to automate the data collection process, so this was entirely manual.

Software platform selection took quite some time in the initial phase of the process. We selected Tableau Public as our dashboard platform, and then had to invest more time in learning its features and functionality. But it was totally worth it!


Why Tableau? 

Tableau Public is free software that allows anyone to connect to a spreadsheet or file and create interactive data visualizations for the web. There are many tutorials out there if you just Google for them, so I’m not going to tell you here how it works in detail. But here are my top reasons for using Tableau Public:
  • It's free! Well, that's a good reason already if you don't have resources to invest in business intelligence or visualization software - normally the costs for these are steep and way outside the budget of the organizations we work with; 
  • It's intuitive. You don't need to be an expert to use the tool. The interface is very simple (drag and drop) and you can easily find your way around. 
  • It's rich and deep. There are so many charts to choose from, and you can play around with different visualizations until you are happy with the result. It also goes much deeper than Excel in terms of analysis and interaction.


What did we learn? 

Besides learning how to use Tableau Public itself, here are the main things I learned during and around the process of developing the R4D dashboard:
  • Google Analytics is the industry standard - but it tends to under-count your traffic.
    We ran two different website analytics packages on the main R4D portal - Google Analytics (GA) and SmarterStats - and noticed a huge difference in the results, with GA massively under-counting visits and visitors. So it's always worth installing another tracker to be on the safe side. 
  • Updating Tableau is quick - but getting the data manually isn't
    Once your dashboard is set up, the process of updating it with new data is rather quick: just a few clicks and you are done. However, data collection from the various sources in our case was mostly manual, and that can be time consuming (and not much fun either!). If I were still working on the project, I’d look into ways to automate data collection as much as possible - while also looking at what additional (useful) data I could collect in an automated way. 
  • Build it once - and then you *must* iterate
    When you're done building your dashboard, you're actually not done. We had a couple of iterations before arriving at the product that is now online. And I'm sure it would be different now had the project continued. This is because you have to evaluate whether the visualizations in the dashboard are actually useful and provide actionable insights that can inform your strategy. Or simply because the software keeps evolving and offers new possibilities that were not there before.

In the next post in this series I'll present a different approach to developing an M&E dashboard, this time using a combination of Google Forms, Sheets and Charts, together with Google Apps Scripts and Google Docs add-ons.

In the meantime, if you have experience with Tableau or use other tools to create interactive dashboards, why not share it in the comments here?