You are browsing the archive for OD4D.

Bridging the gap between journalism and data analysis

Chikezie Omeje - October 5, 2017 in Data Journalism, OD4D

This blogpost was written by Chikezie Omeje, Kunle Adelowo and Vershima Tingir as part of the Open Data for Development (OD4D) embedded fellowship programme. This recently initiated programme is designed to build the organisational capacity of civil society organisations to use data effectively by raising the level of data literacy of the staff of the partner organisation(s), supporting the organisation(s) to deliver a specific data project, and developing an initial data strategy for the organisation's future engagement. Chikezie Omeje is a journalist at the International Centre for Investigative Reporting (ICIR); Kunle Adelowo and Vershima Tingir are developers at the Public and Private Development Centre (PPDC). They are all based in Abuja. OD4D is a global network of leaders in the open data community, of which Open Knowledge International forms part, working together to develop open data solutions around the world. For this fellowship, the Public and Private Development Centre (PPDC) will develop the International Centre for Investigative Reporting's (ICIR) capacity to investigate and report on open contracting-related stories.

The threat to traditional journalism

Older journalists will agree that journalism is no longer what it used to be. It is rapidly changing. Within the past decade, the profession has been disrupted to the extent that the question of who is a journalist is now difficult to answer. Technology has democratised journalism, putting it within the reach of anyone who is interested. The rise of social media and digital publishing platforms has made it easy for those who were formerly referred to as the audience to become news producers. Members of the audience who are interested in journalism can now do the work of a journalist comfortably. The traditional line between journalists and audience has been blurred: anyone with the digital tools can produce and publish news without the help of a journalist.

This disruption in the media industry presents both a threat and an opportunity for journalists. The threat can be seen in the declining revenue of legacy media organisations, which means traditional journalists now stand to lose their jobs. The old business model of journalism is no longer sustainable, and journalists face fierce competition from a multitude of individual online publishers. The implication is that being just a journalist who covers and writes news stories is no longer enough; anybody who is willing and able can do that now. To survive the existential threat facing traditional journalism, journalists need to build new skills that were not even taught in journalism schools a decade ago. The emergence of buzzwords such as "tech-savvy journalist" and "data journalist" in the newsroom is evidence of this shift. Every journalist now needs the digital skills that 21st century journalism demands. As ordinary citizens are increasingly able to perform the work of journalism, a professional journalist needs to take further steps to acquire the necessary skills beyond the old concept of journalism.
A journalist must therefore have both reporting and technical skills. Among these is the ability to process, analyse and visualise data. Despite the increasing amount of information now available to citizens, people are still not adequately informed on critical issues that sit buried in data. This is why data journalism is receiving attention around the world. Data analysis and visualisation are useful skills for today's journalist: a lot of critical information is buried in data, and a journalist must now have the skills (or access to the skills) to harness and report it. When journalists have data skills, they can produce high-value, impactful information in a timely way. But many journalists question why they should acquire these technical skills, complaining that the skills being demanded of them are too technical and complicated. For example, some want to know why they should learn certain aspects of computer programming, arguing that it is too difficult. The truth is that a growing number of digital tools have made these essential skills easier to acquire, which lowers the initial technical barrier for most journalists. To become proficient in data journalism, there are three essential technical skills we think journalists need.

Data Gathering, Conversion and Extraction Techniques

Reporters often get information from different sources, and it may arrive in formats that are not directly usable until converted. Formats such as PDF, HTML and hard-copy documents make it hard to gather data in a structured and reusable way, so data presented in them has to be converted to more flexible, structured and reusable formats such as Excel, Word or CSV. There are tools that make this conversion easier, and they require minimal technical skill to use: Tabula extracts tables from PDFs to CSV, online optical character recognition (OCR) services convert tables in scanned documents to CSV, and services such as Smallpdf handle other PDF conversions.
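Where no off-the-shelf tool fits, a few lines of scripting can do the same conversion. Below is a minimal sketch in Python, using only the standard library, that pulls a table out of an HTML page and writes it to CSV; the table contents and column names are invented for illustration.

```python
import csv
import io
from html.parser import HTMLParser

class TableExtractor(HTMLParser):
    """Collects the text of every <td>/<th> cell, row by row."""
    def __init__(self):
        super().__init__()
        self.rows = []       # completed rows
        self._row = []       # cells of the row being read
        self._cell = []      # text fragments of the cell being read
        self._in_cell = False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._in_cell = True
            self._cell = []

    def handle_endtag(self, tag):
        if tag in ("td", "th"):
            self._in_cell = False
            self._row.append("".join(self._cell).strip())
        elif tag == "tr" and self._row:
            self.rows.append(self._row)

    def handle_data(self, data):
        if self._in_cell:
            self._cell.append(data)

# invented example page: a small contracts table
html = """<table>
<tr><th>Contractor</th><th>Amount</th></tr>
<tr><td>Acme Ltd</td><td>1200000</td></tr>
</table>"""

parser = TableExtractor()
parser.feed(html)

out = io.StringIO()
csv.writer(out).writerows(parser.rows)
print(out.getvalue())
```

The same parser works on a saved web page; for PDFs you would still reach for a dedicated extractor such as Tabula.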

Screenshot of Tabula interface

Screenshot of Online OCR web application

 Data cleaning tools

After gathering, conversion and extraction, you will have all the data you want, and more, at your fingertips. Often the data you are looking for comes inside a large dataset, and what you actually need is only a small portion of it. So another skill you will need in order to get exactly what you are looking for is the use of data cleaning tools. Data cleaning is the process of correcting errors in a dataset by removing, adding or rearranging entries. For this purpose, the go-to tool is Microsoft Excel. It is very powerful and handles tasks such as sorting, filtering, simple maths and text functions, pivot tables and data validation.
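When a file grows too large to clean by hand, the same sorting and filtering that Excel offers can be scripted. A sketch in Python using only the standard library; the dataset, column names and cleaning rules are all invented for illustration:

```python
import csv
import io

# invented example: messy project spending data
raw = """project,state,amount
 Road rehab ,Kogi,5000000
Borehole,kogi,
Clinic upgrade,Lagos,1200000
"""

rows = list(csv.DictReader(io.StringIO(raw)))

cleaned = []
for r in rows:
    r["project"] = r["project"].strip()      # trim stray whitespace
    r["state"] = r["state"].strip().title()  # normalise capitalisation
    if not r["amount"]:                      # drop rows missing a value
        continue
    r["amount"] = int(r["amount"])
    cleaned.append(r)

# sort by amount, largest first: the scripted equivalent of Excel's Sort
cleaned.sort(key=lambda r: r["amount"], reverse=True)
for r in cleaned:
    print(r["project"], r["state"], r["amount"])
```

Each step mirrors an Excel operation (TRIM, PROPER, filtering blanks, Sort), but the script can be re-run unchanged every time a new version of the dataset arrives.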

Sort and filter buttons on Microsoft Excel

Visualisation tools

So now you have your data and it makes sense to you, but your job as a journalist is to gather this information and present it to your audience. Among your audience are people who like numbers: they want to see the exact digits in their rawest form. Others are suckers for aesthetics: they want colours and animations that tell stories. For the latter, the solution is visualisation, and as you would guess, data visualisation means transforming datasets and presenting them in graphical form. A typical example would be creating a bar chart of the annual salary received by each employee of an organisation. Simple tools for creating data visualisations include Microsoft Excel and Google Charts.

To be a tech-savvy journalist, you need to step out of your comfort zone and acquire these essential skills. Journalism is changing rapidly, and nobody has a complete idea of how it will be practised in the next decade. This change will not slow down as long as technologies keep emerging. The internet has made basic information readily and easily available: anybody with a computer and internet access can start a blog and become a journalist, which has lowered the value of basic everyday information. Journalists therefore have to go the extra mile in using technology to do more factual reporting. Journalism is at the mercy of technology, and those who cannot master these new technical tools cannot report the more meaningful, factual and high-value information. The worst thing that can happen to a journalist is to become outdated or irrelevant to the new demands of the profession.
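As a sketch of the Google Charts route mentioned above, a short script can turn a dataset into a self-contained HTML page with a bar chart. The salary figures are invented; the loader URL and `google.visualization` calls follow the publicly documented Google Charts API, so treat the details as an approximation to check against the current documentation.

```python
import json

# invented example data: annual salary per employee
salaries = [("Ada", 48000), ("Chinedu", 52000), ("Ngozi", 61000)]
table = [["Employee", "Annual salary"]] + [list(row) for row in salaries]

page = f"""<!DOCTYPE html>
<html><head>
<script src=""></script>
<script>
google.charts.load('current', {{packages: ['corechart']}});
google.charts.setOnLoadCallback(drawChart);
function drawChart() {{
  var data = google.visualization.arrayToDataTable({json.dumps(table)});
  var chart = new google.visualization.BarChart(document.getElementById('chart'));
  chart.draw(data, {{title: 'Annual salary by employee'}});
}}
</script>
</head><body><div id="chart"></div></body></html>"""

# write a page you can open in any browser
with open("salaries.html", "w", encoding="utf-8") as f:
    f.write(page)
```

The appeal of this approach is that the output is a single HTML file: it can be emailed to an editor or dropped into a CMS without any server-side setup.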

Impact Series: Improving Data Collection Capacity in Non-Technical Organisations

David Selassie Opoku - June 5, 2017 in OD4D, Open Data

Open Knowledge International is a member of Open Data for Development (OD4D), a global network of leaders in the open data community, working together to develop open data solutions around the world. In this blog, David Opoku of Open Knowledge International talks about how the OD4D programme’s Africa Open Data Collaboration Fund and  Embedded Fellowships are helping build the capacity of civil society organisations (CSOs) in Africa to explore the challenges and opportunities of becoming alternative public data producers.

Nana Baah Gyan was an embedded fellow who worked with Advocates for Community Alternatives (ACA) in Ghana to help with their data needs.


Due to the challenge of governments providing open data in Africa, civil society organisations (CSOs) have begun to emerge as alternative data producers. The value these CSOs bring includes familiarity with the local context or the specific domain where data may be of benefit. In some cases, this new role for CSOs serves to provide additional checks and verification for data that is already available, and in others to provide entire sets of data where none exist. CSOs now face the challenge of building their own skills to effectively produce public data that will benefit its users. For most CSOs in low-income areas, building this capacity can be slow, logistically intensive, and expensive.

Figure 1: CSOs are evolving from traditional roles as just data intermediaries to include producers of data for public use.

Through the Open Data for Development (OD4D) program, Open Knowledge International (OKI) sought to learn more about what it takes to enable CSOs to become capable data collectors. Using the Africa Open Data Collaboration (AODC) Fund and the OD4D embedded fellowship programmes, we have been exploring the challenges and opportunities for CSO capacity development to collect relevant data for their work.

Our Solution

The AODC Fund provided funding (US$15,000) and technical support to the Women Environmental Programme (WEP) team in Abuja, Nigeria, which was working on a data collection project aimed at transparency and accountability in infrastructure and services for local communities. WEP was supported through the AODC Fund in learning how to design the entire data collection process, including recruiting and training the data collectors, selecting the best data collection tool, analysing and publishing the findings, and documenting the entire process.

Figure 2: Flowchart of a data collection process. Data collection usually requires several components or stages that make it challenging for non-technical CSOs to easily implement without the necessary skills and resources.

In addition, the embedded fellowship programme allowed us to place a data expert in the Advocates for Community Alternatives (ACA) team for 3 months to build their data collection skills. ACA, which works on land issues in Ghana, has been collecting data on various community members and their land. Their challenge was building an efficient system for data collection, analysis and use. The data expert has been working with them to design and test this system and train ACA staff members in using it.

Emerging Outcomes

Through this project, there has been an increased desire within both WEP and ACA to educate their staff members about open data and its value in advocacy work. Both organisations have learned the value of data and now understand the need to develop an organisational data strategy. This is coupled with an acknowledgement of the need to strengthen organisational infrastructure (such as better emailing systems and data storage) to support this work. The hope is that both organisations will go forward with greater knowledge of the importance of data, and with new skills in how to apply it in practice. WEP, for instance, has since collected and published the dataset from its project and is now making use of Kobo Toolbox, along with other newly acquired skills, in its new projects. ACA, on the other hand, is training more of its staff members with the Kobo Toolbox manual that was developed, and is exploring other channels to build internal data capacity.


These two experiences have shed some more light on the growing needs of CSOs to build their data collection capacity. However, the extent of the process as depicted in Figure 1 shows that more resources need to be developed to enhance the learning and training of CSOs. A great example of a beneficial resource is the School of Data’s  Easy Guide to Mobile Data Collection. This resource has been crucial in providing a holistic view of data collection processes to interested CSOs. Another example is the development of tools such as the Kobo Toolbox, which has simplified a lot of the technical challenges that would have been present for non-technical and low-income data collectors.

Figure 3: CSO-led data collection projects should be collaborative efforts with other data stakeholders.

We are also learning that it is crucial to foster collaborations with other data stakeholders in a CSO-led data collection exercise. Such stakeholders could include academic institutions for methodology research and design, national statistics offices for data verification and authorisation, civic tech hubs for technical support and equipment, telecommunication companies for internet support, and other CSOs for their contextualised experiences in data collection. Learn more about this project:


Daniel Fowler - May 30, 2017 in Events, Frictionless Data, OD4D, Open Spending

The third manifestation of everyone’s favorite community conference about data—csv,conf,v3—happened earlier this May in Portland, Oregon. The conference brought together data makers/doers/hackers from various backgrounds to share knowledge and stories about data in a relaxed, convivial, alpaca-friendly (see below) environment. Several Open Knowledge International staff working across our Frictionless Data, OpenSpending, and Open Data for Development projects made the journey to Portland to help organize, give talks, and exchange stories about our lives with data. Thanks to Portland and the Eliot Center for hosting us. And, of course, thanks to the excellent keynote speakers Laurie Allen, Heather Joseph, Mike Bostock, and Angela Bassa who provided a great framing for the conference through their insightful talks. Here’s what we saw.

Talks We Gave

The first priority for the team was to present on the current state of our work and Open Knowledge International's mission more generally. In his talk, Continuous Data Validation for Everybody, developer Adrià Mercader updated the crowd on the launch and motivation of our new continuous data validation service:

It was a privilege to be able to present our work at one of my favourite conferences. One of the main things attendees highlight about csv,conf is how diverse it is: many different backgrounds were represented, from librarians to developers, from government workers to activists. Across many talks and discussions, the need to make published data more useful to people came up repeatedly. Specifically, how could we as a community help people publish better quality data? Our talk presented what we think will be a dominant approach to this question: automated validation. Building on successful practices in software development like automated testing, the service integrates with the data publication process to allow publishers to identify issues early and ensure data quality is maintained over time. The talk was very well received, and many people reached out to learn more about the platform. Hopefully, we can continue the conversation to ensure that automated (frictionless) data validation becomes the standard in all data publication workflows.

David Selassie Opoku presented When Data Collection Meets Non-technical CSOs in Low-Income Areas:

csv,conf was a great opportunity to share highlights of the OD4D (and School of Data) team's data collection work. The diverse audience seemed to really appreciate insights on working with non-technical CSOs in low-income areas to carry out data collection. In addition to highlighting the lessons from the work and their potential benefit to other regions of the world, I got to connect with data literacy organisations such as Data Carpentry, who are currently expanding their work in Africa and could help foster potential data literacy training partnerships.
As a team working with CSOs in low-income regions such as Africa, School of Data stands to benefit from continuing conversations with data "makers" in order to present potential use cases. A clear example I cited in my talk was Kobo Toolbox, which continues to mitigate several daunting challenges of data collection through abstraction and simple user interface design. Staying in touch with the csv,conf community may surface more such scenarios, which could lead to the development of new tools for data collection.

Paul Walsh, in his talk titled Open Data and the Question of Quality (slides), talked about lessons learned from working on a range of government data publishing projects and what we can do as citizens to demand better quality data from our governments.

Talks We Saw

Of course, we weren't there only to present; we were there to learn from others as well. Through our Frictionless Data project, we had been lucky to be in contact before the conference with various developers and thinkers around the world who also presented talks. Eric Busboom presented Metatab, an approach to packaging metadata in spreadsheets. Jasper Heefer of Gapminder talked about DDF, a data description format and associated data pipeline tool to help us live a more fact-based existence. Bob Gradeck of the Western Pennsylvania Regional Data Center talked about data intermediaries in civic tech, a topic near and dear to our hearts at Open Knowledge International.

Favorite Talks

  • “Data in the Humanities Classroom” by Miriam Posner
  • “Our Cities, Our Data” by Kate Rabinowitz
  • “When Data Collection Meets Non-technical CSOs in Low Income Areas” by David Selassie Opoku
  • “Empowering People By Democratizing Data Skills” by Erin Becker
  • “Teaching Quantitative and Computational Skills to Undergraduates using Jupyter Notebooks” by Brian Avery
  • “Applying Software Engineering Practices to Data Analysis” by Emil Bay
  • “Open Data Networks with Fieldkit” by Eric Buth
  • “Smelly London: visualising historical smells through text-mining, geo-referencing and mapping” by Deborah Leem
  • “The Art and Science of Generative Nonsense” by Mouse Reeve
  • “Data Lovers in a Dangerous Time” by Brendan O’Brien

Data Tables

This csv,conf was the first to have a dedicated space for working with data hands-on. At past events, attendees left with their heads buzzing full of new ideas, tools, and domains to explore, but had to wait until returning home to try them out. This time we thought: why wait? During the talks, we ran a series of hands-on workshops where facilitators could walk through a given product and chat about the motivations, challenges, and other interesting details you might not normally get to in a talk. We also prepared several data “themes” before the conference, meant to bring people together around a specific topic. In the end, these themes proved a useful starting point for several of the facilitators and provided a basis for a discussion on cultural heritage data, following on from a previous workshop on the topic.

The facilitated sessions went well. Our own Adam Kariv walked through Data Package Pipelines, his ETL tool for data based on the Data Package framework. Jason Crawford demonstrated Fieldbook, a tool for easily managing a database in-browser as you would a spreadsheet. Bruno Vieira presented Bionode, going into fascinating detail on the mechanics of Node.js Streams. Nokome Bentley walked through a hands-on introduction to accessible, reproducible data analysis using Stencila, a way to create interactive, data-driven documents using the language of your choice to enable reproducible research. Representatives from an Austin startup we worked with on a Frictionless Data integration also demonstrated uploading datasets to their platform. The final workshop was conducted by several members of the Dat team, including co-organizer Max Ogden, with a super enthusiastic crowd. Competition from the day's talks was always going to be fierce, but it seems that many attendees found some value in the more intimate setting provided by Data Tables.


If you were there at csv,conf in Portland, we hope you had a great time. Of course, our thanks go to the Gordon and Betty Moore Foundation and the Sloan Foundation for enabling me and my fellow organizers John Chodacki, Max Ogden, Martin Fenner, Karthik, Elaine Wong, Danielle Robinson, Simon Vansintjan, Nate Goldman and Jo Barratt, who all put so much personal time and effort into bringing this together. Oh, and did I mention the Comma Llama Alpaca? You, um, had to be there.

Collaborating For A Greater Good

Nana Baah Gyan - April 19, 2017 in OD4D

Open Knowledge International is a member of Open Data for Development (OD4D), a global network of leaders in the open data community, working together to develop open data solutions around the world. In this blog, Nana Baah Gyan talks about his work carrying out an embedded data fellowship with Advocates for Community Alternatives (ACA) in Ghana as part of the OD4D programme.  Generally, information needs often require different strategies in order to meet them satisfactorily. And this is even more the case in circumstances where technology know-how needs to be taught and gradually introduced to fit a particular setting of would-be technology adopters. Often, this case presents its own unique and (quite frankly speaking) exciting challenges for technology enthusiasts. It opens up otherwise largely unexplored avenues for technology innovation and learning in new communities. And this is exactly how I thought of it when I was first invited by Open Knowledge International (OKI) to be part of the  Open Data for Development (OD4D) embedded fellowship  working with Advocates for Community Alternatives (ACA) in Ghana for three months.

Some of the ACA team members in Ghana. From right to left: Nimako, Naomi and Jonathan.

Registered as a non-governmental organisation (NGO), ACA has as its main task, among other things, to help rural communities (through trainings and frequent community engagements) independently explore for themselves possible alternative livelihoods, especially in situations where these communities are threatened by big mining firms. In Ghana, the story has not always been a pleasant one when such a firm shows up at the door of a community with heavy machinery and equipment, ready to mine. Often, mining companies have ended up destroying livelihoods by taking away farmlands, polluting drinking water and significantly altering the way of life of people in these communities for the worse. However, members of four villages in the Brong Ahafo region of Ghana actively resisted this, and their resolve to keep one such mining giant out of their communities presented, by itself, a fascinating story which attracted the engagement of ACA.

The OKI Fellowship

The fellowship took the form of being embedded as a data expert in an existing organisation, and comprised a three-month contract with ACA facilitated by Ghana's representative of OKI. OKI made all the pre-contract engagements and agreements for the project. My main role was to identify the data needs of ACA and to suggest and/or implement open standards-compliant tools to meet them, including training staff to use these tools. ACA was just at the point of exploring the proper collection and use of data in their work. They had realised the important role proper data management had begun to play in their work, and had seen that, in order to succeed in their efforts in the villages, the timely collection, collation, delivery and analysis of information from the field was essential. They also saw this as crucial for monitoring and evaluating interventions over time, and for ensuring data integrity for their analysis and reporting needs.

Identifying Appropriate Tools

Right from the outset, R came up as the tool of choice for working with data, particularly because of its vast pool of packages for different kinds of analysis and modelling. But about a week into the fellowship this had to be reconsidered, for a number of reasons. To deploy an R application for the needs described above, not only did R offer more than was wanted, it also presented unique challenges whose resolution would have required significant investment in time and technical infrastructure, far more than necessary for a small organisation like ACA. For this fellowship, KoBo offered far more desirable advantages, which made it the tool of choice. KoBo's biggest advantage over R in this project was its support for offline form filling, which was especially useful in the conditions that prevailed in the areas of ACA's interest. With its simple drag-and-drop interface for form design, and support for both mobile and non-mobile devices, KoBo presented all that was needed for ACA's work. For those reasons, KoBo became the tool for designing and sending out questionnaires and interviewing stakeholders in the villages, with responses submitted onward to ACA's head office in Accra. Its mobile support meant that end-users needed only SIM-enabled tablets to work with.

Training Stakeholders

Although KoBo is extremely useful and easy to use, it is still not widely known, especially among non-technical users. Stakeholders in the project therefore had to be trained to use the tool. This included taking them through registering on the KoBo platform, designing and building questionnaire forms, deploying forms for use in the field, and analysing data sent by field workers on the KoBo platform.
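Besides the drag-and-drop builder, KoBo forms can be designed as an XLSForm, the spreadsheet standard the platform can import. The sketch below writes a survey sheet as CSV using Python's standard library; the `type`/`name`/`label` column headings follow the XLSForm convention, while the questions themselves are invented for illustration (a real ACA form would reflect their own fieldwork).

```python
import csv

# Illustrative XLSForm 'survey' sheet: each row is one question.
survey = [
    ("type", "name", "label"),
    ("text", "respondent_name", "Name of respondent"),
    ("select_one yes_no", "owns_land", "Does the respondent own farmland?"),
    ("integer", "plot_size_acres", "Size of the plot in acres"),
    ("geopoint", "plot_location", "GPS location of the plot"),
]

with open("survey.csv", "w", newline="", encoding="utf-8") as f:
    csv.writer(f).writerows(survey)

# A 'select_one' question needs a matching answer list on the 'choices' sheet.
choices = [
    ("list_name", "name", "label"),
    ("yes_no", "yes", "Yes"),
    ("yes_no", "no", "No"),
]

with open("choices.csv", "w", newline="", encoding="utf-8") as f:
    csv.writer(f).writerows(choices)
```

Keeping the form definition in a plain spreadsheet like this makes it easy to review, version and re-deploy as the questionnaire evolves.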

A session with a farmer group in Kyeredeso, Brong Ahafo Region, Ghana. Here, I was explaining to the farmers, who are more than likely to become respondents to questionnaires in the future, how the new platform was going to be used by ACA’s field workers.

The trainees fell into two categories. The first were those who design and deploy questionnaires to the platform: they decide on strategy and planning and are responsible for the kinds of data that should be collected. The second were those who do the actual face-to-face interviews and fill out the deployed forms with answers from respondents. As expected, the two categories had different training needs, so the design of the training had to reflect this. It is worth mentioning that, for some of the users, the training had to include basic instructions such as how to navigate the questions on a tablet or smartphone. KoBo's unique suitability for such purposes is anchored, among other things, on its superb support for rural conditions and its ability to operate offline. I also thought it useful to put together a manual covering some of the basic features of KoBo. It contained simple, straight-to-the-point steps for registering, designing and deploying forms, filling out questionnaires and using the platform for data analysis, in a format that both categories of trainees would find useful. In total, five people were trained in the use of KoBo: two to design questionnaires and three in the field who would use them.

Looking Ahead

The training was successful in terms of explaining to end-users how the KoBo platform works and can be used. However, limited time did not permit even one use case in which this could be tested, so it remains to be seen how the training translates into actual use and practice, as is generally the case with the adoption of technology. Even so, there are good indications that the fellowship has been a success. There has been some increased awareness about data needs, and about tools such as KoBo that greatly help in managing them. This, I hope, will go a long way in aiding ACA in their planning, implementation, monitoring and evaluation of projects.


In conclusion, the entire fellowship sponsored by OKI has been, in my experience as a data expert, a good one. Every project of this kind that I have worked on has had its own unique form and execution, and this was no different. The whole idea of embedding or sponsoring experts in organisations with data needs is innovative and should be commended. It provides rare opportunities, in developing countries such as Ghana, to make the benefits of proper data management available to those who actually need them, which translates into better monitoring, evaluation and decision-making, as with ACA Ghana. The data needs of Ghana are immense, but they remain largely unexplored territory lying dormant. Freely available tools and technologies for dealing with these problems are largely unknown and unused, and only a few organisations and private individuals are now warming up to the idea of data management. To overcome this hurdle, a number of initiatives such as this one are crucial. What would perhaps speed up the process is identifying the particular groups with such needs and matching them with experts who would share and engage communities on those needs, thus creating the necessary, much-needed awareness.

Presenting the value of open data to NGOs: A gentle introduction to open data for Oxfam in Kenya

Simeon Oriko - March 1, 2017 in OD4D

Open Knowledge International is a member of OD4D, a global network of leaders in the open data community, working together to develop open data solutions around the world. Here, Simeon Oriko talks about his work carrying out an embedded data fellowship with Oxfam in Kenya as part of the Open Data for Development (OD4D) programme.  For years, traditional non-governmental organisations (NGOs) in Kenya have focused on health, humanitarian and aid initiatives. Most still engage in this type of work but with an increasing shift toward adopting modern approaches that respect historical experience and consider future emerging constraints. Modern approaches such as social innovation models and using solution-based processes such as design thinking are increasingly commonplace in NGOs. Open data practices are also being adopted by some NGOs with the aim of mobilizing all their resources for transformative change.
Image credit: Systems thinking about the society by Marcel Douwe Dekker CC BY 3.0
As part of an ongoing embedded data fellowship programme with Open Knowledge International, I had the opportunity late last year to meet with some Oxfam in Kenya staff and talk to them about their priorities and their organisational shift towards ‘influencing’ and ‘advocacy’. We spoke about the strategies they would like to adopt to make this shift happen. These meetings helped identify key areas of need and opportunities for the organisation to adopt open data practices. I learned from my conversations that the organisation collects and uses data, but most of it is not open. It was also made plain that staff faced significant challenges in their data workflow; most lack the skills or technical know-how to work with data and often rely heavily on the Monitoring and Evaluation team to handle data analysis and other key processes. Oxfam in Kenya expressed interest in building their knowledge and capacity to work with data. One tangible outcome from my conversations was a plan to hold workshops to sensitise staff to the value of open data. We held two workshops as a result: one with Oxfam in Kenya staff and the other with some of their partner organisations.
Open Data Workshop with Oxfam in Kenya Partners – January 26th, 2017
On January 13th 2017, Oxfam in Kenya hosted me for a 3-hour workshop on open data. The training focused on two things:
  • a basic understanding of what open data is and what it does, and,
  • helping them understand and map out its potential strategic value in their organization.

The training emphasized three key characteristics of open data:

  • Availability and Access: We spoke on the need for data to be available as a whole and at no more than a reasonable reproduction cost. We emphasized the need for data to be available in a convenient and modifiable form. It became clear that providing an easy way to download data over the internet was the preferred way of making data easily available and accessible.
  • Reuse and Redistribution: The training emphasized the use of licenses or terms under which the data must be provided. Licenses and terms that permit reuse and redistribution including the intermixing with other datasets were recommended. We briefly explored using Creative Commons Licenses to license the data openly.
  • Universal Participation: The idea that anyone must be able to use, re-use and redistribute the data was emphasized as important in both principle and practice.
Part of the training showcased tools and visualisations to demonstrate the strategic value of open data. This helped make the concept of open data concrete for the majority in the room, who either had a vague idea of open data or were learning about it for the very first time. In follow-up meetings, there was an express interest in sharing this knowledge with some of Oxfam in Kenya's partner organisations. This conversation led to the second workshop, hosted by Oxfam's tax justice team for some of their partner organisations: Inuka ni Sisi, Caritas Lodwar, National Taxpayers Association (NTA) and ALDEF. This training took place in Nairobi on January 26th 2017. The majority of participants were people who ran and worked in grassroots organisations in various parts of Kenya, including Wajir and Turkana. The training followed the same format and highlighted the same themes as the one carried out for staff. We also took time during this training to help participants identify data sources and datasets relevant to their programmes. Some participants were keen to find revenue and expenditure data from their local county governments; a few more were interested in comparing this data with population distribution data in their regions. Most of this updated data is available on Kenya's national Open Data portal, which gave us the opportunity to explore data publishing tools.

From conducting these trainings, I learned of a clear need to offer very simple, entry-level workshops to expose traditional NGOs and their local partner organisations to open data and its value. Many of these organisations are already working with data in one way or another, but few understand and reap the benefits that data can accord them, due to a lack of knowledge and capacity.
In the interest of helping these organisations mobilise their resources for transformative change, I believe actors in the global open data community should seek out local development actors and create programmes and tools that easily onboard them into the open data ecosystem, with a focus on building their capacity to work with data and collaborating with them to drive increased adoption of open data practices at the grassroots and community level. In collaboration with the OD4D programme, Open Knowledge International coordinates the embedded fellowship programme, which places an open data expert in a CSO for three months to provide support in thinking through and working with open data.