
Frictionless Data Tool Fund update: Shelby Switzer and Greg Bloom, Open Referral

- January 15, 2020 in Data Package, Frictionless Data, Open Knowledge

This blogpost is part of a series showcasing projects developed during the 2019 Frictionless Data Tool Fund. The 2019 Frictionless Data Tool Fund provided four mini-grants of $5,000 to support individuals or organisations in developing an open tool for reproducible research built using the Frictionless Data specifications and software. This fund is part of the Frictionless Data for Reproducible Research project, which is funded by the Sloan Foundation. This project applies our work in Frictionless Data to data-driven research disciplines, in order to facilitate reproducible data workflows in research contexts.

Open Referral creates standards for health, human, and social services data – the data found in community resource directories used to help find resources for people in need. In many organisations, this data lives in a multitude of formats, from handwritten notes to Excel files on a laptop to Microsoft SQL databases in the cloud. For community resource directories to be maximally useful to the public, this disparate data must be converted into an interoperable format. Many organisations have decided to use Open Referral's Human Services Data Specification (HSDS) as that format. However, to accurately represent this data, HSDS uses multiple linked tables, which can be challenging to work with. To make this process easier, Greg Bloom and Shelby Switzer from Open Referral decided to implement datapackage bundling of their CSV files through the Frictionless Data Tool Fund.

In order to accurately represent the relationships between organisations, the services they provide, and the locations where they are offered, HSDS makes sense of disparate data by linking multiple CSV files together with foreign keys. Open Referral used Frictionless Data's datapackage to specify the tables' contents and relationships in a single machine-readable file, so that this standardised format could transport HSDS-compliant data in a way that all of the teams who work with this data can use: CSVs of linked data.

In the Tool Fund, Open Referral worked on their HSDS Transformer tool, which enables a group or person to transform data into an HSDS-compliant data package, so that it can then be combined with other data or used in any number of applications. The HSDS Transformer is a Ruby library that can be used during the extract, transform, load (ETL) workflow of raw community resource data. This library extracts the community resource data, transforms that data into HSDS-compliant CSVs, and generates a datapackage.json that describes the data output. The Transformer can also output the datapackage as a zip file, called HSDS Zip, enabling systems to send and receive a single compressed file rather than multiple files. The Transformer can be spun up in a Docker container – and once it's live, the API can deliver a payload that includes links to the source data and to the configuration file that maps the source data to HSDS fields. The Transformer then grabs the source data and uses the configuration file to transform the data and return a zip file of the HSDS-compliant datapackage.

Example of a demo app consuming the API generated from the HSDS Zip
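To make the linked-tables idea concrete, here is a minimal sketch using the Python datapackage library (the Python counterpart to the Ruby tooling described above). The field lists are heavily abridged and the file and package names are illustrative assumptions, not actual HSDS Transformer output:

```python
# Minimal sketch: two HSDS-style CSVs bound together by a foreign key in a
# single Data Package descriptor. Field lists are abridged; paths illustrative.
from datapackage import Package

# Write two tiny CSVs so the example is self-contained.
with open('organizations.csv', 'w') as f:
    f.write('id,name\norg1,Example Community Org\n')
with open('services.csv', 'w') as f:
    f.write('id,organization_id,name\nsvc1,org1,Food Pantry\n')

descriptor = {
    'name': 'hsds-example',
    'resources': [
        {
            'name': 'organizations',
            'path': 'organizations.csv',
            'schema': {
                'fields': [{'name': 'id', 'type': 'string'},
                           {'name': 'name', 'type': 'string'}],
                'primaryKey': 'id',
            },
        },
        {
            'name': 'services',
            'path': 'services.csv',
            'schema': {
                'fields': [{'name': 'id', 'type': 'string'},
                           {'name': 'organization_id', 'type': 'string'},
                           {'name': 'name', 'type': 'string'}],
                # The relationship between the tables lives in the descriptor
                # itself: every service points back at its organization.
                'foreignKeys': [{
                    'fields': 'organization_id',
                    'reference': {'resource': 'organizations', 'fields': 'id'},
                }],
            },
        },
    ],
}

package = Package(descriptor)
print(package.valid)              # True: the descriptor validates against the specs
package.save('hsds-example.zip')  # bundle descriptor + CSVs, as the HSDS Zip does
```

The value of the single machine-readable descriptor is that any Frictionless Data tool can discover both tables, their types and the relationship between them without out-of-band documentation.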

The Open Referral team has also been working on projects related to the HSDS Transformer and HSDS Zip. For example, the HSDS Validator checks that a given datapackage of community service data is HSDS-compliant. Additionally, they have used these tools in the field with a project in Miami. For this project, the HSDS Transformer was used to transform data from a Microsoft SQL Server into an HSDS Zip. That zipped datapackage was then used to populate a Human Services Data API with a generated developer portal and OpenAPI Specification.

Further, as part of this work, the team also contributed back to the source code of the datapackage-rb Ruby gem. They added a new feature to infer a datapackage.json schema from a given set of CSVs, so that the JSON file can be generated automatically from your dataset (a short Python sketch of the same idea appears at the end of this post).

Greg and Shelby are eager for the Open Referral community to use these new tools and provide feedback. To use these tools currently, users should either be a Ruby developer who can use the gem as part of another Ruby project, or be familiar enough with Docker and HTTP APIs to start a Docker container and make an HTTP request to it. You can use the HSDS Transformer as a Ruby gem in another project or as a standalone API. In the future, the project might expand to include hosting the HSDS Transformer as a cloud service that anyone can use to transform their data, eliminating many of these technical requirements.

Interested in using these new tools? Open Referral wants to hear your feedback. For example, would it be useful to develop an extract-transform-load API, hosted in the cloud, that enables recurring transformation of nonstandardised human service directory data sources into an HSDS-compliant datapackage? You can reach them via their GitHub repos.

Further reading:
- openreferral.org
- Repository: https://github.com/openreferral/hsds-transformer
- HSDS Transformer: https://openreferral.github.io/hsds-transformer/
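As promised above, here is a rough Python counterpart to that inference feature. The Ruby gem's API may differ; this sketch uses the Python datapackage library, and the data/*.csv path is an illustrative assumption:

```python
# Sketch: generate a datapackage.json automatically from a set of CSVs,
# mirroring the inference feature contributed to the datapackage-rb gem.
from datapackage import Package

package = Package()
package.infer('data/*.csv')        # inspect each CSV and infer its schema
package.save('datapackage.json')   # write the generated descriptor to disk
```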

Neuroscience Experiments System Frictionless Tool

- December 16, 2019 in Frictionless Data, Open Knowledge

This blogpost is part of a series showcasing projects developed during the 2019 Frictionless Data Tool Fund. The 2019 Frictionless Data Tool Fund provided four mini-grants of $5,000 to support individuals or organisations in developing an open tool for reproducible research built using the Frictionless Data specifications and software. This fund is part of the Frictionless Data for Reproducible Research project, which is funded by the Sloan Foundation. This project applies our work in Frictionless Data to data-driven research disciplines, in order to facilitate reproducible data workflows in research contexts.

NES logo

Neuroscience Experiments System Frictionless Data Incorporation, by the Technology Transfer team of the Research, Innovation and Dissemination Center for Neuromathematics.

The Research, Innovation and Dissemination Center for Neuromathematics (RIDC NeuroMat) is a research center established in 2013 by the São Paulo Research Foundation (FAPESP) at the University of São Paulo, in Brazil. A core mission of NeuroMat is the development of open-source computational tools to aid in scientific dissemination and advance open knowledge and open science. To this end, the team has created the Neuroscience Experiments System (NES), an open-source tool to assist neuroscience research laboratories in routine procedures for data collection. To more effectively understand the function and treatment of brain pathologies, NES aids in recording data and metadata from various experiments, including clinical data, electrophysiological data, and fundamental provenance information. NES then stores that data in a structured way, allowing researchers to seek and share data and metadata from those neuroscience experiments.

For the 2019 Tool Fund, the NES team – particularly João Alexandre Peschanski, Cassiano dos Santos and Carlos Eduardo Ribas – proposed to adapt their existing export component to conform to the Frictionless Data specifications.

Public databases are seen as crucial by many members of the neuroscience community as a means of moving science forward. However, simply opening up data is not enough; it should be created in a way that can be easily shared and used. For example, data and metadata should be readable by both researchers and machines, yet they typically are not. When the NES team learned about Frictionless Data, they were interested in trying to implement the specifications to help make the data and metadata in NES machine readable. For them, the advantage of the Frictionless Data approach was being able to standardise data opening and sharing within the neuroscience community.

Before the Tool Fund, NES had an export component that set up a file with folders and documents with information on an entire experiment (including data collected from participants, device metadata, questionnaires, etc.), but they wanted to improve this export to be more structured and open. By implementing the Frictionless Data specifications, the resulting export component includes the Data Package (datapackage.json) and the folders/files inside the archive, with a root folder called data. With this new "frictionless" export component, researchers can transport and share their exported data with other researchers in a recognised open standard format (the Data Package), facilitating the understanding of that exported data. They have also integrated Goodtables into their unit tests to check data structure (see the sketch below).

The RIDC NeuroMat team's expectation is that many researchers, particularly neuroscientists and experimentalists, will have an interest in using the freely available NES tool. With the anonymisation of sensitive information, the data collected using NES can be made publicly available through the NeuroMat Open Database, allowing any researcher to reproduce the experiment or simply use the data in a different study. In addition to storing collected experimental data and being a tool for guiding and documenting all the steps involved in a neuroscience experiment, NES integrates with the Neuroscience Experiment Database, another NeuroMat project, based on a REST API, where NES users can send their experiments to become publicly available for other researchers to reproduce or to use as inspiration for further experiments.
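As a rough illustration of the Goodtables-in-the-test-suite idea mentioned above: a unit test can fail whenever an export stops being a valid, well-formed Data Package. This is a sketch using the goodtables Python library, not code from the NES repository, and the export path is hypothetical:

```python
# Sketch of a unit test guarding a NES-style export, assuming the export
# component has written a Data Package to export/datapackage.json.
import unittest
from goodtables import validate

class TestFrictionlessExport(unittest.TestCase):
    def test_export_is_a_valid_data_package(self):
        # goodtables checks the descriptor and every tabular resource it lists
        report = validate('export/datapackage.json')
        self.assertTrue(report['valid'], msg=report)

if __name__ == '__main__':
    unittest.main()
```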
Screenshot of the export of an experiment.

Screenshot of the export of data on participants.

Picture of a hypothetical export file tree of type Per Experiment after the Frictionless Data implementation.

Further reading:
- Repository: https://github.com/neuromat/nes
- User manual: https://nes.readthedocs.io/en/latest/
- NeuroMat blog: https://neuromat.numec.prp.usp.br/
- Post on NES at the NeuroMat blog: https://neuromat.numec.prp.usp.br/content/a-pathway-to-reproducible-science-the-neuroscience-experiments-system/

Announcing Frictionless Data Joint Stewardship

- December 12, 2019 in Frictionless Data, Open Knowledge

We are pleased to announce joint stewardship of Frictionless Data between the Open Knowledge Foundation and Datopian. While this collaboration already exists informally, we are solidifying how we are leading together on future Frictionless Data projects and goals.

What does this mean for users of Frictionless Data software and specifications?

First, you will continue to see a consistent level of activity and support from the Open Knowledge Foundation, with a particular focus on the application of Frictionless Data for reproducible research, as part of our three-year project funded by the Sloan Foundation. This also includes specific contributions in the development of the Frictionless Data specifications under the leadership of Rufus Pollock, Datopian President and Frictionless Data creator, and Paul Walsh, Datopian CEO and long-time contributor to the specifications and software.

Second, there will be increased activity in software development around the specifications, with a larger team across both organisations contributing to key codebases such as Goodtables, the various integrations with backend storage systems such as Elasticsearch, BigQuery and PostgreSQL, and data science tooling such as Pandas. Additionally, based on their CKAN commercial services work and co-stewardship of the CKAN project, Datopian look forward to providing more integrations of Frictionless Data with CKAN, building on existing work done at the Open Knowledge Foundation.

Our first joint project is redesigning the Frictionless Data website. Our goal is to make the project more understandable, usable, and user-focused. At this point, we are actively seeking user input, and are requesting interviews to help inform the new design. Have you used our website and are interested in having your opinion heard? Please get in touch to give us your ideas and feedback on the site. Focusing on user needs is a top goal for this project.

Ultimately, we are focused on leading the project openly and transparently, and are excited by the opportunities that clarification of the leadership of the project will provide. We want to emphasize that the Frictionless Data project is community focused, meaning that we really value the input and participation of our community of users. We encourage you to reach out to us on Discuss, in Gitter, or open issues in GitHub with your ideas or problems.

Frictionless DarwinCore Tool by André Heughebaert

- December 9, 2019 in Frictionless Data, Open Knowledge, Open Research, Open Science, Open Software, Technical

This blogpost is part of a series showcasing projects developed during the 2019 Frictionless Data Tool Fund. The 2019 Frictionless Data Tool Fund provided four mini-grants of $5,000 to support individuals or organisations in developing an open tool for reproducible research built using the Frictionless Data specifications and software. This fund is part of the Frictionless Data for Reproducible Research project, which is funded by the Sloan Foundation. This project applies our work in Frictionless Data to data-driven research disciplines, in order to facilitate reproducible data workflows in research contexts.

Frictionless DarwinCore, developed by André Heughebaert

André Heughebaert is an open biodiversity data advocate in his work and his free time. He is an IT Software Engineer at the Belgian Biodiversity Platform and is also the Belgian GBIF (Global Biodiversity Information Facility) Node manager. In these roles, he has worked with the Darwin Core standards and open biodiversity data on a daily basis. This work inspired him to apply for the Tool Fund, where he has developed a tool to convert DarwinCore Archives into Frictionless Data Packages.

The DarwinCore Archive (DwCA) is a standardised container for biodiversity data and metadata largely used amongst the GBIF community, which consists of more than 1,500 institutions around the world. The DwCA is used to publish biodiversity data about observations, collection specimens, species checklists and sampling events. However, this domain-specific standard has some limitations, mainly the star schema (core table + extensions), rules that are sometimes too permissive, and a lack of controlled vocabularies for certain terms. These limitations encouraged André to investigate emerging open data standards. In 2016, he discovered Frictionless Data and published his first data package, on historical data from the 1815 Napoleonic Campaign in Belgium. He was then encouraged to create a tool that would, in part, build a bridge between these two open data ecosystems.

As a result, the Frictionless DarwinCore tool converts DwCA into Frictionless Data Packages, and also gives access to the vast Frictionless Data software ecosystem, enabling constraints validation and support for a fully relational data schema. Technically speaking, the tool is implemented as a Python library, and is exposed as a Command Line Interface. The tool automatically converts:

* the DwCA data schema into datapackage.json
* EML metadata into a human-readable markdown README file
* data files, when necessary – that is, when default values are described

The resulting zip file complies with both the DarwinCore and Frictionless specifications.

André hopes that bridging the two standards will give an excellent opportunity for the GBIF community to provide open biodiversity data to a wider audience. He says this is also a good opportunity to discover the Frictionless Data specifications and assess their applicability to the biodiversity domain. In fact, on 9th October 2019, André presented the tool at a GBIF Global Nodes meeting, where it was received by the node managers community as exploratory and pioneering work. While the command line interface offers a simple user interface for non-programmers, others might prefer the more flexible and sophisticated Python API. André encourages anyone working with DarwinCore data, including all data publishers and data users of the GBIF network, to try out the new tool.
“I’m quite optimistic that the project will feed the necessary reflection on the evolution of our biodiversity standards and data flows.”

To get started, installation of the tool is done through a single pip install command (full directions can be found in the project README). Central to the tool is a table of DarwinCore terms linking a Data Package type, format and constraints to every DwC term. The tool can be used as a CLI directly from your terminal window, or as a Python library for developers. It can work with either locally stored or online DwCAs. Once converted to a Tabular Data Package, the DwC data can then be ingested and further processed by software such as Goodtables, OpenRefine or any other Frictionless Data software (a small sketch of the terms table and the Goodtables step follows at the end of this post).

André has aspirations to take the Frictionless DarwinCore tool further by encapsulating it in a web service that will directly deliver Goodtables reports from a DwCA, making it even more user-friendly. Additional ideas for further improvement include an import pathway for DarwinCore data into OpenRefine, which is a popular tool in the GBIF community. André's long-term hope is that the Data Package will become an optional format for data download on GBIF.org.

Further reading:
- Repository: https://github.com/frictionlessdata/FrictionlessDarwinCore
- Project blog: https://andrejjh.github.io/fdwc.github.io/
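To give a flavour of what that terms table does, and of the Goodtables step that can follow conversion, here is a sketch. The real mapping ships inside the FrictionlessDarwinCore repository; the entries and paths below are illustrative assumptions, not copied from it:

```python
# Illustrative sketch: mapping DarwinCore terms to Table Schema field
# descriptors with a type and constraints, as the tool's terms table does.
from goodtables import validate

DWC_TERM_MAPPING = {
    'decimalLatitude':  {'type': 'number',
                         'constraints': {'minimum': -90, 'maximum': 90}},
    'decimalLongitude': {'type': 'number',
                         'constraints': {'minimum': -180, 'maximum': 180}},
    'individualCount':  {'type': 'integer',
                         'constraints': {'minimum': 0}},
}

def field_descriptor(term):
    """Build a Table Schema field for a DwC term, defaulting to string."""
    return {'name': term, **DWC_TERM_MAPPING.get(term, {'type': 'string'})}

# Once the archive is converted, the resulting Tabular Data Package can be
# checked like any other Frictionless dataset (path is illustrative):
report = validate('converted/datapackage.json')
print(report['valid'])
```

Because the constraints travel inside datapackage.json, any downstream consumer gets the same validation rules, which is exactly the bridge between the two ecosystems that the tool aims to build.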

Open Knowledge Foundation CEO Catherine Stihler awarded OBE

- November 27, 2019 in Open Knowledge

Our chief executive Catherine Stihler has been awarded an OBE by Prince William, the Duke of Cambridge.

She was recognised in the Queen’s Birthday Honours for her service to politics.

Yesterday, Catherine took part in the investiture at Buckingham Palace, watched on by her proud family.

Catherine said: “It was an immense honour to receive this recognition and be awarded an OBE by Prince William.

“When I entered the European Parliament as Britain’s youngest MEP 20 years ago it was because I believed in public service as a force for good. That’s something I still passionately believe today.

“At the Open Knowledge Foundation I continue to fight to improve politics, tackling disinformation and lies and working towards a future that is fair, free and open.

“The overwhelming majority of people who choose public service do so to improve lives for their communities, and we should never lose sight of that.”

Catherine has been chief executive of the Open Knowledge Foundation since February 2019. Prior to this, she represented Scotland as a Labour Member of the European Parliament from 1999. As Vice-Chair of the European Parliament's Internal Market and Consumer Protection Committee, she worked on digital policy, prioritising the digital single market, digital skills, better accessibility of digital products for the disabled, as well as citizen online data protection and privacy. As leader and founder of the All-Party Library Group, she promoted and advocated for the importance of libraries and how they can remain relevant in the new digital age.

Born in Bellshill in 1973, Catherine was educated at Coltness High School, Wishaw, and St Andrews University, where she was awarded an MA (Hons) in Geography and International Relations (1996) and an MLitt in International Security Studies (1998). Before becoming an MEP, Catherine served as President of St Andrews University Students Association (1994-1995) and worked in the House of Commons for Dame Anne Begg MP (1997-1999). She has a Master of Business Administration from the Open University, and in 2018 was awarded an honorary doctorate from the University of St Andrews. Catherine was elected to serve as the 52nd Rector of the University of St Andrews between 2014 and 2017.

Meet Monica Granados, one of our Frictionless Data for Reproducible Research Fellows

- November 11, 2019 in Open Knowledge

The Frictionless Data for Reproducible Research Fellows Programme is training early career researchers to become champions of the Frictionless Data tools and approaches in their field. Fellows will learn about Frictionless Data, including how to use Frictionless Data tools in their domains to improve reproducible research workflows, and how to advocate for open science. Working closely with the Frictionless Data team, Fellows will lead training workshops at conferences, host events at universities and in labs, and write blogs and other communications content.

Hello there! My name is Monica Granados and I am a food-web ecologist, science communicator and a champion of open science.

There are not too many times or places in my life where it is so easy to demarcate a "before" and an "after." In 2014, I travelled to Raleigh, North Carolina to attend the Open Science for Synthesis (OSS) course co-facilitated by the National Center for Ecological Analysis and Synthesis and the Renaissance Computing Institute. I was there to learn more about the R statistical programming language to aid my quest for a PhD. At the conclusion of the course I did come home with more knowledge about R and programming, but what I couldn't stop thinking about was what I had learned about open science. I came home a different scientist – truth be told, a different person.

You see, at OSS I learned that there was a different way to do science – an approach so diametrically opposite to what I had been taught in my five years in graduate school. Instead of hoarding data and publishing behind paywalls, open science asks: wouldn't science be better if our data, methods, publications and communications were open?

When I returned from Raleigh, I uploaded all of my data to GitHub and sought out open access options for my publications. Before OSS I was simply interested in contributing my little piece to science, but after OSS I dedicated my career to the open science movement. In the years since OSS, I have made all my code, data and publications open, and I have delivered workshops and designed courses for others to work in the open. I now run a not-for-profit that teaches researchers how to do peer review using open access preprints, and I am a policy analyst working on open science at Environment and Climate Change Canada.

I wanted to become a Frictionless Data fellow because open science is continually evolving and I wanted to learn more about reproducible research. When research is reproducible, it is more accessible, and that sets off a chain reaction of beneficial consequences. Open data, methods and publications mean that if you are interested in knowing more about the course of treatment your doctor prescribed, or you are a doctor in the midst of an outbreak searching for the latest data on the epidemic, or perhaps you are a decision maker looking for guidance on what habitat to protect, this information is available to you – easily, quickly and free of charge.

I am looking forward to building some training materials and data packages to make it easier for scientists to work in the open through the Frictionless Data fellowship. And I look forward to updating you on my and my fellow fellows' progress.

Frictionless Data for Reproducible Research Fellows Programme

More on Frictionless Data

The Fellows programme is part of the Frictionless Data for Reproducible Research project at the Open Knowledge Foundation. This project, funded by the Sloan Foundation, applies our work in Frictionless Data to data-driven research disciplines, in order to facilitate data workflows in research contexts. Frictionless Data is a set of specifications for data and metadata interoperability, accompanied by a collection of software libraries that implement these specifications, and a range of best practices for data management. Frictionless Data's other current projects include the Tool Fund, in which four grantees are developing open source tooling for reproducible research. The Fellows programme will be running until June 2020, and we will post updates as the programme progresses.

• Originally published at http://fellows.frictionlessdata.io/blog/hello-monica/

World Library Congress – Closing Libraries is ‘short-sighted’

- August 26, 2019 in Featured, library, Open Knowledge, Open Knowledge Foundation

Closing down libraries to save money is ‘one of the most short-sighted decisions that public officials can make’, the World Library and Information Congress has heard.
Speaking at the International Federation of Library Associations and Institutions (IFLA) annual congress in Athens, Open Knowledge Foundation chief executive Catherine Stihler said ‘libraries are too often seen as an easy target for cuts’.

The former MEP for Scotland said libraries can also ‘fill the gap’ in the delivery of coding lessons and data practice in schools, to ensure people across Europe and the world have the skills for the jobs of the future.

It is estimated that more than 120 libraries closed their doors in England, Wales and Scotland in 2017. But a recent study by the Carnegie UK Trust found that people aged 15-24 in England are the most likely age group to use libraries, and nearly half of people aged 25 to 34 still visit them.

The IFLA World Library and Information Congress (https://2019.ifla.org/) is the international flagship professional and trade event for the library and information services sector, bringing together over 3,500 participants from more than 120 countries.

In her address to the World Library and Information Congress, Open Knowledge Foundation chief executive Catherine Stihler said:
“Governments across the world must now work harder to give everyone access to key information and the ability to use it to understand and shape their lives; as well as making powerful institutions more accountable; and ensuring vital research information that can help us tackle challenges such as poverty and climate change is available to all.
“In short, we need a future that is fair, free and open.
“But this is not the way things are going in the UK, the EU, the US, China and across our world.
“Instead, we see in the UK, councils across the country facing major financial pressures, and libraries are too often seen as an easy target for cuts.
“But closing down a library has to be one of the most short-sighted decisions that public officials can make, with serious consequences for the future of local communities.” She added:
“There is a widespread misconception that the services offered are out-of-date – a relic of a bygone age before youngsters started carrying smartphones in their pockets with instant access to Wikipedia, and before they started downloading books on their Kindle.
“Today, the most successful libraries have remodelled themselves to become fit for the 21st century, and more can follow suit if they receive the right support and advice, and have the backing of governments and councils.
“I have long championed the importance of coding as part of the education curriculum, especially given that my home country of Scotland is home to more than 100,000 digital tech economy jobs.
“But while there remains a shortfall in what is delivered in our schools in terms of coding and data practice, libraries can fill that gap.
“Our world is moulded in code, and libraries offer young people an opportunity to bring ideas to life and build things that will bring joy to millions.
“So by embracing the future, they can continue to be an unrivalled place of learning, like they always were for previous generations.”

Why greater tax transparency is needed to help fix the broken global tax system

- July 30, 2019 in Open Knowledge

Public CBCR by Financial Transparency Coalition is licensed under CC BY-NC-ND 3.0

The international tax system is broken and in need of urgent updating to address issues which allow globalised businesses to move their profits and intellectual property around the world, often to locations where they pay the least tax. Indeed, some economists estimate that “close to 40% of multinational profits are shifted to tax havens globally each year”, with many of the world's most important tax havens being connected to the UK.

The digital services taxes being proposed by countries such as France and the UK arise from frustrations with the slow pace of progress towards an internationally agreed solution. Those processes may continue to be held back by reactions from the US – where many of the largest digital businesses originate – or countries such as Ireland, which corporations like Facebook may have chosen as their European base for beneficial tax reasons. The EU has so far failed to pass its own legislation to better tax digital businesses, although the incoming president of the European Commission recently stated that the EU must act by the end of 2020 if no other international solution is agreed.

The OECD is currently in discussions about a new programme of work to “develop a consensus solution to the tax challenges arising from the digitalisation of the economy”. This work is expected to conclude by the end of 2020 and establish a follow-up to their anti-tax avoidance Base Erosion and Profit Shifting (BEPS) project. However, the BEPS process has been criticised as being biased towards rich countries, prompting calls – from the G77 coalition of developing nations, China and others, most recently Norway – for the United Nations to set up a UN tax body to create a truly global solution to modern taxation.

Tech giants such as Google, Amazon and Facebook may be some of the most high-profile examples of companies using complicated tax structuring that the public is aware of – thanks to years of media reporting and targeted campaigning – but the problem is systemic. Tax justice advocates – such as those that the Open Knowledge Foundation helped convene for our Open Data for Tax Justice project – argue that the world's tax systems need to be fundamentally restructured, and have also pushed for a variety of measures sometimes summed up as the ABCs of tax transparency:

- A stands for automatic exchange of information, where countries can more easily share tax data on individuals or businesses.
- B stands for beneficial ownership, where the issue of opaque company ownership is addressed by publishing public registers of who owns or runs companies and trusts.
- C stands for country-by-country reporting, where corporations would be required to publish details about the tax they pay, the people they employ and the profits they make in each country where they operate, building up a better picture of their activities.

Taken together, it is believed that such transparency measures would shine a light on the insalubrious practices currently being used by multinational corporations, helping the push to crack down on abuses as exposed by investigations such as the Mauritius Leaks, Paradise Papers and Panama Papers. The BEPS process has pushed automatic exchange of information forwards, and many countries are joining the drive for beneficial ownership transparency (see the OpenOwnership project for more). There are also steps being taken towards making country-by-country reporting public, but progress is slow.
Two years after the EU voted in favour of publishing public country-by-country reporting information as open data for all large corporations operating in Europe, the issue remains stuck in trilogue discussions at the EU Council. Meanwhile, others are taking on the issue, including international accounting standards setters and civil society efforts such as the Fair Tax Mark.

We believe that a lack of transparency in current country-by-country reporting standards will fail to build confidence in the treatment of corporations, missing an important opportunity to build tax morale and wider public support for tax compliance. Research has shown how restricting access to country-by-country reporting exacerbates global inequalities in taxing rights, while civil society organisations have set out why public country-by-country reporting is a must for large multinationals to create an “effective deterrent of aggressive tax avoidance and profit shifting”.

We urge all policymakers working on tax issues to prioritise increased tax transparency as an essential strand of modernising the global taxation system, as a way to improve public trust and ensure corporate compliance.

Transforming the UK’s data ecosystem: Open Knowledge Foundation’s thoughts on the National Data Strategy

- July 17, 2019 in National Data Strategy, Open Data, Open Government Data, Open Knowledge, Policy

Following an open call for evidence issued by the UK's Department for Digital, Culture, Media and Sport, Open Knowledge Foundation submitted our thoughts about what the UK can do in its forthcoming National Data Strategy to “unlock the power of data across government and the wider economy, while building citizen trust in its use”. We also signed a joint letter alongside other UK think tanks, civil and learned societies calling for urgent action from government to overhaul its use of data. Below, our CEO Catherine Stihler explains why the National Data Strategy needs to be transformative to ensure that British businesses, citizens and public bodies can play a full role in the interconnected global knowledge economy of today and tomorrow:

Today's digital revolution is driven by data. It has opened up extraordinary access to information for everyone about how we live, what we consume, and who we are. But large unaccountable technology companies have also monopolised the digital age, and an unsustainable concentration of wealth and power has led to stunted growth and lost opportunities.

Governments across the world must now work harder to give everyone access to key information and the ability to use it to understand and shape their lives; as well as making powerful institutions more accountable; and ensuring vital research information that can help us tackle challenges such as poverty and climate change is available to all. In short, we need a future that is fair, free and open.

The UK has a golden opportunity to lead by example, and the Westminster government is currently developing a long-anticipated National Data Strategy. Its aim is to ensure all citizens and organisations trust the data ecosystem, are sufficiently skilled to operate effectively within it, and can get access to high-quality data when they need it. Laudable aims, but they must come with a clear commitment to invest in better data and skills.

The Open Knowledge Foundation I am privileged to lead was launched 15 years ago to pioneer the way that we use data, working to build open knowledge in government, business and civil society – and creating the technology to make open material useful.

This week, we have joined with a group of think tanks, civil and learned societies to make a united call for sweeping reforms to the UK's data landscape. In order for the strategy to succeed, there needs to be transformative, not incremental, change, and there must be leadership from the very top, with buy-in from the next Prime Minister, Culture Secretary and head of the civil service. All too often, piecemeal incentives across Whitehall prevent better use of data for the public benefit. A letter signed by the Open Knowledge Foundation, the Institute for Government, Full Fact, Nesta, the Open Data Institute, mySociety, the Royal Statistical Society, the Open Contracting Partnership, 360Giving, OpenOwnership, and the Policy Institute at King's College London makes this clear.

We have called for investment in skills to convert data into real information that can be acted upon; challenged the government to earn the public's trust, recognising that the debate about how to use citizens' data must be had in public, with the public; proposed a mechanism for long-term engagement between decision-makers, data users and the public on the strategy and its goals; and called for increased efforts to fix the government's data infrastructure so organisations outside the government can benefit from it.
Separately, we have also submitted our own views to the UK Government, calling for a focus on teaching data skills to the British public. Learning such skills can prove hugely beneficial to individuals seeking employment in a wide range of fields including the public sector, government, media and the voluntary sector. But at present there is often a huge amount of work required to clean up data in order to make it usable before insights or stories can be gleaned from it.

We believe that the UK government could help empower the wider workforce by instigating or backing a fundamental data literacy training programme open to local communities working in a range of fields to strengthen data demand, use and understanding. Without such training and knowledge, large numbers of UK workers will be ill-equipped to take on many jobs of the future where products and services are devised, built and launched to address issues highlighted by data. Empowering people to make better decisions and choices informed by data will boost productivity, but not without the necessary investment in skills.

We have also told the government that one of the most important things it can do to help businesses and non-profit organisations best share the data they hold is to promote open licensing. Open licences are legal arrangements that grant the general public rights to reuse, distribute, combine or modify works that would otherwise be restricted under intellectual property laws.

We would also like to see the public sector pioneering new ways of producing and harnessing citizen-generated data by organising citizen science projects through schools, libraries, churches and community groups. These local communities could help the government to collect high-quality data relating to issues such as air quality or recycling, while also leading the charge when it comes to increasing the use of central government data.

We live in a knowledge society where we face two different futures: one which is open and one which is closed. A closed future is one where knowledge is exclusively owned and controlled, leading to greater inequality and a closed society. But an open future means knowledge is shared by all – freely available to everyone, a world where people are able to fulfil their potential and live happy and healthy lives.

The UK National Data Strategy must emphasise the importance and value of sharing more, better quality information and data openly in order to make the most of the world-class knowledge created by our institutions and citizens. Without this commitment at all levels of society, British businesses, citizens and public bodies will fail to play a full role in the interconnected global knowledge economy of today and tomorrow.

Statement from the Open Knowledge Foundation Board on the future of the CKAN Association

- June 6, 2019 in ckan, Open Data, Open Knowledge, Open Knowledge Foundation

The Open Knowledge Foundation (OKF) Board met on Monday evening to discuss the future of the CKAN Association.

The Board supported the CKAN Stewardship proposal jointly put forward by Link Digital and Datopian. Since the two are among the longest-serving members of the CKAN community, it was felt their proposal would now move CKAN forward, strengthening both the platform and the community.

In appointing joint stewardship to Link Digital and Datopian, the Board felt there was a clear practical path with strong leadership and committed funding to see CKAN grow and prosper in the years to come.

OKF will remain the ‘purpose trustee’ to ensure the Stewards remain true to the purpose and ethos of the CKAN project. The Board would like to thank everyone who contributed to the deliberations and we are confident CKAN has a very bright future ahead of it.

If you have any questions, please get in touch with Steven de Costa, managing director of Link Digital, or Paul Walsh, CEO of Datopian, by emailing stewards@ckan.org.