You are browsing the archive for Open Knowledge.

The Open Revolution: rewriting the rules of the information age

Open Knowledge International - June 12, 2018 in News, open, Open Data, Open Knowledge, Open/Closed

Rufus Pollock, the Founder of Open Knowledge International, is delighted to announce the launch of his new book The Open Revolution on how we can revolutionize information ownership and access in the digital economy.

About the book

Will the digital revolution give us digital dictatorships or digital democracies? Forget everything you think you know about the digital age. It’s not about privacy, surveillance, AI or blockchain – it’s about ownership. Because, in a digital age, who owns information controls the future.

Today, information is everywhere. From your DNA to the latest blockbusters, from lifesaving drugs to the app on your phone, from big data to algorithms. Our entire global economy is built on it and the rules around information affect us all every day.

As information continues to move into the digital domain, it can be copied and distributed with ease, making access and control even more important. But the rules we have made for it, derived from how we manage physical property, are hopelessly maladapted to the digital world.

In The Open Revolution, Pollock exposes the myths that cloud the digital debate. Looking beneath the surface, into the basic rules of the digital economy, he offers a simple solution. The answer is not technological but political: a choice between making information Open, shared by all, or making it Closed, exclusively owned and controlled. Today in a Closed world we find ourselves at the mercy of digital dictators. Rufus Pollock charts a path to a more “Open” future that works for everyone.
Cory Doctorow, journalist and activist: “The richest, most powerful people in the world have bet everything on the control of information in all its guises; Pollock’s fast-moving, accessible book explains why seizing the means of attention and information is the only path to human freedom and flourishing.”

An Open future for all

The book’s vision of choosing Open as the path to a more equitable, innovative and profitable future for all is closely related to Open Knowledge International’s vision of an open knowledge society. Around the world, we are working towards societies where everyone has access to key information and the ability to use it to understand and shape their lives. We want to see powerful institutions made comprehensible and accountable. We want vital research information, which can help us tackle challenges such as poverty and climate change, made available to all as open information. The Open Revolution is a great inspiration for our worldwide network of people passionate about openness, boosting our shared efforts towards an open future for all.
Get the book and join the open revolution at openrevolution.net, or join our forum to discuss the book’s content.

About the author

Dr Rufus Pollock is a researcher, technologist and entrepreneur. He has been a pioneer in the global Open Data movement, advising national governments, international organisations and industry on how to succeed in the digital world. He is the founder of Open Knowledge, a leading NGO which is present in over 35 countries, empowering people and organisations with access to information so that they can create insight and drive change. Formerly, he was the Mead Fellow in Economics at Emmanuel College, University of Cambridge. He has been the recipient of a $1m Shuttleworth Fellowship and is currently an Ashoka Fellow and Fellow of the RSA. He holds a PhD in Economics and a double first in Mathematics from the University of Cambridge.
 


Open Data Day 2018 – Data as an infrastructure for innovation: science, governance, transparency, transport

Alexandra Ioannou - March 23, 2018 in event, Featured, gdpr, News, Open Data Day, Open Knowledge, open data, open governance, transparency, events, education

The one-day event organised by Open Knowledge Greece and the Library & Information Centre of the Aristotle University of Thessaloniki (AUTh) concluded with great success. It took place on Wednesday 14 March in the amphitheatre of the AUTh Central Library, was held to mark Open Data Day 2018, and had as its theme: Data as an infrastructure for innovation: […]


2018 Open Data Day in Korea

OKFN - February 20, 2018 in Open Data, Open Data Day, Open Knowledge, events, hackathon

We invite you to the 2018 Open Data Day in Korea. Alongside a hackathon, this event features presentations of public data case studies and an open forum.
  1. We will present real-world cases from the Seoul Metropolitan Government and Jeju Special Self-Governing Province, both regarded as best-practice examples of opening and using public data.
You will get an introduction to the living-population data that Seoul plans to release, and hear directly about the experience Jeju has gained.
  2. The open forum is a panel discussion among representatives of local governments, government-funded research institutes and private companies, followed by a free discussion with experts and attendees.
Together, we want to reflect on the role of data in the era of the fourth industrial revolution.
  3. The hackathon will survey datasets in key domains and assess their quality. A considerable amount of public data has already been opened, but finding the right data is not easy.
Even when data is found and used, insights tend to remain limited to individual experience. Anyone can take part in the dataset survey and discussion on the open data playground (http://discuss.datahub.kr). Please register early!

Podcast: Pavel Richter on the value of open data

Open Knowledge International - August 25, 2017 in Interviews, Open Knowledge, podcasts

This month Pavel Richter, CEO of Open Knowledge International, was interviewed by Stephen Ladek of Aidpreneur for the 161st episode of his Terms of Reference podcast. Aidpreneur is an online community focused on social enterprise, humanitarian aid and international development that runs this podcast to cover important topics in the social impact sector. Under the title ‘Supporting The Open Data Movement’, Stephen Ladek and Pavel Richter discuss a range of topics surrounding open data, such as what open data means, how open data can improve people’s lives (including the role it can play in aid and development work) and the current state of openness in the world. As Pavel phrases it: “There are limitless ways where open data is part of your life already, or at least should be”.

Pavel Richter joined Open Knowledge International as CEO in April 2015, following five years as Executive Director of Wikimedia Deutschland. He explains how Open Knowledge International has set its focus on bridging the gap between the people who could make the best use of open data (civil society organisations and activists in areas such as human rights, health or the fight against corruption) and the people who have the technical knowledge of how to work with data. OKI can make an impact by bridging this gap, empowering these organisations to use open data to improve people’s lives.

The podcast goes into several examples that demonstrate the value of open data in everyday life: how OpenStreetMap was used by volunteers following the Nepal earthquake to map which roads were destroyed or still accessible; governments opening up financial data on tax returns or on how foreign aid money is spent; and projects such as OpenTrials opening up clinical trial data, so that people can find out which drugs are being tested for effectiveness against viruses such as Ebola or Zika.
In addition, Stephen Ladek and Pavel Richter discuss questions surrounding potential misuse of open data, the role of the cultural context in open data, and the current state of open data around the world, as measured in recent initiatives such as the Open Data Barometer and the Global Open Data Index. Listen to the full podcast below, or visit the Aidpreneur website for more information:  

Fostering open, inclusive, and respectful participation

Sander van der Waal - August 21, 2017 in community, network, Open Knowledge, Open Knowledge international Local Groups

At Open Knowledge International we have been involved in various projects with other civil society organisations aiming for the release of public interest data, so that anyone can use it for any purpose. More importantly, we focus on putting this data to use, to help it fulfil its potential of working towards fairer and more just societies. Over the last year, we started the first phase of the project Open Data for Tax Justice, because we and our partners believe the time is right to demand that more data be made openly available to scrutinise the activities of businesses. In an increasingly globalised world, multinational corporations have tools and techniques at their disposal to minimise their overall tax bill, and many believe this gives them an unfair advantage over ordinary citizens. Furthermore, the extent to which these practices take place is unknown, because the taxes that multinational corporations pay in all the jurisdictions in which they operate are not reported publicly. By changing that, we can have a proper debate about whether the rules are fair, or whether changes need to be made to share the tax bill in a different way.

For us at Open Knowledge International, this is an entry into a new domain. We are not tax experts; instead we rely on the expertise of our partners. We are open to engaging all experts to help shape and define together how data should be made available, and how it can be put to use to work towards tax systems that citizens can place more trust in.

Unsurprisingly, in such a complex and continuously developing field, debates can get very heated. People are obviously very passionate about this, and being passionate open data advocates ourselves, we sympathise. However, we think it is crucial that the passion to strive for a better world should never escalate to personal insults, ad-hominem attacks, or any other violation of basic norms. Unfortunately, this happened recently with a collaborator on a project. While they made clear they were not affiliated with Open Knowledge International, their actions nevertheless reflected badly on the overall project, and we deeply condemn them.

Moving forward, we want to make more explicit what behaviour is and is not acceptable within the context of the projects we are part of. To that end, we are publishing project participation guidelines that make clear how we define acceptable and unacceptable behaviour, and what you can do if you feel any of these guidelines are being violated. We invite your feedback on these guidelines, as it is important that these norms are shared among our community. So please let us know on our Open Knowledge forum what you think and where these guidelines could be improved.

Furthermore, we would like to make clear what the communities we are part of, like the one around tax justice, can expect from Open Knowledge International beyond enforcement of the basic behavioural norms set out in the guidelines linked above. Being in the business of open data, we love facts and aim to record many of them in the databases we build. However, facts can be used to reach different, sometimes even conflicting, conclusions. Some partners engage heavily on social media channels like Twitter to debate conflicting interpretations, while other partners choose different channels for their work. Open Knowledge International is not, and will never be, in a position to arbitrate all interpretations that partners make of the data we publish. Our expertise is in building open databases, helping put the data to use, and convening communities around the work that we do. On a subject such as tax justice, we are, like many in our community, interested in and care about the topic, but we rely on the debate being led by experts in the field. Where we spot abuse of the data published in databases we run, or obvious misrepresentation of the data, we will speak out. But we will not monitor or take a stance on every issue debated by our partners and the wider communities around our projects.

Finally, we strongly believe that the open knowledge movement is best served by open and diverse participation. We aim for the project participation guidelines to spell out our expectations, and hope they will help us develop more inclusive and diverse communities, where everyone who wants to participate respectfully feels welcome to do so. Do you think these guidelines are a step in the right direction? What else do you feel we should be doing at Open Knowledge International? We look forward to hearing from you in our forum.

OpenSpending platform update

Paul Walsh - August 16, 2017 in Open Knowledge, Open Spending

Introduction

OpenSpending is a free, open and global platform to search, visualise, and analyse fiscal data in the public sphere. This week, we soft launched an updated technical platform, with a newly designed landing page. Until now dubbed “OpenSpending Next”, this is a completely new iteration on the previous version of OpenSpending, which has been in use since 2011. At the core of the updated platform is Fiscal Data Package. This is an open specification for describing and modelling fiscal data, and has been developed in collaboration with GIFT. Fiscal Data Package affords a flexible approach to standardising fiscal data, minimising constraints on publishers and source data via a modelling concept, and enabling progressive enhancement of data description over time. We’ll discuss in more detail below. From today:
  • Publishers can get started publishing fiscal data with the interactive Packager, and explore the possibilities of the platform’s rich API, advanced visualisations, and options for integration.
  • Hackers can work on a modern stack designed to liberate fiscal data for good! Start with the docs, chat with us, or just start hacking.
  • Civil society can access a powerful suite of visualisation and analysis tools, running on top of a huge database of open fiscal data. Discover facts, generate insights, and develop stories. Talk with us to get started.
All the work that went into this new version of OpenSpending was only made possible by our funders along the way. We want to thank Hewlett, Adessium, GIFT, and the OpenBudgets.eu consortium for helping fund this work. As this is now completely public, replacing the old OpenSpending platform, we do expect some bugs and issues. If you see anything, please help us by opening a ticket on our issue tracker.

Features

The updated platform has been designed primarily around the concept of centralised data, decentralised views: we aim to create a large, and comprehensive, database of fiscal data, and provide various ways to access that data for others to build localised, context-specific applications on top. The major features of relevance to this approach are described below.

Fiscal Data Package

As mentioned above, Fiscal Data Package affords a flexible approach to standardising fiscal data. Fiscal Data Package is not a prescriptive standard, and imposes no strict requirements on source data files. Instead, users “map” source data columns to “fiscal concepts”, such as amount, date, functional classification, and so on, so that systems that implement Fiscal Data Package can process a wide variety of sources without requiring change to the source data formats directly. A minimal Fiscal Data Package only requires mapping an amount and a date concept. There are a range of additional concepts that make fiscal data usable and useful, and we encourage the mapping of these, but do not require them for a valid package. Based on this general approach to specifying fiscal data with Fiscal Data Package, the updated OpenSpending likewise imposes no strict requirements on naming of columns, or the presence of columns, in the source data. Instead, users (of the graphical user interface, and also of the application programming interfaces) can provide any source data, and iteratively create a model on top of that data that declares the fiscal measures and dimensions.
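As a rough sketch of that mapping idea, a minimal model could be built as follows. The dictionary shape and field names here are a simplification for illustration only, not the normative Fiscal Data Package specification:

```python
# Illustrative sketch of the modelling idea behind Fiscal Data Package:
# map source columns to fiscal concepts. The dictionary shape below is a
# simplification, not the normative specification.

def build_minimal_model(columns, amount_column, date_column, currency="EUR"):
    """Map two source columns to the required amount and date concepts."""
    for name in (amount_column, date_column):
        if name not in columns:
            raise ValueError(f"column {name!r} not found in source data")
    return {
        "measures": {
            "amount": {"source": amount_column, "currency": currency},
        },
        "dimensions": {
            "date": {"source": date_column},
        },
    }

# The source file keeps its own column names; the model maps them.
model = build_minimal_model(
    columns=["budget_2016", "fiscal_year", "ministry"],
    amount_column="budget_2016",
    date_column="fiscal_year",
)
```

The key design point survives even in this toy version: the source data is never renamed or restructured, and further concepts (such as a functional classification) can be mapped later without touching the file.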

GUIs

Packager

The Packager is the user-facing app used to model source data into Fiscal Data Packages. Using the Packager, users first get structural and schematic validation of the source files, ensuring that data entering the platform is validly formed; they can then model the fiscal concepts in the file in order to publish the data. After initial modelling, users can also remodel their data sources, taking a progressive enhancement approach to improving the data added to the platform.
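That validate-then-model flow can be sketched roughly like this; the structural check shown is an illustrative stand-in, not the Packager's actual implementation:

```python
# Illustrative sketch of the Packager's first step: structural validation of
# a source table before any fiscal modelling is attempted.

def validate_structure(header, rows):
    """Return errors for rows whose cell count differs from the header."""
    errors = []
    for line_no, row in enumerate(rows, start=2):  # line 1 is the header
        if len(row) != len(header):
            errors.append(
                f"line {line_no}: expected {len(header)} cells, got {len(row)}"
            )
    return errors

header = ["fiscal_year", "ministry", "amount"]
rows = [
    ["2016", "Health", "1200000"],
    ["2016", "Education"],  # malformed row: a cell is missing
]
errors = validate_structure(header, rows)
```

Only once a table passes checks like these does it make sense to map its columns to fiscal concepts and publish.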

Explorer

The Explorer is the user-facing app for exploration and discovery of data available on the platform.

Viewer

The Viewer is the user-facing app for building visualisations around a dataset, with a range of presentation options and the ability to embed views into third-party websites.

DataMine

The DataMine is a custom query interface powered by Re:dash for deep investigative work over the database. We’ve included the DataMine as part of the suite of applications as it has proved incredibly useful when working in conjunction with data journalists and domain experts, and also for doing quick prototype views on the data, without the limits of API access, as one can use SQL directly.

APIs

Datastore

The Datastore is a flat file datastore with source data stored in Fiscal Data Packages, providing direct access to the raw data. All other databases are built from this raw data storage, providing us with a clear mechanism for progressively enhancing the database as a whole, as well as building on this to provide such features directly to users.

Analytics and Search

The Analytics API provides a rich query interface for datasets, and the search API provides exploration and discovery capabilities across the entire database. At present, search only goes over metadata, but we have plans to iterate towards full search over all fiscal data lines.
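As a hedged illustration of how an aggregation query against such an analytics API might be composed: the endpoint path and parameter names below are assumptions made for this sketch, not the documented OpenSpending interface.

```python
# Hypothetical sketch of composing an aggregation query for an
# analytics-style API. Endpoint path and parameter names are assumptions.
from urllib.parse import urlencode

def build_aggregate_url(base, dataset, drilldown, cut=None):
    """Build a URL that aggregates amounts, grouped by the given dimensions."""
    params = {"drilldown": "|".join(drilldown)}
    if cut:
        # "cut" filters the aggregation, e.g. down to a single fiscal year
        params["cut"] = "|".join(f"{k}:{v}" for k, v in sorted(cut.items()))
    return f"{base}/cubes/{dataset}/aggregate?{urlencode(params)}"

url = build_aggregate_url(
    "https://example.org/api/3",
    "national-budget",
    drilldown=["ministry"],
    cut={"fiscal_year": "2016"},
)
```

The shape is typical of OLAP-style fiscal APIs: pick dimensions to group by (drilldown), optionally filter (cut), and receive aggregated amounts back.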

Data Importers

Data Importers are based on a generic data pipelining framework developed at Open Knowledge International called Data Package Pipelines. Data Importers enable us to do automated ETL to get new data into OpenSpending, including the ability to update data from the source at specified intervals. We see Data Importers as key functionality of the updated platform, allowing OpenSpending to grow well beyond the one thousand plus datasets that have been uploaded manually over the last five or so years, towards tens of thousands of datasets. A great example of how we’ve put Data Importers to use is in the EU Structural Funds data that is part of the Subsidy Stories project.
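The pipeline idea behind a Data Importer can be sketched as a chain of small row-transforming steps applied in order. This mimics the concept only; it does not use the real Data Package Pipelines API, and the step and source names are illustrative.

```python
# Generic sketch of the ETL idea behind a Data Importer: a pipeline of
# small row-transforming steps applied in order.

def parse_amount(row):
    """Normalise a formatted amount string into a float."""
    row["amount"] = float(row["amount"].replace(",", ""))
    return row

def tag_source(row):
    """Record where the row came from, preserving provenance."""
    row["source"] = "eu-structural-funds"  # illustrative source name
    return row

def run_pipeline(rows, steps):
    """Apply each step to every row, in order."""
    for step in steps:
        rows = [step(row) for row in rows]
    return rows

raw = [{"amount": "1,200,000", "region": "Saxony"}]
clean = run_pipeline(raw, [parse_amount, tag_source])
```

Because each step is small and declarative, the same pipeline can be re-run on a schedule to pick up updated source data, which is what makes scaling to tens of thousands of datasets plausible.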

Iterations

It is slightly misleading to announce the launch today, when we’ve in fact been using and iterating on OpenSpending Next for almost 2 years. Some highlights from that process that have led to the platform we have today are as follows.

SubsidyStories.eu with Adessium

Adessium provided Open Knowledge International with funding towards fiscal transparency in Europe, which enabled us to build out significant parts of the technical platform, commission work with J++ on Agricultural Subsidies, and engage in a productive collaboration with Open Knowledge Germany on what became SubsidyStories.eu, which even led to another Open Knowledge Germany initiative called The Story Hunt. This work directly contributed to the technical platform by providing an excellent use case: processing a large, messy amount of source data into a normalised database for analysis, while maintaining data provenance and the reproducibility of the process. There is much to do in streamlining this workflow, but the benefits, in terms of new use cases for the data, are extensive. We are particularly excited by this work, and by the potential to continue in this direction, building out a deep, open database as a tool for investigation and for telling stories with data.

OpenBudgets.eu via Horizon 2020

As part of the OpenBudgets.eu consortium, we were able to both build out parts of the technical platform, and have a live use case for the modularity of the general architecture we followed. A number of components from the core OpenSpending platform have been deployed into the OpenBudgets.eu platform with little to no modification, and the analytical API from OpenSpending was directly ported to run on top of a triple store implementation of the OpenBudgets.eu data model. An excellent outcome of this project has been the close and fruitful work with both Open Knowledge Germany and Open Knowledge Greece on technical, community, and journalistic opportunities around OpenSpending, and we plan for continuing such collaborations in the future.

Work on Fiscal Data Package with GIFT

Over three phases of work since 2015 (the third phase is currently running), we’ve been developing Fiscal Data Package as a specification to publish fiscal data against. Over this time, we’ve done extensive testing of the specification against a wide variety of data in the wild, and we are iterating towards a v1 release of the specification later this year. We’ve also been piloting the specification, and OpenSpending, with national governments. This has enabled extensive testing of both the manual modeling of data to the specification using the OpenSpending Packager, and automated ETL of data into the platform using the Data Package Pipelines framework. This work has provided the opportunity for direct use by governments of a platform we initially designed with civil society and civic tech actors in mind. We’ve identified difficulties and opportunities in this arena at both the implementation and the specification level, and we look forward to continuing this work and solving use cases for users inside government.

Credits

Many people have been involved in building the updated technical platform. Work started back in 2014 with an initial architectural vision articulated by our peers Tryggvi Björgvinsson and Rufus Pollock. The initial vision was adapted and iterated on by Adam Kariv (Technical Lead) and Sam Smith (UI/X), with Levko Kravets, Vitor Baptista, and Paul Walsh. We reused and enhanced code from Friedrich Lindenberg. Lazaros Ioannidis and Steve Bennett made important contributions to the code and the specification respectively. Diana Krebs, Cecile Le Guen, Vitoria Vlad and Anna Alberts have all contributed with project management, and feature and design input.

What’s next?

There is always more work to do. In terms of technical work, we have a long list of enhancements.
However, while the work we’ve done in the last years has been very collaborative with our specific partners, and always towards identified use cases and user stories in the partnerships we’ve been engaged in, it has not, in general, been community facing. In fact, a noted lack of community engagement goes back to before we started on the new platform we are launching today. This has to change, and it will be an important focus moving forward. Please drop by at our forum for any feedback, questions, and comments.

Using the Global Open Data Index to strengthen open data policies: Best practices from Mexico

Oscar Montiel - August 16, 2017 in Global Open Data Index, Open Data Index, Open Government Data, Open Knowledge

This is a blog post coauthored with Enrique Zapata of the Mexican National Digital Strategy. As part of the last Global Open Data Index (GODI), Open Knowledge International (OKI) decided to include a dialogue phase, in which we invited individuals, CSOs and national governments to exchange points of view, share knowledge about the data, and come to a more useful understanding of data publication. In this process, we had a number of valuable exchanges, which we tried to capture in our report about the state of open government data in 2017, as well as in the records in the forum. Additionally, we decided to highlight the dialogue process between the government and civil society in Mexico and its results in improving data publication in the executive authority, as well as funding to expand this work to other authorities and improve the GODI process. Here is what we learned from the Mexican dialogue:

The submission process

During this stage, GODI tries to evaluate directly how easy datasets are to find and what their quality is in general. To achieve this, civil society and government actors discussed how best to submit, and agreed to submit together, based on actual data availability. Besides creating an open space to discuss open data in Mexico and agreeing on a joint submission process, this exercise showed some room for improvement in the characteristics that GODI measured in 2016:
  • Open licenses: In Mexico and many other countries, the licenses are linked to datasets through open data platforms. This showed some discrepancies with the sources referenced by the reviewers since the data could be found in different sites where the license application was not clear.
  • Data findability: Most of the datasets assessed in GODI are the responsibility of the federal government and are available on datos.gob.mx. Nevertheless, the titles used to identify the datasets are based on technical regulation needs, which makes it difficult for data users to reach the data easily.
  • Differences between government levels and authorities: GODI assesses national governments, but some of these datasets – such as land rights or national laws – are in the hands of other authorities or local governments. This means that some datasets can’t be published by the federal government, since publication is not in its jurisdiction and it cannot make publication of this data mandatory.
 

Open dialogue and the review process

During the review stage, the Open Data Office of the National Digital Strategy took this feedback into account and worked on some of these points. They then convened a new session with civil society, including representatives from the Open Data Charter and OKI, in order to:
  • Agree on the state of the data in Mexico according to GODI characteristics;
  • Show the updates and publication of data requested by GODI;
  • Discuss paths to publish data that is not responsibility of the federal government;
  • Converse about how they could continue to strengthen the Mexican Open Data Policy.
The results

As a result of this dialogue, we agreed on six actions that could be implemented internationally beyond the Mexican context, both by governments with centralised open data repositories and by those which don’t centralise their data, as well as a way to improve the GODI methodology:
  1. Open dialogue during the GODI process: Mexico was the first country to develop a structured dialogue to agree with open data experts from civil society about submissions to GODI. The Mexican government will seek to replicate this process in future evaluations and include new groups to promote open data use in the country. OKI will take this experience into account to improve the GODI processes in the future.
  2. Open licenses by default: The Mexican government is reviewing and modifying their regulations to implement the terms of Libre Uso MX for every website, platform and online tool of the national government. This is an example of good practice which OKI have highlighted in our ongoing Open Licensing research.
  3. “GODI” data group in CKAN: Most data repositories allow users to create thematic groups. In the case of GODI, the Mexican government created the “Global Open Data Index” group in datos.gob.mx. This will allow users to access these datasets based on their specific needs.
  4. Create a link between government built visualization tools and datos.gob.mx: The visualisations and reference tools tend to be the first point of contact for citizens. For this reason, the Mexican government will have new regulations in their upcoming Open Data Policy so that any new development includes visible links to the open data they use.
  5. Multiple access points for data: In August 2018, the Mexican government will launch a new section on datos.gob.mx to give non-technical users easy access to valuable data. These data, called 'Infraestructura de Datos Abiertos MX', will be divided into five categories that are easy to explore and understand.
  6. Common language for datasets: Government naming conventions aren't the easiest to understand and can make it difficult to find data. The Mexican government has agreed to rename datasets using more colloquial language, which will improve findability and promote their use. Where this is not possible, the government will take an approach similar to the one described in point 5.
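As an aside on point 3, thematic groups in CKAN portals can be queried programmatically through the Action API's `package_search` endpoint with a filter query on `groups`. The sketch below only builds the request URL; the API path follows CKAN's standard convention, and the group slug is an assumption rather than one confirmed for datos.gob.mx:

```python
from urllib.parse import urlencode

# CKAN's Action API conventionally lives under /api/3/action/ --
# assumed here for datos.gob.mx, not verified against the live portal.
BASE = "https://datos.gob.mx/api/3/action/package_search"

def group_search_url(group_slug, rows=10):
    """Build a package_search URL filtered to one thematic group."""
    params = {"fq": f'groups:"{group_slug}"', "rows": rows}
    return f"{BASE}?{urlencode(params)}"

# Hypothetical slug for the "Global Open Data Index" group --
# check the portal's group listing for the real identifier.
url = group_search_url("global-open-data-index")
print(url)
```

Fetching that URL would return a JSON response whose `result.results` array lists the datasets in the group, so users (or scripts) can pull exactly the GODI-relevant data.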
We hope these changes will be useful for data users as well as for other governments looking to improve their publication policies. Got any other ideas? Share them with us on Twitter by messaging @OKFN, or email us at index@okfn.org.

Data-cards – a design pattern

Sam Smith - August 15, 2017 in Frictionless Data, Open Knowledge

Cross-posted on smth.uk
It can be useful to recognise patterns in the challenges we face, and in our responses to those challenges. In doing this, we can build a library of solutions, a useful resource when similar challenges arise in the future. When working on innovative projects, as is often the case at Open Knowledge International, creating brand new challenges is inevitable. With little or no historical reference material on how best to tackle these challenges, paying attention to your own repeatable solutions becomes even more valuable.

From a user interface design point of view, these solutions come in the form of design patterns – reusable solutions to commonly occurring problems. Identifying and using design patterns can help create familiar processes for users; and by not reinventing the wheel, you can save time in production too.

In our work on Data Packages, we are introducing a new task into the world – creating those data packages. This task can be quite simple, and it will ultimately save time for people working with data. That said, there is no escaping the fact that this is a task that has never before been asked of people, one that will need to be done repeatedly, and potentially from within any number of interfaces. It has been my task of late to design some of these interfaces, and I'd like to highlight one pattern that is starting to emerge: the process of describing, or adding metadata to, the columns of a data table.

I was first faced with this challenge when working on OS Packager. The objective was to present a recognisable representation of the columns, and to facilitate the addition of metadata for each of those columns. Adding the data would be relatively straightforward – a few form fields. The challenge lay in helping the user to recognise those columns from the tables they originated in.
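For context, the per-column metadata that such a form collects ultimately lands in the `fields` array of a Table Schema inside a `datapackage.json` descriptor. A minimal sketch using only the standard library – the package name, file path, and column definitions here are illustrative, not taken from a real dataset:

```python
import json

# Per-column metadata of the kind a data-card form would collect.
# "name", "type" and "title" are standard Table Schema field properties.
columns = [
    {"name": "year", "type": "integer", "title": "Fiscal year"},
    {"name": "amount", "type": "number", "title": "Spend (GBP)"},
]

# Minimal Data Package descriptor wrapping one tabular resource.
descriptor = {
    "name": "example-package",
    "resources": [{
        "name": "spend",
        "path": "spend.csv",
        "schema": {"fields": columns},
    }],
}

print(json.dumps(descriptor, indent=2))
```

Each data-card in the interface maps onto one entry in `fields`, which is what makes a card-per-column layout such a natural fit for this task.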
As anyone who works with spreadsheets on a regular basis will know, they aren't often predictably or uniformly structured, meaning it is not always obvious what you're looking at. Take them out of the familiar context of the application they were created in, and this problem can get worse. For this reason, just pulling a table header is probably not sufficient to identify a column. We wanted to provide a preview of the data, to give it the best chance of being recognisable. In addition, I felt it important to keep the layout as close as possible to that of, say, Excel.

The simplest solution would be to take the first few rows of the table, and put a form under each column for the user to add their metadata. This is a good start: about as recognisable and familiar as you're going to get. There is one obvious problem, though – this could extend well beyond the edge of the user's screen, leading to an awkward navigation experience. For an app aimed at desktop users, horizontal scrolling, in any of its forms, would be problematic.

So, in the spirit of the good ol' webpage, let's make this thing wrap. That is to say that when an element cannot fit on the screen, it moves to a new "line". When doing this we'll need some vertical spacing where the new line occurs, to make it clear that one column is separate from the one above it. We then need horizontal spacing to prevent the false impression of grouping created by the rows.

The data-card was born. At the time of writing it is utilised in OS Packager, closely resembling the original sketch. Data Packagist is another application that creates data packages, and it faces the same challenges as described above. When I got involved in this project there was already a working prototype, and in that prototype I saw data-cards beginning to emerge. It struck me that if these elements followed the same data-card pattern created for OS Packager, they could benefit in two significant ways.
The layout and data preview would again allow the user to more easily recognise the columns from their spreadsheet, and the grid layout would lend itself well to drag and drop, which would mean avoiding multiple clicks on the reorder arrows when reordering. I incorporated this pattern into the design.

Before building this new front-end, I extracted what I believe to be the essence of the data-card from the OS Packager code, to reuse in Data Packagist and potentially in future projects. While doing so I thought about the current and potential future uses, and the other functions it would be useful to perform at the same time as adding metadata. Many of these will be unique to each app, but there are a couple that I believe are likely to recur:
  • Reorder the columns
  • Remove / ignore a column
These features combine with those of the previous iteration to create a stand-alone data-card project. Time will tell how useful this code will be for future work, but as I was able to use it wholesale (changing little more than a colour variable) in the implementation of the Data Packagist front-end, it came at virtually no additional cost. More important than the code, however, is having this design pattern as a template, to solve this problem when it arises again in the future.