You are browsing the archive for Policy.

Transforming the UK’s data ecosystem: Open Knowledge Foundation’s thoughts on the National Data Strategy

- July 17, 2019 in National Data Strategy, Open Data, Open Government Data, Open Knowledge, Policy

Following an open call for evidence issued by the UK’s Department for Digital, Culture, Media and Sport, Open Knowledge Foundation submitted our thoughts about what the UK can do in its forthcoming National Data Strategy to “unlock the power of data across government and the wider economy, while building citizen trust in its use”. We also signed a joint letter alongside other UK think tanks, civil and learned societies calling for urgent action from government to overhaul its use of data. Below, our CEO Catherine Stihler explains why the National Data Strategy needs to be transformative to ensure that British businesses, citizens and public bodies can play a full role in the interconnected global knowledge economy of today and tomorrow:

Today’s digital revolution is driven by data. It has opened up extraordinary access to information for everyone about how we live, what we consume, and who we are. But large unaccountable technology companies have also monopolised the digital age, and an unsustainable concentration of wealth and power has led to stunted growth and lost opportunities. Governments across the world must now work harder to give everyone access to key information and the ability to use it to understand and shape their lives, while making powerful institutions more accountable and ensuring that vital research information which can help us tackle challenges such as poverty and climate change is available to all. In short, we need a future that is fair, free and open.

The UK has a golden opportunity to lead by example, and the Westminster government is currently developing a long-anticipated National Data Strategy. Its aim is to ensure all citizens and organisations trust the data ecosystem, are sufficiently skilled to operate effectively within it, and can get access to high-quality data when they need it. These are laudable aims, but they must come with a clear commitment to invest in better data and skills.
The Open Knowledge Foundation I am privileged to lead was launched 15 years ago to pioneer the way we use data, working to build open knowledge in government, business and civil society, and creating the technology to make open material useful. This week, we have joined with a group of think tanks, civil and learned societies to make a united call for sweeping reforms to the UK’s data landscape.

For the strategy to succeed, there needs to be transformative, not incremental, change, and there must be leadership from the very top, with buy-in from the next Prime Minister, Culture Secretary and head of the civil service. All too often, piecemeal incentives across Whitehall prevent better use of data for the public benefit. A letter signed by the Open Knowledge Foundation, the Institute for Government, Full Fact, Nesta, the Open Data Institute, mySociety, the Royal Statistical Society, the Open Contracting Partnership, 360Giving, OpenOwnership, and the Policy Institute at King’s College London makes this clear. We have called for investment in skills to convert data into real information that can be acted upon; challenged the government to earn the public’s trust, recognising that the debate about how to use citizens’ data must be had in public, with the public; proposed a mechanism for long-term engagement between decision-makers, data users and the public on the strategy and its goals; and called for increased efforts to fix the government’s data infrastructure so organisations outside government can benefit from it.

Separately, we have also submitted our own views to the UK Government, calling for a focus on teaching data skills to the British public. Learning such skills can prove hugely beneficial to individuals seeking employment in a wide range of fields including the public sector, government, media and the voluntary sector.
But at present there is often a huge amount of work required to clean up data to make it usable before insights or stories can be gleaned from it. We believe the UK government could help empower the wider workforce by instigating or backing a fundamental data literacy training programme, open to local communities working in a range of fields, to strengthen data demand, use and understanding. Without such training and knowledge, large numbers of UK workers will be ill-equipped to take on many jobs of the future where products and services are devised, built and launched to address issues highlighted by data. Empowering people to make better decisions and choices informed by data will boost productivity, but not without the necessary investment in skills.

We have also told the government that one of the most important things it can do to help businesses and non-profit organisations best share the data they hold is to promote open licensing. Open licences are legal arrangements that grant the general public rights to reuse, distribute, combine or modify works that would otherwise be restricted under intellectual property laws.

We would also like to see the public sector pioneering new ways of producing and harnessing citizen-generated data by organising citizen science projects through schools, libraries, churches and community groups. These local communities could help the government to collect high-quality data relating to issues such as air quality or recycling, while also leading the charge when it comes to increasing the use of central government data.

We live in a knowledge society where we face two different futures: one which is open and one which is closed. A closed future is one where knowledge is exclusively owned and controlled, leading to greater inequality and a closed society.
But an open future means knowledge is shared by all – freely available to everyone: a world where people are able to fulfil their potential and live happy and healthy lives. The UK National Data Strategy must emphasise the importance and value of openly sharing more, better-quality information and data in order to make the most of the world-class knowledge created by our institutions and citizens. Without this commitment at all levels of society, British businesses, citizens and public bodies will fail to play a full role in the interconnected global knowledge economy of today and tomorrow.

On the trail of OE policy co-creation

- April 24, 2019 in copyright, Events, Featured, oer, Policy

By Javiera Atenas & Leo Havemann

We’ve recently returned from the OER19 conference in Galway, Ireland, where we had the opportunity to run the third edition of the Open Education Policy Co-creation (OEPC) workshop, and the outcomes were very interesting! But let’s start from the beginning. This workshop was originally developed in the context of the OpenMed project, to support the project stakeholders in developing Open Education Policies following the Recommendations from OpenMed to University leaders and policy makers for opening up Higher Education in the South-Mediterranean by 2030. The workshop aimed to give the project stakeholders some basic policy co-design skills, as well as an overview of the key techniques and elements needed to open up the arenas that foster sustainable policies.

To support these objectives, the workshop is grounded in the participation and co-creation standard developed by the OGP to foster the co-creation of national commitments, and uses a set of cards and a canvas (adapted from those developed by the UK Policy Lab), aligning the elements with those recommended by the Ljubljana Action Plan and the JRC report Policy Approaches to Open Education. The workshop elements aim to raise awareness of the international landscape and to widen participation by including a wide range of stakeholders, while being resourceful, optimistic and flexible. The goal is to ensure that the policy design addresses the co-creation process in a specific context, involves a wide range of policy design partners to ensure correct implementation, weighs the opportunities and challenges of an OE policy and the key elements it must comprise, and provides the evidence needed to support the stakeholders and prevent the risk of policy derailment.
The OE policy workshop fosters the assessment of data, research and experiences from national and international perspectives related to the socio-economic, political and cultural context, in what is known as global policy convergence (Haddad & Demsky, 1995; Thompson & Cook, 2014).

From Rome to Warsaw

We piloted the OEPC workshop at the OpenMed conference (Rome) with a group of stakeholders from Egypt, England, Italy, Jordan, Morocco, Palestine and Spain. Then, with Fabio Nascimbeni, we re-tested the methodology at the OE Policy Forum (Warsaw), with stakeholders from Germany, Malta, Poland, Romania, Spain, Slovenia, Sweden and the Netherlands. In both pilots, the participants agreed that the core processes for OE policy-making were co-design and collaboration, which should include not only senior management but also academics, librarians and experts in copyright, as these can provide a wide range of perspectives related to their local contexts and needs. The participants also mentioned the need to work alongside Open Science, Open Access and OE experts and policymakers to foster cohesion in Open Policies.

OpenMed Policy Forum

Regarding solutions and approaches, the participants mentioned the need to include experts in accreditation systems and copyright regulations, as these policy opportunities are key to fostering sustainability in OE policy-making. But they can also become challenges and barriers to promoting the adoption of Open Educational Practices, alongside a lack of copyright and IP understanding and scarce awareness of open practices amongst faculty, senior management and policymakers, which prevent the acknowledgment of Open Practices for career progression and diminish the chances of obtaining funding to implement OE policies. To enable an OE policy, the participants identified the recognition of Open Practices and the accreditation of Open Learning as key elements, as these can provide evidence to promote the adoption of Open Education, alongside international good practices, data on the cost-benefits of OER, and national educational and performance data to showcase the impact of Open Education.

Centrum Cyfrowe – Open Education Policy Forum

According to the participants of the first two pilots, the main beneficiaries of an OE policy are learners and educators; however, families, the general public, universities and governments can also benefit from Open Education through lower costs of access to education and wider participation. The groups also noted that it is key to be aware of the risks an OE policy may face: a lack of political understanding of openness, the datafication and commodification of education, and lobbying from commercial publishers and ed-tech vendors, all of which might severely impact upon or derail an OE policy initiative.

From Warsaw to Galway

With all this information in hand, and after carefully updating the kit according to the feedback given by the pilot participants, we ran a new edition of our workshop, billed as Fostering Openness in Education: considerations for sustainable policy making, at OER19, in which over 20 participants from Ireland, England, Scotland, Austria, the Netherlands, Australia and Spain took part. For them, the key processes for fostering the co-creation of OE policies include involving communities of practice, using spaces at global conferences, and holding consultations and roundtables to discuss the policy at different stages. When discussing the policy context, the participants mentioned the importance of acknowledging the voices of diverse groups to ensure inclusivity, considering the level of access to technological infrastructure. When talking about Policy Design Partners, the participants agreed that educators, policy makers, librarians, learning technologists and education experts need to be involved, while others mentioned the need to include learners. While discussing opportunities and challenges, the participants mentioned collaboration, innovation, chances to flourish, and the improvement of quality of and access to education as key opportunities, while they highlighted as challenges the commodification of education and conflicts of interest and agendas in negotiations between institutions and technology suppliers.

Pic by Virginia Rodés

In relation to the key elements of an OE policy, the participants highlighted transparent practices and bench-learning from existing policies in order to include accreditation and recognition of Open Learning, as well as elements that enable measurement of the policy’s impact, since impact data – for example student success rates, uptake rates, learner engagement and the number of resources created and used – can be reused as evidence by other institutions willing to develop their own policies. This evidence can also support the recognition of educators’ good practices, benefiting two groups of key stakeholders, learners and society as a whole, through the provision of Open Content. Finally, in relation to risks, the participants mentioned lobbying by commercial textbook publishers and by educational corporations taking advantage of Open Content to profit commercially.

From Galway to London

Following the Galway workshop, we have reviewed and compared the outcomes of the three workshops and found some fascinating results. Regarding processes, in the Rome pilot most of the discussion focused on the co-creation process, as for those participants policy-making was most closely related to governance processes and senior management activities. For the groups in Warsaw, it was key to connect OE with other educational reforms and to align it with their Open Government Partnership strategies, while in Galway the keyword was collaboration, as participants saw the opportunity to foster collective ownership when a policy is co-created. Regarding the policy context, for the groups in Rome the need was to promote innovation to enhance the quality of education in a context of overcrowded classrooms; in Warsaw, much of the discussion focused on the need for content in national languages; and in Galway the key idea was inclusion and diversity, to provide learners with the content they need. When discussing Policy Design Partners, the participants in Rome highlighted the importance of involving international OE experts, the group in Warsaw mentioned learning technologists and copyright experts, while in Galway librarians and academics were mentioned. In relation to policy opportunities, the groups in Rome mentioned access to quality educational materials and opportunities for distance learning; in Warsaw, OE policies were seen as a means to defeat the EU copyright reform; and in Galway, co-creation and collaboration to foster bottom-up policies were seen as a great advantage. As for the challenges, in Rome the biggest ones mentioned were the overcrowding of classrooms and the lack of flexibility for open learning accreditation, while in Warsaw the EU copyright reform and the ruthless publishers’ lobby were seen as major threats.
For the groups in Rome, Warsaw and Galway, the key elements were the accreditation of open learning and the recognition of open education practices for career progression. For the participants in Rome, the key evidence was good practice in the use and production of OER at an international level; in Warsaw, it was important to provide data on the cost-benefits of OER; and in Galway, success rates, uptake rates and learner engagement data were seen as key to fostering an OE policy. Finally, the key stakeholders for the group in Rome were learners, educators and universities; for the Warsaw group, governments were also key; and for the participants in Galway, the group extended to society as a whole. Regarding the risks, the group in Rome mentioned a lack of political understanding of openness, while the participants in Warsaw were concerned about the current wave of datafication, commodification and marketisation of education, and furthermore worried about the tactics used by publishers and ed-tech vendors/gurus, as these practices pose a potential danger not only to OE but to education in general – a concern widely echoed in the Galway session. It is interesting to see that in some cases the groups view the same elements from different perspectives: while the groups in Warsaw and Galway shared concerns regarding datafication and copyright, the participants in Rome were more concerned by the lack of IT literacies. It is also interesting that each group, without being connected, built on the others’ findings, and that for the whole international OE community it is key to foster sustainable OE policies that can provide evidence of good practices to promote the adoption of OE.

From London to Lisbon

Our next stop is Lisbon, where we will be holding another OE policy co-creation workshop at the CC Summit, so join us on Friday, May 10th from 3:30pm to 4:25pm.

Pic by Javiera Atenas


Next stops

If you think that your institution or a consortium of institutions may benefit from this open policy-making exercise, please get in touch with Leo Havemann <leo.havemann@open.ac.uk> or with Javiera Atenas <javiera.atenas@idatosabiertos.org>.

References

Haddad, W. D., & Demsky, T. (1995). Education policy-planning process: an applied framework (Fundamentals of Educational Planning 51). Paris: UNESCO International Institute for Educational Planning. Retrieved from http://www.unesco.org/education/pdf/11_200.pdf

Thompson, G., & Cook, I. (2014). Education policy-making and time. Journal of Education Policy, 29(5), 700–715. https://doi.org/10.1080/02680939.2013.875225

About the authors

Javiera Atenas: PhD in Education and co-coordinator of the Open Education Working Group, responsible for the promotion of Open Data, Open Policies and Capacity Building in Open Education. She is also a Senior Fellow of the Higher Education Academy and the Education Lead at the Latin American Initiative for Open Data [ILDA], as well as an academic and researcher with interests in the use of Open Data as Open Educational Resources and in critical pedagogy. @jatenas
Leo Havemann: a Digital Education Advisor at University College London and a postgraduate researcher in open education at the Open University. He is a co-ordinator of the M25 Learning Technology Group. His research interests include open educational practices, skills and literacies, blended learning, and technology-enhanced assessment and feedback. He has taught in HE in New Zealand and Australia, worked as a librarian in a London FE college, and worked in IT roles in the private sector. He has a Master’s degree from the University of Waikato. He can be followed as @leohavemann on Twitter.

EU Council backs controversial copyright crackdown

- April 15, 2019 in copyright, eu, Featured, Internet, News, Policy

The Council of the European Union today backed a controversial copyright crackdown in a ‘deeply disappointing’ vote that could impact all internet users. Six countries – Italy, Luxembourg, the Netherlands, Poland, Finland and Sweden – voted against the proposal, which has been opposed by 5 million people through a Europe-wide petition.
Three more nations abstained, but the UK voted for the crackdown and there were not enough votes for a blocking minority. The proposal is expected to lead to the introduction of ‘filters’ on sites such as YouTube, which will automatically remove content that could be copyrighted. While entertainment footage is most likely to be affected, academics fear it could also restrict the sharing of knowledge, and critics argue it will have a negative impact on freedom of speech and expression online. EU member states will have two years to implement the law, and the regulations are still expected to affect the UK despite Brexit. The Open Knowledge Foundation said the battle is not over, with the European elections providing an opportunity to elect ‘open champions’. Catherine Stihler, chief executive of the Open Knowledge Foundation, said:
“This is a deeply disappointing result which will have a far-reaching and negative impact on freedom of speech and expression online. The controversial crackdown was not universally supported, and I applaud those national governments which took a stand and voted against it. We now risk the creation of a more closed society at the very time we should be using digital advances to build a more open world where knowledge creates power for the many, not the few.

But the battle is not over. Next month’s European elections are an opportunity to elect a strong cohort of open champions at the European Parliament who will work to build a more open world.”

EU copyright vote a ‘massive blow’ for internet users

- March 26, 2019 in copyright, eu, Featured, Internet, News, Policy

MEPs have today voted to press ahead with a controversial copyright crackdown in a ‘massive blow’ for all internet users. Despite a petition with over 5 million signatures and scores of protests across Europe attended by tens of thousands of people, MEPs voted by 348 to 274 in favour of the changes. The changes are expected to lead to the introduction of ‘filters’ on sites such as YouTube, which will automatically remove content that could be copyrighted. While entertainment footage is most likely to be affected, academics fear it could also restrict the sharing of knowledge, and critics argue it will have a negative impact on freedom of speech and expression online. EU member states will have two years to implement the law, and the regulations are still expected to affect the UK despite Brexit. Catherine Stihler, chief executive of the Open Knowledge Foundation, said:
“This vote is a massive blow for every internet user in Europe. MEPs have rejected pleas from millions of EU citizens to save the internet, and chose instead to restrict freedom of speech and expression online. We now risk the creation of a more closed society at the very time we should be using digital advances to build a more open world where knowledge creates power for the many, not the few.

But while this result is deeply disappointing, the forthcoming European elections provide an opportunity for candidates to stand on a platform to seek a fresh mandate to reject this censorship.”

Open data governance and open governance: interplay or disconnect?  

- February 20, 2019 in Open Data, open data governance, Policy, research

Authors: Ana Brandusescu, Carlos Iglesias, Danny Lämmerhirt, Stefaan Verhulst (in alphabetical order)

The presence of open data is often listed as an essential requirement for “open governance”. For instance, an open data strategy is viewed as a key component of many action plans submitted to the Open Government Partnership. Yet little time is spent on assessing how open data itself is governed, or how it embraces open governance. For example, not much is known about whether the principles and practices that guide the opening up of government – such as transparency, accountability, user-centrism and ‘demand-driven’ design thinking – also guide decision-making on how to release open data.

At the same time, data governance has become more complex, and open data decision-makers face heightened concerns with regard to privacy and data protection. The recent implementation of the EU’s General Data Protection Regulation (GDPR) has generated increased awareness worldwide of the need to prevent and mitigate the risks of personal data disclosures, and that has also affected the open data community. Before opening up data, concerns about data breaches, the abuse of personal information, and the potential for malicious inference from publicly available data may have to be taken into account. In turn, questions of how to sustain existing open data programs, user-centrism, and publishing with purpose gain prominence.

To better understand the practices and challenges of open data governance, we outlined a research agenda in an earlier blog post. Since then, and perhaps as a result, governance has emerged as an important topic for the open data community. The audience attending the 5th International Open Data Conference (IODC) in Buenos Aires deemed the governance of open data to be the most important discussion topic.
For instance, discussions around the Open Data Charter principles during and prior to the IODC acknowledged the role of an integrated governance approach to data handling, sharing, and publication. Some conclude that the open data movement has brought about better governance, skills and technologies of public information management, which represent enormous long-term value for government. But what does open data governance look like?

Understanding open data governance

To expand our earlier exploration and broaden the community that considers open data governance, we convened a workshop at the Open Data Research Symposium 2018. Bringing together open data professionals, civil servants, and researchers, we focused on:
  • What is open data governance?
  • When can we speak of “good” open data governance, and
  • How can the research community help open data decision-makers toward “good” open data governance?
In this session, open data governance was defined as the interplay of rules, standards, tools, principles, processes and decisions that influence what government data is opened up, how and by whom. We then explored multiple layers that can influence open data governance. In the following, we illustrate possible questions to start mapping the layers of open data governance. As they reflect the experiences of session participants, we see them as starting points for fresh ethnographic and descriptive research on the daily practices of open data governance in governments.

Figure: Schema of an open data governance model

The Management layer

Governments may decide about the release of data at various levels. Studying the management side of data governance could look at decision-making methods and devices. For instance, one might analyse how governments gauge public interest in their datasets – through data request mechanisms, user research, or participatory workshops. What routine procedures do governments put in place to interact with other governments and the public? For instance, how do governments design routine processes for open data requests? How are disputes over open data release settled? How do governments enable the public to address non-publication? One might also study cost-benefit calculations and similar methodologies used to evaluate data, and how they inform governments about what data counts as crucial and is expected to bring returns and societal benefits.

Understanding open data governance would also require studying how open data creation, cleaning, and publication are themselves managed. Governments may choose to organise open data publication and maintenance in house, or seek collaborative approaches, as known from data communities like OpenStreetMap.

Another key component is funding and sustainability. Funding might influence management on multiple layers – from funding capacity building, to investing in staff innovations and alternative business models for government agencies that generate revenue from high-value datasets. What do these budget and sustainability models look like? How are open data initiatives currently funded, under what terms, for how long, by whom and for what? And how do governments reconcile the publication of high-value datasets with the need to provide income for public government bodies? These questions gain importance as governments move towards assessing and publishing high-value datasets.

Open governance and management: to what extent is management guided by open governance?
For instance, how participatory, transparent, and accountable are decision-making processes and devices? How do governments currently make space for more open governance in their management processes? Do governments practice more collaborative data management with communities, for example to maintain, update, verify government data?   

The Legal and Policy layer

The interplay between legal and policy frameworks: open data policies operate among other legal and policy frameworks, which can complement, enable, or limit the scope of open data. New frameworks such as the GDPR, but also existing right-to-information and freedom-of-expression frameworks, prompt the question of how the legal environment influences behaviour and daily decision-making around open data. To address such questions, one could study the discourse and interplay between open data policies and tangential policies such as smart city or digitalisation policies.

Implementation of law and policies: furthermore, how are open data frameworks designed to guide the implementation of open data? How do they address governmental devolution? Open data governance needs to stretch across all government levels to unlock data from all government levels. What approaches are being experimented with to coordinate the implementation of policies across jurisdictions and government branches? To which agencies do open data policies apply, and how do they enable or constrain choices around open data? Which agencies define and advance open data, and how does this influence the adoption and sustainability of open data initiatives?

Open governance of law and policy: besides studying the interaction of privacy protection, right to information, and open data policies, how could open data benefit from policies enabling open governance and civic participation? Do governments develop more integrated strategies for open governance and open data, and if so, what policies and legal mechanisms are in place? How do these laws and policies enable other aspects of open data governance, including more participatory management and more substantive, legally supported citizen participation?

The Technical and Standards layer

Governments may have different technical standards in place for data processing and publication, from producing data to quality assurance processes. Some research has looked into the ways data standards for open data alter the way governments process information. Others have argued that the development of data standards reflects how governments envisage citizens, primarily catering to tech-literate audiences. (Data) standards do not only represent, but intervene in, the way governments work. Therefore, they could substantially alter the ways government publishes information. Understood this way, how do standards enable resilience against change, particularly when facing shifting political leadership?

On the other hand, most government data systems are not designed for open data. Too often, governments struggle to transform huge volumes of government data into open data using manual methods. Legacy IT systems that were not built to support open data create additional challenges to developing technical infrastructure, and there is no single global solution to data infrastructure. How, then, could governments transform their technical infrastructure to allow them to publish open data efficiently?

Open governance and the technical/standards layer: if standards can be understood as bridge-building devices, or tools for cooperation, how could open governance inform the creation of technical standards? Do governments experiment with open standards, and if so, what standards are developed, to what end, and using what governance approach?

The Capacity layer

Staff innovations may play an important role in open data governance. What is the role of chief data officers in improving open data governance? Can the usual informal networks of open data curators within government, together with a few open data champions, make open data a success on their own? What role do these innovations play in making decisions about open data and personal data protection? Could governments rely solely on senior government officials to execute open data strategies? Who else is involved in the decision-making around open data release? What are the incentives and disincentives for officials to increase data sharing? As one session participant put it: “I have never experienced that a civil servant got promoted for sharing data”. This raises the question of whether and how governments currently use performance metrics that support opening up data. What other models could help reward data sharing and publication? In an environment of decreased public funding, are there opportunities for governments to integrate open data publication into existing engagement channels with the public? Open governance and capacity: Open governance may require capacities in government, but could also contribute new capacities. This applies to staff, but also to resources such as time or infrastructure. How do governments provide and draw capacity from open governance approaches, and what could be learnt for other open data governance approaches?

Next steps

With this map of data governance aspects as a starting point, we would like to conduct empirical research to explore how open data governance is practised. A growing body of ethnographic research suggests that tech innovations such as algorithmic decision-making, open data, or smart city initiatives are ‘multiples’ — meaning that they can be practised in many ways by different people, arising in various contexts. With such an understanding, we would like to develop empirical case studies to elicit how open data governance is practised. Our proposed research approach includes the following steps:
  • Universe mapping: Identifying public sector officials and civil servants involved in deciding how data gets managed, shared and published openly (this helps to get closer to the actual decision-makers, and to learn from them).
  • Describing how and on what basis (legal, organisational & bureaucratic, technological, financial, etc.) people make decisions on what gets published and why.
  • Observing and describing different approaches to open data governance, looking at factors that enable and limit the opening up of data.
  • Describing gaps and areas for improvement with regard to open data governance, as well as best practices.
This may surface how open data governance becomes salient for governments, under what circumstances and why. If you are a government official, or civil servant working with (open) data, and would like to share your experiences, we would like to hear from you!  

Celebrating the public domain in 2019

- January 29, 2019 in open culture, Open GLAM, OpenGLAM, Policy, Public Domain

2019 is a special year for the public domain, the out-of-copyright material that everyone is free to enjoy, share, and build upon without restriction. Normally, each year on the 1st of January a selection of works (books, films, artworks, musical scores and more) enters the public domain because their copyright expires – most commonly 70 years after the creator’s death, depending on where in the world you are. This year, for the first time in more than twenty years, new material entered the public domain in the US, namely all works that were published in the year 1923. Due to the 20-year copyright term extension enacted in 1998, the last release of new public domain material in the US was in 1998, for all works dating from 1922. But from now on, each year we can expect to see a new batch of material freed of copyright restrictions (so content from the year 1924 will become available from 2020 onwards, content from 1925 in 2021, and so on). This is good news for everyone, since the availability of such open cultural data enables citizens from across the world to enjoy this material, understand their cultural heritage and re-use it to produce new works of art. The Public Domain Review, an online journal & not-for-profit project dedicated to promoting and celebrating the public domain, curated their Class of 2019: a top pick of artists and writers whose works entered the public domain this year. A full overview of the 2019 release is available here. A great way to celebrate this public domain content in 2019 could be to organise events, workshops or hackathons using this material on Open Data Day, the annual celebration of open data on Saturday 2 March 2019. If you are planning an event, you can add it to the global map via the Open Data Day registration form.
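The date arithmetic behind these releases can be sketched in a few lines. This is a deliberate simplification: US works published between 1923 and 1977 get a 95-year term from publication and enter the public domain on 1 January of the following year, while life-plus-70 jurisdictions count from the creator’s death. Real copyright law has many exceptions (renewal requirements, unpublished works, wartime extensions) that this sketch ignores.

```python
def us_public_domain_year(publication_year: int) -> int:
    """Year a US work published 1923-1977 enters the public domain:
    the 95-year term expires, effective 1 January of the next year."""
    return publication_year + 96  # 95-year term + entry on 1 January

def life_plus_70_entry_year(death_year: int) -> int:
    """Entry year under a life-plus-70 term: 1 January after 70 full
    years have passed since the creator's death."""
    return death_year + 71

print(us_public_domain_year(1923))  # -> 2019
print(us_public_domain_year(1924))  # -> 2020
```

This reproduces the pattern described above: works from 1923 entered the US public domain in 2019, works from 1924 follow in 2020, and so on.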
Coinciding with this mass release of public domain works, the Public Domain Manifesto that was produced within the context of COMMUNIA, the European Thematic Network on the digital public domain, has now been made available via a renewed website at publicdomainmanifesto.org. Describing public domain material as “raw material from which new knowledge is derived and new cultural works are created”, the manifesto aims to stress the importance of the wealth of the public domain to both citizens and policy-makers, to make sure its legal basis remains strong and everyone will be able to access and reuse the material in the future. The manifesto describes the key principles that are needed to actively maintain the public domain and the voluntary commons in our society, for example keeping public domain works in the public domain by not claiming exclusive rights to technical reproductions of works. It also formulates a number of recommendations to protect the public domain from legal obstacles and to assure it can function to the benefit of education, cultural heritage and scientific research in a meaningful way. There are currently over 3,000 signatures of the manifesto, but additional support is important to strengthen the movement: you can show your support by signing the Public Domain Manifesto here.

What data counts in Europe? Towards a public debate on Europe’s high value data and the PSI Directive

- January 16, 2019 in Open Government Data, Open Standards, Policy, research

This blogpost was co-authored by Danny Lämmerhirt and Pierre Chrzanowski (*author note at the bottom). January 22 will mark a crucial moment for the future of open data in Europe. That day, the final trilogue between the European Commission, Parliament, and Council is planned to decide on the ratification of the updated PSI Directive. Among other things, the European institutions will decide what counts as ‘high value’ data. What essential information should be made available to the public, and how those data infrastructures should be funded and managed, are critical questions for the future of the EU. As we will discuss below, there are many ways one might envision the collective ‘value’ of those data. This is a democratic question, and we should not be satisfied by an ill-defined and overly broad proposal. We therefore propose to organise a public debate to collectively define what counts as high value data in Europe.

What does the PSI Directive say about high value datasets?

The European Commission provides several hints in the current revision of the PSI Directive on how it envisions high value datasets. They are determined by one of the following ‘value indicators’:
  • The potential to generate significant social, economic, or environmental benefits,
  • The potential to generate innovative services,
  • The number of users, in particular SMEs,
  • The revenues they may help generate,
  • The data’s potential for being combined with other datasets,
  • The expected impact on the competitive situation of public undertakings.
Given the strategic role of open data for Europe’s Digital Single Market, these indicators are not surprising. But as we will discuss below, there are several challenges in defining them, and there are different ways of understanding the importance of data. The annex of the PSI Directive also includes a list of preliminary high value data, drawing primarily from the key datasets defined by Open Knowledge International’s (OKI’s) Global Open Data Index, as well as the G8 Open Data Charter Technical Annex. See the proposed list in the table below. List of categories and high-value datasets:
Category Description
1. Geospatial Data Postcodes, national and local maps (cadastral, topographic, marine, administrative boundaries).
2. Earth observation and environment Space and in situ data (monitoring of the weather and of the quality of land and water, seismicity, energy consumption, the energy performance of buildings and emission levels).
3. Meteorological data Weather forecasts, rain, wind and atmospheric pressure.
4. Statistics National, regional and local statistical data with main demographic and economic indicators (gross domestic product, age, unemployment, income, education).
5. Companies Company and business registers (list of registered companies, ownership and management data, registration identifiers).
6. Transport data Public transport timetables of all modes of transport, information on public works and the state of the transport network including traffic information.
According to the proposal, regardless of who provides them, these datasets shall be available for free, machine-readable and accessible for download, and where appropriate, via APIs. The conditions for re-use shall be compatible with open standard licences.
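As a rough illustration of these conditions, the checklist below flags which PSI-style requirements a dataset description fails to meet. Everything here is an assumption for illustration: the licence identifiers, the accepted formats, and the example record are not definitions taken from the directive.

```python
# Illustrative sets, not the directive's definitions.
OPEN_LICENCES = {"CC0-1.0", "CC-BY-4.0", "ODbL-1.0"}
MACHINE_READABLE = {"csv", "json", "xml", "geojson"}

def psi_conditions_met(dataset: dict) -> list:
    """Return the list of PSI-style conditions the dataset fails:
    free of charge, machine-readable, downloadable or via API,
    and under an open standard licence."""
    failures = []
    if dataset.get("fee", 0) != 0:
        failures.append("not free of charge")
    if dataset.get("format", "").lower() not in MACHINE_READABLE:
        failures.append("not machine-readable")
    if not (dataset.get("download_url") or dataset.get("api_url")):
        failures.append("no bulk download or API access")
    if dataset.get("licence") not in OPEN_LICENCES:
        failures.append("licence not an open standard licence")
    return failures

# Hypothetical record for a company register (URL is made up).
company_register = {
    "name": "company register",
    "fee": 0,
    "format": "CSV",
    "download_url": "https://example.org/companies.csv",
    "licence": "CC-BY-4.0",
}
print(psi_conditions_met(company_register))  # -> []
```

A real assessment, such as the Data Census mentioned later in this post, involves qualitative judgement as well; a checklist like this only captures the mechanically verifiable part of the conditions.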

Towards a public debate on high value datasets at EU level

There have been attempts by EU Member States to define what constitutes high-value data at national level, with different results. In Denmark, basic data has been defined as the five types of core information that public authorities use in their day-to-day case processing and should release. In France, the law for a Digital Republic aims to make available reference datasets that have the greatest economic and social impact. In Estonia, the country relies on the X-Road infrastructure to connect core public information systems, but most of the data remains restricted. Now is the time for a shared and common definition of what constitutes high-value datasets at EU level. And this implies an agreement on how we should define them. However, as it stands, there are several issues with the value indicators that the European Commission proposes. For example, how does one define the data’s potential for innovative services? How can one confidently attribute revenue gains to the use of open data? How does one assess and compare the social, economic, and environmental benefits of opening up data? Anyone designing these indicators must be very cautious, as metrics to compare social, economic, and environmental benefits may come with methodological biases. Research has found, for example, that comparing economic and environmental benefits can unfairly favour data of economic value at the expense of fuzzier social benefits, as economic benefits are often more easily quantifiable and definable by default. One way of debating high value datasets could be to discuss what data currently gets published by governments and why. For instance, with their Global Open Data Index, Open Knowledge International has long advocated for the publication of disaggregated, transactional spending figures.
Another example is OKI’s Open Data For Tax Justice initiative, which sought to influence the requirements for multinational companies to report their activities in each country (so-called ‘Country-By-Country Reporting’), and to influence a standard for publicly accessible key data. A public debate on high value data should critically examine the European Commission’s considerations regarding the distortion of competition. What market dynamics are engendered by opening up data? To what extent do existing markets rely on scarce and closed information? Does closed data bring about market failure, as some argue (Zinnbauer 2018)? Could it otherwise hamper fair price mechanisms (for a discussion of these dynamics in open access publishing, see Lawson and Gray 2016)? How would open data change existing market dynamics? Which actors claim that opening data could cause market distortion, and whose interests do they represent? Lastly, the European Commission does not yet consider cases of government agencies generating revenue from selling particularly valuable data. The Dutch national company register has long been such a case, as has the German Weather Service. Beyond considering competition, a public debate around high value data should take into account how marginal cost recovery regimes currently work.

What we want to achieve

For these reasons, we want to organise a public discussion to collectively define:
  1. What should count as a high value dataset, and based on what criteria,
  2. What information high value datasets should include,
  3. What the conditions for access and re-use should be.
The PSI Directive will set the baseline for open data policies across the EU. We are therefore at a critical moment to define what European societies value as key public information. What is at stake is not only a question of economic impact, but the question of how to democratise European institutions, and the role the public can play in determining what data should be opened.

How you can participate

  1. We will use the Open Knowledge forum as the main channel for coordination, exchange of information and debate. To join the debate, please add your thoughts to this thread or feel free to start a new discussion for specific topics.
  2. We will gather proposals for high value datasets in this spreadsheet. Please feel free to use it as a discussion document, where we can crowdsource alternative ways of valuing data.
  3. We will use the PSI Directive Data Census to assess the openness of high value datasets.
We also welcome any references to scientific papers, blogposts, etc. discussing the issue of high-value datasets. Once we have gathered suggestions for high value datasets, we would like to assess how open they currently are. This will help to provide European countries with a diagnosis of the openness of key data. Author note: Danny Lämmerhirt is a senior researcher on open data, data governance, data commons, and metrics to improve open governance. He formerly worked with Open Knowledge International, where he led its research activities, including the methodology development of the Global Open Data Index 2016/17. His work focuses, among others, on the role of metrics for open government, and the effects metrics have on the way institutions work and make decisions. He has supervised and edited several pieces on this topic, including the Open Data Charter’s Measurement Guide. Pierre Chrzanowski is a Data Specialist with the World Bank Group and a co-founder of the Open Knowledge France local group. As part of his work, he developed the Open Data for Resilience Initiative (OpenDRI) Index, a tool to assess the openness of key datasets for disaster risk management projects. He has also participated in the impact assessment prior to the new PSI Directive proposal and has contributed to the Global Open Data Index as well as the Web Foundation’s Open Data Barometer.

Europe’s proposed PSI Directive: A good baseline for future open data policies?

- June 21, 2018 in eu, licence, Open Data, Open Government Data, Open Standards, Policy, PSI, research

Some weeks ago, the European Commission proposed an update of the PSI Directive**. The PSI Directive regulates the reuse of public sector information (including administrative government data), and has important consequences for the development of Europe’s open data policies. Like every legislative proposal, the PSI Directive proposal is open for public feedback until July 13. In this blog post Open Knowledge International presents what we think are necessary improvements to make the PSI Directive fit for Europe’s Digital Single Market. In a guest blogpost Ton Zijlstra outlined the changes to the PSI Directive. Another blog post by Ton Zijlstra and Katleen Janssen helps to understand the historical background and puts the changes into context. Whilst improvements have been made, we think the current proposal is a missed opportunity, does not support the creation of a Digital Single Market, and can pose risks for open data. In what follows, we recommend changes to the European Parliament and the European Council. We also discuss actions civil society may take to engage with the directive in the future, and explain the reasoning behind our recommendations.

Recommendations to improve the PSI Directive

Based on our assessment, we urge the European Parliament and the Council to amend the proposed PSI Directive to ensure the following:
  • When defining high-value datasets, the PSI Directive should not rule out data generated under market conditions. A stronger requirement must be added to Article 13 to make assessments of economic costs transparent, and weigh them against broader societal benefits.
  • The public must have access to the methods, meeting notes, and consultations to define high value data. Article 13 must ensure that the public will be able to participate in this definition process to gather multiple viewpoints and limit the risks of biased value assessments.
  • Beyond tracking proposals for high-value datasets in the EU’s Interinstitutional Register of Delegated Acts, the public should be able to suggest new delegated acts for high-value datasets.  
  • The PSI Directive must make clear what “standard open licences” are, by referencing the Open Definition, and explicitly recommending the adoption of Open Definition compliant licences (from Creative Commons and Open Data Commons) when developing new open data policies. The directive should give preference to public domain dedication and attribution licences in accordance with the LAPSI 2.0 licensing guidelines.
  • Governments of EU member states that already have policies on specific licences in use should be required to add legal compatibility tests with other open licences to these policies. We suggest following the recommendations outlined in the LAPSI 2.0 resources to run such compatibility tests.
  • High-value datasets must be reusable with the least restrictions possible, subject at most to requirements that preserve provenance and openness. Currently the European Commission risks creating use silos if governments are allowed to add “any restrictions on re-use” to the use terms of high-value datasets.
  • Publicly funded undertakings should only be able to charge marginal costs.
  • Public undertakings, publicly funded research facilities and non-executive government branches should be required to publish data referenced in the PSI Directive.

Conformant licences according to the Open Definition, opendefinition.org/licenses

Our recommendations do not pose unworkable requirements or a disproportionately high administrative burden, but are essential to realise the goals of the PSI Directive with regard to:
  1. Increasing the amount of public sector data available to the public for re-use,
  2. Harmonising the conditions for non-discrimination, and re-use in the European market,
  3. Ensuring fair competition and easy access to markets based on public sector information,
  4. Enhancing cross-border innovation, and an internal market where Union-wide services can be created to support the European data economy.

Our recommendations, explained: What would the proposed PSI Directive mean for the future of open data?

Publication of high-value data

The European Commission proposes to define a list of ‘high value datasets’ that shall be published under the terms of the PSI Directive. This includes publishing datasets in machine-readable formats, under standard open licences, and in many cases free of charge, except when high-value datasets are collected by public undertakings in environments where free access to data would distort competition. “High value datasets” are defined as documents that bring socio-economic benefits, “notably because of their suitability for the creation of value-added services and applications, and the number of potential beneficiaries of the value-added services and applications based on these datasets”. The EC also makes reference to existing high value datasets, such as the list of key data defined by the G8 Open Data Charter. Identifying high-value data poses at least three problems:
  1. High-value datasets may be unusable in a Digital Single Market: The EC may “define other applicable modalities”, such as “any conditions for re-use”. There is a risk that a list of EU-wide high value datasets also includes use restrictions violating the Open Definition. Given that a list of high value datasets will be transposed by all member states, adding “any conditions” may significantly hinder the reusability of datasets and the ability to combine them.
  2. Defining the value of data is not straightforward. Recent papers, from Oxford University to Open Data Watch and the Global Partnership for Sustainable Development Data, demonstrate disagreement about what data’s “value” is. What counts as high value data should not only be based on quantitative indicators such as growth indicators, numbers of apps or numbers of beneficiaries, but should also use qualitative assessments and expert judgement from multiple disciplines.
  3. Public deliberation and participation is key to defining high value data and avoiding biased value assessments. Impact assessments and cost-benefit calculations come with their own methodological biases, and can unfairly favour data with economic value at the expense of fuzzier social benefits. Currently, the PSI Directive does not allow data created under market conditions to be considered high value data if this would distort market conditions. We recommend that the PSI Directive add a stronger requirement to weigh economic costs against societal benefits, drawing from multiple assessment methods (see point 2). The criteria, methods, and processes used to determine high value must be transparent and accessible to the broader public, to enable the public to negotiate benefits and to reflect the viewpoints of many stakeholders.

Expansion of scope

The new PSI Directive takes into account data from “public undertakings”. This includes services in the general interest entrusted to entities outside of the public sector, over which government maintains a high degree of control. The PSI Directive also includes data from non-executive government branches (i.e. from the legislative and judiciary branches of government), as well as data from publicly funded research. Opportunities and challenges include:
  • None of the data holders which the PSI Directive plans to include are obliged to publish data. It is at their discretion to publish data; only if they choose to publish data must they follow the guidelines of the proposed PSI Directive.
  • The PSI Directive wants to keep administrative costs low. All above mentioned data sectors are exempt from data access requests.
  • In summary, the proposed PSI Directive leaves too much space for individual choices about whether to publish data and has no “teeth”. To accelerate the publication of general interest data, the PSI Directive should oblige data holders to publish data. Waiting several years to make the publication of this data mandatory, as happened with the first version of the PSI Directive, risks significantly hampering the availability of key data that is important for accelerating growth in Europe’s data economy.
  • For research data in particular, only data that is already published should fall under the new directive. Even though the PSI Directive will require member states to develop open access policies, the implementation thereof should be built upon the EU’s recommendations for open access.

Legal incompatibilities may jeopardise the Digital Single Market

Most notably, the proposed PSI Directive does not address problems around licensing, which are a major impediment to Europe’s Digital Single Market. Europe’s data economy can only benefit from open data if licence terms are standardised. This allows data from different member states to be combined without legal issues, enables cross-country applications, and sparks innovation. Europe’s licensing ecosystem is a patchwork of many (possibly conflicting) terms, creating use silos and legal uncertainty. But the current proposal not only speaks vaguely about standard open licences and makes national policies responsible for adding “less restrictive terms than those outlined in the PSI Directive”. It also contradicts its aim of smoothing the Digital Single Market by encouraging the creation of bespoke licences, suggesting that governments may add new licence terms with regards to real-time data publication. Currently the PSI Directive would allow the European Commission to add “any conditions for re-use” to high-value datasets, thereby encouraging the creation of legal incompatibilities (see Article 13 (4.a)). We strongly recommend that the PSI Directive draw on the EU co-funded LAPSI 2.0 recommendations to understand licence incompatibilities and ensure a compatible open licence ecosystem. I’d like to thank Pierre Chrzanowksi, Mika Honkanen, Susanna Ånäs, and Sander van der Waal for their thoughtful comments while writing this blogpost. Image adapted from Max Pixel. ** Its official name is Directive 2003/98/EC on the reuse of public sector information.

Europe’s proposed PSI Directive: A good baseline for future open data policies?

- June 21, 2018 in eu, licence, Open Data, Open Government Data, Open Standards, Policy, PSI, research

Some weeks ago, the European Commission proposed an update of the PSI Directive**. The PSI Directive regulates the reuse of public sector information (including administrative government data), and has important consequences for the development of Europe’s open data policies. Like every legislative proposal, the PSI Directive proposal is open for public feedback until July 13. In this blog post Open Knowledge International presents what we think are necessary improvements to make the PSI Directive fit for Europe’s Digital Single Market.    In a guest blogpost Ton Zijlstra outlined the changes to the PSI Directive. Another blog post by Ton Zijlstra and Katleen Janssen helps to understand the historical background and puts the changes into context. Whilst improvements are made, we think the current proposal is a missed opportunity, does not support the creation of a Digital Single Market and can pose risks for open data. In what follows, we recommend changes to the European Parliament and the European Council. We also discuss actions civil society may take to engage with the directive in the future, and explain the reasoning behind our recommendations.

Recommendations to improve the PSI Directive

Based on our assessment, we urge the European Parliament and the Council to amend the proposed PSI Directive to ensure the following:
  • When defining high-value datasets, the PSI Directive should not rule out data generated under market conditions. A stronger requirement must be added to Article 13 to make assessments of economic costs transparent, and weigh them against broader societal benefits.
  • The public must have access to the methods, meeting notes, and consultations to define high value data. Article 13 must ensure that the public will be able to participate in this definition process to gather multiple viewpoints and limit the risks of biased value assessments.
  • Beyond tracking proposals for high-value datasets in the EU’s Interinstitutional Register of Delegated Acts, the public should be able to suggest new delegated acts for high-value datasets.  
  • The PSI Directive must make clear what “standard open licences” are, by referencing the Open Definition, and explicitly recommending the adoption of Open Definition compliant licences (from Creative Commons and Open Data Commons) when developing new open data policies. The directive should give preference to public domain dedication and attribution licences in accordance with the LAPSI 2.0 licensing guidelines.
  • Government of EU member states that already have policies on specific licences in use should be required to add legal compatibility tests with other open licences to these policies. We suggest to follow the recommendations outlined in the LAPSI 2.0 resources to run such compatibility tests.
  • High-value datasets must be reusable with the least restrictions possible, subject at most to requirements that preserve provenance and openness. Currently the European Commission risks to create use silos if governments will be allowed to add “any restrictions on re-use” to the use terms of high-value datasets.  
  • Publicly funded undertakings should only be able to charge marginal costs.
  • Public undertakings, publicly funded research facilities and non-executive government branches should be required to publish data referenced in the PSI Directive.

Conformant licences according to the Open Definition, opendefinition.org/licenses

Our recommendations do not pose unworkable requirements or disproportionately high administrative burden, but are essential to realise the goals of the PSI directive with regards to:
  1. Increasing the amount of public sector data available to the public for re-use,
  2. Harmonising the conditions for non-discrimination, and re-use in the European market,
  3. Ensuring fair competition and easy access to markets based on public sector information,
  4. Enhancing cross-border innovation, and an internal market where Union-wide services can be created to support the European data economy.

Our recommendations, explained: What would the proposed PSI Directive mean for the future of open data?

Publication of high-value data

The European Commission proposes to define a list of ‘high value datasets’ that shall be published under the terms of the PSI Directive. This includes to publish datasets in machine-readable formats, under standard open licences, in many cases free of charge, except when high-value datasets are collected by public undertakings in environments where free access to data would distort competition. “High value datasets” are defined as documents that bring socio-economic benefits, “notably because of their suitability for the creation of value-added services and applications, and the number of potential beneficiaries of the value-added services and applications based on these datasets”. The EC also makes reference to existing high value datasets, such as the list of key data defined by the G8 Open Data Charter. Identifying high-quality data poses at least three problems:
  1. High-value datasets may be unusable in a digital Single Market: The EC may “define other applicable modalities”, such as “any conditions for re-use”. There is a risk that a list of EU-wide high value datasets also includes use restrictions violating the Open Definition. Given that a list of high value datasets will be transposed by all member states, adding “any conditions” may significantly hinder the reusability and ability to combine datasets.
  2. Defining value of data is not straightforward. Recent papers, from Oxford University, to Open Data Watch and the Global Partnership for Sustainable Development Data demonstrate disagreement what data’s “value” is. What counts as high value data should not only be based on quantitative indicators such as growth indicators, numbers of apps or numbers of beneficiaries, but use qualitative assessments and expert judgement from multiple disciplines.
  3. Public deliberation and participation is key to define high value data and to avoid biased value assessments. Impact assessments and cost-benefit calculations come with their own methodical biases, and can unfairly favour data with economic value at the expense of fuzzier social benefits. Currently, the PSI Directive does not consider data created under market conditions to be considered high value data if this would distort market conditions. We recommend that the PSI Directive adds a stronger requirement to weigh economic costs against societal benefits, drawing from multiple assessment methods (see point 2). The criteria, methods, and processes to determine high value must be transparent and accessible to the broader public to enable the public to negotiate benefits and to reflect the viewpoints of many stakeholders.

Expansion of scope

The new PSI Directive takes into account data from “public undertakings”. These are entities outside of the public sector that are entrusted with services in the general interest and over which government maintains a high degree of control. The PSI Directive also includes data from non-executive government branches (i.e. from the legislative and judiciary branches), as well as data from publicly funded research. Opportunities and challenges include:
  • None of the data holders newly covered by the PSI Directive is obliged to publish data; publication remains at their discretion. Only if they choose to publish data must they follow the guidelines of the proposed PSI Directive.
  • To keep administrative costs low, the PSI Directive exempts all of the above-mentioned data sectors from data access requests.
  • In summary, the proposed PSI Directive leaves too much space for individual choice and has no “teeth”. To accelerate the publication of general interest data, the PSI Directive should oblige data holders to publish data. Waiting several years to make the publication of this data mandatory, as happened with the first version of the PSI Directive, risks significantly hampering the availability of key data that is important for accelerating growth in Europe’s data economy.
  • For research data in particular, only data that is already published should fall under the new directive. Even though the PSI Directive will require member states to develop open access policies, their implementation should build upon the EU’s recommendations for open access.

Legal incompatibilities may jeopardise the Digital Single Market

Most notably, the proposed PSI Directive does not address the problems around licensing which are a major impediment to Europe’s Digital Single Market. Europe’s data economy can only benefit from open data if licence terms are standardised. Standardisation allows data from different member states to be combined without legal issues, and makes it possible to merge datasets, create cross-country applications, and spark innovation. Europe’s licensing ecosystem is a patchwork of many (possibly conflicting) terms, creating use silos and legal uncertainty.

But the current proposal not only speaks vaguely about standard open licences, making national policies responsible for adding “less restrictive terms than those outlined in the PSI Directive”. It also contradicts its aim of smoothing the Digital Single Market by encouraging the creation of bespoke licences, suggesting that governments may add new licence terms with regard to real-time data publication. Currently the PSI Directive would allow the European Commission to add “any conditions for re-use” to high-value datasets, thereby encouraging the creation of legal incompatibilities (see Article 13 (4.a)). We strongly recommend that the PSI Directive draw on the EU co-funded LAPSI 2.0 recommendations to understand licence incompatibilities and ensure a compatible open licence ecosystem.

I’d like to thank Pierre Chrzanowski, Mika Honkanen, Susanna Ånäs, and Sander van der Waal for their thoughtful comments while writing this blogpost.

Image adapted from Max Pixel

** Its official name is Directive 2003/98/EC on the reuse of public sector information.

New Report: Avoiding data use silos – How governments can simplify the open licensing landscape

- December 14, 2017 in licence, Open Data, Policy, research

Licence proliferation continues to be a major challenge for open data. When licensors decide to create custom licences instead of using standard open licences, this creates a number of problems. Users of open data may find it difficult and cumbersome to understand all the legal arrangements. More importantly, the legal uncertainties and compatibility issues created by many different licences can have chilling effects on the reuse of data. This can create ‘data use silos’: situations where users are legally allowed to combine only some data with one another, because most data is legally impossible to use under the same terms. This counteracts efforts such as the European Digital Single Market strategy, prevents the free flow of (public sector) information and impedes the growth of data economies. Standardised licences can smooth this process by clearly stating usage rights.

Our latest report, ‘Avoiding data use silos – How governments can simplify the open licensing landscape’, explains why reusable standard licences, or putting data in the public domain, are the best options for governments. While the report focuses on government, many of the recommendations also apply to other public sector bodies and to publishers of works more broadly.

The lack of centralised coordination within governments is a key driver of licence proliferation. Different phases of the licensing process influence which open licences governments apply – including clearance of copyright, policy development, and the development and application of individual licences. Our report also outlines how governments can harmonise decision-making around open licences and ensure their compatibility. We aim to provide the ground for a renewed discussion of what good open licensing means – and to inspire follow-up research on specific blockages to open licensing.
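To make the ‘data use silo’ problem concrete, the sketch below models a reuser checking whether two datasets can legally be combined into a derived work. The licence identifiers and the compatibility matrix are illustrative assumptions for this example only – real compatibility determinations require legal analysis of the actual licence texts.

```python
# Illustrative sketch, not legal advice: which licence a combined dataset
# could carry, given the licences of its two inputs. None marks a pair
# that cannot be combined under a single set of terms - a 'data use silo'.
COMPATIBILITY = {
    ("CC0-1.0", "CC0-1.0"): "CC0-1.0",
    ("CC0-1.0", "CC-BY-4.0"): "CC-BY-4.0",
    ("CC-BY-4.0", "CC-BY-4.0"): "CC-BY-4.0",
    ("CC0-1.0", "Custom-NC"): None,    # bespoke non-commercial terms
    ("CC-BY-4.0", "Custom-NC"): None,  # conflict: attribution vs. custom terms
}

def combined_licence(a: str, b: str):
    """Return the licence a combined dataset could carry, or None."""
    # The matrix is symmetric, so look the pair up in both orders.
    return COMPATIBILITY.get((a, b)) or COMPATIBILITY.get((b, a))

print(combined_licence("CC0-1.0", "CC-BY-4.0"))    # CC-BY-4.0
print(combined_licence("CC-BY-4.0", "Custom-NC"))  # None
```

With a handful of standard licences the matrix stays small and predictable; every bespoke licence a government adds multiplies the pairs that must be assessed, which is exactly the proliferation problem the report describes.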
We propose the following best practices and recommendations for governments who wish to make their public sector information as reusable as possible:
  1. Publish clear notices that concisely inform users about their rights to reuse, combine and distribute information where data is exempt from copyright or similar rights.
  2. Align licence policies via inter-ministerial committees and collaborations with representative bodies for lower administrative levels. Consider appointing an agency overseeing and reviewing licensing decisions.
  3. Precisely define reusable standard licences in your policy tools. Clearly define a small number of highly compatible legal solutions. We recommend placing data in the public domain using Creative Commons Zero, or applying a standard open licence such as Creative Commons BY 4.0.
  4. If you still opt to use custom licences, carefully verify whether any provisions cause incompatibilities with other licences. Add compatibility statements that explicitly name the licences and licence versions compatible with the custom licence, and keep the licence text short, simple and reader-friendly.
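Recommendations 1 and 3 can be illustrated with a small, hypothetical machine-readable rights notice attached to a dataset record. The field names below loosely echo DCAT conventions but are our own illustration, not a mandated schema:

```python
import json

def rights_notice(title: str, licence_id: str, licence_url: str) -> str:
    """Build a minimal machine-readable rights notice for a dataset.

    Field names loosely follow DCAT conventions; they are illustrative,
    not a mandated schema.
    """
    notice = {
        "title": title,
        "license": licence_url,    # canonical URL of the licence text
        "licence_id": licence_id,  # short SPDX-style identifier
        "rights": "Free to reuse, combine and redistribute "
                  "under the terms of the named licence.",
    }
    return json.dumps(notice, indent=2)

# Hypothetical dataset, published under a standard open licence:
print(rights_notice(
    "Example address register",
    "CC-BY-4.0",
    "https://creativecommons.org/licenses/by/4.0/",
))
```

Because the notice names a standard licence by identifier and URL rather than embedding bespoke terms, reusers (and their tools) can determine their rights without reading custom legal text.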

Custom licences used across a sample of 20 governments