You are browsing the archive for Opinion.

The benefits of Open Transport Data

- April 10, 2014 in News, Opinion

The UITP has published an overview of Open Data in Public Transport and its benefits. Leave your thoughts on our mailing list. You can find the full article at

Open Knowledge Foundation Austria MeetUp on Open Transport Data took place on 14.11.2013 in Vienna

- December 2, 2013 in austria, City of Linz, City of Vienna, Events, Featured, OKF-AT, Open Data, Open Transport, open transport data, Opinion, transport information system, Vienna

About two weeks ago, on the 14th of November 2013, an Open Knowledge Foundation Austria (OKF-AT) MeetUp took place in the late afternoon under the title and topic Open Transport (Data). The MeetUp was hosted at Fabasoft (one of the larger Austrian IT vendors) in Vienna, Austria. The idea of the event was to present and discuss the current status of transport information systems in Austria and of Open Data / Open Transport, to discuss how this sector can become more open, and to take a look at the planned next steps for open transport in Austria.
Photo: Copyright Fabasoft AG; photographer: Peter Ehringer
After a short introduction by Helmuth Bronnenmayer (board member of OKF-AT), who told the roughly 35 people in the audience about the objectives and activities of the Austrian chapter of the Open Knowledge Foundation (see slides), the topic of the MeetUp was introduced by Peter Parycek (Danube University Krems) and Robert Harm (open3), who gave an insight into the area of Open Transport in the MeetUp keynote (see slides). Peter introduced the idea of open transport (data) and showed why it is important to open up transport data, among other things to enable cross-border mobility services and apps. He also introduced the ODP AT, the Open Data Portal Austria project, which is developing an open data portal for all data in Austria beyond government data (science, education, industry, NGOs & NPOs, citizens, OpenGLAM, et al.) and which will be launched in May / June 2014.
The opening keynote was followed by three short presentations. The first, by Egon Pischinger of Linz AG (the public transport provider in the City of Linz), covered the City of Linz, where transport data is open but services / applications are still maintained by the city itself, although several apps built on Linz data already exist today (see his slides). A big discussion between Egon and the audience revolved around the question of why Linz AG thinks it still needs to create and maintain transport apps itself rather than becoming a pure data provider. Linz AG argued that it does not yet see really stable maintenance of the existing open transport apps for Linz.
The second talk was held by Rainer Haslberger of the City of Vienna (see slides), where transport data is also open in the city's data catalogue. In a very comprehensive talk, Rainer explained the whole landscape of the Austrian transport information and data system. It mainly consists of: A) the GIP system, the basic infrastructure covering all transport / traffic routes in Austria – a system that for the first time brings together all data on these traffic routes across the nine provinces of Austria at a centralised national level, and that has to be adapted to the INSPIRE directive as a next step (and could thereby also become open data); and B) the VAO (Verkehrsauskunft Österreich, the information system on traffic and transport in Austria), which puts additional information on top of the GIP system, such as traffic flow and the schedules of trains, buses and other public transport. The VAO data is NOT open data at the moment, as several stakeholders with different data sets are involved in the project, from public administration to companies such as the Austrian Broadcasting Corporation and the Austrian automobile club. The pity here is that open data is not even a topic of discussion in the VAO project group at the moment – but perhaps the MeetUp and subsequent meetings and discussions can change this a bit!
The third presentation was held by Denise Recheid of REEEP, about Open Transport Data in developing countries (see slides). Denise showcased the huge problems with traffic systems and traffic information in developing countries and mega-cities, but also pointed out the environmental side of these problems, e.g. regarding carbon emissions.
A comprehensive Q&A session and a networking session with catering and drinks completed the OKF-AT MeetUp on Open Transport (Data). So what are the findings of this event? Let's say the transport information and data system of Austria seems to be well on its way to being harmonised across our nine provinces, and some cities already provide open transport data (such as geoinformation, traffic routes or timetable information) in their respective data catalogues. BUT: A) there is no discussion of how open data could help with the environmental problems of transport in Austria; B) there are NO plans for opening up the data of the Austrian central transport information system; and C) there is little thought about following standardisation at the European level (beyond INSPIRE) to enable cross-border services on (open) transport data – so a lot of work is still ahead of us!

Forming an Open Authority in Cultural Heritage

- June 11, 2013 in Featured, Guest Blog Post, Opinion

The following post is by Lori Byrd Phillips, who served as the 2012 US Cultural Partnerships Coordinator for the Wikimedia Foundation and is now Digital Content Coordinator at The Children’s Museum of Indianapolis. Her research on Open Authority was recently published in Curator: The Museum Journal. You can learn more in this video from Ignite MCN and on her blog, “Defining Open Authority.”
No one knows everything, everyone knows something, all knowledge resides in humanity.—Pierre Lévy
Lori presenting at the MCN Ignite sessions – Michael P. Edson – CC BY 2.0

This often-quoted idea of collective intelligence holds true now even more than when it was first written 20 years ago, due in large part to the interconnected, social, and open digital worlds in which we live. Yet, in spite of the advances in online community organizing and crowdsourcing, many cultural institutions are still uncomfortable with the idea of being “open” – and not just “open” as in “access,” but “open” as in the co-creation of knowledge. This fear is rooted in the tension between traditional curatorial authority and the unknown consequences of community participation and user-generated content. I believe that we're often afraid of the things we don't understand, and that putting a name to something is the first step towards building understanding. So I coined the term “open authority” as a way to show that this scary idea of “open” isn't actually so scary after all.

It's no wonder that curators see themselves as the last bastions of legitimacy in this digital age. While new technologies allow user-generated content to grow exponentially by the day, curators just see the clutter. Meanwhile, others undermine the role of traditional curators by declaring that everyone's now a curator! But here's the thing: professional curators are needed now more than ever to make sense of this user-generated content. The curator's newfound relevance lies in being a facilitator of the dialogue happening on open platforms. One doesn't lose authority by becoming “open.” The expert's role becomes even more significant when they actively participate in the broader conversation that occurs after the content is freely available.

In finding a framework for open authority I was inspired by the Reggio Emilia educational approach, which holds as its core tenets a respect for the contributions and interests of the child and the importance of community collaboration in art and in life.
In a Reggio Emilia classroom the teacher never talks down to a child, but instead gets down on their level and works alongside them, letting the student guide the direction of learning. This is an already-existing vision of open authority that can be applied to museums: the expertise of the curator comes together with visitor insights for the benefit of the community as a whole. To achieve this, the goal must be to establish institutional respect for the role of the community's voice in the interpretation of our shared heritage.

So what really is open authority? Well, we know that openness, in both access and transparency, is needed to remain relevant in our insanely collaborative world. And authority is needed to bring expertise to all of that user-generated content. Maintaining authority and being open do not have to be mutually exclusive. Openness and authority are not an “either/or” thing; they are an “and.” And that's what open authority is: the coming together of museum expertise with meaningful contributions from our communities, both online and on-site. So the next time the question of open access comes up, be sure to soothe the fears of those around you. With open authority, it's not about giving up anything – it's about collaborating with our communities so that our institutional expertise can be made even better, together.

Talk at Re:Publica – Curating the Digital Commons

- May 14, 2013 in Events/Workshops, Featured, Opinion

Last week, the 2013 edition of the Re:Publica conference was held in Berlin. With more than 5,000 people attending, it is one of the biggest events around new media, journalism and activism. The OpenGLAM team was there to give a talk about the curation of the digital cultural commons. Together with Daniel Dietrich, chairman of the Open Knowledge Foundation Germany and member of the OpenGLAM working group, I prepared the talk, which is largely inspired by the recent post on OpenGLAM about Small Data in GLAMs. At the moment we are able to access such vast amounts of data that they are no longer comprehensible. We therefore need better infrastructure, access and tools to create the most value out of all this metadata and content.
We started the talk by explaining the notion of a commons: the cultural and natural resources accessible to all members of a society. The traditional notion of the environmental commons has been debated many times, often under the heading of 'the tragedy of the commons', because natural resources are not as non-rivalrous and non-excludable as we used to think. A digital commons, however, has the quality that when I make a copy of a dataset, any other person is still able to make that exact same copy; it will never deplete.
“Digital commons are defined as information and knowledge resources that are collectively created and owned or shared between or among a community and that are (generally freely) available to third parties. Thus, they are oriented to favor use and reuse, rather than to exchange as a commodity.” - Mayo Fuster Morell
The fact that these digital artefacts can be re-used by anybody is perhaps the greatest asset of the digital commons: everybody can curate, connect, annotate and remix these materials indefinitely. After an explanation of the difference between metadata and content (and how difficult the distinction often is!) and an overview of some leading open culture projects such as Europeana and the Digital Public Library of America, it became clear how much content we actually have access to at the moment. Europeana and the DPLA alone provide 30 million metadata records that all link to a digitised object. Wikimedia Commons and the Internet Archive give access to another 25 million media objects. How can a user make sense of that?

For that reason we need to stop thinking about just adding more data and creating huge databases. The commons needs to be structured and made accessible in a way that lets the user get meaningful results out of this content and data, and collect the data relevant to their research. Institutions and users should be able to easily create small data 'packages', for example one collecting all of Van Gogh's work. The internet is exceptionally well placed to bring together content in one place, something that would never be possible physically. At the same time we can provide relevant links between collections, artists, time periods and so on, so the user can explore more related content. This also comes down to good-quality metadata, something that is not always there at the moment – not surprising when combining data from thousands of cultural institutions. Finally, we need the tools that allow us to re-use the digital commons: to curate, annotate, visualise, mash up, and much more. Combined, users and cultural institutions can work together to create the most value out of this enormous amount of digitised content and data. For a video recording of the talk, click here.

Big Data vs. Small Data: What about GLAMs?

- May 2, 2013 in Featured, Opinion

Last week, co-founder of the Open Knowledge Foundation Rufus Pollock published the first blog post in a series on small data. In his post 'Forget Big Data, Small Data is the real revolution', Pollock writes:

“Meanwhile we risk overlooking the much more important story here, the real revolution, which is the mass democratisation of the means of access, storage and processing of data. This story isn't about large organisations running parallel software on tens of thousands of servers, but about more people than ever being able to collaborate effectively around a distributed ecosystem of information, an ecosystem of small data. [...] Size in itself doesn't matter – what matters is having the data, of whatever size, that helps us solve a problem or address the question we have. And when we want to scale up, the way to do that is through componentized small data: by creating and integrating small data “packages”, not building big data monoliths; by partitioning problems in a way that works across people and organizations, not through creating massive centralized silos. This next decade belongs to distributed models not centralized ones, to collaboration not control, and to small data not big data.”

How does this relate to the cultural sector? Europeana now offers access to more than 27 million metadata records, Wikimedia Commons has 16 million media files available, the Internet Archive 9 million objects, and last week the Digital Public Library of America launched with 2.5 million metadata records – and all are quickly expanding. This is a fantastic achievement, but this amount of material is incomprehensible for any person, and it is still just a fraction of all the digitised material, which in turn is only a fraction of what could be digitised. How to make sense of that? As Pollock describes, it is not about the size of your database: the real revolution is the mass democratisation of the means of access, storage and processing of data.
It is possible to create packages with the complete works of Shakespeare, beautiful paintings by Van Gogh, or a set of medieval maps – packages that are ready for re-use and can be linked to other sets of content for further exploration. One question that arises is: who should create these packages of data? Who decides what content should be put together? Should we leave this to the traditional 'experts', the curators and archivists, or do we need to let the community do this? The most logical answer to this question is: both – or better, together. The dialogue between public institutions and users has traditionally been very important, and when users have access to such vast amounts of content and metadata, guidance and curation become perhaps even more needed. At the same time these experts get the chance to work with thousands of contributors who can give feedback, enrich their data, link it, and work with it in ways the institution could never have imagined. For this reason – besides releasing content and data under an open license and providing a standardised, open technical infrastructure as described in the OpenGLAM principles – the open GLAM should be prepared to engage in the discussion and build value together with the community. Opening up data is not about dumping it online and never looking at it again; it is about a dialogue in which the institution tries as much as possible to send users on their way, only to see them wander off and explore paths and directions never seen before. We would love to hear your opinion on this topic. Please subscribe to the OpenGLAM mailing list to join the discussion.
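To make the idea of a componentized small data "package" concrete, here is a minimal sketch in the spirit of the Open Knowledge Foundation's Data Package format. The field names loosely mirror that specification, but the package name, the source list, and the helper function are hypothetical illustrations, not part of any real collection.

```python
# A minimal "small data" package descriptor in the spirit of the OKF
# Data Package format: a self-describing bundle collecting one curated
# slice of a larger collection (here: a hypothetical set of Van Gogh
# metadata records). All concrete values below are made up for illustration.
van_gogh_package = {
    "name": "van-gogh-paintings",            # hypothetical package name
    "title": "Paintings by Vincent van Gogh",
    "license": "CC0-1.0",                     # open license: anyone may reuse
    "sources": ["Europeana", "DPLA"],         # hypothetical providers
    "resources": [
        {
            "path": "paintings.csv",
            "format": "csv",
            "schema": {
                "fields": [
                    {"name": "title", "type": "string"},
                    {"name": "year", "type": "integer"},
                    {"name": "institution", "type": "string"},
                ]
            },
        }
    ],
}

def is_reusable(package):
    """A package is freely reusable if it carries a recognised open license."""
    return package.get("license") in {"CC0-1.0", "PDDL-1.0", "ODC-BY-1.0"}

print(is_reusable(van_gogh_package))  # True
```

Because the descriptor travels with the data, anyone receiving the package can check its license and schema automatically before linking it to other sets of content, which is exactly what makes such packages re-usable building blocks rather than monolithic dumps.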

Why the German Digital Library should learn from Europeana

- December 12, 2012 in Featured, Opinion

Launch of the DDB. Jill Cousins, Hermann Parzinger, Elke Harjes-Ecker, Matthias Harbort (from left to right) – Photo: Julia Hoppen

On the 29th of November 2012, the beta version of the German Digital Library (DDB) was officially launched. After five years of preparation and discussions with a large number of cultural institutions, it was finally time to bring it to the public. Hermann Parzinger, head of the Prussian Cultural Heritage Foundation, explained in the press release:

“The goal of the Deutsche Digitale Bibliothek (DDB) is to offer everyone unrestricted access to Germany’s cultural and scientific heritage, that is, access to millions of books, archived items, images, sculptures, pieces of music and other sound documents, as well as films and scores, from all over Germany”
To reach this goal, a lot of work still needs to be done. At the moment, around 5.5 million metadata records can be found in the portal, and around 3 million of them come from a single institution, the Landesarchiv Baden-Württemberg. Currently 90 institutions provide data to the library, and the three biggest organisations account for more than 80% of all records. The goal of the DDB is to include the metadata records of more than 30,000 German cultural institutions. In many ways, the German Digital Library is reminiscent of the Europeana project when it launched in 2008. At that time, France was responsible for about 50% of all records in the Europeana portal, and many countries were not present at all. In the past four years, Europeana has managed to include data from every EU country, and it continues to expand (see visualisation). The interface of the DDB is very similar to Europeana's as well: a simple search box combined with the possibility to filter the results in many different ways, for example by content provider, period, or location. Like Europeana, the DDB is a search portal that links the user to the actual digitised object on the institution's webpage; it only hosts the metadata.
Homepage of the German Digital Library

A very nice aspect of the German Digital Library is that the data providers have taken extra care with the quality of their metadata. This results in a very rich collection of information about the objects. Europeana, which currently has more than 20 million objects in its database, has suffered greatly from bad-quality metadata and has made improving it one of its top priorities for 2013. As the German Digital Library will become Germany's official national aggregator for Europeana, this will help greatly. Unfortunately, one major difference from the current Europeana project is how the DDB deals with copyright. Europeana has recently released all of its metadata records under a CC0 public domain waiver, making them free for anybody to use and reuse for any purpose without restrictions. This, combined with the new API (which the DDB aims to provide soon as well), has led to various kinds of reuse, for example by other national aggregators such as Hispana and by a number of applications that search the Europeana collection in innovative ways. Europeana also includes a search filter for the copyright status of the content, so users can restrict results to objects they can be sure are reusable without any legal implications. The German Digital Library is quite the opposite. Its Terms of Use state clearly that:
  1. The DDB and its data suppliers retain all copyright and other industrial property rights to the public, freely accessible (and free of charge) digital content, derivatives and metadata including layout, software and their content in line with the principle of „free access – rights reserved“.
  2. Any use of the digital content, derivatives and metadata for commercial purposes is hereby prohibited, unless approved by the title holder in individual cases. As a rule, the DDB is not the title holder itself, but only acts as an agency providing access to the digital content, derivatives and metadata, so that any further rights of utilisation must be acquired from the respective institute that has contributed said content.
These copyright restrictions make it very hard for users to do anything with the metadata from the DDB. Especially once the API is launched, it will be practically impossible for developers to create something with it, as they will constantly have to ask hundreds of different institutions whether it is allowed. When Europeana started, there was also no consensus on how to deal with the rights of the aggregated metadata, and it took four years to solve the issue. Over the last couple of years, the European Union, Europeana itself, and many other organisations have released reports and documents that clearly outline the advantages of open data for cultural institutions, as well as for society and research. This, combined with the fact that it is not possible to find any legal information about the digitised objects themselves on the DDB website – let alone a filter function – makes it really hard for users to decide whether or not they may use an object. It seems a strange move for the DDB to be so restrictive, especially as it is to become the official German aggregator for Europeana. Europeana has been very clear since last September that the rights to all the metadata provided have to be waived using the CC0 declaration. Furthermore, many objects from, for example, the Landesarchiv Baden-Württemberg can already be found on Europeana under a free license. With all of the world's heritage becoming available online, great new possibilities arise. Different collections can be connected and linked, and institutions can enrich their own data with the help of others'. The history of Germany can be found not only in German institutions but all over the world. By combining these different collections, historians can create a much more sophisticated history and find new stories and insights. This can only be achieved if the licenses used by the different institutions and aggregators allow it, and the DDB's terms of use clearly do not.
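The practical difference between the two approaches can be sketched in a few lines. Europeana attaches a machine-readable rights statement to each metadata record (via the edm:rights field), so a developer can filter programmatically; under a blanket "all rights reserved" policy like the DDB's, no such per-record decision is possible. The records below are invented for illustration, though the rights URIs themselves are real Creative Commons and RightsStatements.org identifiers.

```python
# Hypothetical aggregator records, each carrying a machine-readable
# rights URI in the style of Europeana's edm:rights field.
records = [
    {"title": "Medieval map of Swabia",
     "rights": "http://creativecommons.org/publicdomain/zero/1.0/"},
    {"title": "Archive photograph, 1923",
     "rights": "http://rightsstatements.org/vocab/InC/1.0/"},      # in copyright
    {"title": "Van Gogh letter scan",
     "rights": "http://creativecommons.org/publicdomain/mark/1.0/"},
]

# Rights URIs that signal unrestricted reuse (CC0 waiver, Public Domain Mark).
OPEN_RIGHTS = {
    "http://creativecommons.org/publicdomain/zero/1.0/",
    "http://creativecommons.org/publicdomain/mark/1.0/",
}

def freely_reusable(records):
    """Keep only records whose rights URI signals unrestricted reuse."""
    return [r for r in records if r["rights"] in OPEN_RIGHTS]

for r in freely_reusable(records):
    print(r["title"])
```

This is all a rights filter in a portal or an API client amounts to; it only works because the legal status is expressed per record in a standard vocabulary rather than buried in site-wide terms of use.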
As the German Digital Library is still in beta, much can change. The DDB is a direct partner of Europeana, so it should be easy to learn from Europeana's experience and from how its past decisions about copyright have worked out. Europeana has shown that European institutions are willing to provide data that can be freely reused – why start the discussion all over again in Germany?