You are browsing the archive for Open Humanities.

What does the history of global trade look like? The collaborative database RICardo opens up trade data to shed light on this question

- February 21, 2018 in Digital Humanities, economics, historical data, Open Data, Open Humanities

RICardo (Research on International Commerce) is a project dedicated to trade between nations over a period spanning from the beginning of the Industrial Revolution to the eve of the Second World War. It combines a historical trade database covering all of the world’s countries with a website that invites exploration of the history of international trade through data visualizations. The project has recently released a web application and accompanying dataset, which is freely available under the Open Database License. In this blogpost, Beatrice Dedinger (economic historian) and Paul Girard (IT engineer) illustrate its use cases and background. The new RICardo web tool was officially released in December 2017, on the occasion of the bicentenary of David Ricardo’s famous work, On the Principles of Political Economy and Taxation. It is the outcome of an experiment in combining economic history with digital humanities. The RICardo project is devoted to bilateral and total trade of all the world’s countries over a period spanning from the beginning of the 19th century to 1938. Bilateral trade means the distribution of a country’s trade by partner, on both the export and the import side; total trade is the sum of all bilateral flows. Note that RICardo focuses on trade by country; it does not provide statistics on trade by product and thus does not allow for the analysis of trade specialization or comparative advantage. We purposefully assembled data from the 19th and early 20th century, as such a database never existed before. Governments did not start to publish printed documents of official trade statistics before the end of the Napoleonic Wars. This is mostly true for the European states, but also for other areas of the world that were under European influence.
Since the end of the Second World War, the International Monetary Fund has been in charge of gathering bilateral trade statistics for all countries; they are now freely available on http://www.imf.org/en/Data. The RICardo database includes around 300,000 data points (December 2017 version) that have been collected by hand from archives found in French and foreign libraries. It is currently the most exhaustive database dedicated to historical bilateral trade statistics. Since the original data (trade flows, names of countries) come in different currencies and languages, they have been converted into a usable format by creating a relational database. The entire RICardo dataset is now freely available under the Open Database License in our versioned data repository, described using the Data Package format.
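To give a concrete sense of what "described using the Data Package format" means, here is a minimal sketch of reading such a descriptor with only the standard library. The resource name, file path and column names below are invented placeholders, not RICardo's actual schema; only the top-level structure follows the Frictionless Data "Data Package" specification.

```python
import json

# A minimal, hypothetical Data Package descriptor; in a real repository
# this JSON would live in a datapackage.json file next to the CSV data.
descriptor = json.loads("""
{
  "name": "ricardo",
  "licenses": [{"name": "ODbL-1.0"}],
  "resources": [
    {
      "name": "flows",
      "path": "data/flows.csv",
      "schema": {
        "fields": [
          {"name": "reporting", "type": "string"},
          {"name": "partner", "type": "string"},
          {"name": "year", "type": "integer"},
          {"name": "flow", "type": "number"},
          {"name": "currency", "type": "string"}
        ]
      }
    }
  ]
}
""")

# List each resource and the columns its schema declares.
for resource in descriptor["resources"]:
    columns = [f["name"] for f in resource["schema"]["fields"]]
    print(resource["name"], columns)
```

The point of the descriptor is exactly this kind of machine-readability: a consumer can discover the license, files and column types without opening the data itself.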

Source: Estadística Comercial de la Republica de Chile (1845)

Source: RICardo_flows database

RICardo is meant for studying and discovering the history of trade and trade globalization. How did countries become economically interdependent? How did the volume and variety of exchanges of goods and services develop across nations? Trade databases are needed to address these and similar questions. As an example, economic historians relying on limited trade datasets first demonstrated that a “First” globalization occurred over 1870-1914. Once extended trade databases became available, they challenged this conclusion and now affirm that trade globalization started around the 1840s. But RICardo also allows for the study of areas of trade history that have been neglected largely because of the lack of data. It can help to explore the history of geopolitical trade relationships. If you are interested, for example, in the trade history of Chile from the 19th to the mid-20th century, RICardo provides you with visualizations and a dataset describing Chile’s relationships with all its partners over the period of your choice. RICardo offers the opportunity to discover the history of international trade not only through aggregate world trade curves but also by looking at the details of bilateral trade flows: visual exploration is key to handling the complexity of trade data by switching from an aggregate to a detailed level, or from one country to another. To do so, the tool uses a method developed at the Sciences Po médialab called a “datascape”. By considering data visualization from the very beginning, the research team gains creative constraints that help to better design the dataset. Conversely, data visualizations are a very efficient way to take care of the data, in particular to check data integrity. This project was very enriching on a personal level in that it taught us to work in a new way.
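The integrity checks that visual exploration supports can also be sketched programmatically. Since total trade is defined above as the sum of all bilateral flows, a large gap between a country's reported total and the sum of its bilateral flows flags a suspect record. The figures below are invented for illustration; RICardo's actual data model is richer than this.

```python
# Toy data: bilateral export flows keyed by (reporting country, partner, year),
# and the total trade figure as printed in the archival source.
bilateral = {
    ("Chile", "UK", 1850): 120.0,
    ("Chile", "France", 1850): 45.0,
    ("Chile", "USA", 1850): 30.0,
}
reported_total = {("Chile", 1850): 210.0}

def check_totals(bilateral, reported_total, tolerance=0.05):
    """Yield (country, year, reported, summed) wherever the mismatch
    exceeds `tolerance` as a fraction of the reported total."""
    summed = {}
    for (country, _partner, year), flow in bilateral.items():
        summed[(country, year)] = summed.get((country, year), 0.0) + flow
    for key, reported in reported_total.items():
        total = summed.get(key, 0.0)
        if abs(total - reported) > tolerance * reported:
            yield (*key, reported, total)

for country, year, reported, total in check_totals(bilateral, reported_total):
    print(f"{country} {year}: reported {reported}, bilateral sum {total}")
```

Here the bilateral flows sum to 195 against a reported total of 210, a 7% gap, so the record is flagged; a curve that visualizes both series makes the same discrepancy visible at a glance.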
The project was launched in 2004 by a team of researchers at Sciences Po Paris working on financial and trade history who needed historical trade datasets to pursue a research idea. It was (and still is) usual for each researcher to build a trade database on their own for the needs of their personal research, each trying to do better than the others. This way of working reflects a competitive state of mind that we moved away from. Over more than ten years of work, we faced many problems that eventually led us to work in a more collaborative, creative, and challenging way. This was the driving force behind the achievements of RICardo. That is why we are keen to open our data to everyone: to share the results of our work with the widest audience, to open it to contributions, to foster its usage by the community, and to arouse the curiosity of the public about a subject that seems austere at first but that we try to address in an enjoyable way.

The Crusade for Curious Images

- December 19, 2014 in #BLdigital, Digital Humanities, Events/Workshops, Front Page, Open Humanities

In December last year the British Library released over a million images on to Flickr Commons. The images were taken from the pages of 17th, 18th and 19th century books digitised by Microsoft and gifted to the British Library. One year on, it seems pertinent to mark the anniversary with an event held at the […]

New Open Knowledge Initiative on the Future of Open Access in the Humanities and Social Sciences

- October 21, 2014 in OKF Projects, Open Access, Open Humanities, Open Research, WG Humanities

To coincide with Open Access Week, Open Knowledge is launching a new initiative focusing on the future of open access in the humanities and social sciences. The Future of Scholarship project aims to build a stronger, better connected network of people interested in open access in the humanities and social sciences. It will serve as a central point of reference for leading voices, examples, practical advice and critical debate about the future of humanities and social sciences scholarship on the web. If you’d like to join us and hear about new resources and developments in this area, please leave us your details and we’ll be in touch. For now we’ll leave you with some thoughts on why open access to humanities and social science scholarship matters:
“Open access is important because it can give power and resources back to academics and universities; because it rightly makes research more widely and publicly available; and because, like it or not, it’s beginning and this is our brief chance to shape its future so that it benefits all of us in the humanities and social sciences” – Robert Eaglestone, Professor of Contemporary Literature and Thought, Royal Holloway, University of London.
*
“For scholars, open access is the most important movement of our times. It offers an unprecedented opportunity to open up our research to the world, irrespective of readers’ geographical, institutional or financial limitations. We cannot falter in pursuing a fair academic landscape that facilitates such a shift, without transferring prohibitive costs onto scholars themselves in order to maintain unsustainable levels of profit for some parts of the commercial publishing industry.” Dr Caroline Edwards, Lecturer in Modern & Contemporary Literature, Birkbeck, University of London and Co-Founder of the Open Library of Humanities
*
“If you write to be read, to encourage critical thinking and to educate, then why wouldn’t you disseminate your work as far as possible? Open access is the answer.” – Martin Eve, Co-Founder of the Open Library of Humanities and Lecturer, University of Lincoln.
*
“Our open access monograph The History Manifesto argues for breaking down the barriers between academics and wider publics: open-access publication achieved that. The impact was immediate, global and uniquely gratifying–a chance to inject ideas straight into the bloodstream of civic discussion around the world. Kudos to Cambridge University Press for supporting innovation!” — David Armitage, Professor and Chair of the Department of History, Harvard University and co-author of The History Manifesto
*
“Technology allows for efficient worldwide dissemination of research and scholarship. But closed distribution models can get in the way. Open access helps to fulfill the promise of the digital age. It benefits the public by making knowledge freely available to everyone, not hidden behind paywalls. It also benefits authors by maximizing the impact and dissemination of their work.” – Jennifer Jenkins, Senior Lecturing Fellow and Director, Center for the Study of the Public Domain, Duke University
*
“Unhappy with your current democracy providers? Work for political and institutional change by making your research open access and joining the struggle for the democratization of democracy” – Gary Hall, co-founder of Open Humanities Press and Professor of Media and Performing Arts, Coventry University

Open Humanities Hack: 28 November 2014, London

- October 10, 2014 in event, Events, london, Meetups, Open Humanities

This is a cross-post from the DM2E blog; see the original here. On Friday 28 November 2014 the second Open Humanities Hack event will take place at King’s College, London. This is the second in a series of events organised jointly by the King’s College London Department of Digital Humanities, the Digitised Manuscripts to Europeana (DM2E) project, the Open Knowledge Foundation and the Open Humanities Working Group. The event is focused on digital humanists and intended to target research-driven experimentation with existing humanities data sets. One of the most exciting recent developments in digital humanities is the investigation and analysis of complex data sets that require close collaboration between humanities and computing researchers. The aim of the hack day is not to produce complete applications but to experiment with methods and technologies for investigating these data sets, so that by the end we have an understanding of the types of novel techniques that are emerging. Possible themes include, but are not limited to:
  • Research in textual annotation has been a particular strength of digital humanities. Where are the next frontiers? How can we bring together insights from other fields and digital humanities?

  • How do we provide ways of linking and sharing humanities data that make sense of its complex structure, with many internal relationships both structural and semantic? In particular, distributed humanities research data often includes digital material combining objects in multiple media, and in addition there is a diversity of standards for describing the data.

  • Visualisation. How do we develop reasonable visualisations that are practical and help build an overall intuition for the underlying humanities data set?

  • How can we advance the novel humanities technique of network analysis to describe complex relationships between ‘things’ in socio-historical systems: people, places, etc.?
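
The network-analysis theme above can be sketched in a few lines: build an undirected co-occurrence graph of people mentioned in the same document and rank them by weighted degree. The documents and names below are invented placeholders, not a real data set from the hack day.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical structured records: each document lists the people it mentions.
documents = [
    {"people": ["Bodley", "Essex", "Cecil"]},
    {"people": ["Bodley", "Cecil"]},
    {"people": ["Essex", "Raleigh"]},
]

# Edge weight = number of documents in which the pair co-occurs.
edges = defaultdict(int)
for doc in documents:
    for a, b in combinations(sorted(doc["people"]), 2):
        edges[(a, b)] += 1

# Weighted degree = sum of the weights of a person's edges.
degree = defaultdict(int)
for (a, b), weight in edges.items():
    degree[a] += weight
    degree[b] += weight

for person, d in sorted(degree.items(), key=lambda kv: -kv[1]):
    print(person, d)
```

Even this toy graph shows why the technique is attractive for socio-historical questions: centrality falls out of the records themselves rather than from any prior narrative about who mattered.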

With this hack day we seek to form groups of computing and humanities researchers that will work together to come up with small-scale prototypes that showcase new and novel ways of working with humanities data. Date: Friday 28 November 2014
Time: 9.00 – 21.00
Location: King’s College, Strand, London
Sign up: Attendance is free but places are limited: please fill in the sign-up form to register. For an impression of the first Humanities Hack event, please check this blog report.

Newsflash! OKFestival Programme Launches

- June 4, 2014 in Events, Featured, Free Culture, Join us, network, News, OKFest, OKFestival, Open Access, Open Data, Open Development, Open Economics, Open GLAM, Open Government Data, Open Humanities, Open Knowledge Foundation, Open Knowledge Foundation Local Groups, Open Research, Open Science, Open Spending, Open Standards, open-education, Panton Fellows, privacy, Public Domain, training, Transparency, Working Groups

At last, it’s here! Check out the details of the OKFestival 2014 programme – including session descriptions, times and facilitator bios here!

We’re using a tool called Sched to display the programme this year and it has several great features. Firstly, it gives individual session organisers the ability to update the details on the session they’re organising; this includes the option to add slides or other useful material. If you’re one of the facilitators we’ll be emailing you to give you access this week.

Sched also enables every user to create their own personalised programme to include the sessions they’re planning to attend. We’ve also colour-coded the programme to help you when choosing which conversations you want to follow: the Knowledge stream is blue, the Tools stream is red and the Society stream is green. You’ll also notice that there are a bunch of sessions in purple which correspond to the opening evening of the festival when we’re hosting an Open Knowledge Fair. We’ll be providing more details on what to expect from that shortly!

Another way to search the programme is by the subject of the session – find these listed on the right hand side of the main schedule – just click on any of them to see a list of sessions relevant to that subject.

As you check out the individual session pages, you’ll see that we’ve created etherpads for each session where notes can be taken and shared, so don’t forget to keep an eye on those too. And finally; to make the conversations even easier to follow from afar using social media, we’re encouraging session organisers to create individual hashtags for their sessions. You’ll find these listed on each session page.

We received over 300 session suggestions this year – the most yet for any event we’ve organised – and we’ve done our best to fit in as many as we can. There are 66 sessions packed into 2.5 days, plus 4 keynotes and 2 fireside chats. We’ve also made space for an unconference over the 2 core days of the festival, so if you missed out on submitting a proposal, there’s still a chance to present your ideas at the event: come ready to pitch! Finally, the Open Knowledge Fair has added a further 20 demos – and counting – to the lineup and is a great opportunity to hear about more projects. The Programme is full to bursting, and while some time slots may still change a little, we hope you’ll dive right in and start getting excited about July!

We think you’ll agree that Open Knowledge Festival 2014 is shaping up to be an action-packed few days – so if you’ve not bought your ticket yet, do so now! Come join us for what will be a memorable 2014 Festival!

See you in Berlin! Your OKFestival 2014 Team

Network Summit

- July 19, 2013 in network, OKF, OKFN Local, Open GLAM, Open Government Data, Open Humanities, Open Science, Our Work, Talks, Working Groups

Twice-yearly the whole community of the Open Knowledge Foundation gathers together to share with, learn from and support one another. The Summer Summit 2013 took place in Cambridge (UK) last week (10th-14th July), with staff updates on the Thursday and network representatives joining on the Friday, Saturday and Sunday. It was so inspiring to hear what our network has been doing to further the Open movement recently and over the last 6 months! We heard from Local Groups about how these groups have been effecting change in all our locations around the world:
  • Alberto for OKFN Spain has been promoting open transparency in budgets, including their own, and using the power of events to gather people;
  • OKFN Taiwan, represented by TH (who we believe travelled the furthest to be with us in person), has also been investing in many large events, including one event for developers and others attracting 2,000 people! They have also been supporting local and central governments on open data regulation;
  • Charalampos of OKFN Greece highlighted the recent support of their works by Neelie Kroes, and took us through crashmap.okfn.gr which maps accidents using data from police departments and census data along with crowd-sourced data;
  • Pierre at OKF France reported that they have been helping redesign the national open data portal, as well as developing an open data portal for children and young people, which may align well with School of Data;
  • OpenData.ch, the Swiss Chapter of the Open Knowledge Foundation of course is hosting OKCon in September, and Hannes updated on exciting developments here. He also reported on work to lobby and support government by developing visualisations of budget proposals, developing a federal-level open data strategy and policy, and promoting a national open data portal. Thanks to their efforts, a new law was accepted on open weather data, with geodata next up;
  • David updated on OKFN Australia where there is support from government to further the strong mandate for open scientific data. The newspaper the Age has been a firm ally, making data available for expenses and submissions to political parties, and a project to map Melbourne bicycle routes was very successful;
  • Francesca of OKF Italy has been working alongside Open Streetmap and Wikimedia Italy, as well as with parliament on the Open Transport manifesto. They have also been opening up ecological data, from “spaghetti open data”;
  • OKFN Netherlands was represented by Kersti, who reported a shared sense of strength in open government data and open development, as well as in the movement Open for Change (where OKCon is listed as the top ‘Open Development Event’!);
  • Dennis, for OKF Ireland, has been pushing the local events and gathering high-profile ‘rock stars’ of the open data world as well as senior government representatives. He has also presented on open data in parliament;
  • OKF Scotland is a growing grassroots community, as conveyed by Ewan – an Open Data Day asserted the importance of connecting to established grassroots communities who are already doing interesting things with data. They are also working closely with government to release data and organised local hackdays with children and young people;
  • Bill joined us remotely to update on OKF Hong Kong, where regular meet-ups and hackdays are providing a great platform for people to gather around open knowledge. Although not able to join us in person (like Everton / Tom from OKF Brasil) Bill was keen to report that OKF Hong Kong will be represented at OKCon!
  • OKF Austria‘s update was given by Walter, who informed us that transport data is now properly openly licensed and that several local instances of the international Working Groups have been set up. Which segues nicely, as…
It wasn’t just during the planned sessions where community-building and networking occurred: despite the scorching 30°C (86°F) heat – somewhat warmer than the Winter Summit in January! – people made the most of lunchtimes and breaks to share ideas and plan. We also heard from Working Groups about how crossing international boundaries is making a difference to Open for all of us:
  • Open Sustainability was represented by Jack who explained Cleanweb (an initiative to use clean technologies for good, engaging with ESPA to open up data) and has set up @opensusty on Twitter as a communication route for anyone wanting to connect;
  • Ben, newly involved with Open Development, explained about the group’s plans to make IATI‘s released data useful, and bringing together existing initiatives to create a data revolution;
  • Open Science, represented by Ross, has been very active with lobbying and events, with the mailing list constantly buzzing with discussions on open data, licensing and convincing others;
  • Daniel explained that Open Government Data, being one of the largest groups with 924 mailing list members, has played an important role at the heart of the Open Government Data movement, as a place for people to go to with questions and – hopefully! – answers. Daniel will be stepping down, so get in touch if you would like to help lead this group; in the meantime, the Steering Committee will be helping support the group;
  • OpenGLAM has also developed an Advisory Board, said Joris. There is good global reach for Open GLAM advocacy, and people are meeting every month. Documents, case studies, slide-decks and debates are available to new joiners to get started, and the Austrian instance of the Working Group demonstrated the process works. (Joris has now sadly left Open Knowledge Foundation ‘Central’, but we are delighted he will stay on as volunteer Coordinator for this group!);
  • Public Domain, with Primavera reporting, has been working on Public Domain Calculators in partnership with the government. PD Remix launched in France in May, and Culture de l’Europe will present at OKCon;
  • Primavera also updated on Open Design, where future planning has taken priority. The Open Design Definition has been a highlight but funding would help further activity and there are plans to seek this proactively. Chuff, the Open Knowledge Foundation Mascot, was pleased to get a mention…
It should be noted that these activities and updates are brief highlights only – distilling the activities of our groups into one or two sentences each hardly does justice to the amount of things we could talk about here!

We also made time for socialising at the Summit, and much fun was had with Scrabble, frisbee and punting – not to mention celebrating Nigel‘s birthday!

As an aside, I was going to state that “we only need an Antarctic representative and the Open Knowledge Foundation will have all seven continents in our network”; however, it appears there is no definitive number of continents or agreed land-masses! An amalgamated list is Africa (Africa/Middle East and North Africa), America (Central/North/South), Antarctica, Australia (Australia/Oceania) and Eurasia (Europe/Asia)… but, however you wish to define the global divisions (and isn’t it pleasing that it’s difficult to do so?), Antarctica is the only area where the Open Knowledge Foundation is not represented! Are you reading this from an outstation at the South Pole, or do you know someone there, and want to contribute to open knowledge? Apply to become an Ambassador and be the person to cement the Open Knowledge Foundation as a fully global demonstration of the Open movement.

If you’re in an unrepresented area – geographic or topical – we’d love to hear from you, and if you’re in a represented area we’d love to put you in touch with others. Get Involved and connect with the Open Knowledge Foundation Network – and maybe we’ll see you at the next Summit!

Images 1, 4-7 and front page: Velichka Dimitrova. Images 2 and 3: Marieke Guy, CC-BY-NC-ND

Let’s Map Open Correspondence Data!

- May 16, 2013 in Digital Humanities, Featured, Open Humanities

At the Open Knowledge Foundation we seek to empower people to use open data and open content in ways that improve the world. In part this is about the provision of tools, such as our world-renowned CKAN open data portal, but it’s also about bringing together people who are passionate about making a change and giving them a space, whether that’s online or face-to-face, to wrangle open data, write code and take action together. At the recent Open Interests hack, participants developed a suite of apps that help us understand lobbying in the EU and how money is spent. A couple of weeks ago at Open Data Maker Night in London, people wrangled data from local authority websites to find out which companies receive the lion’s share of the Greater London Authority’s resources. Across our various Working Group mailing lists, people from all over the world are debating, sharing data and experimenting with code in a huge variety of domains, from open science to open government data.
At bottom, this is about people with bright ideas coming together to collaborate around open content and open data to build things that have transformative potential.

The Open Humanities Hangout

Over the past few months a group of people interested in open culture, including myself, have been getting together on Google Hangout in order to build stuff with the vast amount of open cultural data and content that’s out there. In the cultural sphere, much of the transformative potential of open lies in widening access to our treasured cultural heritage, whether that’s classic literary texts or the paintings of the great masters. But as ever, it’s not only about opening up huge amounts of data and content, of which there is already a hell of a lot on the Internet Archive and Wikimedia Commons; it’s also about empowering people to actually use this material in ways that they deem valuable. So on the Open Humanities Hangout we’ve tried to do things that address both these challenges. To address the problem of access, we’ve held hangouts on how to run a book scanning workshop and how to share the works we’ve digitised online. On another occasion, we collectively reflected on how to evangelise about opening up cultural resources and distilled the results into a set of principles which we then shared and discussed on a public mailing list. In terms of building stuff to help re-use, we’ve built an app that helps you to get to know Shakespeare better, called Bardomatic. We’ve hacked on an annotation tool for public domain texts called TEXTUS, trying to make it easier to use and deploy on WordPress. We’ve created interactive timelines of the great Western medieval philosophers, helping to improve and de-bug the Timeliner tool in the process.

The Challenge: Mapping Networks of Correspondence

I want more people to join the Open Humanities hangouts – more JavaScript coders, more designers, more literature students, more bloggers… anyone who loves the humanities and wants to see the great works of our past accessible and re-usable by everyone, regardless of their background or location. I’m putting forward a challenge for our next set of monthly Hangouts, based on some of the great work some of the Open Humanities Working Group members have been doing around open correspondence data and open book scanning. I’m challenging the Open Humanities Hangout crew to construct a workflow that will enable anyone to take a published set of letters and turn it into a visualisation of a network of correspondence. One of the great success stories of the so-called Digital Humanities is the wonderful Mapping the Republic of Letters project, a collaboration between Stanford and Oxford Universities that visualises the networks of correspondence of early modern scholars. The beautiful and insightful visualisations that have been created in the process have captured the imaginations of technologists and humanists worldwide. I want to see a million Mapping the Republic of Letters projects. I want it to be as easy as possible to map the correspondence of historical figures, so that anyone can do this. This includes the first-year school student wanting some beautiful images for their coursework and the scholar who will use much richer data to tell a more thorough, in-depth and academic visual story for a research paper. I want the underlying tools to be open source and well documented and, perhaps most importantly, I want the underlying data, that collection of metadata about who sent what when, to be open for everyone to use and add to. This effort doesn’t require the existence of a huge repository of data about letters that we tap into (although one might emerge in the process).
This is about small sets of open data, sourced and formatted in appropriate ways by passionate groups of people all around the world that can be combined and connected easily using open source web-based components.

How do we begin?

To my eyes, this effort will involve the documentation of at least 4 steps:
  1. Scan in a published collection of letters
  2. Turn these scans into structured data containing the relevant information on correspondent, date and location
  3. Geo-code all those locations
  4. Visualise the results on a map
We’ve already made some progress on steps 1 and 2, and there’s a wealth of information already available on how to do your own scanning and OCRing, including manuals on how to build your own scanner. For steps 3 and 4, there’s already some brilliant information over on the School of Data. However, I want to see this information synthesised into a single point — so any student, teacher or researcher can get all the information on how to go from that collected volume of letters of so-and-so on their shelf to a beautiful visualisation.
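The later steps of the workflow can be sketched under simplifying assumptions: the letters are already structured records (step 2), and geocoding (step 3) is a lookup in a tiny invented gazetteer rather than a call to a real service such as Geonames. The letters, places and coordinates below are all illustrative.

```python
# Hypothetical structured letter records produced by steps 1-2.
letters = [
    {"from": "London", "to": "The Hague", "date": "1595-03-01"},
    {"from": "London", "to": "The Hague", "date": "1595-04-12"},
    {"from": "The Hague", "to": "London", "date": "1595-05-02"},
]

# Stand-in gazetteer: place name -> (latitude, longitude).
gazetteer = {
    "London": (51.51, -0.13),
    "The Hague": (52.08, 4.31),
}

def correspondence_edges(letters, gazetteer):
    """Count letters per (origin, destination) pair, keeping only places
    we can geocode; returns {(src, dst): (count, src_coords, dst_coords)}."""
    edges = {}
    for letter in letters:
        src, dst = letter["from"], letter["to"]
        if src in gazetteer and dst in gazetteer:
            count, *_ = edges.get((src, dst), (0, None, None))
            edges[(src, dst)] = (count + 1, gazetteer[src], gazetteer[dst])
    return edges

# Step 4 is then just drawing these weighted edges on a map.
for (src, dst), (count, a, b) in correspondence_edges(letters, gazetteer).items():
    print(f"{src} {a} -> {dst} {b}: {count} letters")
```

The output of this step, a list of geocoded, weighted edges, is exactly what mapping libraries expect as input, which is why separating steps 3 and 4 keeps the workflow reusable across collections.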

What might result if we’re successful?

Well for one, I hope that a beautiful and insightful set of visualisations might emerge about the correspondence of a number of important figures all over the web. But perhaps a longer-term goal is to stimulate the creation of databases of correspondence that are open to everyone to use and add to. To begin with, we’ll be constrained to the published volumes of correspondence in print, but if enough people contribute we can recombine these published volumes in all sorts of interesting ways, filling in gaps and ultimately creating datasets that might enable us to map whole networks of correspondence for a given period.

Get involved

So the challenge is on. The next Open Humanities Hangout will take place at 5pm BST on Tuesday May 28th. If you’re thinking of joining ping me a quick message on sam.leon@okfn.org!

Announcing the Open Humanities Award Winners

- May 8, 2013 in Featured, Open GLAM, Open Humanities

Earlier this year, as part of the DM2E project, we put out a call to humanities academics and technologists to see if they could come up with innovative ideas for small technology projects that would further humanities research by using open content, open data and/or open source. We’re very pleased to announce that the winners are Dr Bernhard Haslhofer (University of Vienna) and Dr Robyn Adams (Centre for Editing Lives and Letters, University College London). Both winners will receive financial support to help them undertake the work they proposed and will be blogging about the progress of their project. You can follow their progress via the DM2E blog.

Award 1: Semantic tagging for old maps… and other things

The first Award goes to Dr Bernhard Haslhofer of the University of Vienna. His project will involve building on an open source web application he has been working on called Maphub. Dr Haslhofer told us a little bit about the inspiration for his project:
“People love old maps” is a statement that we heard a lot from curators in libraries. This, combined with the assumption that many people also have knowledge to share or stories to tell about historical maps, was our motivation to build Maphub.
In essence Maphub is an open source Web application that, first of all, pulls out digitized historical maps from closed environments, adds zooming functionality, and assigns Web URIs so that people can talk about them online. It also supports two main use cases: (i) georeferencing maps by linking points on the map to Geonames locations; (ii) commenting on maps or map regions by creating annotations. While users are entering their comments, Maphub analyzes the entered text on the fly and suggests so-called semantic tags, which the user accepts or rejects. Semantic tags appear like “normal” tags on the user interface, but are in fact links to DBpedia resources. In that way, the user links her annotations and therefore also the underlying historical map with resources from two open data sources. Besides consuming open data during the annotation authoring process, Maphub also contributes collected knowledge back as open data by exposing all annotations following the W3C Open Annotation specification. In that way, Maphub supports people in a loop of using and producing open data in the context of historical maps. Dr Haslhofer looks forward to seeing how collaborations will blossom between these various web annotation systems:
We believe that people also love other things on the Web and that Web annotation tools should support semantic tagging as well. Therefore, we will make it available as a plugin for Annotorious. Annotorious is a JavaScript image annotation library that can be used in any website, and is also compatible with the Open Knowledge Foundation’s Annotator. Annotorious and Maphub have common origins, and the Open Humanities Award will support us in unifying parallel development streams into a single, reusable annotation tool that works for digitized maps but also for other media. We will also conduct another user study to inform the design of that function for other application contexts.
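To make the idea concrete, here is a minimal sketch of what a Maphub-style annotation with a semantic tag could look like under the W3C Open Annotation model: a free-text comment body plus a semantic-tag body pointing at a DBpedia resource, targeting a region of a map image. All URIs, the map identifier and the selected region are hypothetical, not taken from Maphub itself:

```python
import json

# Hypothetical Open Annotation-style record: a user comment on a map region,
# enriched with a semantic tag that links to a DBpedia resource.
annotation = {
    "@context": "http://www.w3.org/ns/oa-context-20130208.json",
    "@type": "oa:Annotation",
    "hasBody": [
        {   # free-text comment entered by the user
            "@type": ["cnt:ContentAsText", "dctypes:Text"],
            "chars": "The map shows the course of the Danube near Vienna.",
        },
        {   # semantic tag: a link into an open data source
            "@id": "http://dbpedia.org/resource/Danube",
            "@type": "oa:SemanticTag",
        },
    ],
    "hasTarget": {
        "@type": "oa:SpecificResource",
        "hasSource": "http://example.org/maps/map-42.jpg",  # hypothetical map URI
        "hasSelector": {
            "@type": "oa:FragmentSelector",
            "value": "xywh=120,80,320,240",  # rectangular region on the image
        },
    },
}

print(json.dumps(annotation, indent=2))
```

Because the semantic tag is just a URI, any consumer of the exposed annotations can follow it to DBpedia (or Geonames, for georeferencing points) and pull in further open data about the tagged entity.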

Award 2: Joined Up Early Modern Diplomacy: Linked Data from the Correspondence of Thomas Bodley

The second award goes to Dr Robyn Adams of the Centre for Editing Lives and Letters, University College London. The project will re-purpose the open resource that Dr Adams has been building with a team of others: the Diplomatic Correspondence of Thomas Bodley. It will use ‘additional’ information that was encoded during the digitisation of early modern letters at the Centre for Editing Lives and Letters. In the initial incarnation of the project, this data, which included biographical and geographical information contained within the letters, was not used (although it was encoded). Dr Adams told us a little bit about what she plans to do with the Award funding:
With the prize funding from the Open Humanities Awards, we propose to mine the data that was generated but not fully used in the first phase of the project. This data is a rich source of biographical and geographical information, the visualization of which evokes the complex and diverse texture of the late sixteenth-century European diplomatic and military landscape. Bodley’s position in The Hague as the only English representative on the Dutch Council of State put him at the centre of a heterogeneous nexus of correspondents at a time long before the Republic of Letters burgeoned in the subsequent century.
The project will interrogate three data fields within the larger data set of Bodley’s diplomatic correspondence in order to generate visualizations: the network of correspondents and recipients, and the people and places mentioned within the letters. These visualizations will be incorporated into the project website, where they will enhance and extend the knowledge derived from the existing corpus of correspondence. The visualizations, which will have scope to be playful while drawn from scrupulous scholarship, will offer an alternative pathway for scholars and the interested public to understand that in this period especially, the political, university and kinship networks were fundamental to advancement and prosperity. “In mapping the relational activity between data sets,” Dr Adams went on, “I hope to further illuminate and reanimate Bodley’s position within the Elizabethan compass. Furthermore, I hope to demonstrate that fruitful routes of enquiry can result if scholars commit to going the extra mile to encode and record data in their research that may not have immediate relevance to their own studies.”
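The three data fields the project describes lend themselves to a simple pipeline: tabulate weighted sender–recipient edges and count the people and places mentioned, then feed both into a visualization tool. A minimal sketch, using entirely hypothetical sample letter records rather than the real encoded Bodley data:

```python
from collections import Counter

# Hypothetical letter records: (sender, recipient, places mentioned).
# The real project would draw these from the encoded correspondence data.
letters = [
    ("Thomas Bodley", "Robert Cecil", ["The Hague", "London"]),
    ("Thomas Bodley", "Francis Walsingham", ["The Hague"]),
    ("Robert Cecil", "Thomas Bodley", ["London"]),
    ("Thomas Bodley", "Robert Cecil", ["Ostend"]),
]

# Weighted edges of the correspondence network:
# (sender, recipient) -> number of letters exchanged in that direction
edges = Counter((sender, recipient) for sender, recipient, _ in letters)

# Frequency of each place mentioned across the corpus
places = Counter(place for _, _, mentioned in letters for place in mentioned)

print(edges.most_common())
print(places.most_common())
```

Edge weights of this kind map directly onto line thickness in a network diagram, and the place counts onto marker size on a map, which is one plausible route to the “playful” visualizations the project envisages.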
We offer our heartiest congratulations to both Dr Haslhofer and Dr Adams, both of whom will be presenting their work at the forthcoming Web as Literature conference at the British Library and this year’s OKCon in Geneva. Follow the progress of the Awards recipients via the DM2E project website.

What We Hope the Digital Public Library of America Will Become

- April 17, 2013 in Bibliographic, Featured, Free Culture, Open Content, Open GLAM, Open Humanities, Policy, Public Domain

Tomorrow is the official launch date for the Digital Public Library of America (DPLA). If you’ve been following it, you’ll know that it has the long term aim of realising “a large-scale digital public library that will make the cultural and scientific record available to all”. More specifically, Robert Darnton, Director of the Harvard University Library and one of the DPLA’s leading advocates to date, recently wrote in the New York Review of Books that the DPLA aims to:
make the holdings of America’s research libraries, archives, and museums available to all Americans—and eventually to everyone in the world—online and free of charge
What will this practically mean? How will the DPLA translate this broad mission into action? And to what extent will they be aligned with other initiatives to encourage cultural heritage institutions to open up their holdings, like our own OpenGLAM or Wikimedia’s GLAM-WIKI? Here are a few of our thoughts on what we hope the DPLA will become.

A force for open metadata

The DPLA is initially focusing its efforts on making existing digital collections from across the US searchable and browsable from a single website. Much like Europe’s digital library, Europeana, this will involve collecting information about works from a variety of institutions and linking to digital copies of these works that are spread across the web. A super-catalogue, if you will, that includes information about and links to copies of all the things in all the other catalogues. Happily, we’ve already heard that the DPLA is releasing all of this data about cultural works that they will be collecting using the CC0 legal tool – meaning that anyone can use, share or build on this information without restriction. We hope they continue to proactively encourage institutions to explicitly open up metadata about their works, and to release this as machine-readable raw data. Back in 2007, we – along with the late Aaron Swartz – urged the Library of Congress to play a leading role in opening up information about cultural works. So we’re pleased that it looks like DPLA could take on the mantle. But what about the digital copies themselves?

A force for an open digital public domain

The DPLA has spoken about using fair use provisions to increase access to copyrighted materials, and has even intimated that they might want to try to change or challenge the state of the law to grant further exceptions or limitations to copyright for educational or noncommercial purposes (trying to succeed where Google Books failed). All of this is highly laudable. But what about works which have fallen out of copyright and entered the public domain? Just as they are doing with metadata about works, we hope that the DPLA takes a principled approach to digital copies of works which have entered the public domain, encouraging institutions to publish these without legal or technical restrictions. We hope they become proactive evangelists for a digital public domain which is open as in the Open Definition, meaning that digital copies of books, paintings, recordings, films and other artefacts are free for anyone to use and share – without restrictive clickwrap agreements, digital rights management technologies or digital watermarks to impose ownership and inhibit further use or sharing. The Europeana Public Domain Charter, in part based on and inspired by the Public Domain Manifesto, might serve as a model here. In particular, the DPLA might take inspiration from the following sections:
What is in the Public Domain needs to remain in the Public Domain. Exclusive control over Public Domain works cannot be re-established by claiming exclusive rights in technical reproductions of the works, or by using technical and/or contractual measures to limit access to technical reproductions of such works. Works that are in the Public Domain in analogue form continue to be in the Public Domain once they have been digitised. The lawful user of a digital copy of a Public Domain work should be free to (re-) use, copy and modify the work. Public Domain status of a work guarantees the right to re-use, modify and make reproductions and this must not be limited through technical and/or contractual measures. When a work has entered the Public Domain there is no longer a legal basis to impose restrictions on the use of that work.
The DPLA could create their own principles or recommendations for the digital publication of public domain works (perhaps recommending legal tools like the Creative Commons Public Domain Mark) as well as ensuring that new content that they digitise is explicitly marked as open. Speaking at our OpenGLAM US launch last month, Emily Gore, the DPLA’s Director for Content, said that this is definitely something that they’d be thinking about over the coming months. We hope they adopt a strong and principled position in favour of openness, and help to raise awareness amongst institutions and the general public about the importance of a digital public domain which is open for everyone.

A force for collaboration around the cultural commons

Open knowledge isn’t just about stuff being able to freely move around on networks of computers and devices. It is also about people. We think there is a significant opportunity to involve students, scholars, artists, developers, designers and the general public in the curation and re-presentation of our cultural and historical past. Rather than just having vast pools of information about works from US collections – wouldn’t it be great if there were hand-picked anthologies of works by Emerson or Dickinson curated by leading scholars? Or collections of songs or paintings relating to a specific region, chosen by knowledgeable local historians who know about allusions and references that others might miss? An ‘open by default’ approach would enable use and engagement with digital content that breathes life into it that it might not otherwise have – from new useful and interesting websites, mobile applications or digital humanities projects, to creative remixing or screenings of out of copyright films with new live soundtracks (like Air’s magical reworking of Georges Méliès’s 1902 film Le Voyage Dans La Lune). We hope that the DPLA takes a proactive approach to encouraging the use of the digital material that it federates, to ensure that it is as impactful and valuable to as many people as possible.

Open Humanities Awards: 10 Days Left to Apply!

- March 4, 2013 in Open Humanities

A couple of weeks ago we announced the Open Humanities Awards, a fantastic new initiative to support innovative projects that use open data, open content or open source to further teaching and research in the humanities. There are €15,000 of prizes on offer for 3-5 projects lasting up to 6 months. The winners will be given the opportunity to present their work at the world’s largest Open Knowledge event, OKFestival. The deadline for applications is 13th March, so there are fewer than 10 days to go before we start judging the applications. We want to support a whole variety of projects that support humanities research and use the open web. So whether you’re interested in patterns of allusion in Aristotle, networks of correspondence in the Jewish Enlightenment or digitising public domain editions of Dante do think about applying! Go to the Awards website to apply and for any queries email me on sam.leon@okfn.org.