2014 Open Data Index shows governments making slow progress in opening up key data

Nikesh Balami - December 21, 2014 in Open Data

Open Knowledge has published the 2014 Open Data Index, assessing the state of open data around the world

Open Knowledge has published the 2014 Open Data Index, which shows that although some progress has been made, most governments are still not providing key information to their citizens and businesses in an accessible form. With McKinsey and others estimating the potential benefits of open data at over a trillion dollars, this slow progress risks squandering a major opportunity.

Rufus Pollock, Founder and President of Open Knowledge, said:

‘Opening up government data drives democracy, accountability and innovation. It enables citizens to know and exercise their rights, and it brings benefits across society: from transport to education and health. There has been a welcome increase in support for open data from governments in the last few years, but this year’s Index shows that real progress on the ground is still lagging far behind the rhetoric.’

The Index ranks countries based on the availability and accessibility of information in ten key areas, including government spending, election results, transport timetables, and pollution levels.

The UK tops the Index, retaining its position with an overall score of 96%, followed by Denmark in second and France in third, up from 12th place last year. Finland comes fourth, while Australia and New Zealand share fifth place. Impressive results were also posted by the Latin American countries Colombia and Uruguay, tied in 12th place, and by India in 10th place (up from 27th).

Sierra Leone, Mali, Haiti and Guinea rank lowest of the countries assessed, but there are many countries where governments are even less open that could not be assessed, owing to a lack of openness or of sufficient civil-society engagement.

Francis Maude, Minister of the UK Cabinet Office, responsible for the UK’s open data agenda, said:

‘I’m delighted to see the UK retain its top spot in the Open Data Index. As part of our long-term economic plan, the government has been running an ambitious transparency agenda for the past four and a half years. We want our citizens to be able to hold us to account for everything we do, and the Open Data Index is a great tool for that. It isn’t always easy, but we remain committed in our drive to be the most transparent and open government in the world.’

While there was meaningful improvement in the overall number of open datasets (from 87 to 104), the percentage of open datasets across all the surveyed countries remained low, at only 11%.

Even among the leaders on open government data there is still room for improvement: the US and Germany, for example, do not provide a consolidated, open register of corporations. The state of openness around government spending details was especially disappointing, with most countries either failing to provide the information at all or providing only limited information. Only two of the 97 countries assessed (the UK and Greece) achieved a full score in this area. This is notable because, at a time of sluggish growth and continuing austerity in many countries, giving citizens and businesses free and open access to this kind of data would be an effective way to save money and improve government efficiency.

Rufus Pollock said:

‘For the true benefits of open data to be realised, governments must do more than simply put a few spreadsheets online. The information should be easy to find and understand, usable for any purpose, and reusable and shareable by anyone, anywhere, for any purpose.’

Nepal ranks 63rd in the 2014 Open Data Index. Open Knowledge Nepal’s perspective on the Global Open Data Index can be found on the Index stories blog.

For more information and to download graphics, visit http://index.okfn.org/.

Open Training for Open Science

Jenny Molloy - December 21, 2014 in Featured, Reproducibility, research, tools

This is part of a series of blog posts highlighting focus points for the Open Science Working Group in 2015. These draw on activities started in the community during 2014 and on suggestions from the Working Group Advisory Board.

By opensourceway on Flickr under CC-BY-SA 2.0

The Open Science Working Group has long supported training for open science and the early introduction of the principles of open and reproducible research in higher education (and earlier!). This area was a focus in 2013–14 and grows in importance as we enter 2015, with interest in openness in science increasing rapidly. This post provides examples of training initiatives in which members of the working group have been involved, and highlights particular areas where work is lacking.

  1. Openness in higher education
  2. Strategies for training in open science
  3. The Open Science Training Initiative (OSTI)
  4. Developing Open Science Training Curricula
  5. Incorporating Open Science Training into Current Courses
  6. Conclusion
  7. Getting Involved in Open Science Training

Openness in higher education

Openness has the potential to radically alter the higher education experience. For instance, Joss Winn and Mike Neary posit that democratising participation and access could allow a reconstruction of the student experience in higher education. To achieve this social relevance, they propose:

“To reconstruct the student as producer: undergraduate students working in collaboration with academics to create work of social importance that is full of academic content and value, while at the same time reinvigorating the university beyond the logic of market economics.”[1]

Openness focuses on sharing and collaboration for public good, at odds with the often competitive ethos in research and education. This involves more than simply implementing particular pedagogies or publishing open access articles – as Peters and Britez state bluntly in the first sentence of their book on open education:

“Open education involves a commitment to openness and is therefore inevitably a political and social project.” [2]

This could equally apply to open science. Openness is a cultural shift that is facilitated but not driven by legal and technical tools. In open education, for instance, open pedagogy makes use of now abundant openly licensed content but also places an emphasis on the social network of participants and the learner’s connections within this, emphasising that opening up the social institution of higher education is the true transformation. In open science, a lot of training focuses on the ability to manage and share research data, understand licensing and use new digital tools including training in coding and software engineering. However, understanding the social and cultural environment in which research takes place and how openness could impact that is arguably even more fundamental.

This section will focus on three topics around open science training, offering relevant linkages to educational literature and suggestions for teaching design:

  1. Use of open data and other open research objects in higher education.
  2. Use of open science approaches for research-based learning.
  3. Strategies for training in open science.

Strategies for training in open science

As openness is a culture and mindset, a socio-cultural approach to learning and the construction of appropriate learning environments is essential. While Winn and Neary [1] focus on the student as producer, Sophie Kay [3] argues that this can be detrimental, as it neglects the role of students as research consumers, which in turn undermines their ability to produce research outputs that are easily understood and reusable.

Training in evolving methods of scholarly communication is imperative because there are major policy shifts towards requiring open research outputs at both the funder and learned-society levels in the UK, EU and US. This is in addition to a growing grassroots movement in scientific communities, accelerated by the enormous shifts in research practice and wider culture brought about by pervasive use of the internet and digital technologies. The current generation of doctoral candidates are the first generation of ‘digital natives’: those who have grown up with the world wide web, for whom information is expected to be available on demand and for whom ‘prosumers’, who consume media and information while also producing their own via social media sites, are the norm. This norm is not reflected in most current scientific practice, where knowledge dissemination is still largely based on a journal system founded in the 1600s, albeit now in digital format. Current evidence suggests that students are not prepared for change. For example, a major study of 17,000 UK graduate students [4] revealed that students:

  • hold many misconceptions about open access publishing, copyright and intellectual property rights;
  • are slow to utilise the latest technology and tools in their research work, despite being proficient in IT;
  • are influenced by the methods, practices and views of their immediate peers and colleagues.

While pre-doctoral training is just as important, the majority of open science training initiatives documented thus far have been aimed at the early career research stage, including doctoral students.

The Open Science Training Initiative (OSTI)

OSTI photo courtesy of Sophie Kay, licensed under CC-BY.

Open Knowledge Panton Fellow Sophie Kay developed the Open Science Training Initiative (OSTI) [3], trialled in the Life Science Interface Doctoral Training Centre at the University of Oxford, which employs ‘rotation-based learning’ (RBL) to cement the role of students as both producers and consumers of research, through learning activities that promote the communication of coherent research stories maximising reproducibility and usefulness. The content comprises a series of mini-lectures on the concepts, tools and skills required to practise openly, including an awareness of intellectual property rights and licensing; digital tools and services for collaboration, storage and dissemination; scholarly communication; and the broader cultural contexts of open science.

The novel pedagogical approach was the creation of groups during an initiator phase, in which each group reproduces and documents a scientific paper, ensuring that outputs are in appropriate formats and properly licensed. In the successor phase, the reproduced work is rotated to another group, which must again validate and build upon it in the manner of a novel research project, with short daily meetings with instructors to address any major issues. No intergroup communication is allowed during either phase, so deficiencies in documentation and sticking points become obvious, hopefully leading to greater awareness among students of the adequacy of their future documentation. The pilot course involved 43 students and had a subject-specific focus on computational biology. Feedback was excellent, with students feeling that they had learnt more about scientific working practices and indicating they were highly likely to incorporate ideas introduced during the course into their own practice.

This course design offers great scope for inter-institutional working and as it uses OERs the same training can be delivered in several locations but remains adaptable to local needs. RBL would be more challenging to mirror in wet labs but could be adapted for these settings and anyone is encouraged to remix and run their own instance. Sophie is especially keen to see the materials translated into further languages.

Developing Open Science Training Curricula

OSTI is one of the first courses to specifically address open science training, but it is likely the first of many, as funding specifically aimed at developing open access and open science resources and pedagogies becomes available from the European Commission and other organisations. Some of the key considerations for teaching design in this space are:

  1. How to address socio-cultural aspects in addition to imparting knowledge about legal and technical tools or subject-specific content and skills training.
  2. The current attitudes and perceptions of students towards intellectual property and the use of digital technologies and how this will impact their learning.
  3. The fast pace of change in policy requirements and researcher attitudes to aspects of open science.
  4. The additional time and resources required to run new courses versus improving existing activities.

Open science curriculum map at MozFest 2014. Photo by Jenny Molloy, dedicated to the public domain via a CCZero waiver.

There are numerous one-off training events happening around the world, for instance the series of events funded by the FOSTER EU programme, which includes many workshops on open science. There is also informal training through organisations such as the local groups of the Open Science Working Group. Open science principles are incorporated into certain domain-specific conferences and skill-specific programmes like Software Carpentry workshops, which have a solid focus on reproducibility and openness alongside teaching software engineering skills to researchers.

There are no established programmes and limited examples of open science principles incorporated into undergraduate or graduate curricula across an entire module or course. Historically, there have been experiments with Open Notebook Science, for instance Jean-Claude Bradley’s work used undergraduates to crowdsource solubility data for chemical compounds. Anna Croft from Bangor University presented her experiences encouraging chemistry undergraduates to use open notebooks at OKCon 2011 and found that competition between students was a barrier to uptake. At a graduate level, Brian Nosek has taught research methods courses incorporating principles of openness and reproducibility (Syllabus) and a course on improving research (Syllabus). The Centre for Open Science headed by Nosek also has a Collaborative Replications and Education Project (CREP) which is an excellent embodiment of the student as producer model and incorporates many aspects of open and reproducible science through encouraging students to replicate studies. More on this later!

It is clear that curricula, teaching resources and ideas would be useful to open science instructors and trainers at this stage. Billy Meinke and Fabiana Kubke helpfully delved into a skills-based curriculum in more depth during Mozilla Festival 2014 with their mapping session. Bill Mills of Mozilla Science Lab recently published a blog post on a similar theme and has started a pad to collate further information on current training programmes for open science. In the US, NCEAS ran a workshop developing a curriculum for reproducible science, followed by a workshop on Open Science for Synthesis.

NESCent ran a curriculum building workshop in Dec 2014 (see wiki). Several participants in the workshop have taught their own courses on Tools for Reproducible Research (Karl Broman) or reproducibility in statistics courses (Jenny Bryan). This workshop was heavily weighted to computational and statistical research and favoured R as the tool of choice. Interestingly their curriculum looked very different to the MozFest map, which goes to show the breadth of perspectives on open science within various communities of researchers!

All of these are excellent starts to the conversation and you should contribute where possible! There is a strong focus on data-rich, computational science so work remains to rethink training for the wet lab sciences. Of the branches of skills identified by Billy and Fabiana, only two of seven relate directly to computational skills, suggesting that there is plenty of work to be done! For further ideas and inspiration, the following section details some ways in which the skills can be further integrated into the curriculum through existing teaching activities.

Skills map for Reproducible, Open and Collaborative Science. Billy Meinke and Fabiana Kubke's session at MozFest 2014.

Incorporating Open Science Training into Current Courses

Using the open literature to teach about reproducibility

Data and software are increasingly published alongside papers, ostensibly enabling reproduction of the research. When students try to reanalyse, replicate or reproduce research as a teaching activity, they develop and use skills in statistical analysis, programming and more, in addition to gaining exposure to the primary literature. As much published science is not reproducible, the limitations of research documentation, experimental design or analysis techniques may become more obvious, providing a useful experiential lesson.

There is public benefit to this type of analysis. Whether works are reproducible is of increasing interest, particularly in computational research, and various standards and marks of reproducibility have been proposed. But the literature is vast, and no mechanism is widely under consideration for systematic retrospective verification and demarcation of reproducibility. Performing this with thousands of students in the relevant discipline could rapidly crowdsource the desired information, while fitting easily into standard components of current curricula and offering a valid and useful learning experience.

The effect of ‘many eyes’ engaging in post-publication peer review, and being trained in reviewing, may also throw up substantive errors beyond a lack of information or technical barriers to reproduction. The most high-profile example is graduate student Thomas Herndon’s discovery of serious flaws in a prominent economics paper when he tried to replicate its findings [5,6]. These included coding errors, selective exclusion of data and unconventional weighting of statistics, meaning that a result which was highly cited by advocates of economic austerity measures, and had clear potential to influence fiscal policy, was in fact spurious. This case study provides a fantastic example of the need for open data, and of the social and academic value of reanalysis by students with the support of faculty.
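As a concrete illustration of how such choices can move a headline number, here is a small Python sketch. The figures are entirely invented for illustration, not the actual Reinhart–Rogoff data, and the two weighting schemes are simplified stand-ins for the kinds of methodological choices at issue:

```python
# Synthetic example: how selective exclusion and unconventional weighting
# can change a headline average. All numbers are hypothetical.
growth_by_country = {
    "A": [3.0, 2.5, 2.8],   # hypothetical annual growth rates (%)
    "B": [-1.0],            # a country with a single, atypical year
    "C": [1.2, 1.4],
}

def unweighted_country_mean(data):
    """Average each country's years first, then average the countries.
    This gives country B's single year the same weight as three years of A."""
    per_country = [sum(v) / len(v) for v in data.values()]
    return sum(per_country) / len(per_country)

def pooled_mean(data):
    """Pool all country-years and weight each year equally."""
    all_years = [x for v in data.values() for x in v]
    return sum(all_years) / len(all_years)

full = unweighted_country_mean(growth_by_country)
pooled = pooled_mean(growth_by_country)
# "Selective exclusion": silently dropping country B (as a spreadsheet
# range error might) shifts the result further still.
excluded = unweighted_country_mean(
    {k: v for k, v in growth_by_country.items() if k != "B"})
print(round(full, 3), round(pooled, 3), round(excluded, 3))
```

Three defensible-looking computations on the same six numbers give three different answers, which is precisely why access to the underlying data and code lets a replicator pinpoint where a published figure came from.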

This possibility has not been picked up in many disciplines, but the aforementioned CREP project aims to perform just such a crowd-sourced analysis and asks instructors to consider what might be possible through student replication. Grahe et al. suggest that:

“Each year, thousands of undergraduate projects are completed as part of the educational experience…these projects could meet the needs of recent calls for increased replications of psychological studies while simultaneously benefiting the student researchers, their instructors, and the field in general.” [7]

Frank and Saxe [8] support this promise, reporting that they found teaching replication to be enjoyable for staff and students and an excellent vehicle for educating about the importance of reporting standards, and the value of openness. Both publications suggest approaches to achieving this in the classroom and are well worth reading for further consideration and discussion about the idea.

Reanalysing open data

One step on from reproducing the original results is the ability to play with the data and code. Reanalysis using different models, or varying parameters to shift the focus of the analysis, can be very useful, provided the limitations of the experimental design and the aims of the original work are recognised. This leads us to the real potential for novel research using open datasets. Some fields lend themselves to this more than others. For example, more than 50% of public health masters projects across three courses examined by Feldman et al. [9] used secondary data for their analyses rather than acquiring expensive and often long-term primary datasets. Analysis of large and complex public health data is a vital graduate competency, so the opportunity to grapple with the issues and complexities of real data, rather than a carefully selected or contrived training set, is vital.
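A minimal sketch of what such a reanalysis can look like in practice, using only the Python standard library: the CSV content and column names below are invented for illustration, standing in for a dataset downloaded from an open data portal.

```python
import csv
import io
import statistics

# Invented sample standing in for a downloaded open dataset,
# e.g. a CSV exported from a public health data portal.
raw = """region,age_group,rate
north,0-14,2.1
north,15-64,5.4
south,0-14,1.8
south,15-64,6.0
south,65+,9.5
north,65+,8.7
"""

rows = list(csv.DictReader(io.StringIO(raw)))

def mean_rate(rows, age_group=None):
    """Recompute a summary statistic, optionally restricting to a subgroup --
    the kind of parameter a student might vary when shifting the focus
    of a reanalysis."""
    selected = [float(r["rate"]) for r in rows
                if age_group is None or r["age_group"] == age_group]
    return statistics.mean(selected)

overall = mean_rate(rows)          # statistic over the full dataset
elderly = mean_rate(rows, "65+")   # the same analysis focused on one subgroup
print(round(overall, 2), round(elderly, 2))
```

The point of the exercise is not the arithmetic but the workflow: parse someone else's published data, reproduce a summary figure, then vary a parameter to ask a question the original authors did not.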

McAuley et al. [10] suggest that the potential to generate linked data, e.g. interconnecting social data, health statistics and travel information, is the real power of open data and can produce highly engaging educational experiences. Moving beyond educational value, Feldman et al. [9] argue that the use of open data in higher education research projects allows for a more rapid translation of science into practice. However, this can only be true if that research is itself shared with the wider community of practice, as advocated by Lombardi [11]. This can be accomplished through the canonical scientific publishing track, or by using web tools and services such as the figshare or CKAN open data repositories, code-sharing sites, and wikis or blogs to share discoveries.

In order to use these digital tools, which form the bedrock of many open science projects and are slowly becoming fully integrated into scholarly communication systems, technological skills and an understanding of the process of knowledge production and dissemination in the sciences are required. Students should be able to contextualise these resources within the scientific process, preparing them for a future in a research culture that is being rapidly altered by digital technologies. All of these topics, including the specific tools mentioned above, are covered by the ROCS skills mapping from MozFest, demonstrating that the same requirements come up repeatedly and independently.

Use of open science approaches for research-based learning

There are several powerful arguments that engaging students in research-based activities leads to higher-level and higher-quality learning in higher education, and the Boyer Commission on Educating Undergraduates in the Research University called for research-based learning to become the standard, stating a desire:

“…to turn the prevailing undergraduate culture of receivers into a culture of inquirers, a culture in which faculty, graduate students, and undergraduates share an adventure of discovery.”

The previous section emphasised the potential role of open content, namely papers, data and code, in research-based learning. In addition, the growing number of research projects open to participation by all, including those designated as citizen science, can offer opportunities to engage in research that scales and contributes more usefully to science than the small research projects typically undertaken within a single institution as part of an undergraduate course. These open science activities offer options for both wet and dry lab activities, in place of or in addition to standard practical labs and field courses.

The idea of collaborative projects between institutions, and even globally, is not new; involvement in FOSS projects for computational subjects has long been recognised as an excellent opportunity to gain experience of collaborative coding in large projects with real, often complex code bases and a ‘world-size laboratory’ [12]. In wet lab research there are examples of collaborative lab projects between institutions that have been found to cut costs and resources as well as increasing the sample size of experiments to give publishable data [13]. Openness offers scaling opportunities to inter-institutional projects that might otherwise not exist, by increasing their visibility and removing barriers to further collaborative partners joining.

Tweet from @O_S_M requesting assistance synthesising molecules.

There are several open and citizen science projects that may offer particular scope for research-based learning. One option is the use of ecology field trips and practicals to contribute to the surveys conducted by organisations such as the UK Biological Records Centre, providing useful data contributions and giving students access to a wider but directly relevant dataset to analyse. NutNet is a global research cooperative that sets up node sites to collect ecosystem dynamics data using standard protocols for comparison across sites globally; as this is a longitudinal study with most measurements taken only a couple of times a year, it offers good scope for practical labs. On a more ad hoc basis, projects such as Open Source Malaria offer many project and contribution opportunities, e.g. a request to help make molecules on their wishlist and a GitHub-hosted to-do list. One way of incorporating these into curricula is through team challenges in a similar vein to the iGEM synthetic biology project, which involves teams of undergraduates making bacteria with novel capabilities and contributing the engineered DNA modules to a public database of parts known as BioBricks.

In conclusion, open and citizen science projects that use the internet to bring together networks of people to contribute to live projects could be incorporated into inquiry-based learning in higher education, to the benefit of both students and the chosen projects, allowing students to contribute truly scientifically and socially important data in the ‘student as producer’ model while maintaining the documented benefits of research-based pedagogies. This ranges from controlled contributions to practise particular skills, through to discovery-oriented tasks and challenges such as iGEM that allow students to generate research questions independently.

There are significant challenges in implementing these types of research-based activities, many of which are true of `non-open’ projects. For instance, there are considerations around mechanisms of participation and sharing processes and outputs. Assessment becomes more challenging as students are collaborating rather than providing individual evidence of attainment. As work is done in the open, provenance and sharing of ideas requires tracking.

Conclusion

This post has introduced some ideas for teaching open science focusing on the student as both a producer and consumer of knowledge. The majority of suggestions have centred around inquiry-based learning as this brings students closer to research practices and allows social and cultural aspects of science and research to be embedded in learning experiences.

Explicitly articulating the learning aims and values that are driving the teaching design would be useful to enable students to critique them and arrive at their own conclusions about whether they agree with openness as a default condition. There is currently little systematic evidence for the proposed benefits of open science, partly because it is not widely practised in many disciplines and also as a result of the difficulty of designing research to show direct causality. Therefore, using evidence-based teaching practices that attempt to train students as scientists and critical thinkers without exposing the underlying principles of why and how they’re being taught would not be in the spirit of the exercise.

Support for increased openness, and a belief that it will lead to better science, is growing, so the response of the next generation of scientists, and their decisions about whether to incorporate these practices into their work, have great implications for future research cultures and communities. At the very least, exposure to these ideas during under- and postgraduate training will enable students to be aware of them during their research careers and to make more informed decisions about their practices, values and aims as researchers. There are exciting times ahead in science teaching!

If you’ve found this interesting, please get involved with a growing number of like-minded people via the pointers below!

Getting Involved in Open Science Training

More projects people could get involved with? Add them to the comments and the post will be updated.

References

  1. Neary, M., & Winn, J. (2009). The student as producer: reinventing the student experience in higher education.
  2. Peters, M. A., & Britez, R. G. (Eds.). (2008). Open education and education for openness. Sense Publishers.
  3. For a peer-reviewed paper on the OSTI initiative, see Kershaw, S.K. (2013). Hybridised Open Educational Resources and Rotation Based Learning. Open Education 2030. JRC−IPTS Vision Papers. Part III: Higher Education (pp. 140-144). Link to the paper in Academia.edu
  4. Carpenter, J., Wetheridge, L., Smith, N., Goodman, M., & Struijvé, O. (2010). Researchers of Tomorrow: A Three Year (BL/JISC) Study Tracking the Research Behaviour of ‘Generation Y’ Doctoral Students: Annual Report 2009–2010. Education for Change.
  5. Herndon, T., Ash, M., & Pollin, R. (2014). Does high public debt consistently stifle economic growth? A critique of Reinhart and Rogoff. Cambridge journal of economics, 38(2), 257-279.
  6. Roose, Kevin. (2013). Meet the 28-Year-Old Grad Student Who Just Shook the Global Austerity Movement. New York Magazine. Available from http://nymag.com/daily/intelligencer/2013/04/grad-student-who-shook-global-austerity-movement.html. Accessed 20 Dec 2014.
  7. Grahe, J. E., Reifman, A., Hermann, A. D., Walker, M., Oleson, K. C., Nario-Redmond, M., & Wiebe, R. P. (2012). Harnessing the undiscovered resource of student research projects. Perspectives on Psychological Science, 7(6), 605-607.
  8. Frank, M. C., & Saxe, R. (2012). Teaching replication. Perspectives on Psychological Science, 7(6), 600-604.
  9. Feldman, L., Patel, D., Ortmann, L., Robinson, K., & Popovic, T. (2012). Educating for the future: another important benefit of data sharing. The Lancet, 379(9829), 1877-1878.
  10. McAuley, D., Rahemtulla, H., Goulding, J., & Souch, C. (2012). 3.3 How Open Data, data literacy and Linked Data will revolutionise higher education.
  11. Lombardi, M. M. (2007). Approaches that work: How authentic learning is transforming higher education. EDUCAUSE Learning Initiative (ELI) Paper, 5.
  12. O’Hara, K. J., & Kay, J. S. (2003). Open source software and computer science education. Journal of Computing Sciences in Colleges, 18(3), 1-7.
  13. Yates, J. R., Curtis, N., & Ramus, S. J. (2006). Collaborative research in teaching: collaboration between laboratory courses at neighboring institutions. Journal of Undergraduate Neuroscience Education, 5(1), A14.

Licensing

Text is licensed under the Creative Commons CC0 1.0 Universal waiver. To the extent possible under law, the author(s) have dedicated all copyright and related and neighbouring rights to this text to the public domain worldwide.

Open Books Image: by opensourceway on Flickr under CC-BY-SA 2.0

2014: The Year in 12 Freedom of Information Requests

Arne Semsrott - December 19, 2014 in Featured, FragdenStaat, IFG, Open Knowledge Foundation

A lot happened on FragDenStaat this year. In this review we look back at the highs and lows of 2014, with one request per month.

January: The Berlin police quietly built a database of event data in which, among other things, the details of demonstration organisers are stored for three years. The design of the database was revealed through an FOI request. More info / To the request

February: How much do the give-aways in the Bundestag cost? A reply from the Bundestag administration in February provides the answer.

To the request

March: As part of cross-border leasing deals, the Berlin public transport authority (BVG) concluded contracts with, among others, the investment bank JP Morgan. It refuses to disclose the relevant documents, however, and a final answer is still outstanding. Nevertheless, the BVG closes every email with: “We hope to be able to welcome you as our customer again in the future.” To the request

April: At a public session of the Interior Committee of the Hamburg Parliament, the police showed a video of riots connected with a demonstration. But when an FOI request was filed for the release of the video, the police suddenly balked. Only after mediation by the data protection commissioner and a six-month wait did they give in. To the request

May: The federal government commissioned two legal opinions which threaten members of the Bundestag with prosecution in the USA should Edward Snowden be invited to the Bundestag. The cost of the opinions became public through an IFG request. To the request

June: At the beginning of the year, the Federal Intelligence Service (BND) announced a transparency campaign. It then declined to answer questions about that transparency. More info / To the request

July: We defeated #Zensurheberrecht! In a judgment of acknowledgement, the Regional Court of Berlin confirmed that we may publish a report by the research service of the Bundestag. The Interior Ministry had sued us over it. More info / To the request

August: Which lobbyists do ministers and state secretaries meet? The IFG makes it possible to find out. To the request

September: According to Angela Merkel, the full sovereignty of the German intelligence services has been restored, which she said could be seen from diplomatic notes exchanged with France, the United Kingdom and the USA. After initial stonewalling, the Federal Chancellery released parts of the notes. To the request

October: Instead of answering his request, the Ahlen job centre in the district of Warendorf threatened the applicant Timo H. with sanctions. After public pressure, however, the district relented and withdrew the threat. More info / To the request

November: Hamburg is Germany’s model for transparency. Only the indirect public administration, such as the chamber of commerce, does not want to play along. It may soon be forced to by court order, though. More info / To the request

December: The Federal Office for Information Security (BSI) had a contract with the company Vupen, which sells information about security vulnerabilities and exploits. This allowed security agencies to exploit the holes instead of closing them. On request, the BSI released the contract. More info / To the request

… and of course 1,400 others this year.

If that is not enough for you, meet us at 31c3 at the end of the month. The highlight: Stefan Wehrmeyer on the main stage on 29 December at 8.30 pm: “IFG – Mit freundlichen Grüßen”

To expand our portal in 2015, we depend on donations. Please support us with 20 or 50 euros here. Thank you!

The Crusade for Curious Images

Marieke Guy - December 19, 2014 in #BLdigital, Digital Humanities, Events/Workshops, Front Page, Open Humanities

In December last year the British Library released over a million images on to Flickr Commons. The images were taken from the pages of 17th, 18th and 19th century books digitised by Microsoft and gifted to the British Library. One year on, it seems pertinent to mark the anniversary with an event held at the […]

So why does Belgium rank so low?

pjpauwels - December 19, 2014 in belgium, Featured, Open Data, Open Data News, opendataindex

In our article of 9 December we wrote about Belgium scoring slightly higher on the Global Open Data Index: we went from 58th to 53rd. We have positive aspirations for 2015, because we now have a federal minister for the Digital Agenda who is directly responsible for open data, and open data gets multiple mentions in the policy agreement. Even so, we received a few questions and remarks on our results:

Why is Belgium so low in relation to its neighbouring countries? Every other Western European country sits at the top.

How can a country with an established local Open Knowledge chapter and projects like iRail still rank only halfway up the index?

To sum it up, we were asked the question: Why does Belgium rank so low?


The Twitter conversation between Phil Archer from W3C and Pieter Colpaert sums up our answer in only a few tweets:

But don’t get us wrong: the Global Open Data Index is a great tool for benchmarking countries on their national open data efforts, and a relatively simple tool with which the open community can crowdsource a global ranking inevitably has to make certain choices. That doesn’t mean Belgium is doing a bad job, though. Open data in Belgium, and open knowledge more broadly, is mostly emerging bottom-up, from initiatives by numerous organisations and local and regional governments.

Cities like Ghent, Antwerp and Kortrijk are pushing the local envelope by organising hackathons and datadives for their citizens. But that’s not part of the global index.

Open Data Forum has proven that there is public support for opening up data in all layers of the Flemish government and its associated organisations; it supports local initiatives, showcases best practices at the federal level and runs a packed data portal. But that’s not part of the index.

AWT is putting similar efforts in motion in Wallonia, together with the Hackathon e-Gov Wallonia team, who just organised the first Brussels hackathon as well. Still not part of the index.

The brilliant researchers at iMinds, the research groups and the different universities have helped us tremendously on a strategic and scientific level, as well as in supporting many of our causes and activities. Not part of the index.

And iRail? Well… they open up national transport data in Belgium through their API for third parties, but iRail is not an official source. So, you guessed it: not applicable for the index.
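To make concrete the kind of third-party reuse iRail enables, here is a minimal sketch of building a query against its public connections API. The endpoint and parameter names (`from`, `to`, `format`) are assumptions based on iRail’s public documentation as we understand it, so verify them before relying on this:

```python
from urllib.parse import urlencode

# iRail republishes Belgian rail data through a public API at api.irail.be.
# Endpoint and parameter names here are assumptions; check the official
# iRail documentation before using them in a real application.
BASE = "https://api.irail.be/connections/"

def connections_url(origin, destination, fmt="json"):
    """Build a request URL for train connections between two stations."""
    query = urlencode({"from": origin, "to": destination, "format": fmt})
    return BASE + "?" + query

# A third-party app would fetch this URL and read the JSON response.
url = connections_url("Gent-Sint-Pieters", "Brussels-Central")
print(url)
```

Because the API is plain HTTP with query parameters, any app or script can consume it without registration, which is exactly what makes unofficial reuse like this possible.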

To make a long story short: Belgium is not doing a bad job on open data. There is still a lot to be done, but there are many efforts and little victories that you don’t see in the Global Open Data Index. And those efforts and victories are not just the work of Open Knowledge Belgium; we are only one part of a bigger network of organisations, ambassadors and projects, many of which I have no doubt forgotten to mention.

I’m not even going to try to name everyone who helped us reach the point where we are now. We can only humbly say thank you; we hope to work with all of you in the coming years and make Belgium truly a country where everybody can benefit from open knowledge. Thank you!

Still not convinced? Feel the need to discuss this? If you want to discuss open data efforts in Belgium and get an overview of the initiatives active today, we happily invite you to join us at the Open Belgium Conference on 23 February in Namur. We’ll have a panel on open data efforts in the different governmental layers, an overview of current efforts, practical workshops on open culture, open science, open transport and business models, and much more. Early bird tickets are available until the end of this month.


Open decision data – uniquely good and rare

raimom - December 18, 2014 in asianhallinta, avoin data, avoin demokratia, avoindata.net, demokratia, Featured, helsinki, hyödyt, kansalaisvaikuttaminen, kustannukset, open ahjo, Open Democracy, Open Government Data, osallistuminen, päätösdata, rajapinta, Tampere, vaikuttaminen

Cities and societies alike are built today as networks, not hierarchies. For networks to develop, individual actors must be free to act in the way they consider best. Image: Wikimedia/NASA


What is decision data?

Public administration organisations, such as municipalities and the Finnish Parliament (Eduskunta), typically have one central information system for preparing and storing decisions and decision proposals, often called a case management system. Tampere, for example, uses KT Web. Its primary content producers are civil servants, office holders and elected officials. The system usually publishes agendas, minutes and, in many cases, attachments on the organisation’s website, intended to be read by the human eye. Publishing the decisions of individual office holders is rarer; there Helsinki is a pioneer in openness. Helsinki is also a pioneer in offering the material as open, machine-readable data: the API serves XML, which has also been further processed into JSON. The author of this blog post has advocated opening up decision data in Tampere for about two years, unfortunately with meagre results. A meeting held today at the Parliament inspired a Facebook post to the Osallistuva Tampere group on the benefits of publishing decision data; I am now republishing that text, with small edits, in blog form.
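The post mentions that Helsinki’s decision API serves XML which has been further processed into JSON. As a standard-library sketch of that conversion step (the element names below are invented for the example and are not the real Ahjo schema):

```python
import json
import xml.etree.ElementTree as ET

# A made-up decision record: the real Open Ahjo API serves much richer XML,
# but the XML-to-JSON conversion idea is the same.
xml_doc = """<decision>
  <title>Zoning plan amendment</title>
  <body>City Planning Committee</body>
  <date>2014-12-18</date>
</decision>"""

root = ET.fromstring(xml_doc)
# Flatten each child element into a key/value pair.
record = {child.tag: child.text for child in root}
as_json = json.dumps(record, ensure_ascii=False, indent=2)
print(as_json)
```

Serving machine-readable XML is what makes this kind of community post-processing possible in the first place; once the data is in JSON, web developers can reuse it directly.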

Today the Parliament held a workshop on open data, on the topic of MP and decision data. Joonas Pekkanen of OKFFI documented the event in a post to the Open Democracy Finland group.

Discussion on open data at the Parliament, 18 December 2014. Photo: Joonas Pekkanen.


In December 2010 the Open Democracy network made a proposal to the Parliament to open up its data, but progress has been slow to non-existent. I have likewise proposed that Tampere open up its decision data as well. Unfortunately, according to Matti Saastamoinen and Jarkko Moilanen, who worked in the Open Data Tampere Region programme, the cost–benefit ratio of opening the current system’s data is unfavourable because of an upcoming information system renewal.


Could we hold an open discussion about the benefits and the costs, and assess them together?


According to the system vendor Tieto, “In 2014 Open Ahjo won the public administration Open City category in the awards organised by WeGO (The World e-Governments Organization of Cities and Local Governments). Last year Open Ahjo won the enabler category of the Apps4Finland competition.”

Image: Wikimedia Commons / Woody993.

As a network grows, the number of relationships between its actors multiplies. Hierarchical information flow and interaction cannot keep up in a networked environment. Openness is an effective remedy for improving the dialogue between a hierarchical administration and an open society.

According to Helsinki Region Infoshare, in 2013 HRI won the encouraging €100,000 European Prize for Innovation in Public Administration, awarded, in the Commission’s words, for “opening up public data to involve citizens in decision-making”.

The prize money had to be used to improve citizens’ opportunities for participation and access to information. It funded Datademo, Finland’s first participatory budgeting exercise run entirely online, which so far has produced 40 outputs advancing open democracy: blog posts, applications, visualisations, APIs and documented open source code. The results of one funding round are still to come. For example, the Tampere-based Spartacus Technologies turned the Helsinkikanava council videos into a format that fits in your pocket.

A dozen or more applications have already been built in Finland on open or “screen-scraped” decision data. The makers are individual developers, ordinary citizens and small businesses. The most striking examples are Helsinki’s Päätökset service and Eduskunta Explorer.

This is my opening contribution to the discussion on the benefits of open decision data.

The discussion can continue, for example, on the Avoindata.net questions and answers service.

Fourth Workshop on Linked Data in Linguistics (LDL-2015): Resources and Applications

Christian Chiarcos - December 18, 2014 in ldl, ldl-2015, linguistics, linked data, linked data in linguistics, llod, natural language processing, ontologies, open linguistics, Semantic Web, WG Linguistics

We are very happy to announce the next edition of the OWLG’s Linked Data in Linguistics (LDL) workshop series. The OWLG’s fourth Workshop on Linked Data in Linguistics is becoming increasingly international and, for the first time, will be held outside Europe: on July 31st, 2015, in Beijing, China, co-located with ACL-IJCNLP 2015.

See you in Beijing!


 

4th Workshop on Linked Data in Linguistics (LDL-2015): Resources and Applications. Beijing, July 31st, 2015, http://ldl2015.linguistic-lod.org, co-located with ACL-IJCNLP 2015

 

Workshop Description

The substantial growth in the quantity, diversity and complexity of linguistic data accessible on the Web has led to many new and interesting research areas in Natural Language Processing (NLP) and linguistics. However, resource interoperability represents a major challenge that still needs to be addressed, in particular if information from different sources is combined. With its fourth instantiation, the Linked Data in Linguistics workshop continues to provide a major forum to discuss the creation of linguistic resources on the web using linked data principles, as well as issues of interoperability, distribution protocols, access and integration of language resources and natural language processing pipelines developed on this basis.

As a result of the preceding workshops, a considerable number of resources are now available in the Linguistic Linked Open Data (LLOD) cloud [1]. LDL-2015 will therefore specifically welcome papers addressing the usage of Linked Data and related technologies in NLP, linguistics and neighboring fields such as the Digital Humanities.

Organized by the interdisciplinary Open Linguistics Working Group (OWLG) [2], the LDL workshop series is open to researchers from a wide range of disciplines, including (computational) linguistics and NLP, but also the Semantic Web, linguistic typology, corpus linguistics, terminology and lexicography. In 2015, we plan to increase the involvement of the LIDER project [3] and the W3C Community Group on Linked Data for Language Technology (LD4LT) [4], to build on their efforts to facilitate the use of linked data and language resources for commercial applications, and to continue the success of LIDER‘s roadmapping workshop series in engagement with enterprise.

[1] http://linguistics.okfn.org/resources/llod/ [2] http://linguistics.okfn.org/ [3] http://www.lider-project.eu/ [4] http://www.w3.org/community/ld4lt/

Topics of Interest

We invite presentations of algorithms, methodologies, experiments, use cases, project proposals and position papers regarding the creation, publication or application of linguistic data collections and their linking with other resources, as well as descriptions of such data. This includes, but is not limited to, the following:

A. Resources

  • Modelling linguistic data and metadata with OWL and/or RDF.
  • Ontologies for linguistic data and metadata collections as well as cross-lingual retrieval.
  • Descriptions of data sets following Linked Data principles.
  • Legal and social aspects of Linguistic Linked Open Data.
  • Best practices for the publication and linking of multilingual knowledge resources.

B. Applications

  • Applications of such data, other ontologies or linked data from any subdiscipline of linguistics or NLP.
  • The role of (Linguistic) Linked Open Data to address challenges of multilinguality and interoperability.
  • Application and applicability of (Linguistic) Linked Open Data for knowledge extraction, machine translation and other NLP tasks.
  • NLP contributions to (Linguistic) Linked Open Data.

We invite both long papers (8 pages plus 2 pages of references, formatted according to the ACL-IJCNLP guidelines) and short papers (4 pages plus 2 pages of references) presenting original research, innovative approaches and resource types, use cases or in-depth discussions. Short papers may also present project proposals, work in progress or data set descriptions.

Dataset Description Papers

In addition to full papers and regular short papers, authors may submit short dataset description papers that describe a resource’s availability, published location and key statistics (such as its size). Such papers need not present a novel method for creating or publishing the data; they will instead be judged on the quality, usefulness and clarity of the description given in the paper.

For contact information, submission details and last-minute updates, please consult our website under http://ldl2015.linguistic-lod.org

Important Dates

  • May 8th, 2015: Paper submission
  • June 5th, 2015: Notification of Acceptance
  • June 21st, 2015: Camera-Ready Copy
  • July 31st, 2015: Workshop

Organizing Committee

  • Christian Chiarcos (Goethe University Frankfurt, Germany)
  • Philipp Cimiano (Bielefeld University, Germany)
  • Nancy Ide (Vassar College, USA)
  • John P. McCrae (Bielefeld University, Germany)
  • Petya Osenova (Bulgarian Academy of Sciences, Bulgaria)

Program Committee

  • Eneko Agirre (University of the Basque Country, Spain)
  • Guadalupe Aguado (Universidad Politécnica de Madrid, Spain)
  • Claire Bonial (University of Colorado at Boulder, USA)
  • Peter Bouda (Interdisciplinary Centre for Social and Language Documentation, Portugal)
  • Antonio Branco (University of Lisbon, Portugal)
  • Martin Brümmer (University of Leipzig, Germany)
  • Paul Buitelaar (INSIGHT, NUIG Galway, Ireland)
  • Steve Cassidy (Macquarie University, Australia)
  • Nicoletta Calzolari (ILC-CNR, Italy)
  • Thierry Declerck (DFKI, Germany)
  • Ernesto William De Luca (University of Applied Sciences Potsdam, Germany)
  • Gerard de Melo (University of California at Berkeley)
  • Judith Eckle-Kohler (Technische Universität Darmstadt, Germany)
  • Francesca Frontini (ILC-CNR, Italy)
  • Jeff Good (University at Buffalo)
  • Asunción Gómez Pérez (Universidad Politécnica de Madrid, Spain)
  • Jorge Gracia (Universidad Politécnica de Madrid, Spain)
  • Yoshihiko Hayashi (Waseda University, Japan)
  • Fahad Khan (ILC-CNR, Italy)
  • Seiji Koide (National Institute of Informatics, Japan)
  • Lutz Maicher (Universität Leipzig, Germany)
  • Elena Montiel-Ponsoda (Universidad Politécnica de Madrid, Spain)
  • Steven Moran (Universität Zürich, Switzerland)
  • Sebastian Nordhoff (Max Planck Institute for Evolutionary Anthropology, Leipzig, Germany)
  • Antonio Pareja-Lora (Universidad Complutense Madrid, Spain)
  • Maciej Piasecki (Wroclaw University of Technology, Poland)
  • Francesca Quattri (Hong Kong Polytechnic University, Hong Kong)
  • Laurent Romary (INRIA, France)
  • Felix Sasaki (Deutsches Forschungszentrum für Künstliche Intelligenz, Germany)
  • Andrea Schalley (Griffith University, Australia)
  • Gilles Sérraset (Joseph Fourier University, France)
  • Kiril Simov (Bulgarian Academy of Sciences, Sofia, Bulgaria)
  • Milena Slavcheva (JRC-Brussels, Belgium)
  • Armando Stellato (University of Rome, Tor Vergata, Italy)
  • Marco Tadic (University of Zagreb, Croatia)
  • Marieke van Erp (VU University Amsterdam, The Netherlands)
  • Daniel Vila (Universidad Politécnica de Madrid)
  • Cristina Vertan (University of Hamburg, Germany)
  • Walther v. Hahn (University of Hamburg, Germany)
  • Menzo Windhouwer (Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands)

Thank You to Our Outgoing CEO

Rufus Pollock - December 18, 2014 in News

This is a joint blog post by Open Knowledge CEO Laura James and Open Knowledge Founder and President Rufus Pollock.

In September we announced that Laura James, our CEO, is moving on from Open Knowledge and we are hiring a new Executive Director.

From Rufus: I want to express my deep appreciation for everything that Laura has done. She has made an immense contribution to Open Knowledge over the last 3 years and has been central to all we have achieved. As a leader, she has helped take us through a period of incredible growth and change and I wish her every success on her future endeavours. I am delighted that Laura will be continuing to advise and support Open Knowledge, including joining our Advisory Council. I am deeply thankful for everything she has done to support both Open Knowledge and me personally during her time with us.

From Laura: It’s been an honour and a pleasure to work with and support Open Knowledge, and to have the opportunity to work with so many brilliant people and amazing projects around the world. It’s bittersweet to be moving on from such a wonderful organisation, but I know that I am leaving it in great hands, with a smart and dedicated management team and a new leader joining shortly. Open Knowledge will continue to develop and thrive as the catalyst at the heart of the global movement around freeing data and information, ensuring knowledge creates power for the many, not the few.

The OpenLaws.eu Project: An Interview

Thomas Lohninger - December 18, 2014 in Featured, Open Data, open laws

On the occasion of the launch of the OpenLaws.eu project, we conducted an interview with Clemens Wass, the person in charge.
What is opening up legal texts about, and why is this topic important to you?
Law affects every one of us. We have simply forgotten that, or we have resigned ourselves to it being a matter for experts. In a state governed by the rule of law, it cannot and must not be the case that I find no answers to simple legal questions and drown in a flood of legislation. Wouldn’t it be practical, sensible, and economically and politically useful if citizens were better informed about their rights? And if, in the age of Web 2.0, they could also publish their opinions on certain points at the same time? A step towards more direct democracy?
Laws and court decisions across Europe are increasingly being made available as open data. We take this data, present it through a user-friendly interface and develop solutions so that access to the law becomes as easy as possible for citizens.

Clemens Wass

How did you come to the topic of open legislation?
I wrote my diploma thesis back in 2001 on the fact that laws and court decisions are “free works” not subject to copyright protection. That is an absolutely special position within copyright law, and it serves the purpose of enabling citizens to inform themselves as well as possible about the legal framework they are subject to. For me, the internet has always been a fascinating way to improve that access. When, two years ago, I completed a degree focused on entrepreneurship and innovation, and at the same time discovered that legal databases were being made available as open data, it was time for me to make more of the topic.
What does the situation look like internationally, and where does Austria stand in comparison?
Austria holds a leading position in e-government in Europe and worldwide – although hardly anyone knows it. In international rankings Austria is always near the top and provides citizens with many advanced solutions. Of course, this must always be judged relative to others, and naturally a great deal can still be improved. Some of the systems are old, and the interfaces do not always meet all the standards that today’s developers expect. But I hope Austria moves ahead quickly here, so that legal informatics soon has the standards that have long been common in other fields.
One of the nicest and earliest open data success stories in Austria lies behind the RIS app. What is the story, and how did the app come about?
For me it really is a little fairy tale – from an app to an EU project. I called “Digitales Österreich” at the Federal Chancellery and asked whether they were interested in an app for the RIS. The colleagues there were enthusiastic and we got going straight away. I was able to win over the computer science department of the University of Salzburg to work on the project with me, and after a few months of development the iPhone version was ready; the Android version followed later. The University of Salzburg has since withdrawn from the project, and I am now working with a software development company on an HTML5 version of the RIS:App, which at times presents us with considerable challenges.
We discovered that the demand for simple, technology-supported access to the law is great, and subsequently applied for an EU project. In April the OpenLaws.eu project starts, through which we are building a European legal information platform. The project is largely funded by the Directorate-General for Justice of the European Commission.
What resistance did you have to overcome in the RIS project?
Unfortunately a project like this always takes a lot of time, even if from the outside it just looks like a small app. Users today are spoiled by many excellent free apps and expect a very high standard – which is a good thing. What gets overlooked, however, is what developing professional apps costs, and our budget simply cannot keep up with big companies. Even if we could charge a few euros for the app, that would hardly be worth mentioning in the small Austrian market, and besides, there ought to be a free legal information app. That is something we still need to communicate more clearly to users.
In a collaborative development, the strategies for reaching the goal always have to be carefully coordinated, especially when there is no secure funding and no prospect of big profits. There are often different approaches to how best to proceed, and then things do get debated. But as long as the goal is clear, all of that can be resolved.
Is there a business model behind the project, or how do you finance yourselves?
From April 2014 we will be funded by the EU Commission. Until now I have been supported by the Business Creation Center Salzburg and have also successfully applied for an FFG grant. In the medium term the project needs a sustainable business model, for example on a freemium basis. We do not want OpenLaws to remain dependent on grant funding, so as to avoid problems like those the cultural project Europeana is currently facing.
You are also a member of the Open Legislation working group at OKF Central. What questions do you deal with there?
We are currently reviving the Open Legislation working group. Since OKCon in Geneva in 2013, the Law Mining hackathon has put some fresh wind in our sails. Current projects are regularly presented and discussed on the mailing list, though in my view they are often rather technology-heavy. What matters, to my mind, is getting a good overall overview, for which a blog might be a good solution. We will see what emerges.
What advantages and possibilities does opening up legal texts in machine-readable formats create?
In the end we should hopefully have a more comprehensive and at the same time simpler overall picture of the relevant legal provisions. Ideally, I get the most relevant statements on my questions. If, say, I want to know whether I may cut off the overhanging branch of my neighbour’s tree, it would be helpful to have the relevant statutory provision, a few summarised decisions (ideally from comparable situations) and a few expert comments gathered in one place.
What is the openlaws.eu project?
OpenLaws.eu is an EU project I initiated, which aims at the linking of laws, decisions and legal literature described above. We build on open data, open innovation and open source software. The partners are the University of Amsterdam as lead partner, the London School of Economics, the Salzburg University of Applied Sciences, the University of Sussex, the software company Alpenite, and my own company, BY WASS GmbH.
Alongside developing the platform, we are also working on a recommendation to the Commission on how it can best use open data and open innovation in the legal field.

Ecuador’s Open Data Public Policy Guide

Eduardo Bejar - December 17, 2014 in datos abiertos, ecuador, Noticias, Open Data, snap


In November 2014, Ecuador’s National Secretariat of Public Administration (SNAP) published the Open Data Public Policy Guide (GPP-DA-v01-2014), which, together with the National Electronic Government Plan 2014-2017, constitutes the first official step by the Ecuadorian government towards the adoption of open data in the entities of the Central, Institutional and Dependent Public Administration of the Executive Branch (APCID).

The Guide defines open data as “any data that is accessible, released, published or exposed, that is not of a reserved or confidential nature, and that can be used, reused and redistributed by anyone”. It also sets out the basic principles data must meet to be considered open, based on the eight principles initially defined by the Open Government Working Group that met in Sebastopol, California (United States) in 2007.

The Guide promotes the release of data that is already published, as well as any other data deemed necessary and required or permitted by the Organic Law on Transparency and Access to Public Information (LOTAIP). In essence this is the public government data defined in Article 7 of LOTAIP, which has been published on government websites under the “Transparencia” tab, with the exception of:

  • What Article 6 of LOTAIP provides on Confidential Information, corresponding to personal data.
  • What Articles 17 and 18 of LOTAIP provide on Reserved Information, corresponding to data that may be sensitive for reasons of national security.
  • Anything that affects or compromises information security in a way that could create a security risk.

Finally, the Guide promotes the release of data in a unified, common format and commits to the 3-star level of the 5-Star Open Data deployment scheme proposed by Tim Berners-Lee in 2009 as a first step in releasing Ecuador’s data: data published on the web, under an open licence, structured in a format that machines can interpret or process, and in non-proprietary formats.
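The 5-star ladder the Guide refers to is cumulative, which a short sketch makes explicit. The boolean criteria names below are our own labels for Berners-Lee’s levels, not terminology from the Guide:

```python
def star_rating(on_web, open_license, machine_readable,
                non_proprietary, uses_uris, linked):
    """Rate a dataset on Tim Berners-Lee's cumulative 5-star scheme."""
    if not (on_web and open_license):
        return 0              # not open data yet
    stars = 1                 # 1*: on the web under an open licence
    if machine_readable:
        stars = 2             # 2*: structured, machine-readable (e.g. Excel)
        if non_proprietary:
            stars = 3         # 3*: non-proprietary format (e.g. CSV)
            if uses_uris:
                stars = 4     # 4*: URIs so others can point at the data
                if linked:
                    stars = 5 # 5*: linked to other data for context
    return stars

# The combination the Guide commits to, i.e. the 3-star level:
print(star_rating(True, True, True, True, False, False))
```

Each level presupposes the ones below it, which is why committing to 3 stars also commits Ecuador to web publication and open licensing.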

The full Guide can be consulted at the following link: GPP-DA-v01-20141128-SNAP-SGE, and the National Electronic Government Plan here: Plan Gobierno Electronico V1