
The OpenGLAM Principles: ways forward to Open Access for cultural heritage

- November 12, 2019 in Featured, Open Access, OpenGLAM, Special, Open Data

(Translator's note: this article is a Japanese translation of the Open Knowledge post OpenGLAM Principles: ways forward to Open Access for cultural heritage (30 April 2019); as of November 2019 the OpenGLAM Principles are being revised.)

What is OpenGLAM?

In the early 2010s, OpenGLAM (Galleries, Libraries, Archives and Museums) was launched: a network supporting exchange and collaboration between cultural institutions that support open access. OpenGLAM was an initiative and working group of the Open Knowledge Foundation (OKFN), later known as Open Knowledge International (translator's note: as of June 2019 the organisation has reverted to the OKFN name), co-funded by the European Commission. Creative Commons, the Communia Association and the GLAM-Wiki community were partners from the start. Several local OpenGLAM groups formed, particularly in Europe. The network carries out outreach through several communication channels, such as a dedicated website and the OpenGLAM Twitter account, and also works with the Public Domain Review, another OKFN initiative (now independent).

To outline the shared values behind free and open access to digital cultural heritage, the working group drafted a set of OpenGLAM Principles in 2013, aiming to define what it means to be an open institution in the cultural heritage sector.

Screenshot of the OpenGLAM Principles as drafted in 2013

As open access has become more widely adopted in the cultural sector, the need for stronger collaboration among stakeholders in this space has grown. In 2018, a group of people connected to Creative Commons, the Wikimedia Foundation and Open Knowledge International took the initiative to revitalise the OpenGLAM network and think about its next steps. Creative Commons does fundamental work in this field, helping cultural heritage institutions publish their content through its standard licenses and by providing training such as the Creative Commons Certificate programme. The first steps were to breathe new life into the @OpenGLAM Twitter account through an open call for contributors, and to run a 'temperature check' survey on the OpenGLAM Principles. Here we describe some of the conclusions and next steps. You can access and comment on the full analysis of the survey here.

The survey on the OpenGLAM Principles

The survey was publicised through social media, mainly via the @OpenGLAM account, and we also reached out to specific people we wanted to hear from. We received 109 responses in total. Most participants were based in Europe (30%) and Oceania (25%), followed by North America (19%) and Latin & Central America (19%). Very few responses came from Asia or the Middle East, and none came from Africa. Flaws in our outreach strategy aside, this points to one of the problems with the Principles themselves: they are only available in English, which adds a further barrier to participation.

We also wanted to know how respondents relate to GLAM institutions. Librarians were the largest group in the survey (27%), followed by museum professionals (11%) and academics and community organisers such as Wikimedians in Residence (23%). Only 7% belonged to archives, while 8% worked as advisors or external consultants to GLAM organisations, and 21% said they played multiple roles or worked at institutions with multiple functions.

We found that the Principles are not well known: almost half of the respondents (45%) were not aware of them before taking the survey. When we asked participants whether they considered the Principles useful for their work, a large majority responded positively (72%), while 25% answered "maybe" and only a small share (3%) did not think they would be useful.

Among those who found them unhelpful, most criticism concerned the lack of support from public bodies, the lack of communication and connection with cultural heritage institutions, and the absence of support structures for them. As one respondent summarised:
"Organisations in the cultural field are disconnected from open data and similar efforts. They need support from related organisations. We need guidelines and values to discuss in order to build better structures and networks."

Among those who found them useful, most pointed to the value of having a framework and a set of values to guide their work. In its current version, however, the Principles seem to offer little or no such guidance. Besides the limited range of examples provided, respondents flagged several concerns that a future revision needs to address: the primary focus on releasing data, the lack of awareness of the tensions between open access and the interests and rights of other stakeholders such as marginalised groups and indigenous communities, and the absence of a broader, global perspective on cultural heritage. As one respondent put it:
"Information with personal, cultural or social constraints, such as traditional knowledge, should not simply be 'released'. We need some awareness of the complexity of cultural knowledge."
Survey participants were also asked whether they thought the Principles needed updating and, if so, how they should change. Respondents said they would like to see more guidance on how to apply the Principles in practice, and better examples from a more diverse range of institutions engaged in open access. Participants also expressed the need for a better structure that could account for the maintenance of the Principles. The lack of a connection to values and of a broader, useful declaration also emerged as a weakness. In the words of another respondent:
"More emphasis needs to be placed on the human rights perspective. Access to cultural heritage is a right enshrined in several human rights charters and declarations."
Beyond this quick assessment of the OpenGLAM Principles, we need to ask ourselves: what is their broader function and usefulness? We know that cultural heritage institutions need better guidance in applying open access policies to their collections. The evidence backing this statement keeps growing, including surveys such as the Europeana-commissioned report on the accuracy of rights statements, and the survey of GLAM open access policy and practice compiled by Andrea Wallace and Douglas McCarthy, which show disparities in how open access policies are applied across cultural heritage institutions. And while more training (such as the CC Certificate programme), better advocacy and better tools can be set up in due course, recommendations and declarations can be useful instruments for advocates working inside institutions or in partnership with them.

The list of declarations in support of Open Access maintained by the Open Access Directory focuses on scholarly communication and the Open Access publication of scientific data. This points to a clear gap: principles or declarations that address cultural heritage and take in concerns such as traditional knowledge, indigenous rights, and other problematic aspects of digitisation and open access releases.

We hope to bring the cultural heritage sector together in a broader conversation around better guidelines for open access. As part of this conversation, we are currently drafting and holding monthly calls with advocates and practitioners. To involve as many people as possible, we will roll out further follow-up strategies throughout the year. If you are interested in joining the conversation, get in touch through the OpenGLAM mailing list, join the monthly open community calls announced there, or join the #cc-openglam channel on the Creative Commons Slack. You can read and comment on the extensive report on the OpenGLAM Principles survey here.

Original post (from OpenGLAM Principles: ways forward to Open Access for cultural heritage):
Original post 2019/4/30 OpenGLAM Principles: ways forward to Open Access for cultural heritage / Open Knowledge Foundation, licensed under CC BY 4.0.

Ethiopia adopts a national open access policy

- October 9, 2019 in ethiopia, Open Access

In September, Ethiopia adopted a national open access policy for higher education institutions. In a guest blog on the EIFL website, Dr Solomon Mekonnen Tekle, librarian at Addis Ababa University Library, organiser of the Open Knowledge Ethiopia group and EIFL Open Access Coordinator in Ethiopia, celebrates the adoption of the policy. This is a repost of the original blog at EIFL (Electronic Information for Libraries), a not-for-profit organization that works with libraries to enable access to knowledge in developing and transition economy countries in Africa, Asia Pacific, Europe and Latin America.

The new national open access policy adopted by the Ministry of Science and Higher Education of Ethiopia (MOSHE) will transform research and education in our country. The policy comes into effect immediately. It mandates open access to all published articles, theses, dissertations and data resulting from publicly-funded research conducted by staff and students at universities that are run by the Ministry – over 47 universities located across Ethiopia.

In addition to mandating open access to publications and data, the new policy encourages open science practices by including 'openness' as one of the criteria for assessment and evaluation of research proposals. All researchers who receive public funding must submit their Data Management Plans to research offices and to university libraries for approval, to confirm that data will be handled according to the international FAIR data principles. (FAIR data are data that meet standards of Findability, Accessibility, Interoperability and Reusability.)

EIFL guest blogger, Dr Solomon Mekonnen Tekle: “And now the work begins!”

We will have to adapt quickly!

Our universities and libraries will have to adapt quickly to comply with the new policy. Each university will have to develop an open access policy to suit its own institutional context, and which is also aligned with the national policy. We have a long way to go – at present, only three of the 47 universities that fall under the Ministry (Hawassa, Jimma and Arba Minch universities) have adopted open access policies. The policy requires universities to ensure that all publications based on publicly-funded research are deposited in the National Academic Digital Repository of Ethiopia (NADRE) as well as in an institutional repository, if the university has one. NADRE is supported by MOSHE, and also harvests and aggregates deposits from institutional repositories. Right now, about 13 universities under the Ministry have institutional repositories, but only four are openly available because of policy and technical issues, so there is work to be done.
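
The post does not say how NADRE harvests institutional repositories, but aggregation of this kind is typically done over OAI-PMH, the standard harvesting protocol repositories expose. A minimal Python sketch under that assumption follows; the endpoint URL is a placeholder, not NADRE's real interface:

```python
import requests
import xml.etree.ElementTree as ET

# Hypothetical OAI-PMH endpoint; NADRE's actual harvesting interface
# is not described in the post.
OAI_ENDPOINT = "https://repository.example.edu/oai"
DC_TITLE = "{http://purl.org/dc/elements/1.1/}title"

def harvest_titles(endpoint: str) -> list[str]:
    """Fetch one page of Dublin Core records and return their titles."""
    resp = requests.get(
        endpoint,
        params={"verb": "ListRecords", "metadataPrefix": "oai_dc"},
        timeout=30,
    )
    resp.raise_for_status()
    root = ET.fromstring(resp.content)
    # Dublin Core titles appear inside each harvested record.
    return [el.text for el in root.iter(DC_TITLE) if el.text]

if __name__ == "__main__":
    for title in harvest_titles(OAI_ENDPOINT):
        print(title)
```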

Ministry support for universities

To speed up and support compliance with the new policy, MOSHE has launched a project in partnership with Addis Ababa University. I am managing the project, and I am very happy to have the support of the Consortium of Ethiopian Academic and Research Libraries (CEARL) through its chairperson, Dr Melkamu Beyene, who has been named chairperson of the project board. We can also draw on the expertise and experience of Iryna Kuchma, Manager of the EIFL Open Access Programme, who serves as an international advisor for the project. The project has just started. It will ensure that all public universities that do not have institutional repositories establish them as soon as possible. The project will also strengthen the Ethiopian Journals Online (EJOL) open access journals platform that currently includes 24 journals and is hosted by Addis Ababa University. And the project will support researchers in making their research available through the EthERNet platform for research data, and through NADRE. There is a strong capacity building component to the project to train repository managers and administrators to manage their new institutional repositories and open access journals.

Positive impact of the new policy

When we first began making research outputs openly available in Ethiopia there was fierce resistance from the academic community, who were worried that their work would be plagiarized. But now, researchers and students come to my office in the library and ask for their research to be published in open access so that others, such as potential employers, can find and read it. They see the benefits.

The main impact of the new policy will be to increase the visibility of Ethiopian research within the national and international research communities. The quality of our research will improve, because researchers will be able to see and verify each other's work, and to comment on the integrity of the methodology and results. Practitioners in organizations will have access to our research and will be able to base their work on it, and so our research will have real impact. Sharing of research and data through open access will minimize duplication, thereby saving costs, time and effort.

The journey to achieving the new policy was long. We began reaching out to MOSHE over three years ago, and they formed a Working Group to draft a national open access policy based on a model that had been developed by CEARL and Addis Ababa University, with support from EIFL. Success resulted from a collective effort by many colleagues and partners. I am proud to have been part of the process and I am now looking forward to working with our partners to achieve full implementation of the policy through the Ministry's project.

Open in order to ensure healthy lives and promote well-being for all at all ages

- November 12, 2018 in Open Access, Open Access Button, Open Science

The following blog post is an adaptation of a talk given at the OpenCon 2018 satellite event hosted at the United Nations Headquarters in New York City. Slides for the talk can be found here.

When I started medical school, I had no idea what Open Access was, what subscriptions were, or how they would affect my everyday life. Open Access is important to me because I have experienced first hand, on a day-to-day basis, the frustration of not being able to keep up to date with recent discoveries and offer patients up-to-date evidence-based treatment. For health professionals based in low- and middle-income countries, the quest to access research papers is extremely time-consuming and often unsuccessful. In countries where resources are scarce, hospitals and institutions don't pay for journal subscriptions, and patients ultimately pay the price.

Last week, while I was doing rounds with my mentor, we came across a patient who was in a critical state. The patient had been bitten by a snake and was treated with antivenom serum, but was now developing a severe acute allergic reaction to the treatment he had received. The patient was unstable, so we quickly googled different papers to make an informed treatment decision. Unfortunately, we hit a lot of paywalls. The quest of looking for the right paper was time consuming. If we did not make a quick decision, the patient could enter anaphylactic shock.

I remember my mentor going up and down the hospital looking for colleagues to ask for opinions; I remember us searching for papers and constantly hitting paywalls, not being able to do much to help. At the end of the day, the doctor made some calls, took a treatment decision and the patient got better. I was able to find a good paper in SciELO, a Latin American repository, but that is because I know where to look; most physicians don't. If Open Access were the norm, we could have saved ourselves and the patient a lot of time. This is a normal day in our lives; this is what we have to go through every time we want to access medical research, and even though we do not want it to, it ends up affecting our patients.
This is my story, but I am not a one-in-a-million case. I read stories just like mine from patients, doctors, and policy makers on a daily basis at the Open Access Button, where we build tools that help people access the research they need without the training I have received. It is a common misconception that when research is published in a prestigious journal, to which most institutions in Europe and North America subscribe, it is easily accessible and therefore impactful; this is usually not the case. Often, the very people we do medical research to help are the ones who end up being excluded from reading it.

Why does open matter at the scale of diseases?

A few years ago, when Ebola was declared a public health crisis, the whole world turned to West Africa. Conventional wisdom among public health authorities held that Ebola was a new phenomenon, never seen in West Africa before 2013. As it turned out, the conventional wisdom was wrong. In 2015, the New York Times issued a report stating that Liberia's Ministry of Health had found a paper proving that Ebola had existed in the region before. In the future, the authors asserted, “Medical personnel in Liberian health centers should be aware of the possibility that they may come across active cases and thus be prepared to avoid nosocomial epidemics”. This paper was published in 1982, in an expensive subscription-based European journal.

Why did Liberians not have access to the research article that could have warned them about the outbreak? The paper was published in a European journal, and there were no Liberian co-authors on the study. The paper costs $45, the equivalent of 4 days of salary for a medical professional in Liberia. The average price of a health science journal is $2,021 – the equivalent of 2.4 years of preschool education, 7 months of utilities, or 4 months of salary for a medical professional in Liberia.

Let's think about the impact open access could have had in this public health emergency. If the paper had been openly accessible, Liberians could easily have read it. They could have been warned, and perhaps even able to catch the disease before it became a problem. They could have been equipped with what they needed to face the outbreak. They could have asked for funds and international help well before things went bad. Patients could have been informed and campaigns could have been created. These are only a few of the benefits of Open Access that we did not get during the Ebola outbreak.

What happens when open wins the race?

The Ebola outbreak is a good example of what happens when health professionals do not get access to research. However, sometimes Open Access wins and great things happen. The Human Genome Project was a pioneer in encouraging access to scientific research data. Those involved in the project decided to release all the data publicly: the Human Genome data could be downloaded in its entirety, chromosome by chromosome, by anyone in the world. The data sharing agreement required all parts of the human genome sequenced during the project to be placed in the public domain within 24 hours of completion. Scientists believed that these efforts would accelerate work on the human genome. This was a deeply unusual approach, with scientists at the time not publishing their data by default.

When a private company wanted to patent some of the sequences, everyone was worried, because this would mean that advances arising from the work, such as diagnostic tests and possibly even cures for certain inherited diseases, would be under their control. Luckily, the Human Genome Project was able to accelerate its work and this time, open won the race. In 2003, the human genetic blueprint was completed. Since that day, because of Open Access to the research data, the Human Genome Project has generated $965 billion in economic output and $295 billion in personal income, and has helped develop at least 30% more diagnostic tools for diseases (source). It facilitated the scientific understanding of the role of genes in specific diseases, such as cancer, and led to the development of a number of DNA screening tests that provide early identification of risk factors for developing diseases such as colon cancer and breast cancer.

The data sharing initiative of the Human Genome Project was agreed after a private company decided to patent the genes BRCA1 & 2, used for screening for breast and colon cancer. The company charged nearly $4,000 for a complete analysis of the two genes. About a decade after the discovery, patents on genes were ruled invalid: it was concluded that gene patents interfere with diagnosis and treatment, quality assurance, access to healthcare and scientific innovation. With the patent invalidated, people can now get tested for much less money. The Human Genome Project proved that open can be the difference between a whole new field of medicine and private companies owning genes.

Call to action

We have learned how research behind a paywall could have warned us about Ebola 30 years before the crisis. In my work, open would save us crucial minutes while our patients suffer. Open Access has the power to accelerate advancement not only towards good health and well-being, but towards all of the Sustainable Development Goals. I have learned a lot about open because of excellent librarians, who have taken the time to train me and help me understand everything I've discussed above. I encourage everyone to become leaders and teachers in open practices within your local institutions.

Countries and organizations all over the world look to the United Nations for leadership and guidance on what is right and what is practical. By being bold on open, the UN can inspire and even enable action towards open and accelerate progress on the SDGs. When inspiration doesn't cut it, the UN and other organizations can use their power as funders to mandate open. We can make progress without Open Access, and we have for a long time, but with open as a foundation things happen faster and more equitably. Health inequality and access inequality exist today, but we have the power to change that. We need open to be central, and for that to happen we need you to see it as foundational as well.

Written by Natalia Norori with contributions by Joseph McArthur, CC-BY 4.0.


Scaling up paywalled academic article sharing by legal means

- August 23, 2018 in Featured, Open Access, Open Science, r4r

“If you read a paper, 100% goes to the publisher. If you just email us to ask for our papers, we are allowed to send them to you for free, and will be genuinely delighted to do so.” This recent tweet by Holly Witteman inspired Iris.ai to launch the R4R initiative (Research for Researchers), which is intended to facilitate the sharing of research articles by legal means. It is implemented as an application that automates article requests and sharing among researchers via email. Sharing an article you authored with your peers via email is generally allowed. While far from the most efficient way to share knowledge, email still remains the last resort when the alternative is content behind an expensive paywall. Technically, R4R is a fairly simple tool, implemented as a browser extension. The Iris.ai blog post explains it in more detail, but here’s the idea in a nutshell:
  1. Imagine you just found an interesting academic paper using search engines. It’s relevant, but behind a paywall.
  2. Having installed the R4R browser extension, a tab on your screen will let you know whether an automatic email to the author is available. A single click on the tab sends the author an email requesting the paper.
  3. R4R automatically drafts a response to the person requesting the paper and adds the relevant scholarly article as an attachment.
  4. The author reviews the request and makes the final decision on whether or not to share the paper with the requester.
In the beginning, the browser plug-in will only allow sending emails to authors who have expressed their willingness to receive them. If you are happy to share your publications with peers this way, you can add your name to this list. Or if you would like to be among the first to be notified when the software is ready, sign up for the waitlist via this link. At the time of writing this blog post, OKF Finland could not yet confirm whether the full source code of the service will be open, but we support the general idea of promoting the free sharing of articles that the plug-in implements. While the R4R initiative does not make copyrighted and paywalled articles open access, it increases knowledge exchange and thus hopefully also encourages openness on a personal level. This is why we at Open Knowledge Finland fully support this initiative. We hope that R4R will help researchers around the world to share their discoveries with those who need them, while working to advance more comprehensive shifts towards open access in the overall publishing system. Read more on Medium! Engage with us on Twitter: @mariaritola, @antagomir, @okffi The post Scaling up paywalled academic article sharing by legal means appeared first on Open Knowledge Finland.
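
R4R's source code is not published in the post, so the following Python sketch only illustrates the request step from the numbered list above: drafting the email a reader's click would send, restricted to authors who opted in. The opt-in registry and all names here are hypothetical.

```python
from email.message import EmailMessage
from typing import Optional

# Hypothetical opt-in registry mirroring the public list of authors
# who agreed to receive automated requests (see the sign-up link above).
OPTED_IN_AUTHORS = {"a.author@university.example": "Dr. A. Author"}

def draft_request(doi: str, title: str, author_email: str,
                  requester: str) -> Optional[EmailMessage]:
    """Draft an R4R-style article request; None if the author is not opted in."""
    if author_email not in OPTED_IN_AUTHORS:
        return None  # only contact authors who agreed to be emailed
    msg = EmailMessage()
    msg["To"] = author_email
    msg["From"] = requester
    msg["Subject"] = f"Request for a copy of your paper: {title}"
    msg.set_content(
        f"Dear {OPTED_IN_AUTHORS[author_email]},\n\n"
        f"I found your paper (doi:{doi}) behind a paywall.\n"
        "Would you be willing to share a copy with me by email?\n"
    )
    return msg
```

The final send-or-not decision stays with the author, matching step 4 of the workflow above.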


New edition of Data Journalism Handbook to explore journalistic interventions in the data society

- January 12, 2018 in Data Journalism, data journalism handbook, data literacy, journalism, Open Access

This blog has been reposted from http://jonathangray.org/2017/12/20/new-edition-data-journalism-handbook/ The first edition of The Data Journalism Handbook has been widely used and widely cited by students, practitioners and researchers alike, serving as both textbook and sourcebook for an emerging field. It has been translated into over 12 languages – including Arabic, Chinese, Czech, French, Georgian, Greek, Italian, Macedonian, Portuguese, Russian, Spanish and Ukrainian – and is used for teaching at many leading universities, as well as teaching and training centres around the world. A huge amount has happened in the field since the first edition in 2012. The Panama Papers project undertook an unprecedented international collaboration around a major database of leaked information about tax havens and offshore financial activity. Projects such as The Migrants Files, The Guardian’s The Counted and ProPublica’s Electionland have shown how journalists are not just using and presenting data, but also creating and assembling it themselves in order to improve data journalistic coverage of issues they are reporting on.

The Migrants’ Files saw journalists in 15 countries work together to create a database of people who died in their attempt to reach or stay in Europe.

Changes in digital technologies have enabled the development of formats for storytelling, interactivity and engagement with the assistance of drones, crowdsourcing tools, satellite data, social media data and bespoke software tools for data collection, analysis, visualisation and exploration. Data journalists are not simply using data as a source, they are also increasingly investigating, interrogating and intervening around the practices, platforms, algorithms and devices through which it is created, circulated and put to work in the world. They are creatively developing techniques and approaches which are adapted to very different kinds of social, cultural, economic, technological and political settings and challenges. Five years after its publication, we are developing a revised second edition, which will be published as an open access book with an innovative academic press. The new edition will be significantly overhauled to reflect these developments. It will complement the first edition with an examination of the current state of data journalism which is at once practical and reflective, profiling emerging practices and projects as well as their broader consequences.

“The Infinite Campaign” by Sam Lavigne (New Inquiry) repurposes ad creation data in order to explore “the bizarre rubrics Twitter uses to render its users legible”.

Contributors to the first edition include representatives from some of the world’s best-known newsrooms and data journalism organisations, including the Australian Broadcasting Corporation, the BBC, the Chicago Tribune, Deutsche Welle, The Guardian, the Financial Times, Helsingin Sanomat, La Nacion, the New York Times, ProPublica, the Washington Post, the Texas Tribune, Verdens Gang, Wales Online, Zeit Online and many others. The new edition will include contributions from both leading practitioners and leading researchers of data journalism, exploring a diverse constellation of projects, methods and techniques in this field from voices and initiatives around the world. We are working hard to ensure a good balance of gender, geography and themes.

Our approach in the new edition draws on the notion of “critical technical practice” from Philip Agre, which he formulates as an attempt to have “one foot planted in the craft work of design and the other foot planted in the reflexive work of critique” (1997). Similarly, we wish to provide an introduction to a major new area of journalism practice which is at once critically reflective and practical. The book will offer reflection from leading practitioners on their experiments and experiences, as well as fresh perspectives on the practical considerations of research on the field from leading scholars.

The structure of the book reflects different ways of seeing and understanding contemporary data journalism practices and projects. The introduction highlights the renewed relevance of a book on data journalism in the current so-called “post-truth” moment, examining the resurgence of interest in data journalism, fact-checking and strengthening the capacities of “facty” publics in response to fears about “alternative facts” and the speculation about a breakdown of trust in experts and institutions of science, policy, law, media and democracy. As well as reviewing a variety of critical responses to data journalism and associated forms of datafication, it looks at how this field may nevertheless constitute an interesting site of progressive social experimentation, participation and intervention.

The first section on “data journalism in context” will review histories, geographies, economics and politics of data journalism – drawing on leading studies in these areas. The second section on “data journalism practices” will look at a variety of practices for assembling data, working with data, making sense with data and organising data journalism from around the world. This includes a wide variety of case studies – including the use of social media data, investigations into algorithms and fake news, the use of networks, open source coding practices and emerging forms of storytelling through news apps and data animations. Other chapters look at infrastructures for collaboration, as well as creative responses to disappearing data and limited connectivity. The third and final section on “what does data journalism do?” examines the social life of data journalism projects, including everyday encounters with visualisations, organising collaborations across fields, the impacts of data projects in various settings, and how data journalism can constitute a form of “data activism”.

As well as providing a rich account of the state of the field, the book is also intended to inspire and inform “experiments in participation” between journalists, researchers, civil society groups and their various publics.
This aspiration is partly informed by approaches to participatory design and research from both science and technology studies as well as more recent digital methods research. Through the book we thus aim to explore not only what data journalism initiatives do, but how they might be done differently in order to facilitate vital public debates about both the future of the data society as well as the significant global challenges that we currently face.

Open Knowledge Finland to produce report on the openness of key scientific publishers

- October 29, 2017 in costs of publishing, creative commons, csc, elsevier, Featured, ministry of education and culture, Open Access, Open Science, publishing costs

The project:

To round off a great Open Access Week, we’d like to announce an interesting new project we’ve started. Continuing our efforts in the field of Open Science, Open Knowledge Finland was commissioned by CSC – IT Center for Science and the Finnish Ministry of Education and Culture to implement a Study on the Openness of Scientific Publishers.

The challenge:

The key goal of the project is to look at the open access publishing practices of major publishers and to “rank” these publishers according to a metrics scorecard developed in the project. The project will particularly look at some of the major scientific publishers, namely Elsevier, Springer, Nature, Wiley-Blackwell, American Chemical Society (ACS), Taylor & Francis, Sage, Lippincott Williams & Wilkins (LWW), IEEE, and ACM. Our assumption is that the ranking will be based on the following criteria (a toy scoring sketch follows the list):
  • Number of open access journals / full journal list
  • Costs of open access publishing
  • Creative Commons licenses
  • Self-archiving
  • Data-mining possibilities
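
The project's real metrics were still being defined when this was written, so the following Python sketch is only an illustration of how the five criteria above could be folded into a single openness score per publisher; the weights, the APC normalisation cap and the field names are placeholder assumptions, not the project's scorecard:

```python
from dataclasses import dataclass

@dataclass
class PublisherProfile:
    name: str
    oa_journal_share: float    # open access journals / full journal list, 0-1
    mean_apc_eur: float        # average cost of open access publishing
    cc_license_share: float    # share of OA output under CC licenses, 0-1
    allows_self_archiving: bool
    allows_text_mining: bool

# Placeholder weights and cap: the project's actual scorecard was not yet fixed.
WEIGHTS = {"oa": 0.3, "cost": 0.2, "cc": 0.2, "archive": 0.15, "mining": 0.15}
APC_CAP_EUR = 5000.0

def openness_score(p: PublisherProfile) -> float:
    """Fold the five criteria into a single 0-1 score (higher = more open)."""
    cost_score = max(0.0, 1.0 - p.mean_apc_eur / APC_CAP_EUR)
    return (WEIGHTS["oa"] * p.oa_journal_share
            + WEIGHTS["cost"] * cost_score
            + WEIGHTS["cc"] * p.cc_license_share
            + WEIGHTS["archive"] * float(p.allows_self_archiving)
            + WEIGHTS["mining"] * float(p.allows_text_mining))

# Example with invented numbers, purely to show the mechanics:
print(openness_score(PublisherProfile(
    "Example House", 0.4, 2100.0, 0.6, True, False)))
```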
What do you think? Perhaps you’d like to contribute by replying to Leo Lahti’s tweet.

Expected results:

This report looks at the practices of open access publishing as they are presented in easily accessible online sources. The need for this information is linked to a wider framework of investigating the current status of open access practices across the academic field in Finland. Previous reports have scrutinised universities and research institutions (2015, 2016) and the sources of research funding (2016). This report concentrates on a further piece of research infrastructure: channels of publication.

Who’s doing it?

The leading expert and manager for this project is Leo Lahti, a long-time researcher, expert and activist on open science. OKFI is doing this in collaboration with Oxford Research, with Juho-Matti Paavola and Anna Björk doing much of the data crunching and writing. Assoc. Prof. Mikael Laakso gives guidance and Teemu Ropponen supports with admin and communications. The project kicked off a few weeks ago, in early October. Key results will be delivered in November, and the project will wrap up in December. So, in short, this is indeed a “rapid action” project!

How can you participate:

Want to know more? Contact: Leo Lahti, leo.lahti@okf.fi or Teemu Ropponen, teemu.ropponen@okf.fi. The post Open Knowledge Finland to produce report on the openness of key scientific publishers appeared first on Open Knowledge Finland.

How Wikimedia helped authors make over 3000 articles green open access via Dissemin

- October 26, 2017 in Open Access, Open Access Week, wikimedia

In light of this year’s Open Access Week, Michele Marchetto of Wikimedia Italia shares the story of how they helped authors to make their open access articles more widely available. This post has been cross-posted from Wikimedia Italia. Wikipedia is probably the most effective initiative in the world at increasing the readership of academic literature: for instance, wikipedia.org is a top-10 source of clicks for doi.org. Wikipedia contributors are among the biggest consumers of scientific publications in the world, because Wikipedia articles are not allowed to be primary sources: the five pillars allow anyone to edit but require copyleft and a neutral point of view based on reliable sources. Readers are advised to trust what they read only insofar as it is confirmed by the sources provided. So, does free culture need all sources to be accessible, affordable and freely licensed?

Open access

Scholarly sources, while generally high quality, are problematic for Wikipedia users in that they are often paywalled and demand hefty payments from readers. Open access wants research output to be accessible online without restrictions, ideally under a free license, given that it is produced by authors, reviewers and editors “for free” (as part of their duties). This includes papers published in journals and conference proceedings, but also book chapters, books and experiment data. A cost-effective open science infrastructure is possible but requires political will, and proprietary private platforms grow to fill unmet needs. Authors, however, can make their works green open access autonomously and for free, thanks to open archives and publisher or employer policies. The problem is, how much effort does it take? We tried to find out.

The easy way out

In the past year we saw many developments in the open access landscape. On the reading side, DOAI and then oaDOI plus Unpaywall have made it possible to access some 40% of the literature in just one click, collecting data from thousands of sources which were formerly rather hard to use. It was also proven that cancelling subscriptions produces little pain. On the authoring side, the SSRN fiasco paved the way for various thematic open archives and general-purpose repositories like Zenodo (offered by OpenAIRE and CERN), which make sure that an open access platform is available to all authors in the world, whatever their outputs. Publishers are beginning to understand the importance of metadata, although much work remains to be done, and the Open Access Button staff help connect readers with authors. Finally, the web platform Dissemin put ORCID and all the above initiatives together to identify 36 million works which could benefit from green open access. Authors can deposit them from Dissemin into an open archive in a couple of clicks, without the need to enter metadata manually. With the possibility of a “legal Sci-Hub” within our reach, what does it take to get the authors to help?
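
The "one click" access the oaDOI/Unpaywall tools provide is backed by a public lookup API. Here is a minimal Python sketch of such a DOI lookup, assuming the Unpaywall v2 endpoint (which asks callers to identify themselves with an email parameter); the DOI and email in the example are placeholders:

```python
import requests
from typing import Optional

def find_oa_copy(doi: str, email: str) -> Optional[str]:
    """Return the URL of the best known open access copy of a DOI, if any."""
    resp = requests.get(
        f"https://api.unpaywall.org/v2/{doi}",
        params={"email": email},  # the API requires a contact email
        timeout=30,
    )
    resp.raise_for_status()
    best = resp.json().get("best_oa_location")
    return best["url"] if best else None

# Placeholder DOI and email, purely illustrative:
print(find_oa_copy("10.1234/example.doi", "you@example.org"))
```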

Frontpage of the Dissem.in platform

Wikimedia Italia takes initiative

Wikimedia projects contributor Federico Leva, frustrated at the number of paywalled articles linked from the Italian and English Wikipedia, decided to contact their authors directly. Using the available data, almost half a million depositable articles by a million authors were found. An email was sent to each of them where possible: the message thanked them for contributing sources to Wikipedia, presented them with the dilemma of a simple volunteer editor who wants to link an open access copy for all Wikipedia users to see, and asked them to check the publication on Dissemin to read more about its legal status and to deposit it. The response has been overwhelmingly positive: over 15% of the recipients clicked the links to find out more, thousands wrote encouraging replies, and over 3000 papers were deposited via Dissemin in two months. Wikimedia Italia, active in open access since 2008, covered the costs (a few hundred euros on phplist.com) and provided its OTRS instance to handle replies. With AISA’s counsel, hundreds of support requests have been handled (mostly about the usual pains of green OA, such as locating an appropriate manuscript).

Tell me a story

Our reasoning has been driven by examples such as the story of Jack Andraka, which showed how open access can change the world. Jack, as a high school student, proposed a cheap method for the early diagnosis of pancreatic cancer. Jack’s research, like every invention, is based on previous scientific results. Jack was not affiliated with any research entity and was not able to access paywalled research, but he was able to consult the extensive body of open access research provided by NIH’s PubMed Central, which is often in the public domain or under a free Creative Commons license. Jack’s story was a potent message in the mass media on how open access can save lives.

Some reactions and what we learnt

The authors’ responses taught us what makes a difference:
  • make deposit easy and authors will love open archives;
  • focus on their own work and its readership;
  • show the concrete difference they can make, rather than talk abstractly about open access;
  • lead by example: list other colleagues who archived papers from the same journal;
  • some will adopt a free Creative Commons license to facilitate further reuse, if told about it.
More warmth came from Peter Suber’s support, John Dove’s proposal for OA journals to accelerate the depositing of papers they reference, and a lively discussion. Surprisingly many authors simply don’t know about green open access possibilities: they just need to hear about it in a way that rings true to their ears. If you work with a repository, an OA journal or another organisation, you have a goldmine of authors to ask for deposits and stories relevant to them: why not start doing it systematically? If you are a researcher, you can just search your name on Dissemin and see what is left to make open access; when you are done, you can ask your colleagues to do the same. It’s simple and, as with Jack Andraka, you can really change the world around us.


Understanding the costs of scholarly publishing – Why we need a public data infrastructure of publishing costs

- October 24, 2017 in Open Access

Scholarly communication has undergone a seismic shift away from closed publishing towards ever-growing support for open access. Under closed publishing models, academic libraries faced a so-called “serials crisis” and were not able to afford the materials they needed for their researchers and students. Partly in response to this problem, open access advocates have argued for increased access, whilst also changing the cost structure of scholarly publishing. In many countries this has led to experiments with ‘author pays’ models, where the prices of large commercial publishers have remained high, but the costs have shifted from readers to researchers.

Public data about the costs of these changing publishing models remains scarce. There is increasing concern that they may perpetuate oligopolistic and dysfunctional structures that do not serve the interests of researchers or their students, readers and audiences. Some studies suggest that prices of open access publishing might unfairly discriminate against some institutions, and point out the sometimes stark pricing differences across institutions. Funding organisations and institutions worry that hybrid journals might levy ‘Article Processing Charges’ (a common way of funding open access publishing) while not providing a proportionate decrease in subscription costs – thereby charging researchers twice (so-called “double dipping”). Yet the evidence is fragmented and incomplete.

Members of Open Knowledge International’s network have been following this issue for several years. Jenny Molloy wrote a blogpost on this issue for Open Access Week three years ago. We have supported research in this area undertaken by Stuart Lawson, Jonathan Gray and Michele Mauri, and we published an associated white paper as part of the PASTEUR4OA project. To date, public data about scholarly publishing finances remains fragmentary, partial and scattered.

The lack of publicly accessible financial information is problematic for at least three reasons:
  1. It hinders the evaluation of existing publishing policies and financing models. For example, incomplete and conflicting data prevents funders from making the best decisions where to allocate resources to.
  2. Financial opacity also prevents us from getting a detailed view how much money is paid in a country, per funder, academic sector, universities, libraries, and individual researchers.
  3. Ultimately, a lack of knowledge about payments weakens the negotiation power of universities and libraries around market-coordinated forms of scholarly publishing.
As we celebrate International Open Access Week, Open Knowledge International strongly pushes for public data infrastructures of scholarly finances. Such infrastructures would enable the tracking, documentation, publication, and discussion of the different costs associated with scholarly publishing. They would thereby provide the evidence base for a well-informed discussion about alternative ways of organising and financing scholarly publication in a world where open access to academic outputs increasingly becomes the norm. Below you see a model of the financial flows that could be captured by such a data infrastructure, focussing on the United Kingdom.

Caption: “Model of Financial Flows in Scholarly Publishing for the UK, 2014”, from Lawson, S., Gray, J., & Mauri, M. (2016). Opening the Black Box of Scholarly Communication Funding: A Public Data Infrastructure for Financial Flows in Academic Publishing. Open Library of Humanities, 2(1). https://doi.org/10.16995/olh.72
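
To make the idea of such an infrastructure concrete, here is a minimal Python sketch of one analysis it would enable: averaging the APC paid per publisher from a crowdsourced CSV in the style of the Open APC dataset mentioned below. The column names ('publisher', 'euro') follow my understanding of that dataset's convention and should be verified against the real data:

```python
import csv
from collections import defaultdict

def mean_apc_per_publisher(csv_path: str) -> dict[str, float]:
    """Average APC paid per publisher from an Open APC-style CSV.

    The column names ('publisher', 'euro') are assumptions based on the
    Open APC convention; verify them against the actual dataset.
    """
    fees: dict[str, list[float]] = defaultdict(list)
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            try:
                fees[row["publisher"]].append(float(row["euro"]))
            except (KeyError, ValueError):
                continue  # skip rows with missing or non-numeric fields
    return {pub: sum(v) / len(v) for pub, v in fees.items()}

# Usage: mean_apc_per_publisher("apc_data.csv") -> {'Publisher A': 1890.5, ...}
```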

Rising momentum for a public data infrastructure of scholarly finances

There is rising momentum within the larger research community – including funders, institutions and institutional libraries – to address the current lack of financial data around publishing. Earlier this year, Knowledge Exchange published a report underlining the importance of understanding the total cost of publishing, and the role of standard documentation formats and information systems in capturing it. Funding bodies in different countries are inserting reporting clauses into their funding policies to gain a better picture of how funds are spent. The UK’s higher education infrastructure body Jisc has worked with Research Councils UK to create a template for UK higher education institutions to report open access expenditures in a standardised way and release them openly. This effort should support negotiations with journal publishers around the total costs of publishing. In different European countries, funders and institutional associations have started to create databases collecting the amounts paid through APCs to individual journals. The UK’s Wellcome Trust published information on how much it spent on open access publishing each year from 2010 to 2014. In a similar vein, the German Open APC initiative, part of the initiative Transparent Infrastructure for Article Charges, crowdsources data on GitHub to publicly disclose the money spent by different European institutions on open access publishing. And Open Knowledge International hosts a wiki for payment documents requested via FOI.

More financial transparency enables us to rethink how scholarly publishing is organised

These examples are important signposts towards public data infrastructures of scholarly publishing costs. Yet more concerted efforts and collaborations are needed to bring about a deeper shift in how scholarly publishing is organised. Full transparency would require knowing how much each institution pays to each publisher for each journal, ideally allowing these payments to be related to public funds. To gain these insights, it is necessary to understand the ways scholarly publishing is organised and to address diverse obstacles to transparency, including:
  • Multiple income sources and institutional financial management practices preventing a disaggregated view of how much public funding is spent on open access
  • Payment models such as bundles, ‘big deals’, or annual lump sums preventing a clear picture of open access costs
  • Policies that mandate the reporting of payments in differing ways and cover only certain disciplines
  • Non-disclosure agreements preventing transparent cost evaluations
  • Inaccurate or diverging price information, such as price lists that do not necessarily reflect real payments.
What can be done to start contributing to a public database?

To lay the ground for collaboration, Open Knowledge International wants to spark a dialogue across open access advocates, funders, universities, libraries, individual researchers, and publishers. In what follows we outline next steps that can be taken together towards a public data infrastructure:

Funders should insert reporting and disclosure clauses into funding policies, addressing both subscription payments and APC charges at a micropayment level (costs per article). Legal measures against non-disclosure agreements can include restricting or stopping funds to publishers that insist on them.

Funding organisations, institutions and individual researchers should increase (inter)national research activities on the topic 1) to understand the magnitude of different cost types, 2) to identify new cost factors, the data needed to represent them, and the factors rendering them opaque, and 3) to analyse the benefits of alternative payment models for specific disciplines and institutions. Research should reflect rising administration costs and offer recommendations on how to mitigate them.

Institutions, funding organisations and individual researchers can deliver evidence by disclosing their payments in databases accessible to everyone. If other institutions follow their model, this allows for public comparisons of actual payments, the detection of unreasonable pricing discrimination, and the identification of publishers not complying with open access funding policies.

How to support a move towards more transparent scholarly publishing
  • Get in touch with our team at Open Knowledge International. We are exploring actions to move the debate forward. Please email Danny Lämmerhirt (Research Coordinator) and Sander van der Waal (Head of Network and Partnerships) at research@okfn.org.
  • Let us know your ideas, thoughts, and comments on our forum.
  • Support the collection, maintenance, and use of public data as outlined in our recommendations.
  • Share this blogpost in your networks and get in touch with your institution, library, or funding organisation.
I’d like to thank Stuart Lawson and Jonathan Gray for their thoughtful comments and advice while writing this blogpost. This post draws from their paper “Opening the Black Box of Scholarly Communication Funding: A Public Data Infrastructure for Financial Flows in Academic Publishing.” Open Library of Humanities, 2(1). https://doi.org/10.16995/olh.72