Open in order to ensure healthy lives and promote well-being for all at all ages

- November 12, 2018 in Open Access, Open Access Button, Open Science

The following blog post is an adaptation of a talk given at the OpenCon 2018 satellite event hosted at the United Nations Headquarters in New York City. Slides for the talk can be found here.

When I started medical school, I had no idea what Open Access was, what subscriptions were, or how they would affect my everyday life. Open Access is important to me because I have experienced first-hand, on a day-to-day basis, the frustration of not being able to keep up with recent discoveries and offer patients up-to-date, evidence-based treatment. For health professionals based in low- and middle-income countries, the quest to access research papers is extremely time-consuming and often unsuccessful. In countries where resources are scarce, hospitals and institutions don’t pay for journal subscriptions, and patients ultimately pay the price.

Last week, while I was doing rounds with my mentor, we came across a patient in a critical state. The patient had been bitten by a snake and treated with antivenom serum, but was now developing a severe acute allergic reaction to the treatment he had received. The patient was unstable, so we quickly googled different papers to make an informed treatment decision. Unfortunately, we hit a lot of paywalls, and the search for the right paper was time-consuming. If we did not make a quick decision, the patient could go into anaphylactic shock.

I remember my mentor going up and down the hospital looking for colleagues to ask for opinions; I remember us searching for papers and constantly hitting paywalls, unable to do much to help. At the end of the day, the doctor made some calls, made a treatment decision, and the patient got better. I was able to find a good paper in SciELO, a Latin American repository, but only because I know where to look. Most physicians don’t. If Open Access were the norm, we could have saved ourselves and the patient a lot of time. This is a normal day in our lives; this is what we have to go through every time we want to access medical research, and even though we do not want it to, it ends up affecting our patients.
This is my story, but I am not a one-in-a-million case. I read stories just like mine from patients, doctors, and policy makers on a daily basis at the Open Access Button, where we build tools that help people access the research they need without the training I have received. It is a common misconception that when research is published in a prestigious journal, one to which most institutions in Europe and North America subscribe, it is easily accessible and therefore impactful. That is usually not the case. Often, the very people we do medical research to help are the ones who end up excluded from reading it.

Why does open matter at the scale of diseases?

A few years ago, when Ebola was declared a public health crisis, the whole world turned to West Africa. Conventional wisdom among public health authorities held that Ebola was a new phenomenon, never seen in West Africa before 2013. As it turned out, the conventional wisdom was wrong. In 2015, the New York Times published a report stating that Liberia’s Ministry of Health had found a paper proving that Ebola had existed in the region before. In the future, the authors asserted, “Medical personnel in Liberian health centers should be aware of the possibility that they may come across active cases and thus be prepared to avoid nosocomial epidemics.” This paper was published in 1982, in an expensive, subscription-based European journal.

Why did Liberians not have access to the research article that could have warned them about the outbreak? The paper was published in a European journal, and there were no Liberian co-authors on the study. The paper costs $45, the equivalent of 4 days of salary for a medical professional in Liberia. The average price of a health science journal is $2,021, the equivalent of 2.4 years of preschool education, 7 months of utilities, or 4 months of salary for a medical professional in Liberia.

Let’s think about the impact open could have had in this public health emergency. If the paper had been openly accessible, Liberians could easily have read it. They could have been warned, and who knows? Maybe they could even have caught the disease before it became a problem. They could have been equipped with the knowledge they needed to face the outbreak. They could have asked for funds and international help long before things went bad. Patients could have been informed and campaigns could have been created. These are only a few of the benefits of Open Access that we did not get during the Ebola outbreak.

What happens when open wins the race?

The Ebola outbreak is a good example of what happens when health professionals do not get access to research. However, sometimes Open Access wins and great things happen. The Human Genome Project was a pioneer in encouraging access to scientific research data. Those involved in the project decided to release all the data publicly: the Human Genome data could be downloaded in its entirety, chromosome by chromosome, by anyone in the world. The data sharing agreement required all parts of the human genome sequenced during the project to be released into the public domain within 24 hours of completion. Scientists believed these efforts would accelerate the completion of the human genome. This was a deeply unusual approach; at the time, scientists did not publish their data by default.

When a private company wanted to patent some of the sequences, everyone was worried, because this would mean that advances arising from the work, such as diagnostic tests and possibly even cures for certain inherited diseases, would be under the company’s control. Luckily, the Human Genome Project was able to accelerate its work, and this time open won the race. In 2003, the human genetic blueprint was completed. Since that day, because of Open Access to the research data, the Human Genome Project has generated $965 billion in economic output and $295 billion in personal income, and has helped develop at least 30% more diagnostic tools for diseases (source). It facilitated the scientific understanding of the role of genes in specific diseases, such as cancer, and led to the development of a number of DNA screening tests that provide early identification of risk factors for diseases such as colon cancer and breast cancer.

The data sharing agreement of the Human Genome Project was reached after a private company decided to patent the genes BRCA1 and BRCA2, used in screening for breast and ovarian cancer. The company charged nearly $4,000 for a complete analysis of the two genes. About a decade after the discovery, patents on genes were ruled invalid: it was concluded that gene patents interfere with diagnosis and treatment, quality assurance, access to healthcare, and scientific innovation. Now that the patent has been invalidated, people can get tested for much less money. The Human Genome Project proved that open can be the difference between a whole new field of medicine and private companies owning genes.

Call to action

We have learned how research behind a paywall could have warned us about Ebola 30 years before the crisis. In my work, open would save us crucial minutes while our patients suffer. Open Access has the power to accelerate advancement not only towards good health and well-being, but towards all the Sustainable Development Goals.

I have learned a lot about open thanks to excellent librarians, who have taken the time to train me and help me understand everything I’ve discussed above. I encourage everyone to become leaders and teachers in open practices within your local institutions. Countries and organizations all over the world look to the United Nations for leadership and guidance on what is right and what is practical. By being bold on open, the UN can inspire and even enable action towards open and accelerate progress on the SDGs. When inspiration doesn’t cut it, the UN and other organizations can use their power as funders to mandate open.

We can make progress without Open Access, and we have for a long time, but with open as a foundation things happen faster and more equitably. Health inequality and access inequality exist today, but we have the power to change that. We need open to be central, and for that to happen we need you to see it as foundational as well.

Written by Natalia Norori with contributions by Joseph McArthur, CC-BY 4.0.

Scaling up paywalled academic article sharing by legal means

- August 23, 2018 in Featured, Open Access, Open Science, r4r

“If you read a paper, 100% goes to the publisher. If you just email us to ask for our papers, we are allowed to send them to you for free, and will be genuinely delighted to do so.” This recent tweet by Holly Witteman inspired Iris.ai to launch the R4R initiative (Research for Researchers), which aims to facilitate the sharing of research articles by legal means. It is implemented as an application that automates article requests and sharing among researchers via email. Sharing an article you authored with your peers via email is generally allowed. While far from the most efficient way to share knowledge, email remains the last resort when the alternative is content behind an expensive paywall. Technically, R4R is a fairly simple tool, implemented as a browser extension. The Iris.ai blog post explains it in more detail, but here’s the idea in a nutshell:
  1. Imagine you just found an interesting academic paper using search engines. It’s relevant, but behind a paywall.
  2. Having installed the R4R browser extension, a tab on your screen lets you know whether the author can be emailed automatically. A single click on the tab sends the author an email requesting the paper.
  3. R4R automatically drafts a response to the person requesting the paper and adds the relevant scholarly article as an attachment.
  4. The author reviews the request and makes the final decision on whether or not to share the paper with the requester.
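R4R’s implementation is not described in detail here, so the following is a minimal sketch of how steps 2–4 could fit together, under the assumption that the extension matches the current page’s DOI against an opt-in author list. The `OptInAuthor` shape, the function names, and the email wording are illustrative assumptions, not R4R’s actual design:

```typescript
// Hypothetical shape of the opt-in author list (the real R4R format is not public).
interface OptInAuthor {
  name: string;
  email: string;
  dois: string[]; // papers the author has agreed to share on request
}

// Extract the DOI of the paper on the current page from its <meta> tags,
// a convention most publisher pages follow (e.g. citation_doi).
function detectDoi(doc: Document): string | null {
  const meta =
    doc.querySelector('meta[name="citation_doi"]') ??
    doc.querySelector('meta[name="DC.Identifier"]');
  return meta?.getAttribute("content") ?? null;
}

// Draft the request email the extension would send on the user's behalf (step 2).
function draftRequest(author: OptInAuthor, doi: string, requester: string): string {
  return [
    `To: ${author.email}`,
    `Subject: Request for a copy of your paper (doi:${doi})`,
    ``,
    `Dear ${author.name},`,
    `I found your paper (https://doi.org/${doi}) but it is behind a paywall.`,
    `Would you be willing to share a copy? Thank you!`,
    requester,
  ].join("\n");
}

// Core flow: only authors who have opted in are ever contacted, and the
// author still makes the final decision about sharing (step 4).
function maybeRequest(doc: Document, optIn: OptInAuthor[], requester: string): string | null {
  const doi = detectDoi(doc);
  if (doi === null) return null;
  const author = optIn.find((a) => a.dois.includes(doi));
  return author ? draftRequest(author, doi, requester) : null;
}
```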
In the beginning, the browser plug-in will only allow sending emails to authors who have expressed their willingness to receive them. If you are happy to share your publications with peers this way, you can add your name to this list. Or, if you would like to be among the first to be notified when the software is ready, sign up for the waitlist via this link. At the time of writing this blog post, OKF Finland could not yet confirm whether the full source code of the service will be open, but we support the general idea of promoting the free sharing of articles that the plug-in implements. While the R4R initiative does not make copyrighted and paywalled articles open access, it increases knowledge exchange and thus hopefully also encourages openness on a personal level. This is why we at Open Knowledge Finland fully support this initiative. We hope that R4R will help researchers around the world share their discoveries with those who need them, while working to advance more comprehensive shifts towards open access in the overall publishing system. Read more on Medium! Engage with us on Twitter: @mariaritola, @antagomir, @okffi The post Scaling up paywalled academic article sharing by legal means appeared first on Open Knowledge Finland.

New edition of Data Journalism Handbook to explore journalistic interventions in the data society

- January 12, 2018 in Data Journalism, data journalism handbook, data literacy, journalism, Open Access

This blog has been reposted from http://jonathangray.org/2017/12/20/new-edition-data-journalism-handbook/

The first edition of The Data Journalism Handbook has been widely used and widely cited by students, practitioners and researchers alike, serving as both textbook and sourcebook for an emerging field. It has been translated into over 12 languages – including Arabic, Chinese, Czech, French, Georgian, Greek, Italian, Macedonian, Portuguese, Russian, Spanish and Ukrainian – and is used for teaching at many leading universities, as well as teaching and training centres around the world.

A huge amount has happened in the field since the first edition in 2012. The Panama Papers project undertook an unprecedented international collaboration around a major database of leaked information about tax havens and offshore financial activity. Projects such as The Migrants’ Files, The Guardian’s The Counted and ProPublica’s Electionland have shown how journalists are not just using and presenting data, but also creating and assembling it themselves in order to improve data journalistic coverage of the issues they are reporting on.

The Migrants’ Files saw journalists in 15 countries work together to create a database of people who died in their attempt to reach or stay in Europe.

Changes in digital technologies have enabled the development of formats for storytelling, interactivity and engagement with the assistance of drones, crowdsourcing tools, satellite data, social media data and bespoke software tools for data collection, analysis, visualisation and exploration. Data journalists are not simply using data as a source, they are also increasingly investigating, interrogating and intervening around the practices, platforms, algorithms and devices through which it is created, circulated and put to work in the world. They are creatively developing techniques and approaches which are adapted to very different kinds of social, cultural, economic, technological and political settings and challenges. Five years after its publication, we are developing a revised second edition, which will be published as an open access book with an innovative academic press. The new edition will be significantly overhauled to reflect these developments. It will complement the first edition with an examination of the current state of data journalism which is at once practical and reflective, profiling emerging practices and projects as well as their broader consequences.

“The Infinite Campaign” by Sam Lavigne (New Inquiry) repurposes ad creation data in order to explore “the bizarre rubrics Twitter uses to render its users legible”.

Contributors to the first edition include representatives from some of the world’s best-known newsrooms and data journalism organisations, including the Australian Broadcasting Corporation, the BBC, the Chicago Tribune, Deutsche Welle, The Guardian, the Financial Times, Helsingin Sanomat, La Nacion, the New York Times, ProPublica, the Washington Post, the Texas Tribune, Verdens Gang, Wales Online, Zeit Online and many others. The new edition will include contributions from both leading practitioners and leading researchers of data journalism, exploring a diverse constellation of projects, methods and techniques in this field from voices and initiatives around the world. We are working hard to ensure a good balance of gender, geography and themes.

Our approach in the new edition draws on the notion of “critical technical practice” from Philip Agre, which he formulates as an attempt to have “one foot planted in the craft work of design and the other foot planted in the reflexive work of critique” (1997). Similarly, we wish to provide an introduction to a major new area of journalism practice which is at once critically reflective and practical. The book will offer reflection from leading practitioners on their experiments and experiences, as well as fresh perspectives on the practical considerations of research on the field from leading scholars.

The structure of the book reflects different ways of seeing and understanding contemporary data journalism practices and projects. The introduction highlights the renewed relevance of a book on data journalism in the current so-called “post-truth” moment, examining the resurgence of interest in data journalism, fact-checking and strengthening the capacities of “facty” publics in response to fears about “alternative facts” and speculation about a breakdown of trust in experts and institutions of science, policy, law, media and democracy. As well as reviewing a variety of critical responses to data journalism and associated forms of datafication, it looks at how this field may nevertheless constitute an interesting site of progressive social experimentation, participation and intervention.

The first section, on “data journalism in context”, will review histories, geographies, economics and politics of data journalism, drawing on leading studies in these areas. The second section, on “data journalism practices”, will look at a variety of practices for assembling data, working with data, making sense with data and organising data journalism from around the world. This includes a wide variety of case studies, including the use of social media data, investigations into algorithms and fake news, the use of networks, open source coding practices and emerging forms of storytelling through news apps and data animations. Other chapters look at infrastructures for collaboration, as well as creative responses to disappearing data and limited connectivity. The third and final section, on “what does data journalism do?”, examines the social life of data journalism projects, including everyday encounters with visualisations, organising collaborations across fields, the impacts of data projects in various settings, and how data journalism can constitute a form of “data activism”.

As well as providing a rich account of the state of the field, the book is also intended to inspire and inform “experiments in participation” between journalists, researchers, civil society groups and their various publics.
This aspiration is partly informed by approaches to participatory design and research from both science and technology studies as well as more recent digital methods research. Through the book we thus aim to explore not only what data journalism initiatives do, but how they might be done differently in order to facilitate vital public debates about both the future of the data society as well as the significant global challenges that we currently face.

Open Knowledge Finland to produce report on the openness of key scientific publishers

- October 29, 2017 in costs of publishing, creative commons, csc, elsevier, Featured, ministry of education and culture, Open Access, Open Science, publishing costs

The project:

To round off a great Open Access Week, we’d like to announce an interesting new project we’ve started. Continuing our efforts in the field of Open Science, Open Knowledge Finland was commissioned by CSC – IT Center for Science and the Finnish Ministry of Education and Culture to implement a Study on the Openness of Scientific Publishers.

The challenge:

The key goal of the project is to look at the practices of open access publishing by major publishers, and to “rank” these publishers according to a scorecard of metrics developed in the project. The project will particularly look at some of the major scientific publishers, namely Elsevier, Springer, Nature, Wiley-Blackwell, American Chemical Society (ACS), Taylor & Francis, Sage, Lippincott Williams & Wilkins (LWW), IEEE, and ACM. Our assumption is that the ranking will be based on:
  • Number of open access journals / full journal list
  • Costs of open access publishing
  • Creative Commons licenses
  • Self-archiving
  • Data-mining possibilities
What do you think? Perhaps you’d like to contribute to Leo Lahti’s tweet.

Expected results:

This report looks at the practices of open access publishing as presented in easily accessible online sources. The need for this information is linked to a wider framework of investigating the current status of open access practices across the academic field in Finland. Previous reports have scrutinised universities and research institutions (2015, 2016) and the sources of research funding (2016). This report concentrates on a further piece of research infrastructure: channels of publication.

Who’s doing it?

The leading expert and manager for this project is Leo Lahti, a long-time researcher, expert and activist on open science. OKFI is doing this in collaboration with Oxford Research, with Juho-Matti Paavola and Anna Björk doing much of the data crunching and writing. Assoc. Prof. Mikael Laakso gives guidance, and Teemu Ropponen supports with admin and communications. The project kicked off a few weeks ago, in early October. Key results will be delivered in November, and the project will be finalized in December. So, in short, this is indeed a “rapid action” project!

How can you participate:

Want to know more? Contact: Leo Lahti, leo.lahti@okf.fi or Teemu Ropponen, teemu.ropponen@okf.fi The post Open Knowledge Finland to produce report on the openness of key scientific publishers appeared first on Open Knowledge Finland.

How Wikimedia helped authors make over 3000 articles green open access via Dissemin

- October 26, 2017 in Open Access, Open Access Week, wikimedia

In light of this year’s Open Access Week, Michele Marchetto of Wikimedia Italia shares the story of how they helped authors make their open access articles more widely available. This post has been cross-posted from Wikimedia Italia.

Wikipedia is probably the most effective initiative in the world at increasing the readership of academic literature: for instance, wikipedia.org is a top 10 source of clicks for doi.org. Wikipedia contributors are among the biggest consumers of scientific publications in the world, because Wikipedia articles are not allowed to be primary sources: the five pillars allow anyone to edit, but require copyleft and a neutral point of view based on reliable sources. Readers are advised to trust what they read only insofar as it is confirmed by the sources provided. So, does free culture need all sources to be accessible, affordable and freely licensed?

Open access

Scholarly sources, while generally high quality, are problematic for Wikipedia users in that they are often paywalled and demand hefty payments from readers. Open access wants research output to be accessible online without restrictions, ideally under a free license, given that it is produced by authors, reviewers and editors “for free” (as part of their duties). This includes papers published in journals and conference proceedings, but also book chapters, books and experimental data. A cost-effective open science infrastructure is possible but requires political will, and proprietary private platforms grow to fill unmet needs. Authors, however, can make their works green open access autonomously and for free, thanks to open archives and publisher or employer policies. The problem is, how much effort does it take? We tried to find out.

The easy way out

In the past year we have seen many developments in the open access landscape. On the reading side, DOAI and then oaDOI plus Unpaywall have made it possible to access some 40% of the literature in just one click, collecting data from thousands of sources which were formerly rather hard to use. It has also been shown that cancelling subscriptions produces little pain. On the authoring side, the SSRN fiasco paved the way for various thematic open archives and general-purpose repositories like Zenodo (offered by OpenAIRE and CERN), which make sure that an open access platform is available to every author in the world, whatever their outputs. Publishers are beginning to understand the importance of metadata, although much work remains to be done, and the Open Access Button staff help connect readers with authors. Finally, the web platform Dissemin put ORCID and all the above initiatives together to identify 36 million works which could benefit from green open access. Authors can deposit them from Dissemin to an open archive in a couple of clicks, without the need to enter metadata manually. With the possibility of a “legal Sci-Hub” within our reach, what does it take to get the authors to help?
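To make the reading-side plumbing concrete, here is a small sketch that checks a DOI against the public Unpaywall v2 API (the successor to oaDOI) and returns the URL of a known free copy, if any. The endpoint and the `is_oa` / `best_oa_location` fields follow Unpaywall’s documented API; the function name and the minimal error handling are our own assumptions:

```typescript
// Query Unpaywall for the open access status of a single DOI.
// The API asks callers to identify themselves with an email parameter.
async function findOpenCopy(doi: string, email: string): Promise<string | null> {
  const url = `https://api.unpaywall.org/v2/${encodeURIComponent(doi)}` +
              `?email=${encodeURIComponent(email)}`;
  const resp = await fetch(url);
  if (!resp.ok) return null; // unknown DOI or transient error: treat as "no copy found"
  const record = await resp.json();
  // best_oa_location points at the best known free copy (repository or publisher).
  return record.is_oa ? record.best_oa_location?.url ?? null : null;
}

// Example usage with an arbitrary open access DOI:
// findOpenCopy("10.7554/eLife.01567", "you@example.org").then(console.log);
```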

Frontpage of the Dissem.in platform

Wikimedia Italia takes initiative

Wikimedia projects contributor Federico Leva, frustrated by the number of paywalled articles linked from the Italian and English Wikipedia, decided to contact their authors directly. Using the available data, almost half a million depositable articles by a million authors were found. An email was sent to each of them where possible: the message thanked them for contributing sources to Wikipedia, presented them with the dilemma of a simple volunteer editor who wants to link an open access copy for all Wikipedia users to see, and asked them to check the publication on Dissemin to read more about its legal status and to deposit it. The response has been overwhelmingly positive: over 15% of the recipients clicked the links to find out more, thousands wrote encouraging replies, and over 3000 papers were deposited via Dissemin in two months. Wikimedia Italia, active in open access since 2008, covered the costs (a few hundred euros on phplist.com) and provided its OTRS instance to handle replies. With AISA’s counsel, hundreds of support requests have been handled (mostly about the usual pains of green OA, such as locating an appropriate manuscript).

Tell me a story

Our reasoning has been driven by examples such as the story of Jack Andraka, which showed how open access can change the world. Jack, as a high school student, proposed a cheap method for the early diagnosis of pancreatic cancer. Jack’s research, like every invention, was built on previous scientific results. He was not affiliated with any research entity and was not able to access paywalled research, but he was able to consult the extensive body of open access research provided by NIH’s PubMed Central, which is often in the public domain or under a free Creative Commons license. Jack’s story was a potent message in the mass media on how open access can save lives.

Some reactions and what we learnt

The authors’ responses taught us what makes a difference:
  • make deposit easy and authors will love open archives;
  • focus on their own work and its readership;
  • show the concrete difference they can make, rather than talk abstractly about open access;
  • lead by example: list other colleagues who archived papers from the same journal;
  • some will adopt a free Creative Commons license to facilitate further reuse, if told about it.
More warmth came from Peter Suber’s support, John Dove’s proposal for OA journals to accelerate the depositing of papers they reference, and a lively discussion. Surprisingly, many authors simply don’t know about green open access possibilities: they just need to hear about it in a way that rings true to their ears. If you work with a repository, an OA journal or another venue, you have a goldmine of authors to ask for deposits and stories relevant to them: why not start doing it systematically? If you are a researcher, you can simply search your name on Dissemin and see what is left to make open access; when you are done, you can ask your colleagues to do the same. It’s simple and, as with Jack Andraka, you can really change the world around us.

Understanding the costs of scholarly publishing – Why we need a public data infrastructure of publishing costs

- October 24, 2017 in Open Access

Scholarly communication has undergone a seismic shift away from closed publishing towards ever-growing support for open access. Under closed publishing models, academic libraries faced a so-called “serials crisis” and were not able to afford the materials their researchers and students needed. Partly in response to this problem, open access advocates have argued for increased access, whilst also changing the cost structure of scholarly publishing. In many countries this has led to experiments with ‘author pays’ models, where the prices of large commercial publishers have remained high, but the costs have shifted from readers to researchers.

Public data about the costs of these changing publishing models remains scarce. There is increasing concern that they may perpetuate oligopolistic and dysfunctional structures that do not serve the interests of researchers or their students, readers and audiences. Some studies suggest that the prices of open access publishing might unfairly discriminate against some institutions, and point out the sometimes stark pricing differences across institutions. Funding organisations and institutions worry that hybrid journals might levy ‘Article Processing Charges’ (a common way of funding open access publishing) while not providing a proportionate decrease in subscription costs, thereby charging researchers twice (so-called “double dipping”). Yet the evidence is fragmented and incomplete.

Members of Open Knowledge International’s network have been following this issue for several years. Jenny Molloy wrote a blogpost on this issue for Open Access Week three years ago. We have supported research in this area undertaken by Stuart Lawson, Jonathan Gray and Michele Mauri, and we published an associated white paper as part of the PASTEUR4OA project. To date, public data about scholarly publishing finances remains fragmentary, partial and scattered. The lack of publicly accessible financial information is problematic for at least three reasons:
  1. It hinders the evaluation of existing publishing policies and financing models. For example, incomplete and conflicting data prevents funders from making the best decisions about where to allocate resources.
  2. Financial opacity also prevents us from getting a detailed view of how much money is paid in a country, per funder, academic sector, university, library, and individual researcher.
  3. Ultimately, a lack of knowledge about payments weakens the negotiation power of universities and libraries around market-coordinated forms of scholarly publishing.
As we celebrate International Open Access Week, Open Knowledge International strongly pushes for public data infrastructures for scholarly finances. Such infrastructures would enable the tracking, documentation, publication, and discussion of the different costs associated with scholarly publishing. Thereby they would provide the evidence base for a well-informed discussion about alternative ways of organising and financing scholarly publication in a world where open access to academic outputs increasingly becomes the norm. Below you see a model of the financial flows that could be captured by such a data infrastructure, focussing on the United Kingdom.

Caption: “Model of Financial Flows in Scholarly Publishing for the UK, 2014”, from Lawson, S., Gray, J., & Mauri, M. (2016). Opening the Black Box of Scholarly Communication Funding: A Public Data Infrastructure for Financial Flows in Academic Publishing. Open Library of Humanities, 2(1). https://doi.org/10.16995/olh.72

Rising momentum for a public data infrastructure of scholarly finances

There is rising momentum within the larger research community – including funders, institutions and institutional libraries – to address the current lack of financial data around publishing. Earlier this year, Knowledge Exchange published a report underlining the importance of understanding the total cost of publishing, and the role of standard documentation formats and information systems in capturing it. Funding bodies in different countries are inserting reporting clauses in their funding policies to gain a better picture of how funds are spent. The UK’s higher education infrastructure body Jisc has worked with Research Councils UK to create a template for UK higher education institutions to report open access expenditures in a standardised way and release them openly. This effort should support negotiations with journal publishers around the total costs of publishing. In different European countries, funders and institutional associations have started to create databases collecting the amounts paid through APCs to individual journals. The UK’s Wellcome Trust published information on how much it spent on open access publishing each year from 2010 to 2014. In a similar vein, the German Open APC initiative, part of the initiative Transparent Infrastructure for Article Charges, crowdsources data on GitHub to publicly disclose the money spent by different European institutions on open access publishing. And Open Knowledge International hosts a wiki for payment documents requested via FOI.

More financial transparency enables us to rethink how scholarly publishing is organised

These examples are important signposts towards public data infrastructures of scholarly publishing costs. Yet more concerted efforts and collaborations are needed to bring about a deeper shift in how scholarly publishing is organised. Full transparency would require knowing how much each institution pays to each publisher for each journal, ideally allowing these payments to be related to public funds. To gain these insights, it is necessary to understand the ways scholarly publishing is organised and to address diverse obstacles to transparency, including:
  • Multiple income sources and financial management practices in institutions, preventing a disaggregated view of how much public funding is spent on open access
  • Payment models such as bundles, ‘big deals’, or annual lump sums, which prevent a clear picture of open access costs
  • Policies that mandate the reporting of payments in different ways and cover only certain disciplines
  • Non-disclosure agreements preventing transparent cost evaluations
  • Inaccurate or diverging price information such as price lists that do not necessarily display real payments.
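As a concrete illustration of what per-article disclosure could look like, here is a minimal sketch loosely modelled on the kind of fields the Open APC initiative collects (institution, year, amount paid, DOI, publisher, journal, hybrid status). The type and function names are illustrative assumptions, not a published schema:

```typescript
// One openly disclosed APC payment, at the per-article ("micropayment") level.
interface ApcPayment {
  institution: string;
  period: number;     // year in which the payment was made
  euro: number;       // amount actually paid, not the publisher's list price
  doi: string;
  publisher: string;
  journal: string;
  isHybrid: boolean;  // hybrid journals are where "double dipping" can occur
}

// Once such records are public, the totals that libraries currently lack in
// negotiations become trivial to compute, e.g. total spend per publisher.
function totalPerPublisher(payments: ApcPayment[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const p of payments) {
    totals.set(p.publisher, (totals.get(p.publisher) ?? 0) + p.euro);
  }
  return totals;
}
```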
What can be done to start contributing to a public database?

To lay the ground for collaboration, Open Knowledge International wants to spark a dialogue across open access advocates, funders, universities, libraries, individual researchers, and publishers. In what follows we outline next steps that can be taken together towards a public data infrastructure.

Funders should insert reporting and disclosure clauses in funding policies, addressing both subscription payments and APC charges at a micropayment level (costs per article). Legal measures against non-disclosure agreements can include restricting or withholding funds from publishers that insist on them.

Funding organisations, institutions and individual researchers should increase (inter)national research activities on the topic: 1) to understand the magnitude of different cost types, 2) to identify new cost factors, the data needed to represent them, and the factors rendering them opaque, and 3) to analyse the benefits of alternative payment models for specific disciplines and institutions. Research should also reflect rising administration costs and offer recommendations on how to mitigate them.

Institutions, funding organisations and individual researchers can deliver evidence by disclosing their payments in databases accessible to everyone. If other institutions follow their model, this allows for public comparisons of actual payments, for detecting unreasonable price discrimination, and for identifying publishers that do not comply with open access funding policies.

How to support a move towards more transparent scholarly publishing
  • Get in touch with our team at Open Knowledge International. We are exploring actions to move the debate forward. Please email Danny Lämmerhirt (Research Coordinator) and Sander van der Waal (Head of Network and Partnerships) at research@okfn.org.
  • Let us know your ideas, thoughts, and comments on our forum.
  • Support the collection, maintenance, and use of public data as outlined in our recommendations.
  • Share this blogpost in your networks and get in touch with your institution, library, or funding organisation.
I’d like to thank Stuart Lawson and Jonathan Gray for their thoughtful comments and advice while writing this blogpost. This post draws from their paper “Opening the Black Box of Scholarly Communication Funding: A Public Data Infrastructure for Financial Flows in Academic Publishing.” Open Library of Humanities, 2(1). https://doi.org/10.16995/olh.72

Open Access and Open Data gaining momentum in Nepal

- March 13, 2017 in Events, nepal, ODD17, Open Access, Open Data, Open Map Data, Open Research, Open Science, opendataday

For the fifth time in a row, the Open Knowledge Nepal team led the effort of organizing International Open Data Day in Nepal. This year it was a collaborative effort of Kathmandu Living Labs and Open Knowledge Nepal, and it was also Open Knowledge Nepal’s first official event outside the Kathmandu Valley. Organizations like Code for Nepal, Gandaki College of Engineering and Science, and Open Access Nepal were partners for the event. In Nepal, the event aims to serve as a platform for bringing together open knowledge enthusiasts from different backgrounds, and to support a series of collaborative events for enhancing knowledge and awareness of free and open source software, open data, open content, and various open knowledge technologies. The event had four major activities: a Presentation Session, an OpenStreetMap Mapathon, an Open Research Data Hackathon, and a Treasure Hunt.

Check-in started around 10:30 AM (NPT), with participants slowly arriving at the venue amid informal discussion. The event was formally opened at 11:20 AM (NPT) by Mr. Ashok Raj Parajuli, Vice-Principal of Gandaki College of Engineering and Science, who gave a brief introduction to Open Data and why it is important for a country like Nepal. After him, Nikesh Balami from Open Knowledge Nepal gave an event orientation, sharing how Open Data Day started and the history of Open Data Day celebrations in Nepal. After this brief orientation on the major activities, the presentation session began.

Mr. Ashok Raj Parajuli starting the event

Gaurav Thapa from Kathmandu Living Labs was the first presenter of the event. He gave a presentation about Open Map Data and the concept of 2C (Secondary City) Pokhara, demonstrated the work done by Kathmandu Living Labs in Pokhara with the help of other organizations, and invited participants to join them in contribution and collaboration. He also presented the app “Prepare Pokhara”, which combines OpenStreetMap data with different kinds of filtering techniques so that users can easily filter, find, and navigate to all kinds of important places and destinations in Pokhara on the map.

Gaurav Thapa from Kathmandu Living Labs presenting about Open Map Data

After Gaurav Thapa’s presentation, Kshitiz Khanal, representing Open Knowledge Nepal, presented on Open Access, Open Science, and Open Research. He started with a basic introduction to openness and highlighted the state of Open Access in Nepal, demonstrating how the Nepal government and its various bodies create Open Access barriers for users. He also introduced the Open Science taxonomy and talked a little about Open Science and research practices in Nepal, encouraging participants to read research articles frequently so that we can make the best use of publicly funded research. His presentation can be accessed from here.

Kshitiz Khanal from Open Knowledge Nepal presenting about OA, OS and OR

There was a short break after the presentation session, during which the rooms were divided between the Open Research Data Hackathon and the Mapathon: participants interested in joining the Mapathon moved to the lab, while those interested in the Research Data Hackathon stayed in the same room.

Open Research Data Hackathon

Open Research Data Hackathon

The Open Research Data Hackathon was facilitated by the Open Knowledge Nepal team. Nikesh Balami from OKN started the hackathon with a short presentation about data, demonstrating different kinds of tools participants could use during the hackathon. After the orientation, participants were divided into four groups, which worked on and brainstormed different ideas for the entire day. Each group pitched its project twice: in the first pitch it shared the brainstormed idea, and in the second it explained how it would carry the project out, along with possible partners, challenges, and opportunities. The four teams’ proposed ideas were entirely different from each other: one worked on election data, one on using machine learning to extract research data from users’ search queries, one on using data for disaster prediction, and one on blood data.

It will be interesting to see the progress of their projects in the coming days.

Mapathon

Mapathon

The Mapathon was facilitated by the Kathmandu Living Labs team. Participants used satellite imagery to map the Bardiya district of Nepal on OpenStreetMap, getting an opportunity to play with open map data and OpenStreetMap. In between, the KLL team also led the Treasure Hunt to keep the Mapathon interesting and interactive: participants went out into the field in search of treasures the KLL team had hidden in different places, using OpenStreetMap to find them, and enjoyed the activity greatly. The Mapathon gave participants hands-on training in contributing to OSM, a chance to make some contributions, and a taste of using it in real life.

The event closed at 4:30 PM with thanks to participants and supporters, and representatives of Open Knowledge Nepal and Kathmandu Living Labs promised to organize more such international events outside the main valley of Nepal. This year, International Open Data Day 2017 was organized at four different places in Nepal: two inside Kathmandu, one by YoungInnovation Pvt. Ltd. and one by Accountability Lab; in Pokhara by Open Knowledge Nepal and Kathmandu Living Labs; and, for the first time, in Kavre by the Kathmandu University Open Source Community (KUOSC). This clearly shows that the momentum of Open Data is increasing in Nepal, which we as civil society organizations can take as a plus point.

Group photo and selfie 🙂

Event Page: https://oddnepal.github.io

More photos from our Facebook page here.
