
Disclosure and ‘cook booking’

- March 25, 2013 in Contribution Economy, Economic Publishing, Featured, Open Data, Open Economics

This blog post is cross-posted from the Contribution Economy Blog. Many journals now have open data policies, but they are sparingly enforced, and so many scientists do not submit data. The question is: what drives them not to submit? Is it laziness? Is it a desire to keep the data to themselves? Or is it something more sinister? After all, the open data rules were put in place, in part, to allow for replication experiments to ensure that the reported results were accurate. Robert Trivers reports on an interesting study by Wicherts, Bakker, and Molenaar that correlates disclosure of data with the statistical strength of results in psychological journals.
Here is where they got a dramatic result. They limited their research to two of the four journals whose scientists were slightly more likely to share data and most of whose studies were similar in having an experimental design. This gave them 49 papers. Again, the majority failed to share any data, instead behaving as a parody of academics. Of those asked, 27 percent failed to respond to the request (or two follow-up reminders)—first, and best, line of self-defense, complete silence—25 percent promised to share data but had not done so after six years and 6 percent claimed the data were lost or there was no time to write a codebook. In short, 67 percent of (alleged) scientists avoided the first requirement of science—everything explicit and available for inspection by others. Was there any bias in all this non-compliance? Of course there was. People whose results were closer to the fatal cut-off point of p=0.05 were less likely to share their data. Hand in hand, they were more likely to commit elementary statistical errors in their own favor. For example, for all seven papers where the correctly computed statistics rendered the findings non-significant (10 errors in all), none of the authors shared the data. This is consistent with earlier data showing that it took considerably longer for authors to respond to queries when the inconsistency in their reported results affected the significance of the results (and these were responses without data sharing!). Of a total of 1148 statistical tests in the 49 papers, 4 percent were incorrect based only on the scientists' summary statistics, and a full 96 percent of these mistakes were in the scientists' favor. Authors would say that their results deserved a 'one-tailed test' (easier to achieve) even though the test was already one-tailed, so by halving the p-value again they created a 'one-half tailed test'. Or they ran a one-tailed test without mentioning this even though a two-tailed test was the appropriate one. And so on.
Separate work shows that only one-third of psychologists claim to have archived their data—the rest make reanalysis impossible almost at the outset! (I have 44 years of ‘archived’ lizard data—be my guest.) It is likely that similar practices are entwined with the widespread reluctance to share data in other “sciences” from sociology to medicine. Of course this statistical malfeasance is presumably only the tip of the iceberg, since in the undisclosed data and analysis one expects even more errors.
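The 'one-half tailed test' trick described above is easy to make concrete. The sketch below is my own illustration, not taken from the study, and it uses a normal approximation (a z statistic) rather than the exact t distribution; the test statistic of 1.80 is invented. It shows how halving an already-halved p-value drags a non-significant result across the p=0.05 cut-off:

```python
import math

def p_two_tailed(z):
    """Two-tailed p-value for a z statistic, via the normal approximation."""
    return math.erfc(abs(z) / math.sqrt(2))

z = 1.80                 # hypothetical reported test statistic
p2 = p_two_tailed(z)     # correct two-tailed p: about 0.072, not significant
p1 = p2 / 2              # declared "one-tailed": about 0.036, now "significant"
p_half = p1 / 2          # halving a one-tailed p again: the "one-half tailed test"

print(f"two-tailed: {p2:.3f}, one-tailed: {p1:.3f}, 'one-half tailed': {p_half:.3f}")
```

Recomputing p-values from reported summary statistics in exactly this way is how such inconsistencies can be caught even without the raw data.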
It is only a correlation, but it is troubling. The issue is that authors present results selectively, and sadly this is not picked up in peer review. Of course, it goes without saying that even with open data, it takes effort to replicate and then publish alternative results and conclusions.

Looking again at “Big Deal” scholarly journal packages

- February 18, 2013 in big deals, Contribution Economy, Economic Publishing, Featured, Open Access, Open Economics, publishing

This blog post is cross-posted from the Contribution Economy Blog. One of the things pointed to in the debate over market power and scholarly journals is the rise of “Big Deal” packages. Basically, these have arisen as publishers bundle journals together for a single price. Indeed, as the publishers have merged and acquired more titles, these bundled packages have become more compelling, with individual journal subscription pricing to libraries rising at a higher rate. This means that libraries with limited budgets are driven to give a greater share of their journal budgets to larger publishers, squeezing out smaller ones. The claim is that this is reducing choice. While it is reducing choice amongst publishers, Andrew Odlyzko, in a recent paper, points out that “Big Deals” have also increased the number of journal titles available, not just in large libraries but across the board. The reason is basically the same reason that is behind the drive towards open access: in electronic form, the marginal cost of an additional journal is zero, and so it makes sense to provide more journal titles to each library. Moreover, for smaller libraries, the average cost of a journal title has fallen at a faster rate than it has for larger libraries. In other words, behind the spectre of increased publisher profits and market power is an increase in journal availability. Put simply, more researchers have easier access to journals than before. This is one case where, if we just consider University libraries, price discrimination (using Varian's rule) looks to be in the welfare-improving range. But there are, of course, wrinkles to all of this. This says nothing of access beyond Universities, which is still an issue both economically and, increasingly, morally. It also says nothing of the distribution of rents in the industry. Publisher profits have increased dramatically and that money has to come from somewhere.
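The bundling logic can be seen in a toy example. The numbers below are entirely invented for illustration: two libraries, two journals, and a publisher with zero marginal cost choosing between the best uniform per-title price and a single bundle price.

```python
def best_uniform_price(values):
    """Revenue from the best single price, trying each buyer's valuation as a candidate."""
    return max(p * sum(v >= p for v in values) for p in values)

# Hypothetical annual willingness to pay of two libraries for two journals.
wtp = {"large library": {"J1": 100, "J2": 40},
       "small library": {"J1": 30, "J2": 60}}

# Selling titles separately: each journal gets its own revenue-maximising price.
per_title = sum(best_uniform_price([lib[j] for lib in wtp.values()])
                for j in ("J1", "J2"))

# Selling everything as one "Big Deal": one price against each library's total valuation.
bundle = best_uniform_price([sum(lib.values()) for lib in wtp.values()])

print(per_title, bundle)
```

Under these made-up numbers the publisher earns 180 either way, but with per-title pricing the small library can afford only J2, while the bundle puts both titles in both libraries: the widening of availability Odlyzko describes, alongside undiminished publisher revenue.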
Odlyzko raises a new issue in that regard: publisher profits are a symptom that libraries are being squeezed. Of course, we know that the share of library budgets devoted to journal acquisition has risen. At the same time, library budgets have fallen, although not as quickly as Odlyzko expected a decade ago. The reason is that libraries command attention at Universities. Changes to them are a signal of how quickly change can occur within Universities, and as it turns out, there is not very much of it. Libraries are centrally located, are viewed nostalgically by alumni donors, and cuts to their budgets can often be read as a sign of a move against scholarship. But what publishers are providing now, in terms of electronic access and search, is as much a transfer of functions as it is of money from libraries to themselves. Put simply, publishers are now doing what librarians used to do. They have provided tools that make it easier for people to find information. It is another way machines are being substituted for labor. The competition between libraries and publishers has implications for how we view alternative journal business models. Take, for instance, the notion that journals could be funded by author fees, and made open access, instead of being funded by user fees. If we did this, it would just shift the locus of the competitive fight between libraries and publishers to involve academics. Academics can legitimately argue that these new publication fees should come from the institution, and where will the institution find the money? In the now-relieved library budgets, as more journals go open access. So either way, the money for journal publishing will end up coming from libraries. This is not to say that there is no scope for reducing the costs of journal access and storage. These costs are surely bloated now, as they include the publisher market-power premium.
The point is that libraries spent as much time resisting changes to journal business models as publishers did, but that seems to have been a political error on their part. This is all familiar stuff to economists: the flow of money is less important than the structure of activities. When it comes down to it, we know one thing: we can provide a journal system with labor from academics (as writers, referees and editors) and publisher activities so long as there is enough willingness to pay for all of it. That means we can keep the same overall payment and still, because journals are a non-rival good, have open access. In other words, there is no market impediment to open access; it would be a pure Pareto improvement. The question now is how to do the “Really Big Deal” to get there.

Dutch PhD-workshop on research design, open access and open data

- February 1, 2013 in Economic Publishing, EDaWaX, External Projects, Featured, Open Access, Open Data, Open Economics, research fraud

This blog post is written by Esther Hoorn, Copyright Librarian, University of Groningen, the Netherlands. If Roald Dahl were still alive, he would certainly be tempted to write a book about the Dutch social psychologist Diederik Stapel. For not only did he make up the research data to support his conclusions, but he also ate all the M&M’s, which he had bought with public money for interviews with fictitious pupils at fictitious high schools. In the Netherlands, the research fraud by Stapel was a catalyst that brought attention to the issues of research integrity and availability of research data. A new generation of researchers needs to be aware of the data-sharing policy of the Dutch research funder NWO, the EU policy and the services of DANS, Data Archiving and Networked Services. In the near future, a data management plan will be required in every research proposal.

Verifiability

For some time now, the library at the University of Groningen has been organizing workshops for PhD students to raise awareness of the shift towards Open Access. Open Access and copyright are the main themes. The request to also address verifiability of research data came from SOM, the Research Institute of the Faculty of Economics and Business. The workshop is given as part of the course Research Design of the PhD program. The blogpost Research data management in economic journals proved to be very useful for getting an overview of the related issues in this field.

Open Access

As we often see, Open Access was a new issue to most of the students. Because the library buys licenses, the students don’t perceive a problem with access to research journals. Moreover, they are not aware of the large sums that universities currently pay to finance access exclusively for their own staff and students. Once they understand the issue, there is strong interest. Some see a parallel with innovative distribution models for music. The PhD students come from all over the world, and Open Access is increasingly being addressed in every country. One PhD student from Indonesia mentioned that the Indonesian government requires his dissertation to be available through the national Open Access repository. Chinese students were surprised by the availability of information on Open Access in China.

Assignment

The students prepared an assignment with some questions on Open Access and sharing research data. The first question is still about the impact factor of the journals in which they intend to publish. The questions brought the discussion to article-level metrics and alternative ways to organize the peer review of Open Access journals.

Will availability of research data stimulate open access?

Example of the Open Access journal Economics

The blogpost Research data management in economic journals presents the results of the German project EDaWaX, European Data Watch Extended. An important result of the survey points to the role of association and university presses. In particular, it appears that many journals followed the data availability policy of the American Economic Association.
“We found out that mainly university or association presses have high to very high percentages of journals with data availability policies, while the major scientific publishers stayed below 20%. Out of the 29 journals with data availability policies, 10 initially used the data availability policy implemented by the American Economic Review (AER). These journals either used exactly the same policy or a slightly modified version.”

For students it is reassuring to see how associations take up their role in addressing this issue. An example of an Open Access journal that adopted the AER policy is Economics. And yes, this journal does have an impact factor in the Social Science Citation Index, and it offers the possibility to archive datasets in the Dataverse Network.

Re-use of research data for peer review

One of the students suggested that the public availability of research data (instead of merely research findings) may lead to innovative forms of review, which may facilitate a further shift towards Open Access. With access to the underlying research data and methodologies used, scientists may be in a better position to evaluate the quality of the research conducted by peers. The typical quality label conferred by top and very good journals may then become less relevant over time. It was also discussed that journals may stop publishing a fixed number of papers in a volume released, say, four times a year, and instead publish qualifying papers as they become available throughout the year. Another point raised was that a substantial change in the existing publication mechanics will likely require either top journals or top business schools to lead the way, though associations of leading scientists in a given field may also play an important role in such a conversion.

First Open Economics International Workshop Recap

- January 25, 2013 in academia, Advisory Panel, collaboration, Economic Publishing, economics profession, Events, Featured, Open Access, Open Data, Open Economics, Open Research, Open Tools, research, Workshop

The first Open Economics International Workshop brought together 40 academic economists, data publishers, funders of economics research, researchers and practitioners for a two-day event at Emmanuel College in Cambridge, UK. The aim of the workshop was to build an understanding of the value of open data and open tools for the Economics profession and the obstacles to opening up information, as well as the role of greater openness in the academy. The event was organised by the Open Knowledge Foundation and the Centre for Intellectual Property and Information Law and was supported by the Alfred P. Sloan Foundation. Audio and slides are available on the event’s webpage.

Setting the Scene

The Setting the Scene session gave some context to “Open Economics” in the knowledge society, looking also at examples from outside the discipline and discussing reproducible research. Rufus Pollock (Open Knowledge Foundation) emphasised that change is both necessary and full of potential for economics: 1) open “core” economic data outside the academy, 2) open as the default for data in the academy, 3) real growth in citizen economics and outside participation. Daniel Goroff (Alfred P. Sloan Foundation) drew attention to the work of the Alfred P. Sloan Foundation in emphasising the importance of knowledge and its use for making decisions, and of data and knowledge as a non-rival, non-excludable public good. Tim Hubbard (Wellcome Trust Sanger Institute) spoke about the potential of large-scale data collection around individuals for improving healthcare and how centralised global repositories work in the field of bioinformatics. Victoria Stodden (Columbia University / RunMyCode) stressed the importance of reproducibility for economic research, as an essential part of scientific methodology, and presented the RunMyCode project.

Open Data in Economics

The Open Data in Economics session was chaired by Christian Zimmermann (Federal Reserve Bank of St. Louis / RePEc) and presented several projects and ideas from various institutions. The session examined examples of open data in Economics and sought to discover whether these examples are sustainable and can be implemented in other contexts: whether the right incentives exist. Paul David (Stanford University / SIEPR) characterised the open science system as a system which is better than any other at the rapid accumulation of reliable knowledge, whereas proprietary systems are very good at extracting rent from existing knowledge. A balance between these two systems should be established so that they can work within the same organisational system, since separately they are distinctly suboptimal. Johannes Kiess (World Bank) underlined that having the data available is often not enough: “It is really important to teach people how to understand these datasets: data journalists, NGOs, citizens, coders, etc.”. The World Bank has implemented projects to incentivise the use of the data and is helping countries to open up their data. For economists, he mentioned, having a valuable dataset to publish on is an important asset; there are therefore insufficient incentives for sharing. Eustáquio J. Reis (Institute of Applied Economic Research – Ipea) related his experience in establishing the Ipea statistical database and other projects for historical data series and data digitisation in Brazil. He shared that the culture of the economics community is not a culture of collaboration where people willingly share, or support and encourage data curation. Sven Vlaeminck (ZBW – Leibniz Information Centre for Economics) spoke about the EDaWaX project, which conducted a study of the data availability policies of economics journals and will establish a publication-related data archive for an economics journal in Germany.

Legal, Cultural and other Barriers to Information Sharing in Economics

The session presented different impediments to the disclosure of data in economics from the perspective of two lawyers and two economists. Lionel Bently (University of Cambridge / CIPIL) drew attention to the fact that there is a whole range of different legal mechanisms which operate to restrict the dissemination of information, yet on the other hand there is also a range of mechanisms which help to make information available. Lionel questioned whether the open data standard would always be the optimal way to produce high-quality economic research, or whether there is also a place for modulated/intermediate positions where data is available only on certain conditions, only in part, or only for certain forms of use. Mireille van Eechoud (Institute for Information Law) described the EU Public Sector Information Directive, the most generic document related to open government data, and the progress made in opening up information published by the government. Mireille also pointed out that legal norms have only limited value if you don’t have the internalised cultural attitudes and structures in place that really make more access to information work. David Newbery (University of Cambridge) presented an example from the electricity markets and insisted that a good supply of data requires informed demand, coming from regulators who are charged to monitor markets, detect abuse, uphold fair competition and defend consumers. John Rust (Georgetown University) said that the government is an important provider of data which is otherwise too costly to collect, yet a number of issues exist, including confidentiality, excessive bureaucratic caution and the public finance crisis.
There are also many opportunities for research in the private sector, where some of the data can be made available (redacting confidential information), and the public non-profit sector can also play a tremendous role as a force to organise markets for the better, set standards and focus on targeted domains.

Current Data Deposits and Releases – Mandating Open Data?

The session was chaired by Daniel Goroff (Alfred P. Sloan Foundation) and brought together funders and publishers to discuss their role in requiring data from economic research to be publicly available and the importance of dissemination for publishing. Albert Bravo-Biosca (NESTA) emphasised that mandating open data begins much earlier in the process, where funders can encourage the collection by government of particular data which is the basis for research, and can also act as an intermediary for the release of open data by the private sector. Open data is interesting, but it is even more interesting when it is appropriately linked and combined with other data, and there is value in examples and case studies for demonstrating benefits. There should, however, be caution, as opening up some data might result in less data being collected. Toby Green (OECD Publishing) made a point of the difference between posting and publishing: making content available does not always mean that it will be accessible, discoverable, usable and understandable. In his view, the challenge is to build up an audience by putting content where people will find it, which is very costly, as proper dissemination is expensive. Nancy Lutz (National Science Foundation) explained the scope and workings of the NSF and the data management plans required from all economists who are applying for funding. Creating and maintaining data infrastructure and compliance with the data management policy might eventually mean that there would be less funding for other economic research.

Trends of Greater Participation and Growing Horizons in Economics

Chris Taggart (OpenCorporates) chaired the session, which introduced different ways of participating in and using data, and different audiences and contributors. He stressed that data is being collected in new ways and by different communities; that access to data can be an enormous privilege and can generate data gravities, with very unequal access and power to make use of, and to generate, more data; and that analysis is sometimes being done in new and unexpected ways and by unexpected contributors. Michael McDonald (George Mason University) related how the highly politicised process of drawing up district lines in the U.S. (also called gerrymandering) could be done in a much more transparent way through an open-source redistricting process with meaningful participation, allowing for an open conversation about public policy. Michael also underlined the importance of common data formats and told a cautionary tale about a group of academics misusing open data with a political agenda to encourage a storyline that a candidate would win a particular state. Hans-Peter Brunner (Asian Development Bank) shared a vision of how open data and open analysis can aid decision-making about investments in infrastructure, connectivity and policy. Simulated models of investments can demonstrate different scenarios according to investment priorities and crowd-sourced ideas. Hans-Peter asked for feedback and input on how to make data and code available. Perry Walker (new economics foundation) spoke about conversation, and how a good conversation has to be designed, as it usually doesn’t happen by accident. Rufus Pollock (Open Knowledge Foundation) concluded with examples of citizen economics and the growth of contributions from the wider public, particularly through volunteer computing and volunteer thinking as ways of getting engaged in research.
During two sessions, the workshop participants also worked on a Statement of the Open Economics Principles, which will be revised with further input from the community and made public at the second Open Economics workshop, taking place on 11-12 June in Cambridge, MA.