Automated Game Play Datasets: New Releases

- April 24, 2013 in Announcements, Data Release, Featured, Open Data, Open Economics, Open Research

Last month we released ten datasets from the research project “Small Artificial Human Agents for Virtual Economies”, implemented by Professor David Levine and Professor Yixin Chen at Washington University in St. Louis and funded by the National Science Foundation [See dedicated webpage]. We are now happy to announce that the list has grown by seven more datasets, added this month and now hosted at datahub.io:
Clark, K. & Sefton, M., 2001. Repetition and signalling: experimental evidence from games with efficient equilibria. Economics Letters, 70(3), pp.357–362.
Link to publication | Link to data
Costa-Gomes, M. & Crawford, V., 2006. Cognition and Behavior in Two-Person Guessing Games: An Experimental Study. The American Economic Review, 96(5), pp.1737–1768.
Link to publication | Link to data
Costa-Gomes, M., Crawford, V. & Broseta, B., 2001. Cognition and Behavior in Normal-Form Games: An Experimental Study. Econometrica, 69(5), pp.1193–1235.
Link to publication | Link to data
Crawford, V., Gneezy, U. & Rottenstreich, Y., 2008. The Power of Focal Points Is Limited: Even Minute Payoff Asymmetry May Yield Large Coordination Failures. The American Economic Review, 98(4), pp.1443–1458.
Link to publication | Link to data
Feltovich, N., Iwasaki, A. & Oda, S., 2012. Payoff levels, loss avoidance, and equilibrium selection in games with multiple equilibria: an experimental study. Economic Inquiry, 50(4), pp.932–952.
Link to publication | Link to data
Feltovich, N. & Oda, S., 2013. The effect of matching mechanism on learning in games played under limited information. Working paper.
Link to publication | Link to data
Schmidt, D., Shupp, R., Walker, J.M. & Ostrom, E., 2003. Playing Safe in Coordination Games: The Roles of Risk Dominance, Payoff Dominance, and History of Play. Games and Economic Behavior, 42(2), pp.281–299.
Link to publication | Link to data
Any questions or comments? Please get in touch: economics [at] okfn.org

Releasing the Automated Game Play Datasets

- March 7, 2013 in Announcements, Data Release, Featured, Open Data, Open Economics, Open Research, Open Tools

We are very happy to announce that the Open Economics Working Group is releasing the datasets of the research project “Small Artificial Human Agents for Virtual Economies”, implemented by Professor David Levine and Professor Yixin Chen at Washington University in St. Louis and funded by the National Science Foundation [See dedicated webpage]. The authors who participated in the study have given their permission to publish their data online. We hope that making this data available online will aid researchers working in this field. This initiative is motivated by our belief that for economic research to be reliable and trusted, it should be possible to reproduce research findings, which is difficult or even impossible without the availability of the data and code. Making material openly available reduces the barriers to reproducible research to a minimum. If you would like to know more, or would like help releasing research data in your field, please contact us at: economics [at] okfn.org

List of Datasets and Code

Andreoni, J. & Miller, J.H., 1993. Rational cooperation in the finitely repeated prisoner’s dilemma: Experimental evidence. The Economic Journal, pp.570–585.
Link to publication | Link to data
Dal Bó, P., 2005. Cooperation under the shadow of the future: experimental evidence from infinitely repeated games. The American Economic Review, 95(5), pp.1591–1604.
Link to publication | Link to data
Charness, G., Frechette, G.R. & Qin, C.-Z., 2007. Endogenous transfers in the Prisoner’s Dilemma game: An experimental test of cooperation and coordination. Games and Economic Behavior, 60(2), pp.287–306.
Link to publication | Link to data
Clark, K., Kay, S. & Sefton, M., 2001. When are Nash equilibria self-enforcing? An experimental analysis. International Journal of Game Theory, 29(4), pp.495–515.
Link to publication | Link to data
Duffy, J. & Feltovich, N., 2002. Do Actions Speak Louder Than Words? An Experimental Comparison of Observation and Cheap Talk. Games and Economic Behavior, 39(1), pp.1–27.
Link to publication | Link to data
Duffy, J. & Ochs, J., 2009. Cooperative behavior and the frequency of social interaction. Games and Economic Behavior, 66(2), pp.785–812.
Link to publication | Link to data
Knez, M. & Camerer, C., 2000. Increasing cooperation in prisoner’s dilemmas by establishing a precedent of efficiency in coordination games. Organizational Behavior and Human Decision Processes, 82(2), pp.194–216.
Link to publication | Link to data
Ochs, J., 1995. Games with unique, mixed strategy equilibria: An experimental study. Games and Economic Behavior, 10(1), pp.202–217.
Link to publication | Link to data
Ong, D. & Chen, Z., 2012. Tiger Women: An All-Pay Auction Experiment on Gender Signaling of Desire to Win. Available at SSRN 1976782.
Link to publication | Link to data
Vlaev, I. & Chater, N., 2006. Game relativity: How context influences strategic decision making. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32(1), p.131.
Link to publication | Link to data

Project Background

An important need for developing better economic policy prescriptions is an improved method of validating theories. Traditionally, economics has depended on field data from surveys and on laboratory experiments. An alternative method of validating theories is the use of artificial or virtual economies. If a virtual world is an adequate description of a real economy, then a good economic theory ought to be able to predict outcomes in that setting. An artificial environment offers enormous advantages over the field and laboratory: complete control (for example, over risk aversion and social preferences) and great speed in creating economies and validating theories. In economics, virtual economies can potentially enable us to deal with heterogeneity, with small frictions, and with expectations that are backward looking rather than determined in equilibrium; these are difficult or impractical to combine in existing calibrations or Monte Carlo simulations.

The goal of this project is to build artificial agents by developing computer programs that act like human beings in the laboratory. We focus on the simplest type of problem of interest to economists: simple one-shot two-player simultaneous-move games. The wide variety of existing published data on laboratory behavior will be our primary testing ground for these programs. As we achieve greater success, we want to see whether our programs can adapt to changes in the rules: if payoffs are changed in a certain way, the programs will play differently; do people do the same? In some cases we may be able to answer these questions with data from existing studies; in others we will need to conduct our own experimental studies.

There is a great deal of existing research relevant to the current project. The state of the art in the study of virtual economies is agent-based modeling (Bonabeau (2002)).
In addition, crucially related are both the theoretical literature on learning in games and the empirical literature on behavior in the experimental laboratory. From the perspective of theory, the most relevant economic research is Foster and Vohra’s (1999) work on calibrated play and the related work on smooth fictitious play (Fudenberg and Levine (1998)) and regret algorithms (Hart and Mas-Colell (2000)). There is also relevant work in computational game theory on regret optimization, such as Nisan et al. (2007). Empirical work on human play in the laboratory has two basic threads. The first is research on first-time play, such as Nagel (1995) and the hierarchical models of Stahl and Wilson (1994), Costa-Gomes, Crawford, and Broseta (2001) and Camerer, Ho, and Chong (2004). The second is the learning models, most notably the reinforcement learning model of Erev and Roth (1998) and the EWA model (Ho, Camerer, and Chong (2007)). This latter model can be considered state of the art, including as it does both reinforcement and fictitious-play-type learning and initial play from a cognitive hierarchy.
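To make the learning-model thread concrete, here is a minimal sketch of classical fictitious play in a 2×2 coordination game. This is not the project's code, and the payoff matrix is hypothetical; it only illustrates the basic idea that each player best-responds to the empirical frequency of the opponent's past actions.

```python
# Illustrative sketch (not the project's code): classical fictitious play
# in a 2x2 coordination game with hypothetical payoffs.

def best_response(payoffs, opponent_counts):
    """Pick the action maximizing expected payoff against the empirical
    distribution of the opponent's past actions."""
    total = sum(opponent_counts)
    expected = [
        sum(payoffs[a][b] * opponent_counts[b] / total for b in range(2))
        for a in range(2)
    ]
    return max(range(2), key=lambda act: expected[act])

def fictitious_play(payoffs_row, payoffs_col, rounds=200):
    """Both players start with uniform beliefs (one fictitious observation
    of each opponent action) and update counts after every round."""
    counts_row, counts_col = [1, 1], [1, 1]  # beliefs about the opponent
    for _ in range(rounds):
        a = best_response(payoffs_row, counts_row)  # row's move
        b = best_response(payoffs_col, counts_col)  # column's move
        counts_row[b] += 1   # row observes column's action
        counts_col[a] += 1   # column observes row's action
    return a, b

# A pure coordination game, payoffs indexed [own action][opponent action]:
# matching pays off, and matching on action 1 pays slightly more.
coord = [[1.0, 0.0], [0.0, 1.1]]
a, b = fictitious_play(coord, coord)
```

In this example play settles on the slightly better equilibrium (1, 1); learning models of this family are the baseline against which the richer reinforcement and EWA models mentioned above are compared.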

Sovereign Credit Risk: An Open Database

- January 31, 2013 in credit risk, Data Release, External Projects, Featured, open analysis, Open Data, Open Economics, Open Research, Public Finance and Government Data, Public Sector Credit, sovereign debt crisis

Throughout the Eurozone, credit rating agencies have been under attack for their lack of transparency and for their pro-cyclical sovereign rating actions. In the humble belief that the crowd can outperform the credit rating oracles, we are introducing an open database of historical sovereign risk data. It is available at http://www.publicsectorcredit.org/sovdef where community members can both view and edit the data. Once the quality of this data is sufficient, the data set can be used to create unbiased, transparent models of sovereign credit risk. The database contains central government revenue, expenditure, public debt and interest costs from the 19th century through 2011 – along with crisis indicators taken from Reinhart and Rogoff’s public database.

[Figure: Central government interest-to-revenue ratios, 2010]

Why This Database?

Prior to the appearance of This Time is Different, discussions of sovereign credit more often revolved around political and trade-related factors. Reinhart and Rogoff have more appropriately focused the discussion on debt sustainability. As with individual and corporate debt, government debt becomes more risky as a government’s debt burden increases. While intuitively obvious, this truth too often gets lost among the multitude of criteria listed by rating agencies and within the politically charged fiscal policy debate.

In addition to emphasizing the importance of debt sustainability, Reinhart and Rogoff showed the virtues of considering a longer history of sovereign debt crises. As they state in their preface: “Above all, our emphasis is on looking at long spans of history to catch sight of ’rare’ events that are all too often forgotten, although they turn out to be far more common and similar than people seem to think. Indeed, analysts, policy makers, and even academic economists have an unfortunate tendency to view recent experience through the narrow window opened by standard data sets, typically based on a narrow range of experience in terms of countries and time periods. A large fraction of the academic and policy literature on debt and default draws conclusions on data collected since 1980, in no small part because such data are the most readily accessible. This approach would be fine except for the fact that financial crises have much longer cycles, and a data set that covers twenty-five years simply cannot give one an adequate perspective…”

Reinhart and Rogoff greatly advanced what had been an innumerate conversation about public debt by compiling, analyzing and promulgating a database containing a long time series of sovereign data. Their metric for analyzing debt sustainability – the ratio of general government debt to GDP – has now become a central focus of analysis. We see this as a mixed blessing.
While the general government debt to GDP ratio properly relates sovereign debt to the ability of the underlying economy to support it, the metric has three important limitations.

First, the use of a general government indicator can be misleading. General government debt refers to the aggregate borrowing of the sovereign and the country’s state, provincial and local governments. If a highly indebted local government – like Jefferson County, Alabama, USA – can default without being bailed out by the central government, it is hard to see why that local issuer’s debt should be included in the numerator of a sovereign risk metric. A counter to this argument is that the United States is almost unique in that it doesn’t guarantee sub-sovereign debts. But clearly neither the rating agencies nor the market believe that such guarantees are ironclad: otherwise all sub-sovereign debt would carry the sovereign rating and there would be no spread between sovereign and sub-sovereign bonds, other than perhaps a small differential to accommodate liquidity concerns and transaction costs.

Second, governments vary in their ability to harvest tax revenue from their economic base. For example, the Greek and US governments are less capable of realizing revenue from a given amount of economic activity than a Scandinavian sovereign. Widespread tax evasion (as in Greece) or political barriers to tax increases (as in the US) can limit a government’s ability to raise revenue. Thus, government revenue may be a better metric than GDP for gauging a sovereign’s ability to service its debt.

Finally, the stock of debt is not the best measure of its burden. Countries that face comparatively low interest rates can sustain higher levels of debt. For example, the United Kingdom avoided default despite a debt/GDP ratio of roughly 250% at the end of World War II. The amount of interest a sovereign must pay on its debt each year may thus be a better indicator of debt burden.
Our new database attempts to address these concerns by layering central government revenue, expenditure and interest data on top of the statistics Reinhart and Rogoff previously published.
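The contrast between the two metrics can be shown with a small numeric example. The countries and all figures below are entirely hypothetical; the point is only that a debt/GDP ranking and an interest/revenue ranking can disagree, as in the UK example above.

```python
# Illustrative only: hypothetical figures showing how the two sovereign
# risk metrics can rank countries differently.

def debt_to_gdp(debt, gdp):
    """Stock measure: gross debt relative to the size of the economy."""
    return debt / gdp

def interest_to_revenue(interest, revenue):
    """Flow measure: annual interest cost relative to government revenue."""
    return interest / revenue

# Country A: large debt stock, but very low rates and strong tax collection.
a = {"debt": 2500, "gdp": 1000, "interest": 25, "revenue": 450}
# Country B: smaller debt stock, but high rates and weak revenue collection.
b = {"debt": 900, "gdp": 1000, "interest": 63, "revenue": 280}

ratio_a = (debt_to_gdp(a["debt"], a["gdp"]),            # 2.5, i.e. 250% of GDP
           interest_to_revenue(a["interest"], a["revenue"]))  # about 5.6% of revenue
ratio_b = (debt_to_gdp(b["debt"], b["gdp"]),            # 0.9, i.e. 90% of GDP
           interest_to_revenue(b["interest"], b["revenue"]))  # 22.5% of revenue
```

Country A looks far riskier on debt/GDP, yet Country B devotes four times the share of its revenue to interest, which is the kind of burden the database is designed to make visible.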

A Public Resource Requiring Public Input

Unlike many financial data sets, this compilation is being offered free of charge and without a registration requirement. It is offered in the hope that it, too, will advance our understanding of sovereign credit risk. The database contains a large number of data points and we have made efforts to quality-control the information. That said, there are substantial gaps, inconsistencies and inaccuracies in the data we are publishing. Our goal in releasing the database is to encourage a mass collaboration process directed at enhancing the information. Just as Wikipedia articles asymptotically approach perfection through participation by the crowd, we hope that this database can be cleansed by its user community. There are tens of thousands of economists, historians, fiscal researchers and concerned citizens around the world who are capable of improving this data, and we hope that they will find us.

To encourage participation, we have added Wiki-style capabilities to the user interface. Users who wish to make changes can log in with an OpenID and edit individual data points. They can also enter comments to explain their changes. User changes are stored in an audit trail, which moderators will periodically review, accepting only those that can be verified while rolling back others. This design leverages the trigger functionality of MySQL to build a database audit trail that moderators can view and edit. We have thus married the collaborative strengths of a Wiki to the structure of a relational database. Maintaining a consistent structure is crucial for a dataset like this because it must ultimately be analyzed by a statistical tool such as R.

The unique approach to editing database fields Wiki-style was developed by my colleague, Vadim Ivlev. Vadim will contribute the underlying Python, JavaScript and MySQL code to a public GitHub repository in a few days.
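A trigger-based audit trail of this kind can be sketched in a few lines. The sketch below is not the project's code: it substitutes SQLite for MySQL so the example is self-contained, and the table and column names are hypothetical. The idea is the same, though: a trigger records every edit automatically, so moderators can later accept a change or roll the row back to its previous value.

```python
import sqlite3

# Illustrative sketch only: the project uses MySQL triggers; SQLite stands
# in here so the example runs anywhere. Table/column names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sovereign_data (
    country TEXT, year INTEGER, revenue REAL,
    PRIMARY KEY (country, year)
);
CREATE TABLE audit_trail (
    country TEXT, year INTEGER,
    old_revenue REAL, new_revenue REAL,
    changed_at TEXT DEFAULT CURRENT_TIMESTAMP
);
-- Every edit is logged automatically; a moderator can later accept it
-- or restore old_revenue.
CREATE TRIGGER log_revenue_update
AFTER UPDATE OF revenue ON sovereign_data
BEGIN
    INSERT INTO audit_trail (country, year, old_revenue, new_revenue)
    VALUES (OLD.country, OLD.year, OLD.revenue, NEW.revenue);
END;
""")

# A user edits one data point; the trigger fires behind the scenes.
conn.execute("INSERT INTO sovereign_data VALUES ('Ruritania', 1910, 100.0)")
conn.execute("UPDATE sovereign_data SET revenue = 105.0 "
             "WHERE country = 'Ruritania' AND year = 1910")
rows = conn.execute(
    "SELECT old_revenue, new_revenue FROM audit_trail").fetchall()
```

Because the logging lives in the database rather than the application, every edit path (web UI, bulk load, manual fix) leaves the same trail, which is what makes moderated rollback workable.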

Implications for Sovereign Ratings

Once the dataset reaches an acceptable quality level, it can be used to support logit or probit analysis of sovereign defaults. Our belief – based on case study evidence at the sovereign level and statistical modeling of US sub-sovereigns – is that the ratio of interest expense to revenue and annual revenue change are statistically significant predictors of default. We await confirmation or refutation of this thesis from the data set. If statistically significant indicators are found, it will be possible to build a predictive model of sovereign default that could be hosted by our partners at Wikirating. The result, we hope, will be a credible, transparent and collaborative alternative to the credit ratings status quo.
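The kind of logit analysis described above can be sketched as follows. Everything here is synthetic and illustrative: the data generator, the coefficient values and the single interest-to-revenue predictor are made up, and a real analysis would fit the cleaned database with a statistical package (R, statsmodels, and so on) rather than hand-rolled gradient ascent.

```python
import math
import random

# Illustrative sketch only: a toy logit of sovereign default on the
# interest-to-revenue ratio, fit to synthetic data.
random.seed(0)

def simulate(n=500, a_true=-4.0, b_true=15.0):
    """Synthetic country-years in which the true probability of default
    rises with the interest-to-revenue ratio."""
    data = []
    for _ in range(n):
        x = random.random() * 0.5  # interest/revenue drawn from [0, 0.5]
        p = 1 / (1 + math.exp(-(a_true + b_true * x)))
        data.append((x, 1 if random.random() < p else 0))
    return data

def fit_logit(data, steps=2000, lr=0.1):
    """Maximum-likelihood logit fit via plain gradient ascent on the
    log-likelihood (intercept a, slope b)."""
    a = b = 0.0
    n = len(data)
    for _ in range(steps):
        ga = gb = 0.0
        for x, y in data:
            p = 1 / (1 + math.exp(-(a + b * x)))
            ga += (y - p) / n
            gb += (y - p) * x / n
        a += lr * ga
        b += lr * gb
    return a, b

a_hat, b_hat = fit_logit(simulate())
# A positive fitted slope b_hat means a higher interest-to-revenue ratio
# implies a higher estimated default probability.
```

If the real data yield a significantly positive coefficient of this kind, that is exactly the "statistically significant indicator" a predictive model hosted at Wikirating would be built around.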

Sources and Acknowledgements

Aside from the data set provided by Reinhart and Rogoff, we also relied heavily upon the Center for Financial Stability’s Historical Financial Statistics. The goal of HFS is “to be a source of comprehensive, authoritative, easy-to-use macroeconomic data stretching back several centuries.” This ambitious effort includes data on exchange rates, prices, interest rates, national income accounts and population in addition to government finance statistics. Kurt Schuler, the project leader for HFS, generously offered numerous suggestions about data sources as well as connections to other researchers who gave us advice. Other key international data sources used in compiling the database were:
  • International Monetary Fund’s Government Finance Statistics
  • Eurostat
  • UN Statistical Yearbook
  • League of Nations’ Statistical Yearbook
  • B. R. Mitchell’s International Historical Statistics, Various Editions, London: Palgrave Macmillan.
  • Almanach de Gotha
  • The Statesman’s Year Book
  • Corporation of Foreign Bondholders Annual Reports
  • Statistical Abstract for the Principal and Other Foreign Countries
  • For several countries, we were able to obtain nation-specific time series from finance ministry or national statistical service websites.
We would also like to thank Dr. John Gerring of Boston University and Co-Director of the CLIO World Tables project, for sharing data and providing further leads, as well as Dr. Joshua Greene, author of Public Finance: An International Perspective, for alerting us to the IMF Library in Washington, DC. A number of researchers and developers played valuable roles in compiling the data and placing it online. We would especially like to thank Charles Tian, T. Wayne Pugh, Amir Muhammed, Anshul Gupta and Vadim Ivlev, as well as Karthick Palaniappan and his colleagues at H-Garb Informatix in Chennai, India for their contributions. Finally, we would like to thank the National University of Singapore’s Risk Management Institute for the generous grant that made this work possible.