
Why we need public models for public policy

- May 6, 2020 in COVID-19, Open Knowledge Foundation

Corona Time has brought the eerie experience of becoming like a pawn on a chess board. Suddenly where we go, what we do, what we wear, who we meet and why are decided by somebody else. Here in Berlin, our youngest son now goes to school on Tuesday and Thursday mornings while his best friend goes on Mondays and Thursdays. As of this week. Maybe next week it will open up more. Maybe it won't. Meanwhile our three other children, in different schools, are off until August. These decisions need to be dynamic and reactive. We know who makes them, more or less: our leaders are all still in place since the last election they won. We just don't know what the rules are. The only honest campaign promise if an election were held now, anywhere in the world, would be: "We'll play it by ear."

In a recent Open Knowledge Foundation/Survation poll, 97% of people agreed that it was important to them for COVID-19 data to be openly available.

But there are – or for goodness' sake should be! – systems suggesting those rules to manage the crisis. And because of the incredible complexity, they are being driven by algorithms in models. In the case of lockdown policy, how much to open up is a function of the interaction between many different variables: how many people have been infected so far, the current transmission rate, the properties of the virus itself, estimates of compliance with the various social distancing options, even the starting position of how a given population is laid out on the chess board in the first place. And there are plenty of other decisions being driven by models. Who gets the ventilator? What invasions of privacy are justified at any point in time to enforce social distancing? How many people should be tested?

So where are these models? And why can't we see them? Since democracy has been suspended, along with normal economic life, the models are left to rule. The only way to snatch back even a modicum of the scrutiny we have lost is to publish the models online. For three reasons: to make sure that the models, which are triggering life and death decisions, are sufficiently stress tested; to check that bad stuff isn't slipping in through the back door, so we don't end up with a slate of mass surveillance measures spuriously justified as saving lives; and to ensure that the models are even being used consistently.

To take this last point first. It has been clear so far that many leaders are "modelling illiterate". The UK government lurched from a barely articulated idea of herd immunity into stringent lockdown in late March. But is it now in danger of overkill in the other direction, keeping a general lockdown going too long? Nobody knows. Debates around policy still largely lack nuance, assuming static positions (it is even hard to avoid the suspicion that identity politics plays a role – "What's all the hysterical overreaction?" versus "How come some people can't see how serious this is?"). Whereas the reality is that policy is going to continue to be driven by equations – what is today's estimate of the number of infections, of beds available, and so on.

In the case of the UK, it has been widely reported that the change of course was driven by the modelling of Professor Neil Ferguson at Imperial College London. At least some other scientists, notably Nobel prize winner Michael Levitt, have challenged the assumptions going into that model, arguing that the spread of COVID-19 is not exponential but "sub-exponential" after an initial phase, regardless of any policy intervention. But we can't know who's right, or even whether the government drew the right conclusions from the model, because the version of the model used to drive that decision is not accessible. They might be driving blind.

It's not as though all of us are about to download the model, spend hours inspecting it, and list its weak points. That's not the way transparency works. But imagine: the government announced which model it was using, explained why it drew the conclusions it did, and published the model. Professor Levitt, and a few dozen others, could then beat it up, as scientists do, and offer feedback and improvements to policy makers – in real time. There is a community of scientists able to form an informed view of the dispute between Ferguson and Levitt, updated with new data day by day, and to articulate that view to the media.
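To make concrete how much that disputed growth assumption matters, here is a deliberately crude sketch – not any published epidemiological model, and with every number invented for illustration – comparing a fixed daily growth rate with a growth rate that decays over time, the kind of "sub-exponential" pattern Levitt describes.

```python
# Illustrative only: a toy comparison of exponential versus "sub-exponential"
# case growth. All numbers are invented assumptions, not outputs of any
# published epidemiological model.

def exponential(cases0, daily_growth, days):
    """Cases grow by a fixed percentage every day."""
    series = [cases0]
    for _ in range(days):
        series.append(series[-1] * (1 + daily_growth))
    return series

def sub_exponential(cases0, daily_growth, damping, days):
    """The daily growth rate itself decays each day, regardless of policy."""
    series = [cases0]
    rate = daily_growth
    for _ in range(days):
        series.append(series[-1] * (1 + rate))
        rate *= (1 - damping)  # growth slows as the rate is damped
    return series

if __name__ == "__main__":
    days = 30
    exp_cases = exponential(cases0=100, daily_growth=0.25, days=days)
    sub_cases = sub_exponential(cases0=100, daily_growth=0.25, damping=0.08, days=days)
    print(f"After {days} days: exponential ~{exp_cases[-1]:,.0f} cases, "
          f"sub-exponential ~{sub_cases[-1]:,.0f} cases")
```

Even this toy version diverges by well over an order of magnitude within a month, which is exactly why the assumptions feeding the real models need to be open to scrutiny rather than taken on trust.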
In the absence of parliament, that kind of open scrutiny is the nearest we're going to get to accountability.

And then we have encroachment. The Open Knowledge Foundation's new Justice Programme has already made great strides in defining algorithmic accountability – how the rules in models need to be held to democratic account. In some places in the United States, for example, rules have been introduced to give patients access to emergency medical care according to how many years of life they are expected to live, should they survive. Which sounds reasonable enough – until you consider that poverty has a big impact on medical history, which in turn drives life expectancy. So, in fact, the algorithm ends up picking more affluent patients and leaving the poor to die. Or take the Taiwanese corporation that is introducing cameras at every work station in all its factories – right now, it says, to catch workers who infringe social distancing rules. But who knows?

The coronavirus is dramatic. But in fact it is just one example of a much broader, deeper trend. Although computational modelling has been around for decades – its first significant implementations were in World War Two, to break German military codes and build the nuclear bomb – it has picked up extraordinary pace in the last five to ten years, driven by cheap processing power, big data and other factors. Massive decisions are now being made in every aspect of public life, driven by models we never see and whose rules nobody understands. The only way to re-establish democratic equilibrium is for the models themselves to be published. If we're going to be moved around like pieces on a chess board, we at least need to see what the rules of the game are – and whether the people moving us round the board even understand them.

Johnny West is director of OpenOil, a Berlin-based consultancy which uses open data and methodologies to build investment-grade financial and commercial analysis of natural resource assets for governments and societies. He sits on the Advisory Board of FAST, the only open source financial modelling standard, and is an alumnus of the Shuttleworth Foundation Fellowship. He is also a member of the Open Knowledge Foundation's Board.

The next target user group for the open data movement is governments

- September 18, 2018 in Open Data, Open Government Data

Here's an open data story that might sound a bit counterintuitive. Last month a multinational company was negotiating with an African government to buy an asset. The company, which already owned some of the asset but wanted to increase its stake, said the extra part was worth $6 million. The government's advisers said it was worth at least three times that. The company disputed that. The two sides met for two days and traded arguments, sometimes with raised voices, but the meeting broke up inconclusively.

A week later, half way round the world, the company's headquarters issued a new investor presentation. Like all publicly listed companies, its release was filed with the appropriate stock market regulator and sent out by email newsletter – where a government adviser picked it up. In it, the company advertised to investors how it had increased the value of its asset – the asset discussed in Africa – by four times in 18 months, and gave a current valuation. Their own valuation, it turned out, was indeed roughly three times the $6 million the company had told the government it was worth. Probably the negotiators were not in touch with the investor relations people. But the end result was that the company had blown its negotiating position because, in effect, as a whole institution, it didn't think a small African government could understand disclosure practice on an international stock market, subscribe to newsletters, or read its website.

The moral of the story is: we need to expand the way we think about governments and open data. In the existing paradigm, governments are seen as the targets of advocacy campaigns – to release data they hold for the public good, and to enact legislation which binds themselves, and others, to release. Civil society hunts for internal champions within government, international initiatives (EITI, OGP etc) seek to bind governments in to emergent best practice, and investigative journalists and whistleblowers highlight the need for better information through dramatic cases of all the stuff that goes wrong and is covered up. And all of that is as it should be.

But what we see regularly in our work at OpenOil is that there is also huge potential to engage government – at all levels – as users of open data. Officials in senior positions are sitting day after day, month after month, trying to make difficult decisions under the false impression that they have little or no data. Often they don't have a clear understanding of, or access to, data produced by other parts of their own government, and they are unaware of the host of broader datasets and systems. Initiatives like EITI, which were founded to serve the public interest in data around natural resources, have found a new and receptive audience in various government departments seeking to get a joined-up view of their own data.

And imagine how it might affect their interaction with advocacy campaigns if governments were regular and systematic users of open data and knowledge systems. Suddenly, this would not be a one-way street – governments would be getting something out of open data, not just responding to what, from their perspective, often seems like the incessant demands of activists. It could become more of a mutual backscratching dynamic.

There is a paradox at the heart of much government thinking about information.
In institutions with secretive cultures, there can be a weird ellipsis of the mind in which information that is secret must be important, and information that is open must be, by definition, worthless. Working on commercial analysis of assets managed by governments, we often find senior officials who believe they can't make any progress because their commercial partners, the multinationals, hold all the data and don't release it. While it is true that there is a stark asymmetry of information, we have half a dozen cases where the questions the government needed to answer could be addressed with data downloadable from the internet. You have to know where to look, of course. But it's not rocket science.

In one case, a finance ministry official had all the government's "secret" data sitting on his laptop. We decided to model a major mining project using public statements by the company anyway, because the permissions needed from multiple departments to show the data to anyone else, let alone incorporate it in a model which might be published, would take months or years. Of course, reliance on open data is likely to leave gaps and involves careful questions of interpretation. But our experience is that these have never been deal breakers – we have never had to abandon an analytical project because we couldn't achieve good enough results with public data. The test of any analytical project is not "is it perfect?" but "does it take us on from where we are now, and can we comfortably state what we think the margins of error are?" (a toy sketch of that test follows at the end of this post).

The potential is not confined to the Global South. Government at all levels and in all parts of the world could benefit greatly from more strategic use of open data. And it is in the interest of the open data movement to help them.
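As a purely illustrative example of that test – every figure below is invented, and this is not any model OpenOil has published – here is what carrying low, central and high assumptions from public disclosures through to a stated range of values might look like for a hypothetical mining project.

```python
# A minimal sketch of valuing a hypothetical mining project from public figures
# only, carrying low / central / high assumptions through to a stated range.
# Every number here is invented for illustration.

def npv(cashflows, discount_rate):
    """Net present value of a list of annual cash flows (year 0 first)."""
    return sum(cf / (1 + discount_rate) ** year for year, cf in enumerate(cashflows))

def project_cashflows(annual_output_tonnes, price_per_tonne, opex_per_tonne,
                      capex, life_years):
    """Very crude project: one upfront capex payment, then flat annual margins."""
    margin = (price_per_tonne - opex_per_tonne) * annual_output_tonnes
    return [-capex] + [margin] * life_years

if __name__ == "__main__":
    scenarios = {                      # assumed (price, opex) ranges in USD/tonne
        "low":     (55.0, 38.0),
        "central": (65.0, 34.0),
        "high":    (75.0, 30.0),
    }
    for name, (price, opex) in scenarios.items():
        cfs = project_cashflows(annual_output_tonnes=2_000_000,
                                price_per_tonne=price,
                                opex_per_tonne=opex,
                                capex=250_000_000,
                                life_years=10)
        print(f"{name:>7}: NPV @10% = ${npv(cfs, 0.10)/1e6:,.0f}m")
```

The point is not the particular numbers but the shape of the answer: a central estimate plus an explicit range, which is usually enough to move a negotiation or a policy debate on from where it was.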

Open model of an oil contract

- October 22, 2013 in External Projects, Featured, Open Data, Open Economics

Please come and kick the tires of our open model of an oil contract! In the next month or so, OpenOil and its partners will publish what we believe will be the first financial model of an oil contract under a Creative Commons license. We would like to take this opportunity to invite the Open Economics community to come and kick the wheels on the model when it is ready, and help us improve it.

We need you because we expect a fair degree of heat from those with a financial or reputational stake in continued secrecy around these industries. We expect the brunt of attacks to be on the basis that we are wrong. And of course we will be wrong in some way. It's inevitable. So we would like our defence to be not "no, we're never wrong", but "yes, sometimes we are wrong, but transparently so and for the right reasons – and look, here are a bunch of friends who have already pointed out these errors, which have been corrected. You've got some specific critiques? Come and give them. But the price of criticism is improvement – the open source way!" We figure Open Economics is the perfect network to seek that constructive criticism.

Ultimately, we want to grow an open source community which will help build a systematic understanding of the economics of the oil and gas industry, independent of investor or government stakes, since the public policy impact of these industries and the relevant financial flows are too vital to be left to industry specialists. There are perhaps 50 countries in the world where such models could transform public understanding of industries which dominate the political economy. The model itself is still being fine-tuned, but I'd like to take this chance to throw out a few heuristics that have occurred to us in the process of building it.

Public interest modelling. The model is being built by professionals with industry experience, but its primary purpose is to inform public policy, not to aid investment decisions or serve as negotiation support for either governments or companies. This has determined a distinct approach to key issues such as management of complexity and what counts as an acceptable margin of error.

Management of complexity. Although there are several dozen variables one could model, and which typically appear in the models produced for companies, we deliberately exclude a long tail of fiscal terms, such as ground rent and signature bonuses, on the basis that the gain in reduced margin of error is less than the loss from increased complexity for the end user. We also exclude many of the fine-tuning implementations of the taxation system. We list these terms in a sheet so that those who wish can extend the model with them. It would be great, for example, to get tax geek help on refining some of these issues.

A hierarchy of margins of error. Extractives projects can typically last 25 years. The biggest single margin of error is not within human power to solve – the future price. All other uncertainties or estimates pale in comparison with its impact on returns to all stakeholders. Second are the capex and opex going into a project. The international oil company may be the only real source of these data, and may or may not share them in disaggregated form with the government – everyone else is in the dark. For public interest purposes, the margin of error created by all other fiscal terms and input assumptions combined is less significant, and manageable.
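To illustrate that hierarchy – and to be clear, this is a toy sketch with invented numbers, not the OpenOil model itself – here is a one-at-a-time sensitivity check on a crude hypothetical oil project, swinging each input over a plausible range.

```python
# Illustrative sensitivity check: how much does each input move the value of a
# toy oil project? All numbers are invented; this is not the OpenOil model,
# just a sketch of why price dwarfs the "long tail" of minor fiscal terms.

def project_npv(price, capex, opex_per_bbl, royalty_rate, signature_bonus,
                barrels_per_year=10_000_000, life_years=15, discount=0.10):
    """Crude contractor NPV: upfront capex plus bonus, then flat annual cash flow."""
    annual = barrels_per_year * (price * (1 - royalty_rate) - opex_per_bbl)
    flows = [-(capex + signature_bonus)] + [annual] * life_years
    return sum(cf / (1 + discount) ** t for t, cf in enumerate(flows))

if __name__ == "__main__":
    base = dict(price=60.0, capex=1_500_000_000, opex_per_bbl=18.0,
                royalty_rate=0.10, signature_bonus=20_000_000)
    base_npv = project_npv(**base)

    # Swing each input over a plausible range, one at a time.
    swings = {
        "price":           (40.0, 80.0),
        "capex":           (1_200_000_000, 1_800_000_000),
        "opex_per_bbl":    (15.0, 21.0),
        "signature_bonus": (0, 40_000_000),
    }
    for name, (lo, hi) in swings.items():
        low = project_npv(**{**base, name: lo})
        high = project_npv(**{**base, name: hi})
        print(f"{name:>15}: NPV swings by ${abs(high - low)/1e9:,.2f}bn "
              f"(base ${base_npv/1e9:,.2f}bn)")
```

In this toy case the price swing moves the project's value by billions of dollars, while a plausible range for something like a signature bonus moves it by a few tens of millions – which is the rationale for leaving the long tail of minor terms out of the core model.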
Moving away from the zero-sum paradigm. Because modelling has traditionally been associated with the negotiation process, and perhaps because of the wider context surrounding extractive industries, a zero-sum paradigm often predominates in public thinking around the terms of these contracts. But the model shows graphically two distinct ways in which that paradigm does not apply. First, in agreements with sufficient progressivity, rising commodity prices can mean a simultaneous rise in both the government take and the company's Internal Rate of Return (a toy illustration follows at the end of this post). Second, a major issue for governments and societies depending on oil production is volatility – the difference between using minimal and maximal assumptions across all of the inputs is likely to produce a radically different result. One of a country's biggest challenges, then, is focusing enough attention on regulating itself: its politicians' appetite for spending, its public's appetite for patronage. We know this of course in the real world. Iraq received $37 billion in 2007, then $62 billion in 2008, then $43 billion or so in 2009. But it is the old journalistic difference between show and tell. A model can show this in your country, under your conditions.

The value of contract transparency. Last only because it is self-evident: primary extractives contracts between states and companies need to enter the public domain. Only around seven jurisdictions around the world publish all contracts so far, but the practice is gaining traction as a norm in the governance community. The side-effects of the way extractive industries are managed now are almost all due to the ill-understood nature of rent. Even corruption, the hottest issue politically, may often simply be a secondary effect of the rent-based nature of the core activities. Publishing all contracts is the single biggest measure that would get us closer to being able to address the root causes of the Resource Curse.

See http://openoil.net/ for more details.
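And as a postscript on the zero-sum point above: here is a toy progressive fiscal regime – invented terms and numbers, not any real contract or the forthcoming OpenOil model – showing how a sliding-scale profit share can let both the government's take and the company's internal rate of return rise as the price rises.

```python
# A toy progressive fiscal regime: with a sliding-scale government share, a
# higher oil price can raise both the government take and the company IRR.
# All terms and numbers are invented for illustration only.

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-6):
    """Internal rate of return by bisection on the NPV sign change."""
    def npv(rate):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(mid) > 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

def split_project(price, barrels_per_year=10_000_000, opex=18.0,
                  capex=1_500_000_000, life_years=15):
    """Sliding-scale split: the state's share of profit rises with the price."""
    state_share = min(0.40 + 0.004 * max(price - 40.0, 0.0), 0.70)
    annual_profit = barrels_per_year * (price - opex)
    company_flows = [-capex] + [annual_profit * (1 - state_share)] * life_years
    # With a flat annual split, the government take of total profit equals the share.
    return state_share, irr(company_flows)

if __name__ == "__main__":
    for price in (50, 70, 90, 110):
        take, company_irr = split_project(price)
        print(f"price ${price}: government take {take:.0%}, "
              f"company IRR {company_irr:.1%}")
```

The mechanism is simply that the profit pool grows fast enough that even a shrinking company percentage is a larger absolute cash flow, so both parties do better at higher prices – the opposite of the zero-sum intuition.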
