
Ubernomics: Platform Monopolies & How to Fix Them

- September 6, 2018 in Open Economics, Open/Closed

First version: Dec 2016, updated Feb 2018. This post is a summary of the full article at http://rufuspollock.com/ubernomics.

Around the world, countries have struggled to work out how to deal with Uber, AirBnB and their like. Are these new apps something to be welcomed or something to be stopped? How we treat Uber-like companies depends largely on how we see and understand them.

There is, in fact, nothing especially digital about the underlying economics of Uber’s business model. Uber’s economics are very similar to those of businesses that have been with us for hundreds or even thousands of years: marketplaces / platforms. Because of the positive feedback between buyers and sellers, the initial free competition between marketplaces tends towards “one” – one dominant marketplace. And, hence, to monopoly if that one marketplace is exclusively owned and controlled.

Monopoly is not inevitable though: we can have open platforms/marketplaces – just as we have free and open markets. To do this requires us to open the digital infrastructure and, where appropriate, to open the order book too.

Note: we’ll use the terms platform and marketplace interchangeably. Today, “platform” is the more common term in the digital world and in economics (“two-sided platforms” etc). However, here we often prefer “marketplace” because it connects this with something both familiar and ancient.

Market Day in Stockport in the 1910s.

1. Uber-like companies are marketplace or platform companies

Stripped to their essence, Uber, AirBnB and the like resemble a very old economic structure: the marketplace. Marketplaces are where buyers and sellers come together to exchange. On Uber this is riders and drivers; on AirBnB, owners and renters; and so on. Marketplaces have existed for thousands of years, practically since civilization first began. More broadly, we have platforms. This includes companies like Facebook, Google, Microsoft and eBay. Facebook is a platform mediating between users – and advertisers. Google is a platform mediating between users and content – and advertisers. Microsoft’s Windows is an operating system platform mediating between apps and users. Strictly, platforms are broader than marketplaces. For example, an operating system or social network is a platform but not, strictly, a marketplace. However, many of the same ideas apply and so the distinction does not matter much here.

2. Platforms tend to one because of positive feedback between buyers and sellers

Like a snowball down a mountain, marketplaces, once past a critical size, have the potential to grow rapidly thanks to positive feedback where buyers and sellers both value size because it offers:
  • liquidity: you will be able to trade e.g. book a taxi, rent an apartment etc
  • diversity: they have the product you want e.g. this particular fish is available, that stock has a market maker, there is a taxi in your area (not just central London)
Furthermore, buyers and sellers usually don’t want to have to participate in lots of different marketplaces (“multi-homing” is a pain). Combined with the positive feedback effects, this creates a strong pressure for there to be just one marketplace.
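To see how this positive feedback plays out, here is a minimal simulation sketch in Python. All parameters are illustrative assumptions, not estimates from any real marketplace: two marketplaces start at roughly equal size, each period participants drift toward the larger one, and almost every run ends with one marketplace taking the whole market.

```python
import random

def simulate(periods=200, pull=0.05, noise=0.02, seed=1):
    """Toy model: two marketplaces compete; participants drift toward
    the larger one (positive feedback between buyers and sellers)."""
    random.seed(seed)
    share_a = 0.5  # marketplace A's share of all participants
    for _ in range(periods):
        drift = pull * (share_a - 0.5) + random.gauss(0, noise)
        share_a = min(1.0, max(0.0, share_a + drift))
    return share_a

final_shares = [round(simulate(seed=s), 2) for s in range(10)]
print(final_shares)
# Typically almost every run ends near 0.0 or 1.0, not 0.5:
# "marketplaces tend to one".
```

The 50/50 split is an unstable equilibrium: any small lead compounds, which is the snowball in miniature.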

3. Marketplaces tend to monopoly (unless made open)

Because of these snowball economics, over time you converge on just one marketplace – “marketplaces tend to one”. You don’t have ten fish markets in a town, you have one. You don’t have fifty stock exchanges, you have one. The question then is: is that marketplace “closed”, exclusively owned and controlled by one entity? If so it becomes a monopoly. Or is it an “open” marketplace where anyone can participate on fair and equitable terms? Note: there can be substantial competition to become the monopolist. There may also be some competition between regional monopolies when there is enough geographic or preference diversity: if you live in Scotland you won’t go to London to buy your fish, so several local fish markets can exist (with limited competition between them at the fringes).

4. These marketplaces are not “contestable”

It is not easy to build a new marketplace and compete against the old one. Why? Because buyers are numerous and independent, and coordination between them is very hard. The same is true for sellers (though to a lesser extent, because sellers are usually less numerous and diverse than buyers – fifty fishmongers at a market might supply thousands of fish-buyers). This makes coordinated action – such as switching to a different competing market – very hard: as a buyer I don’t want to head over to the new fish market only to discover all the fish sellers are still at the old marketplace. Similarly, no fish-seller wants to risk moving their stall to the new marketplace until they know the buyers will all be there – it’s a chicken-and-egg problem with thousands of chickens and eggs who all need to act simultaneously!
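The chicken-and-egg problem can be sketched the same way. In this toy threshold model (the thresholds are invented for illustration), each trader moves to a new marketplace only once enough others have already moved, so an entrant that everyone might prefer still attracts no one without a coordinated critical mass.

```python
def switching_cascade(thresholds, initial_switchers):
    """Each trader moves to the new marketplace only once the number
    already there meets their personal threshold; iterate to a fixed point."""
    switched = initial_switchers
    while True:
        willing = sum(1 for t in thresholds if t <= switched)
        if willing == switched:
            return switched
        switched = willing

# 1,000 traders who each want to see 300 others there before moving.
thresholds = [300] * 1000
print(switching_cascade(thresholds, initial_switchers=0))    # 0 -- no one moves
print(switching_cascade(thresholds, initial_switchers=300))  # 1000 -- tips at once
```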

5. Thus, the monopoly marketplace owner has a lot of power

Thus, the owner of a marketplace has a lot of power – once the marketplace is established. At an early stage, marketplace industries will often be highly dynamic and competitive as firms fight to get critical mass and dominate the market. This can mislead policy-makers into believing the market is competitive, which in turn prevents them from acting at the crucial early stage when it would be relatively easy to put in place long-term pro-competitive policies (e.g. establishing a neutral exchange, or regulated marketplace access rates).

6. That power is inevitably abused to the detriment of buyers and sellers

When an organization has a lot of power it will use it to its advantage. In the case of the marketplace, the obvious thing is for the owner to start aggressively charging the users of the marketplace for access. Depending on exactly how the marketplace works, it can charge buyers, sellers or both. Often charging sellers is preferred because they are easier to identify, contract with and track (and each seller has a larger and better sense of the value of the marketplace). Thus, it is Uber’s drivers who get charged the 20-25% fee by Uber. This fee, of course, gets passed on to consumers, but they don’t directly see it: on a $10 fare with a 25% fee, the rider pays $10 and the driver keeps only $7.50. Note: a side benefit of charging sellers is that it makes the fees largely hidden to buyers, which is good both for PR and politically: if buyers got upset they might start pushing politicians to regulate the marketplace. For example, most people think that Google is just wonderful because it provides them with a valuable service for “free”. They don’t see, of course, that they do pay – just indirectly, through the sellers (advertisers and content providers) who have to pay Google or supply Google with free content.

7. The solution is to open the marketplace

The solution to marketplace monopolies is to make the marketplace open: accessible to all buyers and sellers on equitable and non-discriminatory terms. This involves two parts:
  • Opening the software, protocols and non-personal data that power the marketplace.
  • Universal, equitable access to the order book database with pricing set to cover the cost of maintenance. Preferably this would involve the order book being run and managed by an independent third-party with governance in place to ensure a transparent and equitable pricing and access policy.
It is worth emphasizing that competition between proprietary, closed marketplaces is not sufficient. Openness is essential.
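As a concrete (and entirely hypothetical) sketch of what an open order book could look like, the snippet below models a neutral book that any registered provider can write to and read from on identical terms, with a flat fee set only to recover running costs. The class names and fee figure are invented for illustration, not a description of any existing system.

```python
from dataclasses import dataclass, field

@dataclass
class Order:
    provider: str   # any registered provider may submit orders
    side: str       # "buy" or "sell"
    price: float
    quantity: int

@dataclass
class OpenOrderBook:
    """Hypothetical neutral order book: identical, non-discriminatory
    terms for every provider, with a flat cost-recovery fee."""
    cost_recovery_fee: float = 0.001  # illustrative flat fee per order
    orders: list = field(default_factory=list)

    def submit(self, order: Order) -> float:
        # Same validation and the same fee for everyone: no preferred access.
        if order.side not in ("buy", "sell"):
            raise ValueError("side must be 'buy' or 'sell'")
        self.orders.append(order)
        return self.cost_recovery_fee

    def snapshot(self) -> list:
        # Every provider sees the same, full book.
        return list(self.orders)

book = OpenOrderBook()
book.submit(Order("ride-coop", "sell", price=7.5, quantity=1))
book.submit(Order("big-app", "buy", price=8.0, quantity=1))
print(len(book.snapshot()))  # 2: rival providers trade in one shared book
```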

8. Remuneration rights can pay for open

It costs money to create the software, protocols and (non-personal) data that power a marketplace. Traditionally, entrepreneurs and investors fund the creation of these in the hope of becoming a marketplace monopolist and getting rich. Without the monopoly, why would they invest? One option would be farsighted funding by the state – as with the Internet. However, this is problematic: how can the state know exactly which entrepreneurs should be backed with what ideas? Instead, we can use remuneration rights. These provide a free-market-like but open-compatible way to fund innovators. In essence, remuneration rights combine a common subscription payment from citizens, organized by the government, with a market-based payment of those monies to innovators based on whose innovations get used. You can read more about these ideas in my book The Open Revolution. Finally, actually running the platform itself costs money even if the protocols and software are free and open to use – you still need data centers and sysadmins to keep the servers running. With the software and protocols being free, service providers can compete freely and users will have a choice of who they use – just as we choose today between different “Internet Service Providers” who operate the Internet and provide us with access to it.
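The payment mechanics of remuneration rights are easy to sketch. The function below is a minimal illustration with invented figures: a common pool, collected via subscription, is divided among innovators in proportion to measured usage of their (open) innovations.

```python
def allocate_remuneration(pool: float, usage: dict) -> dict:
    """Split a common subscription pool among innovators pro rata to
    how much each innovation was used (a market-like usage signal)."""
    total = sum(usage.values())
    return {name: pool * count / total for name, count in usage.items()}

# Illustrative figures only: a 10m pool and relative usage counts.
payouts = allocate_remuneration(
    pool=10_000_000,
    usage={"open-routing-protocol": 600,
           "open-matching-engine": 300,
           "open-payments-lib": 100},
)
print(payouts)
# {'open-routing-protocol': 6000000.0, 'open-matching-engine': 3000000.0,
#  'open-payments-lib': 1000000.0}
```

A real scheme would need robust usage measurement and anti-gaming rules; the point here is only the shape of the mechanism.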

9. Pro-activity is essential

Policymakers and stakeholders need to take a pro-active approach. It is far easier to shape a marketplace towards openness early in its development than it is to handle an entrenched and powerful monopolist marketplace once in place.

The Open Revolution: rewriting the rules of the information age

- June 12, 2018 in News, open, Open Data, Open Knowledge, Open/Closed

Rufus Pollock, the Founder of Open Knowledge International, is delighted to announce the launch of his new book The Open Revolution on how we can revolutionize information ownership and access in the digital economy.

About the book

Will the digital revolution give us digital dictatorships or digital democracies? Forget everything you think you know about the digital age. It’s not about privacy, surveillance, AI or blockchain – it’s about ownership. Because, in a digital age, who owns information controls the future.

Today, information is everywhere. From your DNA to the latest blockbusters, from lifesaving drugs to the app on your phone, from big data to algorithms. Our entire global economy is built on it and the rules around information affect us all every day.

As information continues to move into the digital domain, it can be copied and distributed with ease, making access and control even more important. But the rules we have made for it, derived from how we manage physical property, are hopelessly maladapted to the digital world.

In The Open Revolution, Pollock exposes the myths that cloud the digital debate. Looking beneath the surface, into the basic rules of the digital economy, he offers a simple solution. The answer is not technological but political: a choice between making information Open, shared by all, or making it Closed, exclusively owned and controlled. Today in a Closed world we find ourselves at the mercy of digital dictators. Rufus Pollock charts a path to a more “Open” future that works for everyone.
Cory Doctorow, journalist and activist: “The richest, most powerful people in the world have bet everything on the control of information in all its guises; Pollock’s fast-moving, accessible book explains why seizing the means of attention and information is the only path to human freedom and flourishing.”

An Open future for all

The book’s vision of choosing Open as the path to a more equitable, innovative and profitable future for all is closely related to the vision of an open knowledge society of Open Knowledge International. Around the world, we are working towards societies where everyone has access to key information and the ability to use it to understand and shape their lives. We want to see powerful institutions made comprehensible and accountable. We want to see vital research information which can help us tackle challenges such as poverty and climate change available to all as open information. The Open Revolution is a great inspiration for our worldwide network of people passionate about openness, boosting our shared efforts towards an open future for all.
Get the book and join the open revolution at openrevolution.net, or join our forum to discuss the book’s content.

About the author

Dr Rufus Pollock is a researcher, technologist and entrepreneur. He has been a pioneer in the global Open Data movement, advising national governments, international organisations and industry on how to succeed in the digital world. He is the founder of Open Knowledge, a leading NGO which is present in over 35 countries, empowering people and organizations with access to information so that they can create insight and drive change. Formerly, he was the Mead Fellow in Economics at Emmanuel College, University of Cambridge. He has been the recipient of a $1m Shuttleworth Fellowship and is currently an Ashoka Fellow and Fellow of the RSA. He holds a PhD in Economics and a double first in Mathematics from the University of Cambridge.
 

Solving the Internet Monopolies Problem – Facebook, Google et al

- May 9, 2018 in open, Open Knowledge Definition, Open/Closed

The good news is that an increasing number of people seem to agree that:
  1. Facebook, Google etc are monopolies
  2. That is a problem
Agreeing we have a problem is always a crucial first step. But to go further we need to:
  1. Correctly diagnose the disease — in particular, avoid confusing the symptoms with the root cause
  2. Identify a cure that actually works
On point one, the answer is that the root cause is costless copying (plus platform effects) combined with monopoly rights. Costless copying (and platform effects) would always lead to some kind of standardization — witness the Internet. But that is not a problem — in fact, it is an advantage to have a single standard. The problem arises when the standard platform is owned by one entity and becomes a monopoly, as we have today with Google in search, Facebook in social networking etc. The solution flows from the diagnosis: make these platforms open and have an open-compatible, market-oriented way to pay innovators (i.e. have remuneration rights). By making a platform open I mean making its protocols, algorithms, software and know-how open: free for all to use, build on and share. More details on all of this in my upcoming book: http://rufuspollock.com/book

Photo by Jonas Lee on Unsplash

An example of a mis-diagnosis: their control of our personal data

One prevalent misdiagnosis is that the issue with Facebook and Google and the source of (much) of their monopoly power is to do with their control of our personal data. See, for example, the Economist’s cover in May 2017 showing Internet monopolies as oil rigs mining personal data – an allusion to the common assertion that “personal data is the new oil”.

From this mis-diagnosis flows a proposed solution: limit Facebook and Google’s access to our personal data and/or ensure others have access to that personal data on equal terms (“data portability”). But data portability means almost nothing in a world where you have a dominant network: so what if I can get my data out of Facebook, if no other network has a critical mass of participants? What is needed is that Facebook has a live, open read/write API that allows other platforms to connect if authorized by the user.

In fact, personal data is a practical irrelevance to the monopoly issue. Focusing on it serves only to distract us from the real solutions. Limiting Facebook’s and Google’s access to our personal data, or making it more portable, would make very little difference to their monopoly power, or reduce the deleterious effects of that power on innovation and freedom — the key freedoms of enterprise, choice and thought. It makes little difference because their monopoly just doesn’t arise from their access to our personal data. Instead it comes from massive economies of scale (costless copying) plus platform effects. If you removed Google’s and Facebook’s ability to use personal data to target ads tomorrow, it would make very little difference to their top or bottom lines, because their monopoly on our attention would be little changed and their ad targeting would be little diminished. In Google’s case, the fact that you type in a specific search from a particular location is already enough to target effectively; similarly, Facebook’s knowledge of your broad demographic characteristics would be enough, given the lock-hold it has on our online attention. What is needed in Google’s case is openness of the platform, and in Facebook’s, openness combined with guaranteed interoperability (“data portability” means little if everyone is on Facebook!).

Worse, focusing on privacy actually reinforces their monopoly position. It does so because privacy concerns:

  • Increase compliance costs, which burden less wealthy competitors disproportionately. In particular, increased compliance costs make it harder for new firms to enter the market. A classic example is the “right to be forgotten”, which actually makes it harder for alternative search firms to compete with Google.
  • Make it harder to get (permitted) access to user data on the platform: it is precisely (user-permitted) read/write access to a platform’s data that offers the best chance for competition. In fact, privacy concerns now give monopolists the perfect excuse to deny such access: Facebook can deny competing firms (user-permitted) access to user data citing “privacy concerns”.

An example of a misguided solution: build a new open-source decentralized social network

Similarly, the idea sometimes put forward that we just need another open-source decentralized social network is completely implausible (even if run by Tim Berners-Lee*). Platforms/networks like Facebook tend to standardize: witness phone networks, postal networks, electricity networks and even the Internet. We don’t want lots of incompatible social networks. We want one open one — just like we have one open Internet. In addition, the idea that some open-source decentralized effort is going to take on an entrenched, highly resourced monopoly on its own is ludicrous (the only hope would be if there were serious state assistance and regulation — just in the way that China got its own social networks by effectively excluding Facebook).

Instead, in the case of Facebook we need to address the monopoly at root: networks like this will always tend to standardization. The solution is to ensure that we get an open rather than a closed, proprietary global social network — just like we got with the open Internet. Right now that would mean enforcing equal access rights to the Facebook API for competitors, or enforcing full open sourcing of key parts of the software and tech stack plus guarantees of ongoing non-discriminatory API access. Even more importantly, we need to prevent these kinds of monopolies in future — we want to stop shutting the door after the horse has bolted! This means systematic funding of open protocols and platforms. By open I mean that the software, algorithms and non-personal data are open. And we need to fund the innovators who create and develop these, and the way to do that is to replace patents/copyright with remuneration rights.
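To make “non-discriminatory API access” concrete, here is a hypothetical sketch. Every class, method and scope name below is invented; no such Facebook API exists. The idea it illustrates: access is gated only by the user’s authorization, never by which competitor is asking.

```python
from dataclasses import dataclass

@dataclass
class Grant:
    user_id: str
    competitor: str
    scopes: tuple  # e.g. ("read:posts", "write:posts")

class OpenSocialGateway:
    """Hypothetical mandated read/write gateway. Access depends only on
    a user's authorization, never on the identity of the competitor."""

    def __init__(self):
        self.grants = {}
        self.posts = {}  # user_id -> list of posts

    def authorize(self, grant: Grant):
        self.grants[(grant.user_id, grant.competitor)] = grant

    def _check(self, user_id, competitor, scope):
        grant = self.grants.get((user_id, competitor))
        if not grant or scope not in grant.scopes:
            raise PermissionError("user has not authorized this access")

    def read_posts(self, user_id, competitor):
        self._check(user_id, competitor, "read:posts")
        return list(self.posts.get(user_id, []))

    def write_post(self, user_id, competitor, text):
        self._check(user_id, competitor, "write:posts")
        self.posts.setdefault(user_id, []).append(text)

gw = OpenSocialGateway()
gw.authorize(Grant("alice", "rival.network", ("read:posts", "write:posts")))
gw.write_post("alice", "rival.network", "posted from a competing client")
print(gw.read_posts("alice", "rival.network"))
```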

Fake news: confusing which symptoms are related to which disease

We must also be careful not to confuse which symptoms are related to which disease. For example, fake news *is* a problem but it is only tangentially related to the disease of monopoly. The causes of fake news are many and various and much more complex than simple monopoly. In fact, one could argue that more diversity in media actually makes echo chambers worse. Reducing monopoly at Facebook and Google level may bring *some* improvement but it is probably secondary. For more, see https://rufuspollock.com/2016/11/26/fake-news-post-truth—is-it-news-and-what-can-we-do/
* See, as an example of just this kind of proposal, https://meanjin.com.au/essays/the-last-days-of-reality/

Requiem for an Internet Dream

- December 12, 2017 in Internet, open, Open/Closed

The dream of the Internet is dying. Killed by its children. We have barely noticed its demise and done even less to save it. It was a dream of openness, of unprecedented technological and social freedom to connect and innovate. Whilst expressed in technology, it was a dream that was, in essence, political and social. A dream of equality of opportunity, of equality of standing, and of liberty. A world where anyone could connect and almost everyone did. No-one controlled or owned the Internet; no one person or group decided who got on it or who didn’t. It was open to all. But that dream is dying. Whilst the Internet will continue in its literal, physical sense, its spirit is disappearing. In its place, we are getting a technological infrastructure dominated by a handful of platforms which are proprietary, centralized and monopolized. Slowly, subtly, we no longer directly access the Net. Instead, we live within the cocoons created by the Internet’s biggest children. No longer do you go online: you go on Facebook or you Google something. In those cocoons we seem happy, endlessly scrolling through our carefully curated feeds, barely, if ever, needing to venture beyond those safe blue walls to the Net beyond. And if not on Facebook, we’ll be on Google, our friendly guide to the overwhelming, unruly hinterlands of the untamed Net. Like Facebook, Google is helpfully ensuring that we need never leave, that everything is right there on its pages. They are hoovering up more and more websites into the vastness that is the Googleplex. Chopping them up and giving them back to us in the bite-sized morsels we need. Soon we will never need to go elsewhere, not even to Wikipedia, because Google will have helpfully integrated whatever it was we needed; the only things left will be the advertisers who have something to sell (and who need to pay Google). As the famous Microsoft mantra went: embrace, extend, extinguish. Facebook, Google, Apple and the like have done this beautifully, aided by our transition back from the browser to the walled garden of mobile. And this achievement is all the more ironic for its unintended nature; if questioned, Facebook and Google would honestly protest their innocence. Let me be clear, this is not a requiem for some half-warm libertarianism. The Internet is not a new domain, and it must play by the laws and jurisdictions of the states in which it lives. I am no subscriber to independence declarations or visions of brave new worlds. What I mourn is something both smaller and bigger. The disappearance of something rare and special: proof that digital was different, that platforms at a planetary scale could be open, and that from that magical combination of tech and openness something special flowed. Not only speech and freedom of speech, but also innovation and creativity in all its wondrous fecundity and generous, organized chaos, on a scale previously unimagined. And we must understand that the death of this dream was not inevitable. That is why I hesitate to use the word dream. Dreams always fade in the morning; we always wake up. This was not so much a dream as a possibility. A delicate one, and a rare one. After all, the history of technology and innovation is full of proprietary platforms and exclusive control — of domination by the one or the few. The Internet was different. It was like language: available to all, almost as a birthright. And in the intoxicating rush of discovery we neglected to realise how rare it was.
What a strange and wonderful set of circumstances had caused its birth: massive, far-sighted government investment at DARPA, an incubation in an open-oriented academia, maturity before anyone realised its commercial importance, and its lucky escape in the 1990s from control by the likes of AOL or MSN. And then, as the web took off, it was free: so clearly, unarguably, and powerfully valuable for its openness that none could directly touch it. The Internet’s power was not a result of technology but of a social and political choice. The choice of openness. The fact that every single major specification of how the Internet worked was open and free for anyone to use. That production-grade implementations of those specifications were available as open software — thanks to government support. That a rich Internet culture grew that acknowledged and valued that openness, along with the bottom-up, informal innovation that went with it. We must see this, because even if it is too late to save the Internet dream, we can use our grief to inspire a renewed commitment to the openness that was its essence, to open information and open platforms. And so, even as we take off our hats to watch the Internet pass in all its funereal splendour, in our hearts we can have hope that its dream will live again.

Why Open Source Software Matters for Government and Civic Tech – and How to Support It

- July 13, 2016 in Featured, Open Data, Open Software, Open/Closed, Policy, research

Today we’re publishing a new research paper looking at whether free/open source software matters for government and civic tech. Matters in the sense that it should have a deep and strategic role in government IT and policy rather than just being a “nice to have” or something “we use when we can”. As the paper shows, the answer is a strong yes: open source software does matter for government and civic tech — and, conversely, government matters for open source. The paper covers:
  • Why open software is especially important for government and civic tech
  • Why open software needs special support and treatment by government (and funders)
  • What specific actions can be taken to provide this support for open software by government (and funders)
We also discuss how software is different from other things that governments traditionally buy or fund. This difference is why government cannot buy software like it buys office furniture or procures the building of bridges — and why buying open matters so much. The paper is authored by our President and Founder, Dr Rufus Pollock.

Why Open Software

We begin with four facts about software and government which form a basis for the conclusions and recommendations that follow.
  1. The economics of software: software has high fixed costs and low (zero) marginal costs, and it is also incremental in that new code builds on old. This cost structure creates a fundamental dilemma between finding ways to fund the fixed cost, e.g. by having proprietary software and raising prices, and promoting optimal access by setting the price at the marginal-cost level of zero. In resolving this dilemma, proprietary software models favour the funding of fixed costs but at the price of inefficiently high pricing and hampered future development, whilst open source models favour efficient pricing and access but face the challenge of funding the fixed costs to create high quality software in the first place. The incremental nature of software sharpens this dilemma and contributes to technological and vendor lock-in.

  2. Switching costs are significant: it is (increasingly) costly to switch off a given piece of software once you start using it. This is because you make “asset (software) specific investments”: in learning how to use the software, integrating the software with your systems, extending and customizing the software, etc. These all mean there are often substantial costs associated with switching to an alternative later.
  3. The future matters and is difficult to know: software is used for a long time — whether in its original or upgraded form. Knowing the future is therefore especially important in purchasing software. Predictions about the future in relation to software are especially hard because of its complex nature and adaptability; behavioural biases mean the level of uncertainty and likely future change are underestimated. Together these mean lock-in is under-estimated.
  4. Governments are bad at negotiating, especially in this environment, and hence the lock-in problem is especially acute for government. Governments are generally poor decision-makers and bargainers due to the incentives faced by government as a whole and by individuals within government. They are especially weak when having to make trade-offs between the near term and the more distant future. They are even weaker when the future is complex, uncertain and hard to specify contractually up front. Software procurement has all of these characteristics, making it particularly prone to error compared to other government procurement areas. (A worked cost sketch follows this list.)
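Here is a back-of-the-envelope sketch of how facts (1)-(4) compound. All figures are invented for illustration: a proprietary bid that looks cheaper up front can cost more over its life once post-lock-in fee rises and the (systematically under-estimated) cost of switching away are included.

```python
def lifetime_cost(upfront, annual_fee, years, switch_cost, fee_growth=0.0):
    """Total cost of ownership: purchase + escalating fees + eventual
    cost of switching away (the lock-in term that is under-estimated)."""
    fees = sum(annual_fee * (1 + fee_growth) ** y for y in range(years))
    return upfront + fees + switch_cost

# Illustrative: the proprietary bid looks cheaper up front, but the
# vendor can raise fees once the buyer is locked in, and exit is costly.
proprietary = lifetime_cost(upfront=100_000, annual_fee=50_000, years=10,
                            switch_cost=400_000, fee_growth=0.08)
open_source = lifetime_cost(upfront=250_000, annual_fee=40_000, years=10,
                            switch_cost=50_000, fee_growth=0.0)
print(round(proprietary), round(open_source))
# roughly 1,224,000 vs 700,000 under these assumptions
```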

The Logic of Support

Note: numbers in brackets e.g. (1) refer to one of the four observations of the previous section.
A. Lock-in to Proprietary Software is a Problem

The Incremental Nature of Software (1) + Switching Costs (2) imply that lock-in happens for a software technology and, if the software is proprietary, to a vendor.

The Zero Marginal Cost of Software (1) + Uncertainty about the Future in user needs and technologies (3) + Governments are Poor Bargainers (4) imply that lock-in to proprietary software is a problem: lock-in has high costs and is under-estimated – especially so for government.

B. Open Source is a Solution

Lock-in is a problem, which implies that strategies that reduce lock-in are valuable.

The Economics of Software (1) imply that open source is a strategy for government (and others) to reduce future lock-in. Why? Because it requires the software provider to make an up-front commitment to making the essential technology available both to users and other technologists at zero cost, both now and in the future.

Together these two points imply that open source is a solution – and that a specific commitment to open source in government / civic tech is important and valuable.

C. Open Source Needs Support – and Government / Civic Tech is an Area Where it Can be Provided Effectively

Software has high fixed costs, and a challenge for open source is to secure sufficient investment to cover these fixed costs (1 – Economics). In addition, governments are large spenders on IT and are bureaucratic: they can make rules to pre-commit up front (e.g. in procurement) and can feasibly coordinate, whether at local, national or even international levels, on buying and investment decisions related to software. Together these imply that government is especially well situated to support open source, that government has the tools to provide systematic support, and that government should provide systematic support.

How to Promote Open Software

We have established in the previous section that there is a strong basis for promoting open software. This section provides specific strategic and tactical suggestions for how to do that. There are five proposals, summarized here; each is covered in more detail in the main section below. We especially emphasize the potential of the last three options as they do not require up-front participation by government and can be boot-strapped with philanthropic funding.

1. Recognize and reward open source in IT procurement. Give open source explicit recognition and beneficial treatment in procurement. Specifically, introduce into government tenders EITHER an explicit requirement for an open source solution OR a significant points value for open source in the scoring of solutions (more than 30% of the points on offer).

2. Make government IT procurement more agile and lightweight. Current methodologies follow a “spec and deliver” model in which government attempts to define a full spec up front and then seeks solutions that deliver against it. The spec and deliver model greatly diminishes the value of open source – which allows for rapid iteration in the open, and more rapid switching of provider – and implicitly builds in lock-in to the selected provider, whose solution is a black box to the buyer. In addition, whilst theoretically shifting risk to the supplier of the software, given the difficulty of specifying software up front it really just inflates upfront costs (since the supplier has to price in risk) and sets the scene for complex and cumbersome later negotiations about under-specified elements.

3. Develop a marketing and business development support organization for open source in key markets (e.g. US and Europe). The organization would be small, at least initially, and focused on three closely related activity areas (in rough order of importance):
  1. General marketing of open source to government at both local and national level: getting in front of CIOs, explaining open source, demystifying and derisking it, making the case, etc. This is not tied to any specific product or solution.

  2. Supporting open source businesses, especially those at an early stage, in initial business development activities, including connecting startups to potential customers (“opening the rolodex”) and guidance in navigating the bureaucracy of government procurement, including discovering and responding to RFPs.
  3. Promoting commercialization of open source by providing advice, training and support for open source startups and developers in commercializing and marketing their technology. Open source developers and startups are often strong on technology and weak on marketing and selling their solutions and this support would help address these deficiencies.
4. Open Offsets: establish target levels of open source financing combined with an “offsets”-style scheme to discharge these obligations (a sketch of the bookkeeping follows these proposals). An “Open Offsets” program would combine three components:
  1. Establish target commitments for funding open source for participants in the program, who could include government, philanthropists and the private sector. Targets would be a specific, measurable figure such as 20% of all IT spending or $5m.

  2. Participants discharge their funding commitment either through direct spending such as procurement or sponsorship or via purchase of open source “offsets”. “Offsets” enable organizations to discharge their open source funding obligation in an analogous manner to the way carbon offsets allow groups to deliver on their climate change commitments.
  3. Administrators of the open offset fund distribute the funds to relevant open source projects and communities in a transparent manner, likely using some combination of expert advice, community voting and value generated (this latter based on an estimate of the usage and value created by given pieces of open software).
5. “Choose Open”: a grass-roots-oriented campaign to promote open software in government and in government-run activities such as education. “Choose Open” would be modelled on recent initiatives in online political organizing, such as “Move On” in the 2004 US Presidential election, as well as online initiatives like Avaaz. It would combine central provision of message, materials and policy with localized community participation to drive change.
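As a sketch of the “Open Offsets” bookkeeping described in proposal 4 (all scheme parameters are invented for illustration): a participant’s obligation is a target share of its IT spend, discharged by direct open source spending plus purchased offsets, with any shortfall carried forward.

```python
def offset_position(it_spend, target_share, direct_spend, offsets_bought):
    """Open Offsets bookkeeping: obligation = target share of IT spend,
    discharged by direct open source spending plus purchased offsets."""
    obligation = it_spend * target_share
    discharged = direct_spend + offsets_bought
    return {"obligation": obligation,
            "discharged": discharged,
            "shortfall": max(0.0, obligation - discharged)}

# Illustrative: a $20m IT budget with a 20% open source funding target.
print(offset_position(it_spend=20_000_000, target_share=0.20,
                      direct_spend=2_500_000, offsets_bought=1_000_000))
# {'obligation': 4000000.0, 'discharged': 3500000.0, 'shortfall': 500000.0}
```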

Introducing “A free, libre and open Glossary”

- September 3, 2013 in Open/Closed

The following guest post is by Chris Sakkas. A few months ago, we ran into a problem at work. ‘Let’s open source this,’ my boss said, and then ran a conventional brainstorming session. I am constantly frustrated by people misusing terms like free, libre and open that have well-established definitions. I decided to spend an afternoon writing the first draft of a glossary that would explain in depth what these words mean and their relationship to one another. My hope is that if someone read the glossary from start to finish, they would never again confuse crowdsourcing with open source, or freeware with free software. Here’s the summary:
  • A free/libre/open work is one that can be shared and adapted by any person for any purpose, without infringing copyright.
  • A crowdsourced work is one that was solicited from the community, rather than internally or by conventional contracting.
  • Freeware describes software that is free of cost to download.
  • Free software is free/libre/open, but might cost money to buy.
The glossary is a collaboration by the community, but I’ve also released it as an ODT and PDF in a fixed form. The advantage of this is that it is proofread, verified and able to be cited. However, it also survives as a living document that you are welcome to contribute to. The research that I needed to do to write the glossary made me more sympathetic to those who blur or misuse the terms. While the big concepts – open knowledge, open source, free software, free cultural works – are clearly defined, they are not quite synonyms. What is free, libre and open has been filtered through the expectations of the drafters: the Open Knowledge Foundation’s Open Definition requires a work to be open access to qualify as open knowledge; the Definition of Free Cultural Works requires the work to be in a free format to qualify as a free cultural work. It’s thus possible to have a free cultural work that is not open knowledge, and vice versa; it’s also not unusual for a work to be under a free cultural licence/open knowledge licence but be neither a free cultural work nor open knowledge. The community response to my first draft was interesting and useful. I experienced early and sustained criticism of my use of the non-free, libre and open Google Drive to host the file. I also learned first-hand the power of the carrot over the stick: I ignored people simply criticising the use of Google Drive, but transferred it to the Etherpad when someone suggested it. If this sounds of interest to you, jump in and check it out!
Chris Sakkas is the admin of the FOSsil Bank wiki and the Living Libre blog and Twitter feed.

We Need an Open Database of Clinical Trials

- February 5, 2013 in Access to Information, Campaigning, Featured, Open Data, Open Science, Open/Closed, Policy

The award-winning science writer and physician Ben Goldacre recently launched a major campaign to open up the results of clinical trials. The AllTrials initiative calls for all clinical trials to be reported and for the “full methods and the results” of each trial to be published. Currently negative results are poorly recorded and positive results are overhyped, leading to what Goldacre calls ‘research fraud’, misleading doctors about the drugs they are prescribing and misleading patients about the drugs they are taking. The Open Knowledge Foundation is an organisational supporter of AllTrials, and we encourage you to sign and share the petition if you have not already done so. There have been some big wins in the past 48 hours. The lead legislator for a new EU Clinical Trials Regulation recently came out in favour of transparency for clinical trials. Today GlaxoSmithKline announced their support for the campaign, which, as Goldacre says, is “huge, and internationally huge”. As well as continuing to push for stronger policies and practices that support the release of information about clinical trials, we would like to see a public repository of reports and results that doctors, patients and researchers can access and add to. We need an open database of clinical trials. Over the past few days we’ve been corresponding with Ben and others on the AllTrials campaign about how we might be able to work together to create such a database – building on the prototyping work that was presented at last year’s Strata event. In the meantime, you can watch the TED talk if you haven’t seen it already – and help us to make some noise about the petition!

“Carbon dioxide data is not on the world’s dashboard” says Hans Rosling

- January 21, 2013 in Featured, Interviews, OKFest, Open Data, Open Government Data, Open/Closed, WG Sustainability, Working Groups

Professor Hans Rosling, co-founder and chairman of the Gapminder Foundation and Advisory Board Member at the Open Knowledge Foundation, received a standing ovation for his keynote at OKFestival in Helsinki in September in which he urged open data advocates to demand CO2 data from governments around the world. Following on from this, the Open Knowledge Foundation’s Jonathan Gray interviewed Professor Rosling about CO2 data and his ideas about how better data-driven advocacy and reportage might help to mobilise citizens and pressure governments to act to avert catastrophic changes in the world’s climate.
Hello Professor Rosling! Hi. Thank you for taking the time to talk to us. Is it okay if we jump straight into it? Yes! I’m just going to get myself a banana and some ginger cake. Good idea. Just so you know: if I sound strange, it’s because I’ve got this ginger cake. A very sensible idea. So in your talk in Helsinki you said you’d like to see more CO2 data opened up. Can you say a bit more about this? In order to get access to public statistics, first the microdata must be collected, then it must be compiled into useful indicators, and then these indicators must be published. The amount of coal one factory burnt during one year is microdata. The emission of carbon dioxide per year per person in one country is an indicator. Microdata and indicators are very very different numbers. CO2 emissions data is often compiled with great delays. The collection is based on already existing microdata from several sources, which civil servants compile and convert into carbon dioxide emissions. Let’s compare this with calculating GDP per capita, which also requires an amazing amount of collection of microdata, which has to be compiled and converted and so on. That is done every quarter for each country. And it is swiftly published. It guides economic policy. It is like a speedometer. You know when you drive your car you have to check your speed all the time. The speed is shown on the dashboard. Carbon dioxide is not on the dashboard at all. It’s like something you get with several years delay, when you are back from the trip. It seems that governments don’t want to get it swiftly. And when they publish it finally, they publish it as total emissions per country. They don’t want to show emission per person, because then the rich countries stand out as worse polluters than China and India. So it is not just an issue about open data. We must push for change in the whole way in which emissions data is handled and compiled. You also said that you’d like to see more data-driven advocacy and reportage. Can you tell us what kind of thing you are thinking of? Basically everyone admits that the basic vision of the green movement is correct. Everyone agrees on that. By continuing to exploit natural resources for short term benefits you will cause a lot of harm. You have to understand the long-term impact. Businesses have to be regulated. Everyone agrees. Now, how much should we regulate? Which risks are worse, climate or nuclear? How should we judge the bad effects of having nuclear electricity? The bad effects of coal production? These are difficult political judgments. I don’t want to interfere with these political judgments, but people should know the orders of magnitude involved, the changes, what is needed to avoid certain consequences. But that data is not even compiled fast enough, and the activists do not protest, because it seems they do not need data? Let’s take one example. In Sweden we have data from the energy authority. They say: “energy produced from nuclear”. Then they include two outputs. One is the electricity that goes out into the lines and that lights the house that I’m sitting in. The other is the warm waste water that goes back into the sea. That is also energy they say. It is actually like a fraud to pretend that that is energy production. Nobody gets any benefit from it. On the contrary, they are changing the ecology of the sea. But they get away with it as the destination is energy produced. 
We need to be able to see the energy supply for human activity from each source and how it changes over time. The people who are now involved in producing solar and wind produce very nice reports on how production increase each year. Many get the impression that we have 10, 20, 30% of our energy from solar and wind. But even with fast growth from almost zero solar and wind it is nothing yet. The news reports mostly neglect to explain the difference in percentage growth of solar and wind energy and their percent of total energy supply. People who are too much into data and into handling data may not understand how the main misconceptions come about. Most people are so surprised when I show them total energy production in the world on one graph. They can’t yet see solar because it hasn’t reached one pixel yet. So this isn’t of course just about having more data, but about having more data literate discussion and debate – ultimately about improving public understanding? It’s like that basic rule in nutrition: Food that is not eaten has no nutritional value. Data which is not understood has no value. It is interesting that you use the term data literacy. Actually I think it is presentation skills we are talking about. Because if you don’t adapt your way of presenting to the way that people understand it, then you won’t get it through. You must prepare the food in a way that makes people want to eat it. The dream that you will train the entire population to about one semester of statistics in university: that’s wrong. Statisticians often think that they will teach the public to understand data the way they do, but instead they should turn data into Donald Duck animations and make the story interesting. Otherwise you will never ever make it. Remember, you are fighting with Britney Spears and tabloid newspapers. My biggest success in life was December 2010 on the YouTube entertainment category in the United Kingdom. I had most views that month. And I beat Lady Gaga with statistics. Amazing. Just the fact that the guy in the BBC in charge of uploading the trailer put me under ‘entertainment’ was a success. No-one thought of putting a trailer for a statistics documentary under entertainment. That’s what we do at Gapminder. We try to present data in a way that makes people want to consume it. It’s a bit like being a chef in a restaurant. I don’t grow the crop. The statisticians are like the farmers that produce the food. Open data provide free access to potatoes, tomatoes and eggs and whatever it is. We are preparing it and making a delicious food. If you really want people to read it, you have to make data as easy to consume as fish and chips. Do not expect people to become statistically literate! Turn data into understandable animations. My impression is that some of the best applications of open data that we find are when we get access to data in a specific area, which is highly organized. One of my favorite applications in Sweden is a train timetable app. I can check all the communter train departures from Stockholm to Uppsala, including the last change of platform and whether there is a delay. I can choose how to transfer quickly from the underground to the train to get home fastest. The government owns the rails and every train reports their arrival and departure continuously. This data is publicly available as open data. Then a designer made an app and made the data very easy for me to understand and use. 
But to create an app which shows the determinants of unemployment in the different counties of Sweden? No-one can do that because that is a great analytical research task. You have to take data from very many different sources and make predictions. I saw a presentation about this yesterday at the Institute for Future Studies. The PowerPoint graphics were ugly, but the analysis was beautiful. In this case the researchers need a designer to make their findings understandable to the broad public, and together they could build an app that would predict unemployment month by month. The CDIAC publish CO2 data for the atmosphere and the ocean, and they publish national and global emissions data. The UNFCCC publish national greenhouse gas inventories. What are the key datasets that you’d like to get hold of that are currently hard to get, and who currently holds these? I have no coherent CO2 dataset for the world beyond 2008 at the present. I want to have this data until last year, at least. I would also welcome half year data but I understand this can be difficult because carbon dioxide emission vary for transport, heating or cooling of houses over the seasons of the year. So just give me the past year’s data in March. And in April/May for all countries in the world. Then we can hold government accountable for what happens year by year. Let me tell you a bit about what happens in Sweden. The National Natural Protection Agency gets the data from the Energy Department and from other public sources. Then they give these datasets to consultants at the University of Agriculture and the Meteorological Authority. Then the consultants work on these datasets for half a year. They compile them, the administrators look through them and they publish them in mid-December, when Swedes start to get obsessed about Christmas. So that means that there was a delay of eleven and a half months. So I started to criticize that. My cutting line was when I was with the Minister of Environment and she was going to Durban. And I said “But you are going to Durban with eleven and a half month constipation. What if all of this shit comes out on stage? That would be embarrassing wouldn’t it?”. Because I knew that she had in 2010 an increase in carbon dioxide emission and it increased by 10%. But she only published that coming back from Durban. So that became a political issue on TV. And then the government promised to make it earlier. So 2012 we got CO2 data by mid-October, and 2013 we’re going to get it in April. Fantastic. But actually ridiculing is the only way that worked. That’s how we liberated the World Bank’s data. I ridiculed the President of the World Bank at an international meeting. People were laughing. That became too much. The governments in the rich countries don’t want the world to see emissions per capita. They want to publish emissions per country. This is very convenient for Germany, UK, not to mention Denmark and Norway. Then they can say the big emission countries are China and India. It is so stupid to look at total emissions per country. This allows small countries to emit as much as they want because they are just not big enough to matter. Norway hasn’t reduced their emissions for the last forty years. Instead they spend their aid money to help Brazil to replant rainforest. At the same time Brazil lends 200 times more money to the United States of America to help them consume more and emit more carbon dioxide into the atmosphere. Just to put these numbers up makes a very strong case. 
But I need to have timely carbon dioxide emission data. But not even climate activists ask for this. Perhaps it is because they are not really governing countries. The right wing politicians need data on economic growth, the left wing need data on unemployment but the greens don’t yet seem to need data in the same way. As well as issues getting hold of data at a national level, are there international agencies that hold data that you can’t get hold? It is like a reflection. If you can’t get data from the countries for eleven and a half months, why the heck should the UN or the World Bank compile it faster? Think of your household. There are things you do daily, that you need swiftly. Breakfast for your kids. Then, you know, repainting the house. I didn’t do it last year, so why should I do it this year? It just becomes slow the whole system. If politicians are not in a hurry to get data for their own country, they are not in a hurry to compare their data to other countries. They just do not want this data to be seen during their election period. So really what you’re saying that you’d recommend is stronger political pressure through ridicule on different national agencies? Yes. Or sit outside and protest. Do a Greenpeace action on them. Can you think of datasets about carbon dioxide emissions which aren’t currently being collected, but which you think should be collected? Yes. In a very cunning way China, South Africa and Russia like to be placed in the developing world and they don’t publish CO2 data very rapidly because they know it will be turned against them in international negotiations. They are not in a hurry. The Kyoto Protocol at least made it compulsory for the richest countries to report their data because they had committed to decrease. But every country should do this. All should be able to know how much coal each country consumed, how much oil they consumed, etc and from that data have a calculation made on how much CO2 each country emitted last year. It is strange that the best country to do this – and it is painful for a Swede to accept this – is the United States. CDIAC. Federal Agencies in US are very good on data and they take on the whole world. CDIAC make estimates for the rest of the world. Another US agency I really like is the National Snow and Ice Data Centre in Denver, Colorado. Thay give us 24 hours updates on the polar sea ice area. That’s really useful. They are also highly professional. In the US the data producers are far away from political manipulation. When you see the use of fossil fuels in the world there is only one distinct dip. That dip could be attributed to the best environmental politician ever. The dip in CO2 emissions took place in 2008. George W. Bush, Greenspan and the Lehman Brothers decreased CO2 emissions by inducing a financial crisis. It was the most significant reduction on the use of fossil fuels in modern history. I say this to put things into proportion. So far it is only financial downturns that have had an effect on the emission of greenhouse gases. The whole of environmental policy hasn’t yet had any such dramatic effect. I checked this with Al Gore personally. I asked him “Can I make this joke? That Bush was better for the climate than you were?”. “Do that!”, he said, “You’re correct.” Once we show this data people can see that the economic downturn so far was the most forceful effect on CO2 emission. If you could have all of the CO2 and climate data in the world, what would you do with it? 
We're going to make teaching materials for high schools and colleges. We will cover the main aspects of global change so that we produce a coherent, data-driven worldview, which starts with population and then covers money, energy, living standards, food, education, health, security, and a few other major aspects of human life. For each dimension we will pick a few indicators. Instead of doing Gapminder World with the bubbles, which can display hundreds of indicators, we plan a few small apps where you get a selected few indicators but can drill down: start with the world, then world regions, countries, the subnational level; sometimes you split male and female, sometimes counties, sometimes income groups. And we're trying to make this in a coherent graphic and color scheme, so that we really can convey an upgraded worldview. Very, very simple and beautiful, but with very few jokes. Just straightforward understanding.

For climate impact we will relate to the economy: relate the number of people at different economic levels to how much energy they use, then drill down into the type of energy they use and how that energy source mix affects carbon dioxide emissions. And make trends forward. We will rely on the official and most credible trend forecasts – one for population, one, two or more for energy and economic trends, and so on. But we will not go into what needs to be done, or how it should be achieved. We will stay away from politics, and away from all data which is under debate. Just use data with good consensus, so that we create a basic worldview. Users can then benefit from an upgraded worldview when thinking and debating about the future. That's our idea. If we provide the very basic worldview, others will create more precise data in each area and break it down into details.

A group of people inspired by your talk in Helsinki are currently starting a working group dedicated to opening up and reusing CO2 data. What advice would you give them, and what would you suggest that they focus on?

Put me in contact with them! We can just go for one indicator: carbon dioxide emissions per person per year. Swift reporting. Just that.

Thank you very much, Professor Rosling.

Thank you.
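A concrete postscript to Rosling's one-indicator suggestion: the calculation he describes earlier – each country's coal, oil and gas consumption multiplied by an emission factor, summed, and divided by population – is mechanical once the consumption figures are public. A minimal sketch, with approximate factors and made-up consumption and population numbers (none of these values come from the interview; real inventories use fuel- and grade-specific IPCC coefficients):

```python
# Minimal sketch of the calculation Rosling describes: estimate a
# country's CO2 from its fossil fuel consumption, then express it
# per person. Emission factors are rough averages, and the
# consumption and population figures are invented for illustration.
EMISSION_FACTORS = {"coal": 2.4, "oil": 3.1, "gas": 2.75}  # t CO2 per t fuel (approx.)

def estimate_co2(consumption_tonnes: dict[str, float]) -> float:
    """Tonnes of CO2 implied by tonnes of each fuel burned."""
    return sum(EMISSION_FACTORS[fuel] * t for fuel, t in consumption_tonnes.items())

# Hypothetical country: 50 Mt coal, 30 Mt oil, 10 Mt gas; 30 million people.
consumption = {"coal": 50e6, "oil": 30e6, "gas": 10e6}
population = 30e6

total_t = estimate_co2(consumption)
print(f"Estimated CO2: {total_t / 1e6:.0f} Mt/year")
print(f"Per capita:    {total_t / population:.1f} t/person/year")
```

The arithmetic is the easy part; the bottleneck Rosling keeps returning to is getting each country's fuel figures published within months rather than a year.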
If you want to help liberate, analyse or communicate carbon emissions data in your country, you can join the OKFN's Open Sustainability Working Group.

Did Gale Cengage just liberate all of their public domain content? Sadly not…

- January 9, 2013 in Featured, Free Culture, Legal, Open Access, Open/Closed, Public Domain, WG Public Domain

Earlier today we received a strange and intriguing press release from a certain ‘Marmaduke Robida’ claiming to be ‘Director for Public Domain Content’ at Gale Cengage’s UK premises in Andover. Said the press release:
Gale, part of Cengage Learning, is thrilled to announce that all its public domain content will be freely accessible on the open web. "On this Public Domain Day, we are proud to have taken such a ground-breaking decision. As a common good, the Public Domain content we have digitized has to be accessible to everyone" said Marmaduke Robida, Director for Public Domain Content, Gale.

Hundreds of thousands of digitized books coming from some of the world's most prestigious libraries and belonging to top-rated products highly appreciated by the academic community such as "Nineteenth Century Collection Online", "Eighteenth Century Collection Online", "Sabin America", "Making of the Modern World" and two major digitized historical newspaper collections (The Times and the Illustrated London News) are now accessible from a dedicated website. The other Gale digital collections will be progressively added to this website throughout 2013, so all Public Domain content will be freely accessible by 2014. All the images are or will be available under the Public Domain Mark 1.0 and can be reused for any purpose.

Gale's global strategy is inspired by the recommendations issued by the European reflection group "Comité des sages" and the Public Domain Manifesto. For Public Domain content, Gale has decided to move to a freemium business model: all the content is freely accessible through basic tools (Public Domain Downloader, URL lists, …), but additional services are charged for. "We are confident that there still is a market for our products. Our state-of-the-art research platforms offer high quality services and added value which universities or research libraries are ready to pay for" said Robida.

A specific campaign targeted at national and academic libraries, promoting the use of the Public Domain Mark for digitized content, will be launched in 2013. "We are ready to help the libraries that have a digitization programme fulfil their initial mission: make knowledge accessible to everyone. We also hope that our competitors will follow the same path in the near future. Public Domain should not be enclosed by paywalls or dubious licensing terms" said Robida.
The press release linked to a website which proudly proclaimed:
All Public Domain content to be freely available online. Gale Digital Collections has changed the nature of research forever by providing a wealth of rare, formerly inaccessible historical content from the world's most prestigious libraries. In January 2013, Gale took a ground-breaking decision and chose to offer this content to the whole academic community, and beyond it to mankind, to which it belongs.
This was met with astonishment by members of our public domain discussion list, many of whom suspected that the news might well be too good to be true. The somewhat mysterious, yet ever-helpful Marmaduke attempted to allay these concerns on the list, commenting:
I acknowledge this decision might seem a bit disorientating. As you may know, Gale is already familiar to give access freely to some of its content [...], but for Public Domain content we have decided to move to the next degree by putting the content under the Public Domain Mark.
Several brave people had a go at testing out the so-called ‘Public Domain Downloader’ and said that it did indeed appear to provide access to digitised images of public domain texts – in spite of concerns in the Twittersphere that the software might well be malware (in case of any ambiguity, we certainly do not suggest that you try this at home!). I quickly fired off an email to Cengage’s Director of Media and Public Relations to see if they had any comments. A few hours later a reply came back:
This is NOT an authorized Cengage Learning press release or website – our website appears to have been illegally cloned in violation of U.S. copyright and trademark laws. Our Legal department is in the process of trying to have the site taken down as a result. We saw that you made this information available via your listserv and realize that you may not have been aware of the validity of the site at the time, but ask that you now remove the post and/or alert the listserv subscribers to the fact that this is an illegal site and that any downloads would be in violation of copyright laws.
Sadly the reformed Gale Cengage – the Gale Cengage opposed to paywalls, restrictive licensing and clickwrap agreements on public domain material from public collections, the Gale Cengage supportive of the Public Domain Manifesto and dedicated to liberating public domain content for everyone to enjoy – was just a hoax, a phantasm. At least this imaginary, illicit doppelgänger Gale gives a fleeting glimpse of a parallel world in which one of the biggest gatekeepers turned into one of the biggest liberators overnight. One can only hope that Gale Cengage and their staff might – in the midst of their legal wrangling – be inspired by this uncanny vision of the good public domain stewards that they could one day become. If only for a moment.