
What Does $3.2M Buy in Open Government?

- July 16, 2013 in Exemplars, Open Government Data

The following guest post is by Travis Korte from the Information Technology & Innovation Foundation.

The Knight Foundation received hundreds of submissions to its “Knight News Challenge on Open Gov”, a competition designed to create new tools to improve how citizens interact with government. The applicants noted a number of problems with government data: confusing interfaces for government data portals, poor public understanding of proposed policies, inaccessible court records, strict security regulations impeding civic hacking projects, poor visualization of government data and a lack of information about municipal projects. Last month, the Foundation awarded over $3.2 million to eight winners. Here’s a round-up of what they do:
  • Procure.io: The Oakland and Atlanta-based organization will produce a streamlined procurement system for government contracts. Using a simple interface, government officials will be able to submit requests for proposal to a publicly accessible and easily indexed database. By simplifying the contracting process, Procure.io stands to broaden the pool of applicants and encourage lower bids.
  • Outline.com: This Cambridge, MA-based “policy simulation” startup will let users enter their age, income and other general details on a website, then use sophisticated economic models to output a positive or negative dollar amount representing their expected net income change under a proposed policy (a toy sketch of the idea appears after this list). Outline will also provide a transparent version-control system to catalog changes in various policies.
  • Oyez: Founded in 1997 at IIT Chicago-Kent College of Law, Oyez has overseen successful digitization initiatives for U.S. Supreme Court documents, and now hopes to apply the same model to state supreme courts. The effort will collect, catalog, standardize and release to the public the records of the courts of the five largest states (CA, FL, IL, NY, and TX). The organization will also annotate the records with metadata and plain-English summaries, in partnership with local “public-spirited institutions.”
  • GitMachines: The Washington, DC-based team will provide free, cloud-based virtual machines that are compliant with NIST and GSA software standards and come pre-configured with commonly used open government tools such as the Apache Tomcat web server and data workflow management tool Drake. By offering these servers from a central, virtual depot, GitMachines will also reduce costs associated with ad hoc server-side IT staffing.
  • Civic Insight: Building on its work on BlightStatus, an urban blight data visualization tool for New Orleans, the San Francisco-based Civic Insight will expand the scope of its dynamic mapping solution, working with other cities on applications related to economic development and public health.
  • Plan in a Box: A Philadelphia- and New York-based team will build a web publishing platform designed for municipal planning activities. Aimed at geographically-constrained projects in small and medium-sized cities, Plan in a Box will offer a centralized news and feedback repository, with mobile and social integration. Organizers hope to enable effective communications without any costly web design or excessive configuration on the part of city officials.
  • Smart Communities – Pushing Government Open: The Chicago branch of the Local Initiatives Support Corporation offers a three-pronged approach to grow the community’s capacity to participate in and take advantage of future open data initiatives: 1) attract more internet users by providing classes and job training; 2) promote currently available open data by introducing existing data projects on neighborhood web portals and in special meetups; and 3) meet with community members in five Chicago neighborhoods to prioritize and request additional open data.
  • OpenCounter: The City of Santa Cruz, CA and Code for America will simplify the process of opening a small business by developing an application programming interface (a set of protocols for building software using the OpenCounter platform), a mechanism for non-technical users to customize the website, and tools for site selection and project comparison. These tools will be added to the existing OpenCounter website, which provides a portal to the city’s business permitting process.
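To make the “policy simulation” idea concrete, here is a deliberately crude sketch of the input-to-dollar-amount mapping described in the Outline.com item above. The policy, the 2% rate and the income cap are all invented for illustration; Outline’s actual economic models are far more sophisticated.

```python
# Toy policy simulation: everything here (the policy, the 2% rate,
# the $110,000 cap) is hypothetical and purely illustrative.
def expected_income_change(age: int, income: float) -> float:
    """Estimated net annual income change ($) under a hypothetical payroll tax cut."""
    # A real simulator would also use age and other details to model
    # benefit eligibility, retirement effects, and so on.
    rate_cut = 0.02 if income < 110_000 else 0.0
    return income * rate_cut

print(expected_income_change(age=34, income=52_000))  # 1040.0
```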
The winning entries provide a revealing glimpse into an emerging concern in open government data projects: what sorts of web infrastructure will be necessary to allow more people to actually make use of the data? Oyez represented the only traditional digitization project among the winners; others, such as Civic Insight and OpenCounter, trained their focus on the post-digitization landscape, proposing projects to redesign the data offerings that already exist. The most ambitious projects took a further step back from data itself, and proposed to address gaps in knowledge, skills and resources related to open government that no amount of interface design is likely to fix. From GitMachines, which will attempt to help surmount security obstacles to government software adoption, to Smart Communities, which will promote data literacy as a gateway to participation in open government, a common question emerged. Data is here; now what will we have to change about ourselves and our institutions to make good use of it?
Travis Korte is a research analyst at the Information Technology & Innovation Foundation (ITIF), where he works on the Data Innovation project. He has a background in journalism, computer science and statistics. Prior to joining ITIF, he launched the Science vertical of The Huffington Post and served as its Associate Editor, covering a wide range of science and technology topics. If you’re interested in Open Government Data, you should join our Open Government Data group!

Prescribing Analytics: how drug data can save the NHS millions

- December 17, 2012 in Exemplars, External, Open Data, Open Science

Last week saw the launch of prescribinganalytics.com (covered in the Economist and elsewhere). At present it’s “just” a nice data visualisation of some interesting open data showing the NHS could potentially save millions from its drug budget. I say “just” because we’re in discussions with several NHS organizations about providing a richer, tailored prescribing analytics service to support the best use of NHS drug budgets.

Working on the project was a lot of fun, and to my mind the work nicely shows the spectacular value of open data when combined with people and the internet. The data was, and is, out there: some 11 million rows of it per month, detailing every GP prescription in England. Privately, some people expressed concern that the failure to do anything with the data so far was undermining efforts to make it public at all. Once data is open, it takes time for people to discover a reason for doing something interesting with it, and to get organized to do it. There’s no saying what people will use the data for, but provided the data isn’t junk there’s a good bet that sooner or later something will happen.

The story of how prescribinganalytics.com came to be is illustrative, so I’ll briefly tell my version of it here. Fran (CEO of https://www.mastodonc.com/) emailed me a few months ago with the news that she was carrying out some testing using the GP prescribing data. I replied and suggested looking at prescriptions of proprietary vs generic ACE-inhibitors (a class of drugs that lower blood pressure) and a few other things. I also cc’d Ben Goldacre and my good friend Tom Yates. Ben shared an excellent idea he’d had a while ago for a website with a naughty name that showed how much money was wasted on expensive drugs where there was an identically effective cheaper option, and suggested looking at statins (a class of drugs that reduce the risk of stroke, heart attack, and death) first.

Fran did the data analysis and made beautiful graphics (a toy version of this kind of analysis is sketched after the notes below). Ben, Tom, and I, with help from a handful of other friends, academics, and statisticians, provided the necessary domain expertise to come up with an early version of the site, which had a naughty name. We took counsel and decided it would be more constructive, and more conducive to our goals, not to launch the site with a naughty name. A while later the Open Data Institute (http://www.theodi.org/) offered to support us in delivering prescribinganalytics.com. In no particular order, Bruce (CTO of https://www.mastodonc.com/), Ayesha (http://londonlime.net/), Sym Roe, Ross Jones, and David Miller collaborated with the original group to make the final version.

I’d call the way we worked peer production: a diverse group of people with very different skill sets and motivations formed a small self-organizing community to achieve the task of delivering the site. I think the results speak for themselves. It’s exciting, and this is just the beginning :-)

Notes
  1. Mastodon C is a start-up company currently based at The Open Data Institute. The Open Data Institute’s mission is to catalyse the evolution of an open data culture to create economic, environmental, and social value.

  2. Open Health Care UK is a health technology start-up.

  3. About Ben Goldacre

  4. Full research findings and details on methodology can be found at: http://www.prescribinganalytics.com/
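For the curious, here is a minimal sketch of the kind of proprietary-vs-generic comparison described above, written with pandas. The column names (“BNF NAME”, “ITEMS”, “ACT COST”) are my reading of the published GP prescribing CSVs and the drug-name patterns are illustrative; the team’s actual methodology is at http://www.prescribinganalytics.com/.

```python
# A minimal sketch of a proprietary-vs-generic statin comparison, assuming
# the monthly GP prescribing CSV uses these column names; check the real
# files and the published methodology before trusting any numbers.
import pandas as pd

df = pd.read_csv("gp_prescribing_month.csv")

names = df["BNF NAME"].str.lower()
generic = df[names.str.contains("simvastatin", na=False)]       # off-patent generic
proprietary = df[names.str.contains("rosuvastatin", na=False)]  # on-patent alternative

# Average actual cost per prescribed item for the generic drug.
generic_cost_per_item = generic["ACT COST"].sum() / generic["ITEMS"].sum()

# Rough saving if every proprietary item were switched to the generic.
saving = proprietary["ACT COST"].sum() - proprietary["ITEMS"].sum() * generic_cost_per_item
print(f"Estimated saving this month: £{saving:,.0f}")
```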

Hurricane Sandy and open data

- November 1, 2012 in Exemplars, News, Open Data, Open Government Data

It is not an immediately obvious partnership, and yet open data and crisis response go together incredibly well. As storms have lashed the East Coast of the US in recent days, causing tragic loss of life and enormous financial damage, many of the tools which have helped citizens track their path and stay safe have been built on the back of open government data. Just as with the OpenStreetMap community’s response to the Haiti disaster, we find that with open data at their fingertips, civic hackers and developers are able to build useful tools in an emergency with a speed that far outstrips what centralised government agencies are able to produce. Check out the Google Crisis Map of Hurricane Sandy, which tracks the storm and its forecast path in real time, including power outages; or the New York Times’s evacuation map. Or if you’re a coder wanting to work with others in the tech community, check out HurricaneHackers, who are working on projects and resources for Sandy. Alex Howard is tracking the datastorm here. He writes:
When natural disasters loom, public open government data feeds become critical infrastructure … it’s key to understand that it’s government weather data, gathered and shared from satellites high above the Earth, that’s being used by a huge number of infomediaries to forecast, predict and instruct people about what to expect and what to do.
And New York City’s Chief Digital Officer, Rachel Haot, wrote to TechCrunch:
Open data is critical in crisis situations because it allows government to inform and serve more people than it ever could on its own through conventional channels. By making data freely available in a usable format for civic-minded developers and technology platforms, government can exponentially scale its communications and service delivery.
We’ve set up a CKAN group for data related to Sandy at http://thedatahub.org/group/sandy-response-data. If you’re interested in contributing, there are some useful links to get started with here.
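If you would rather explore the group programmatically, CKAN’s action API makes that straightforward. A minimal sketch, assuming thedatahub.org exposes the standard v3 action endpoints:

```python
# List the datasets in the Sandy response group via CKAN's action API.
# Assumes the instance exposes the standard /api/3/action endpoints;
# adjust the base URL if it differs.
import json
import urllib.request

url = "http://thedatahub.org/api/3/action/group_package_show?id=sandy-response-data"
with urllib.request.urlopen(url) as response:
    body = json.load(response)

for dataset in body["result"]:
    print(dataset["name"], "-", dataset.get("title", ""))
```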

OpenStreetMap has officially switched to ODbL – and celebrates with a picnic

- September 12, 2012 in Exemplars, External, Featured, Open Data, Open Data Commons, WG Open Licensing

OpenStreetMap is probably the best example of a successful, community-driven open data project. The project was started by Steve Coast in 2004 in response to his frustration with the Ordnance Survey’s restrictive licensing conditions. Steve presented on some of his early ‘mapping parties’ – where a small handful of friends would walk or cycle around with GPS devices and then rendezvous in the pub for a drink – at some of the Open Knowledge Foundation’s first events in London. In the past 8 years it has grown from a project run by a handful of hobbyists on a shoestring to one of the world’s biggest open data projects, with hundreds of thousands of registered users and increasingly comprehensive coverage all over the world. In short, OpenStreetMap is the Wikipedia of the open data world – and countless projects strive to replicate its success. Hence we are delighted that – after a lengthy consultation process – today OpenStreetMap has officially switched to using the Open Knowledge Foundation’s Open Data Commons Open Database License (ODbL). Michael Collinson, who is on the License Working Group at the OpenStreetMap Foundation, reports:
It is my great pleasure to pass on to you that as of 07:00 UTC this morning, 12th September 2012, OpenStreetMap began publishing its geodata under Open Data Commons’ ODbL 1.0. That is several terabytes of data created by a contributor community of over three-quarters of a million and growing every day.
The OpenStreetMap blog reports that OSM community members will be celebrating with a picnic:
At long last we are at the end of the license change process. After four years of consultation, debate, revision, improvement, revision, debate, improvement, implementation, coding and mapping, mapping, mapping, it comes down to this final step. And this final step is an easy one, because we have all pitched in to do the hard work in advance. The last step is so easy, it will be a picnic.
If you use data from OpenStreetMap, you can read about how the switch will affect you here. A big well done to all involved for coming to the end of such a lengthy process – and we hope you enjoyed the sandwiches!

Montevideo: proud of our data

- August 9, 2011 in Exemplars, montevideo, uruguay, Video

The following post is by Guillermo Moncecchi of the Intendencia de Montevideo in Uruguay.

Here in Montevideo, we are proud of our data. The Intendencia de Montevideo drives the economic, social and cultural life of the city, producing data. Lots of data. The government has spent years developing its information services, and almost all government processes now produce digital data. High-quality data: we need it to accomplish our government tasks. As we said, we are proud of it: we have high-precision cartography, including every street and every address; we have birth, death and marriage data for the city; we have digitized the locations of libraries, polyclinics, city landmarks, street lights… we need all of this for our work. And, because we do good work, all these data are accurate and continuously updated.

As we are proud of our data, a day came when we asked ourselves: why not let others use them? We discussed the idea and decided to embrace the open data principles, removing barriers to information access: we decided that our data should be in the public domain. The city Mayor approved the idea and wrote a resolution stating the open data approach: if it is public, it is open. We then started an open data portal and published the first data sets. Since then, we have been continuously working on updating the portal. We listen to people asking for data, and we try to satisfy them. Moreover, we are trying to include an open data version of our information as a mandatory product of every piece of software we develop, building the open data idea into the software development cycle.

Yes, we lost some money: before open data, we charged individuals and institutions for access to our cartography base. Today, an application using OpenStreetMap uses the same cartography we use for our daily work. That is: the best cartography available for Montevideo. For free. That means better services for people in Montevideo. We have also eased data exchange with other public institutions: want some data? Just go to the site and get it. Not available? OK, wait a couple of days and look again… you’ll get the data, and everybody will. It’s public, it’s open.

We are about to publish our accounting data: where does my money come from, where does my money go. Digitally, in open formats. For everybody. That is how we think about transparency.

We want to build community. We want our data to be used, because we are responsible for them. People have started using our data: on our portal, we have linked applications built using our data. People have found mistakes in our data: we corrected them. We are not afraid of errors: we want to solve them.

Beyond the city, we are working with Agesic (the Electronic Government and Information Society Agency of the Uruguayan government) to aid in the development of the Uruguayan open data portal, http://datos.gub.uy. The Uruguayan state has information access laws, but wants more: if it is public, it is open. We want to help with our data and our experience.

OpenCorporates: the Open Database of the Corporate World

- December 20, 2010 in Exemplars, External, Open Data, Open Government Data

This is a guest post by Chris Taggart, a member of OKFN’s open government working group and creator of OpenlyLocal, who today launched a new website, OpenCorporates, in collaboration with Rob McKinnon (a project they first demoed at the Open Government Data Camp in November).

Why OpenCorporates? Like most open data/open source projects, it was started (just a couple of months ago) because the founders, Chris Taggart & Rob McKinnon, needed such a resource to exist. Specifically we needed:
  1. an open database of companies, not just in the UK or another individual country, but in any country
  2. a way of matching lists of company names to real-world companies (with their company numbers)
  3. a place where the increasingly large amount of open government data relating to companies could be brought together, with all the power that would bring to the community
So OpenCorporates was created, and while it’s very, very early days, we think we’ve got something that is massively more usable than anything else out there (and did we mention it’s open data too?). So, without any more delay, let’s take a quick run through the main features.

The first place to start is, reasonably, the home page, where you can search for a company name among the over 3,800,000 companies in the OpenCorporates database. You can also start browsing the database by filtering by jurisdiction (this is similar to, but not the same as, a country – more on this in a later post), and from there filter by company type or status.

The next bit is where it starts to get really interesting, and that’s where we can start to filter based on public data we’ve imported. Let’s say we want to see all the companies with Financial Transactions – there’s possibly a better way of expressing this, but these are all the UK central government spending items recently released as part of its drive to open up government. Click on the Financial Transactions filter and you find there are 4,955 companies that received a payment from central government. Let’s now see those that received notices from the UK Health & Safety Executive by clicking on the filter to the right, and then choose an industry classification, say, Fishing, Fish Farming etc. That leaves just one company, DUCHY OF CORNWALL OYSTER FARM LIMITED. Click through onto its transaction and things get interesting: I’ll leave it to the reader to dig out more about that transaction (clue: http://www.google.co.uk/search?q=NOMS), but I think you’ll agree it’s a pretty useful starting point.

The second core feature is the ability to match company names to real-world companies, complete with company numbers. To do this, we’ve implemented the back-end machinery that the awesome Google Refine needs, and here a short screencast will do the job of a thousand words: screencast on vimeo (a sketch of the underlying reconciliation call follows at the end of this post).

It’s worth mentioning one last feature, which in some ways is the most powerful but not at all sexy, and that’s the ability to have a URL for every company in the world (we’ll be adding the ability for the community to add companies soon). Why is this important? Because when we’re talking about companies, it’s difficult to be sure which company we’re talking about. We need universal identifiers for them, and the best identifiers are URLs. This means that different people can refer to the same OpenCorporates URL (here’s the one for Google Bermuda Limited) and be sure that they’re talking about the same company.

Finally, we’ve got lots of features we’re working on, including a full-blown API, so it’s easy to get the data out and reuse it elsewhere. Watch this space, follow @OpenCorporates on twitter and start exploring.
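For those who would rather not watch the screencast, here is a rough sketch of what a Refine-style reconciliation call looks like under the hood. The endpoint URL is an assumption on my part (check the OpenCorporates documentation); the payload and response shapes follow Google Refine’s standard reconciliation protocol.

```python
# Sketch of a Google Refine-style reconciliation query: send a company name,
# get back ranked candidate matches with canonical OpenCorporates ids.
# The endpoint URL is an assumption; consult the OpenCorporates docs.
import json
import urllib.parse
import urllib.request

queries = {"q0": {"query": "Duchy of Cornwall Oyster Farm"}}
data = urllib.parse.urlencode({"queries": json.dumps(queries)}).encode()

with urllib.request.urlopen("http://opencorporates.com/reconcile", data) as response:
    results = json.load(response)

for candidate in results["q0"]["result"]:
    # Each candidate carries an id (the company's URL path), a name and a score.
    print(candidate["id"], candidate["name"], candidate["score"])
```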