
Guest post: Open public data – a new form of public service

- September 21, 2017 in åben data, offentlige data

This is a guest post by Kristian Holmgaard Bernth, chief adviser at the consultancy Seismonaut. It was originally written as an article on Seismonaut's website, where a slightly longer version can be found here. Seismonaut has, among other things, worked on how open public data can be put to greater use in businesses, and has advised public authorities on their publication of open data. The post reflects the author's own views, and Open Knowledge Danmark has no other affiliation with Seismonaut.

What is open public data?
Open public data is a very broad term, and a great many things fall under it. Fundamentally, though, three things define open public data:
  1. The "public" part of the definition means that it is data or information owned by a public authority or institution.
  2. The "open" part means that it is data made freely available to everyone.
  3. The third element is that it is data requiring no special form of protection. That is, private individuals, organisations and businesses may use it freely as they wish.
To sum up, open public data is data and information in public hands that can be shared and used freely. Fundamentally, public authorities collect a great deal of valuable data and information through their work. That data has not traditionally been thought of as a resource, but it can be of great value to citizens, by creating transparency, and to businesses, as a resource in their operations. Beyond that, open public data is to me a new form of public service, on a par with all the other services the public sector offers across authorities. So it is interesting to look not only at what data is, but also at what data is in a public context. Seen in a public context, data is something the various authorities must create, publish, maintain and make visible, so that it is available to the people who need it.

Increased focus on data-driven development

Why should we care about this topic?
Open public data is part of a larger picture in which we are generally starting to talk more about data and data-driven development. Open public data is just one kind of data. In addition, many private companies hold data that they are beginning to sell, because it can be used as a resource for development. As an overview, we speak of three categories of data:
  1. Your own data
  2. Other parties' data
  3. Open public data (a subcategory of other parties' data)
The reason we talk about this topic is that data is a resource in business, societal and product development across industries. The area draws attention right now because companies and organisations have become better and better at working with data-driven development. It is also driven by the large growth potential in freeing up all the data that public authorities sit on, and letting companies and organisations use it as a resource. The growth potential can be understood like goods on supermarket shelves: if all the goods become freely available to companies that can use them in developing their products or business, everyone has better resources at hand for creating something new. If companies have more ingredients available for product development, one should expect the output to be more interesting products than those we had before. That is why we talk about data: because it is a potentially value-creating resource.

Denmark is doing well

What is the status of making data available?
Overall, Denmark compares well with other countries. Over the past many years, the Danish public sector has been very good at digitising both its systems and its workflows, and that is the foundation for the emergence of a great deal of digitised data. On the big, broad datasets we are doing well, for instance with environmental and geodata, where Danmarks Miljøportal and DMI hold a lot of public data. Beyond that, however, things are still very much at the grassroots level: enthusiasts around municipalities and other authorities have taken the initiative and begun publishing lots of data. On the other hand, this has left the data landscape somewhat fragmented.
An example of open public data in use
In cities with paid public parking, municipalities collect an enormous amount of data. Through parking facilities and ticket apps, data is gathered on where, when and for how long we park. That data used to lie unused, buried in databases, but it can be an important resource. The City of Copenhagen is therefore now working to get it into the hands of companies that can use it to help the city's residents find a free parking space more easily during rush hour.
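The parking example can be sketched in miniature. The snippet below assumes a purely hypothetical record format (the post names no actual schema): each event notes a zone, the hour parking began, and the duration, and we aggregate to find the busiest starting hour per zone.

```python
from collections import Counter

# Hypothetical parking events, loosely modelled on what a ticket app
# might log: zone, hour parking began, duration in minutes.
events = [
    {"zone": "Indre By", "start_hour": 8, "duration_min": 45},
    {"zone": "Indre By", "start_hour": 8, "duration_min": 120},
    {"zone": "Indre By", "start_hour": 17, "duration_min": 30},
    {"zone": "Vesterbro", "start_hour": 17, "duration_min": 60},
    {"zone": "Vesterbro", "start_hour": 17, "duration_min": 90},
]

def busiest_hour(events, zone):
    """Return the hour with the most parking starts in the given zone."""
    counts = Counter(e["start_hour"] for e in events if e["zone"] == zone)
    hour, _ = counts.most_common(1)[0]
    return hour

print(busiest_hour(events, "Indre By"))   # 8: the morning rush
print(busiest_hour(events, "Vesterbro"))  # 17: the evening rush
```

A real dataset would of course be far larger and messier, but even this toy aggregation hints at how occupancy patterns could feed a "find a free space" service.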
What value does open data create for the public sector and for us as citizens?
Companies especially ask for data about citizens' behaviour. Who are they, where are they, and what do they do? All such information is interesting because it is about people. That kind of data can serve companies as a resource for building better solutions, and it creates value for citizens because the data resource ends up in the hands of those who develop solutions for their everyday lives. On the public side, freely released data can be used to create better citizen-facing public services, for instance internally across public authorities.
 

Hold on to the curiosity and support existing initiatives

How do you think open public data will develop in the future?
I see the grassroots level as having value in itself. I don't think it will become centralised in the future, but we will no doubt become more coordinated, especially on the supply side, so that we hopefully exploit the growth-creating resource that lies in data-driven development. Still, I think it is important to hold on to the curiosity and the personal drive for technology and for what data can do. It is about supporting all the good ideas and initiatives, so that the people who sit close to the data get the right tools for publishing it, in both the public and the private sector. Development does not happen on the back of a political agenda alone; it must still be about how we bring people together, and what we can do for the people who are passionate about this field.

Gæsteindlæg: Something’s (Johnny) Rotten in Denmark

- April 21, 2017 in åben data, english, offentlige data

This is a guest post by Jason Hare, Open Data Evangelist at OpenDataSoft. The post describes Copenhagen's "City Data Exchange" initiative, which gathers data from various open and closed sources and acts as a kind of marketplace for data. At Open Knowledge Danmark we are particularly sceptical of the idea of taking open data published on open platforms (such as Copenhagen's open data portal, which is based on CKAN) and republishing it behind a login, with restrictions on the terms of reuse. The post was originally published on Jason Hare's own blog and reflects the author's own views.

Hitachi Insight Group Repackaging Open Data

Hitachi Insight Group’s City Data Exchange pilot in Copenhagen is the latest attempt at a model of monetization around Open Data. There are some hurdles, both practical and ethical, that this model will have to overcome. Repackaging public data for resale limits re-use and accessibility, and may push ethical boundaries when public data is enriched with private data.

I see ethical problems taking two tracks:

  • Personally identifiable information is more likely to surface when public data is enriched with private data and used in a less-than-transparent manner. This is also known as the “Mosaic Effect”, a term, as it happens, coined by the intelligence community in the US. No longer do we have transparent government; instead we have transparent citizens. For more on this etymological footnote, read Victor V. Ramraj’s brilliant book Global Anti-Terrorism Law and Policy.
  • Public data, Open Data, has been paid for by taxpayers. The data is a public asset and should not be given away to private-sector companies that have no transparency requirements. See Ade Adewunmi’s brilliant piece on the UK’s Government Digital Service blog for more on this, as well as a blog post I wrote for the OpenDataSoft blog, based on my work at the White House Open Data Roundtables, on data as an asset.

The Data Exchange Model Already Includes You.

Many years ago I worked as a Vice President for a data exchange company. This company, RateWatch, packaged and resold bank rate data to banks. The banks found it cheaper to buy rates from the largest database of bank rates in the world rather than try to gather the intelligence themselves.

Selling Access is not what Smells About this Deal

APIs usually have tokens, and these tokens can be throttled to prevent abuse of the API and the underlying data. Governments that sell premium access to these APIs are a different animal from what Hitachi is doing. Consuming millions of rows of data is not something the average person does. Selling access to the API, with a Service Level Agreement (SLA), allows the public sector to make the data more reusable.

Local government can do this with other assets: toll roads, industrial use of natural resources, access to medical care, an expectation of public safety. All of these municipal services have a basic free level and a level at which additional fees apply. Consider transportation: if you drive a car, you pay taxes and for gasoline; taking a bus is less expensive, with no taxes on top. In the same way, data can be distributed more or less equally. The real difference is in the velocity of data consumption.
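The token-and-throttle model described above can be sketched as a tiered hourly quota. Everything here is illustrative: the tier names and limits are invented, not any real city's API.

```python
import time

# Invented tiers: a free level and a paid level with higher velocity.
TIER_LIMITS = {"free": 100, "premium": 10_000}  # requests per hour

class Throttle:
    """Per-token request counter over a fixed one-hour window."""

    def __init__(self):
        self.usage = {}  # token -> (window_start, request_count)

    def allow(self, token, tier, now=None):
        now = time.time() if now is None else now
        window_start, count = self.usage.get(token, (now, 0))
        if now - window_start >= 3600:       # hour elapsed: reset the window
            window_start, count = now, 0
        if count >= TIER_LIMITS[tier]:       # over quota: throttle
            return False
        self.usage[token] = (window_start, count + 1)
        return True

throttle = Throttle()
answers = [throttle.allow("some-token", "free", now=0) for _ in range(101)]
print(answers.count(True))  # 100: the 101st request in the hour is refused
```

A premium token would simply hit the higher limit, which is the whole "velocity" distinction: same data, different rate of consumption.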

Examples of Companies that Collect and Repackage Data

  • Acxiom sells data subscriptions to, among other customers, users of Salesforce.
  • XOR Data Exchange offers customer acquisition risk mitigation through subscriptions to credit profiles of consumers. Your cable company probably uses XOR.
  • BDEX offers persona data including spending habits, entertainment habits and political affiliation.
  • LexisNexis sells data analytics supporting compliance, customer acquisition, fraud detection, health outcomes, identity solutions, investigation, receivables management, risk decisioning and workflow optimization.
  • ESRI repackages public (open) data from US Agencies and offers subscriptions to its ArcGIS online service. The data is now in a non-reusable, proprietary format.
  • Hitachi Insight Group and the City of Copenhagen will collect and resell public data to private interests.

It’s a long and somewhat unsettling list

These companies spend money to gather information about all of us based on our commercial and entertainment habits. They then sell this data to marketing companies looking to remarket to all of us. The deal is we exchange our data in return for small benefits at the gas pump, the grocery store, the movie theatre and probably every place you shop. That is ok. We can opt out. We know it’s happening and we play along.

How the Hitachi Deal Works

The idea of data exchanges has been around probably as long as humans have been writing things down. Now that most of us operate in digital environments on a daily basis, it is not surprising that companies have figured out that data is money.

Hitachi Insight Group approached the City of Copenhagen. The City pledged $1.3 million, and Hitachi matched these funds 2:1. Note again that Hitachi is using money from the [local] government. This money is used to incentivize the private sector to invest in making the data suitable and reliable for data sharing. In this scheme, the City recovers some of its upfront costs in making the data suitable for release. Hitachi plans to license its technology to other cities for a one-time startup fee, after which there will be no further obligations on the part of the government.

This implies that all of the revenue then goes back to Hitachi. Hitachi does not know if this is a viable model, and neither does the City of Copenhagen. At best, the City achieved a goal of limited value: it recovered some capital. At worst, the City short-circuited its own Smarter City initiative.

When we talked about access to APIs and cities wanting to charge for premium access, we decided that was ok. The City has an obligation to taxpayers to recover any revenue possible. Residents can access the API without a token for research or data storytelling; businesses can pay a small fee to increase the velocity of the data harvested from the Open Data Portal.

What makes the Hitachi deal so bad for Copenhagen is that it does not solve the data dissemination issue. Hitachi will control the data market and all access to the data.

Open Knowledge Danmark has also previously published a guest post on the same topic, titled: Impressions of City Data Exchange Copenhagen.

Energinet.dk will use CKAN to launch Energy DataStore – a free and open portal for sharing energy data

- January 24, 2017 in åben data, ckan, energi data, english, klima, offentlige data

This is a repost of an entry from Open Knowledge International's blog. Open data service provider Viderum is working with Energinet.dk, the gas and electricity transmission system operator in Denmark, to provide near real-time access to Danish energy data. Using CKAN, an open-source platform for sharing data originally developed by Open Knowledge International, Energinet.dk’s Energy DataStore will provide easy and open access to large quantities of energy data to support the green transition and enable innovation.
Image credit: Jürgen Sandesneben, Flickr CC BY

What is the Energy DataStore?

Energinet.dk holds energy consumption data from Danish households and businesses, as well as production data from wind turbines, solar cells and power plants. All this data will be made available in aggregated form through the Energy DataStore, including electricity market data and near-real-time information on CO2 emissions. The Energy DataStore will be built using the open-source platform CKAN, the world’s leading data management system for open data. Through the platform, users will be able to find and extract data manually or through an API.

“The Energy DataStore opens the next frontier for CKAN by expanding into large-scale, continuously growing datasets published by public sector enterprises”, writes Sebastian Moleski, Managing Director of Viderum. “We’re delighted Energinet.dk has chosen Viderum as the CKAN experts to help build this revolutionary platform. With our contribution to the success of the Energy DataStore, Viderum is taking the next step in fulfilling our mission: to make the world’s public data discoverable and accessible to everyone.”

Open Knowledge International’s commercial spin-off, Viderum, is using CKAN to build a responsive platform for Energinet.dk that publishes energy consumption data for every municipality in hourly increments, with a view to providing real-time data in the future. The Energy DataStore will give consumers, businesses and non-profit organizations access to information vital for consumer savings, business innovation and green technology. As Pavel Richter, CEO of Open Knowledge International, explains: “CKAN has been instrumental over the past 10 years in providing access to a wide range of government data. By using CKAN, the Energy DataStore signals a growing awareness of the value of open data and open source to society, not just for business growth and innovation, but for citizens and civil society organizations looking to use this data to address environmental issues.”

Energinet.dk hopes that by providing easily accessible energy data, citizens will feel empowered by the transparency, and that businesses can create new products and services, leading to more knowledge sharing around innovative business models.

Notes:
Energinet.dk
Energinet.dk owns the Danish electricity and gas transmission system – the ‘energy’ motorways. The company’s main task is to maintain the overall security of electricity and gas supply and create objective and transparent conditions for competition on the energy markets.
CKAN
CKAN is the world’s leading open-source data portal platform. It is a complete out-of-the-box software solution that makes data accessible – by providing tools to streamline publishing, sharing, finding and using data. CKAN is aimed at data publishers (national and regional governments, companies and organizations) wanting to make their data open and available. A slide-deck overview of CKAN can be found here.
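As a concrete illustration of the API access mentioned above: every CKAN portal exposes the same Action API under /api/3/action/, and each response is wrapped in a success/result envelope. The sketch below uses a made-up portal URL, and the helper names are our own, not part of CKAN itself.

```python
import json
from urllib.parse import urlencode

def action_url(base, action, **params):
    """Build a CKAN Action API URL (the base URL passed in is illustrative)."""
    url = f"{base.rstrip('/')}/api/3/action/{action}"
    return url + ("?" + urlencode(params) if params else "")

def parse_response(raw):
    """CKAN wraps every response as {"success": ..., "result": ...}."""
    body = json.loads(raw)
    if not body.get("success"):
        raise RuntimeError(body.get("error"))
    return body["result"]

print(action_url("https://portal.example.dk", "package_search", q="energi"))
# https://portal.example.dk/api/3/action/package_search?q=energi

# A canned response, standing in for urllib.request.urlopen(url).read():
sample = '{"success": true, "result": {"count": 2, "results": []}}'
print(parse_response(sample)["count"])  # 2
```

In a live setting one would fetch the URL with any HTTP client and page through the results; the envelope format is the same across CKAN portals.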
Viderum
Viderum is an open data solutions provider spun off from Open Knowledge, an internationally recognized non-profit working to open knowledge and see it used to empower and improve the lives of citizens around the world.
Open Knowledge International
Open Knowledge International is a global non-profit organisation focused on realising open data’s value to society by helping civil society groups access and use data to take action on social problems. Open Knowledge International does this in three ways: 1) we show the value of open data for the work of civil society organizations; 2) we provide organisations with the tools and skills to effectively use open data; and 3) we make government information systems responsive to civil society.

Energinet.dk will use CKAN to launch Energy DataStore – a free and open portal for sharing energy data

- January 24, 2017 in åben data, ckan, energi data, english, klima, offentlige data

Dette er en repost af et indlæg fra Open Knowledge Internationals blog.   Open data service provider Viderum is working with Energinet.dk, the gas and electricity transmission system operator in Denmark, to provide near real-time access to Danish energy data. Using CKAN, an open-source platform for sharing data originally developed by Open Knowledge International, Energinet.dk’s Energy DataStore will provide easy and open access to large quantities of energy data to support the green transition and enable innovation.
Image credit: Jürgen Sandesneben, Flickr CC BY

Image credit: Jürgen Sandesneben, Flickr CC BY

What is the Energy DataStore?

Energinet.dk holds the energy consumption data from Danish households and businesses as well as production data from windmills, solar cells and power plants. All this data will be made available in aggregated form through the Energy DataStore, including electricity market data and near-real-time information on CO2 emissions. The Energy DataStore will be built using the open-source platform CKAN, the world’s leading data management system for open data. Through the platform, users will be able to find and extract data manually or through an API.

“The Energy DataStore opens the next frontier for CKAN by expanding into large-scale, continuously growing datasets published by public sector enterprises”, writes Sebastian Moleski, Managing Director of Viderum. “We’re delighted Energinet.dk has chosen Viderum as the CKAN experts to help build this revolutionary platform. With our contribution to the success of the Energy DataStore, Viderum is taking the next step in fulfilling our mission: to make the world’s public data discoverable and accessible to everyone.”

Open Knowledge International’s commercial spin-off, Viderum, is using CKAN to build a responsive platform for Energinet.dk that publishes energy consumption data for every municipality in hourly increments, with a view to providing real-time data in the future. The Energy DataStore will provide consumers, businesses and non-profit organizations access to information vital for consumer savings, business innovation and green technology. As Pavel Richter, CEO of Open Knowledge International, explains, “CKAN has been instrumental over the past 10 years in providing access to a wide range of government data. 
By using CKAN, the Energy DataStore signals a growing awareness of the value of open data and open source to society, not just for business growth and innovation, but for citizens and civil society organizations looking to use this data to address environmental issues.” Energinet.dk hopes that by providing easily accessible energy data, citizens will feel empowered by the transparency and businesses can create new products and services, leading to more knowledge sharing around innovative business models.

Notes:
Energinet.dk
Energinet.dk owns the Danish electricity and gas transmission system – the ‘energy’ motorways. The company’s main task is to maintain the overall security of electricity and gas supply and create objective and transparent conditions for competition on the energy markets.
CKAN
CKAN is the world’s leading open-source data portal platform. It is a complete out-of-the-box software solution that makes data accessible – by providing tools to streamline publishing, sharing, finding and using data. CKAN is aimed at data publishers (national and regional governments, companies and organizations) wanting to make their data open and available. A slide-deck overview of CKAN can be found here.
Viderum
Viderum is an open data solutions provider spun off from Open Knowledge, an internationally recognized non-profit working to open knowledge and see it used to empower and improve the lives of citizens around the world.
Open Knowledge International
Open Knowledge International is a global non-profit organisation focused on realising open data’s value to society by helping civil society groups access and use data to take action on social problems. Open Knowledge International does this in three ways: 1) we show the value of open data for the work of civil society organizations; 2) we provide organisations with the tools and skills to effectively use open data; and 3) we make government information systems responsive to civil society.
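The post mentions that users will be able to find and extract data through an API. On a standard CKAN portal, that means the Action API (`/api/3/action/...`). The sketch below shows the general shape of such a call; the portal URL and dataset names are hypothetical placeholders, not the Energy DataStore's actual endpoints:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def action_url(base, action, **params):
    """Build a CKAN Action API URL, e.g. <base>/api/3/action/package_search?q=co2."""
    url = f"{base.rstrip('/')}/api/3/action/{action}"
    return f"{url}?{urlencode(params)}" if params else url

def search_datasets(base, query):
    """Ask a CKAN portal which datasets match `query` (makes a network call)."""
    with urlopen(action_url(base, "package_search", q=query)) as resp:
        body = json.load(resp)
    # CKAN wraps responses as {"success": ..., "result": {"results": [...]}}.
    return [pkg["name"] for pkg in body["result"]["results"]]

# Usage, with a hypothetical host:
#   search_datasets("https://energidata.example.org", "co2")
```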

Gæsteindlæg: Open Energy Days – Hackathon med åbne energidata

- July 15, 2016 in åben data, begivenhed, dataworkshop, energi data, hackathon, offentlige data

This is a guest post by Matti Bugge, a digitalisation consultant at Kultur og Borgerservice in Aarhus Municipality who has previously worked on, among other things, the Open Data Aarhus initiative.
Open Energy Days


A lot is happening in the field of energy data at the moment. Several municipalities are developing systems that can display information about the energy consumption of their buildings, consumption patterns in private households are being mapped, and numerous research projects are working on behaviour and new smart technology. In September, some of this data will be opened up for the hackathon Open Energy Days, held in Aarhus by Open Data DK and Erhvervsstyrelsen. It will be possible to explore the municipalities' data on consumption in their own buildings and in households, and work is underway to also provide access to energy data from several private companies. The purpose of the event is to generate new ideas for innovative ways of using the enormous amounts of data in the field to create socially relevant solutions and services. Participants will thus gain access to a hitherto relatively closed data domain. In addition, there is a generous entrepreneurship prize for the winners of Open Energy Days, who will receive advice from a number of companies, including Systematic, BIIR and WElearn, on turning their hackathon idea into a real start-up. Participation is free, and the organisers provide full catering throughout the weekend. Anyone can take part regardless of background, and the organisers are especially looking for students, entrepreneurs and companies with an interest in open data, the energy sector, innovation or entrepreneurship. Open Energy Days takes place from 22 to 25 September at Dokk1, Hack Kampmanns Plads 2, 8000 Aarhus. Read more at: Open Energy Days.

Gæsteblog: Impressions of City Data Exchange Copenhagen

- June 27, 2016 in announcement, fellowship

This is a guest post by Leigh Dodds, who advises on open data and is associated with the Open Data Institute (ODI). The post describes Copenhagen's “City Data Exchange” initiative, which gathers data from various open and closed sources and constitutes a kind of marketplace for data. At Open Knowledge Danmark we are particularly sceptical of the idea of taking open data published on open platforms (such as Copenhagen's open data portal, which is based on CKAN) and republishing it behind a login, with restrictions on the terms of reuse. The post was originally published on Leigh Dodds' own blog and reflects the author's own views.
 

First Impressions of Copenhagen’s City Data Exchange

Copenhagen have apparently launched their new City Data Exchange. As this is a subject which is relevant to my interests I thought I’d publish my first impressions of it. The first thing I did was to read the terms of service. And then explore the publishing and consuming options.

Current Contents

As of today, 21st May, there are 56 datasets on the site. All of them are free. The majority seem to have been uploaded by Hitachi and are copies of datasets from Copenhagen’s open data portal. Compare, for example, this dataset on the exchange and the same one on the open data portal. The open version has better metadata, clearer provenance, more choice of formats and a download process that doesn’t require a registration step. The open data portal also has more datasets than the exchange.

Consuming Data

Datasets on the exchange can apparently be downloaded as a “one time download” or purchased under a subscription model. However I’ve downloaded a few and the downloads aren’t restricted to being one-time, at least currently. I’ve also subscribed to a free dataset. My expectation was that this would give me direct access to an API. It turns out that the developer portal is actually a completely separate website. After subscribing to a dataset I was emailed a username and password (in clear text!) with instructions to go and log into that portal. The list of subscriptions in the developer portal didn’t quite match what I had in the main site, as one that I’d cancelled was still active. It seems you can separately unsubscribe there, but it’s not clear what the implications of that might be. Weirdly, there’s also a prominent “close your account” button in the developer portal, which seems a little odd. It feels like two different products or services have been grafted together. The developer portal is very, very basic. The APIs exposed by each dataset are:
  • a download API that gives you the entire dataset
  • a “delta” API that gives you changes made between specific dates.
There are no filtering or search options. No format options. Really there’s very little value-add at all. Essentially, subscribing to a dataset gives you a URL from which you can fetch the dataset on a regular basis rather than having to manually download it. There’s no obvious help or support for developers creating useful applications against these APIs. Authorising access to an API is done via an API key which is added as a URL parameter. They don’t appear to be using OAuth or similar to give extra security.
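In practice, then, consuming a subscription boils down to periodically hitting those two endpoints. A sketch of what that might look like; note that the paths and parameter names here are my own guesses for illustration, since only the API-key-in-URL behaviour is described above:

```python
from urllib.parse import urlencode

def download_url(base, dataset_id, api_key):
    """URL for the 'download' API: fetches the entire dataset.
    Path and parameter names are assumptions, not documented values."""
    return f"{base.rstrip('/')}/datasets/{dataset_id}/download?{urlencode({'apikey': api_key})}"

def delta_url(base, dataset_id, api_key, since, until):
    """URL for the 'delta' API: changes made between two dates."""
    params = {"apikey": api_key, "from": since, "to": until}
    return f"{base.rstrip('/')}/datasets/{dataset_id}/delta?{urlencode(params)}"
```

Passing the key as a URL parameter like this is exactly why the lack of OAuth matters: query strings tend to end up in server logs and intermediary proxies, whereas a key sent in an Authorization header would not.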

Publishing Data

In order to publish data you need to have provided a contact phone number and address. You can then provide some basic configuration for your dataset:
  • Title
  • Description
  • Period of update: one off, hourly, daily, weekly, monthly, annual
  • Whether you want to allow it to be downloaded and if so, whether its free or paid
  • Whether you want to allow API access and if so, whether its free or paid
Pricing is in Danish kroner and you can set a price per download or a monthly price for API access (such as it is). To provide your data you can either upload a file or give the data exchange access to an API. It looks like there’s an option to discuss how to integrate your API with their system, or you can provide some configuration options:
  • Type – this has one option “Restful”
  • Response Type – this has one option “JSON”
  • Endpoint URL
  • API Key
When uploading a dataset, you can tell it a bit about the structure of the data, specifically:
  • Whether it contains geographical information, and which columns include the latitude and longitude.
  • Whether it’s a time series and which column contains the timestamp
This is as far as I’ve tested with publishing, but looks like there’s a basic workflow for draft and published datasets. I got stuck because of issues trying to publish and map a dataset that I’d just downloaded from the exchange itself.
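The structural hints the upload form asks for (geo columns, timestamp column) could be derived mechanically from a CSV header. A small illustrative sketch; the column-name heuristics are mine, not the exchange's:

```python
def structure_hints(header):
    """Guess which columns hold latitude/longitude or a timestamp,
    mirroring the metadata the upload form asks for. Pure heuristic."""
    hints = {"geo": None, "time": None}
    cols = [c.strip().lower() for c in header]
    lat = next((c for c in cols if "lat" in c), None)
    lon = next((c for c in cols if "lon" in c or "lng" in c), None)
    if lat and lon:
        hints["geo"] = (lat, lon)
    hints["time"] = next((c for c in cols if c in ("timestamp", "time", "date")), None)
    return hints
```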

The Terms of Service

There are a number of interesting things to note here.

Section 7, Payments: “we will charge Data Consumers Service Delivery Charges based on factors such as the volume of the Dataset queried and downloaded as well as the frequency of usage of the APIs to query for the Datasets”. It’s not clear what those service delivery charges will be yet. The platform doesn’t currently provide access to any paid data, so I can’t tell. But it would appear that even free data might incur some charges. Hopefully there will be a freemium model? It seems likely, though, that the platform is designed to generate revenue for Hitachi through ongoing use of the APIs. But if they want to raise traffic they need to think about adding a lot more power to the APIs.

Section 7, Payments: “As a Data Consumer your account must always have a positive balance with a minimum amount as stated at our Website from time to time”. Well, this isn’t currently required during either registration or signing up to subscribe to an API. However I’m concerned that I need to let Hitachi hold money even if I’m not actively using the service. I’ll also note that in Section 8, they say that on termination, “Any positive balance on your account will be payable to you provided we receive payment instructions.” Given that the two payment options are PayPal and Invoice, you’d think they might at least offer to refund money via PayPal for those using that option.

Section 8, Restrictions in use of the Services or Website: You may not “access, view or use the Website or Services in or in connection with the development of any product, software or service that offers any functionality similar to, or competitive with, the Services”. So I can’t, for example, take free data from the service and offer an alternative catalogue or hosting option? Or provide value-added services that enrich the freely available datasets? This is purely about protecting the platform, not enabling consumers or innovation.
Section 12, License to use the Dataset: “Subject to your payment of any applicable fees, you are granted a license by the Data Provider to use the relevant Dataset solely for the internal purposes and as otherwise set out under section 14 below. You may not sub-license such right or otherwise make the Dataset or any part thereof available to third parties.” Data reuse rights are also addressed in Section 13, which includes the clause: “You shall not…make the Dataset or any part thereof as such available to any third party.” Section 14, meanwhile, explains that as a consumer you may “(i) copy, distribute and publish the result of the use of the Dataset, (ii) adapt and combine the Dataset with other materials and (iii) exploit commercially and noncommercially” and that “The Data Provider acknowledges that any separate work, analysis or similar derived from the Dataset shall vest in the creator of such”.

So, while they’ve clearly given some thought to the creation of derived works and products, which is great, the data can only be used for “internal purposes”, which are not clearly defined, especially with respect to the other permissions. I think this precludes using the data in a number of useful ways. You certainly don’t have any rights to redistribute, even if the data is free. This is not an open license. I’ve written about the impacts of non-open licenses. It appears that data publishers must agree to these terms too, so you can’t publish open data through this exchange. This is not a good outcome, especially if the city decides to publish more data here and on its open data portal. The data that Hitachi have copied into the site is now under a custom licence. If you access the data through the Copenhagen open data portal then you are given more rights. Amusingly, the data in the exchange isn’t properly attributed, so it breaks the terms of the open licence. I assume Hitachi have sought explicit permission to use the data in this way?
Overall I’m extremely underwhelmed by the exchange and the developer portal. Even allowing for it being at an early stage, it’s a very thin offering. I built more than this with a small team of a couple of people over a few months. It’s also not clear to me how the exchange in its current form is going to deliver on the vision. I can’t see how the exchange is really going to unlock more data from commercial organisations. The exchange does give some (basic) options for monetising data, but has nothing to say about helping with all of the other considerations important to data publishing.
