SOTM

I gave a talk on crowdsourcing at the State of the Map (openstreetmap.org) conference. (The slides are posted on my Twitter feed at https://twitter.com/geolytica/status/531501476708114432 and are also available on Vimeo.)

One observation I took home from the conference is that the state of public data around the world is similar to that in Canada, in the sense that governments and their affiliated entities hold on to the data for as long as possible, despite the fact that doing so adversely affects their economies and goes against the public good. (According to the OSM France “Bano” project, a country loses up to 0.5% of its GDP due to the lack of publicly available addressing data. Source)

Crowdsourcing the data is not an optimal solution in the absence of a data feed from its authoritative source, because it results in datasets that contain errors. Still, in my view it is the only way to open up the data when the decision makers are convinced that keeping it closed is better for their budgets. (An interesting figure of 0.5 billion pounds was thrown around by Royal Mail CEO Moya Greene as the value of the closed postcode list, in her arguments for keeping that dataset closed. She also happened to be Canada Post’s CEO at the time it started its legal efforts aimed at enforcing its alleged intellectual property rights over Canadian postal codes.)

France, on the other hand, doesn’t have that particular problem: it never made the effort to create a postal code system like the Canadian or British ones, so it is hard to make the half-billion-dollar argument there. That does not mean, however, that whatever system they do have is open. People from the “Bano” project had to lobby hard to get the list of up to 1000 postal codes created by the French postal service opened to the public, and when it finally was, it was full of errors. Not only that, but the French postal service maintains 4 different street address datasets (one for regular mail, one for advertising mail, one for parcels, and another for a purpose I can’t remember now). All 4 have quality issues, and the 4 departments that created them neither talk to nor cooperate with each other to improve their respective datasets. Funny stories of government inefficiency at the public’s expense. The Economist also wrote a piece on this topic a month ago.

In closing, public data all around the world is at various stages of unavailability because certain people of influence are convinced it is worth a lot of money. Nobody has yet shown how much money they are actually making from licensing this data. I doubt it is half a billion pounds, or 0.5% of GDP.

I am certain it is more akin to a hidden tax we all have to pay.