Radical transparency is an intriguing school of thought, with the philosophy that the best society is a transparent society. In other words, all data that can be opened should be opened. I find such transparency an interesting concept, and in many cases probably worth aiming for. The key question is: what is a realistic environment in which to begin experimenting with it? I focus here on one tightly restricted area: data transparency in shipping safety. [Finnish version: Click here]
For a slightly different perspective on this issue by Niko Porjo, see here.
At the moment, international standards require large ships to transmit AIS information. At minimum, this information contains, in standardized format, the ship’s identity, location, speed, and bearing. The AIS information is transmitted in the clear and its purpose is to help ships maintain positional awareness of other traffic. Internet distribution of the data originally raised some controversy, but in practice the controversy is over: the AIS information is public.
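To illustrate just how compact and tightly specified the AIS position report is, here is a minimal sketch of packing and unpacking a few of its fields. The bit layout follows the published Class A position report (message type 1) and the 6-bit ASCII armoring used in AIVDM payloads; the helper names are my own, error handling is omitted, and this is not a production decoder.

```python
# Sketch of AIS Class A position-report (message type 1) packing/unpacking.
# Bit offsets follow the standard layout; helper names are illustrative.

def to_sixbit(bits: str) -> str:
    """Armor a bit string into the 6-bit ASCII used in AIVDM payloads."""
    out = []
    for i in range(0, len(bits), 6):
        v = int(bits[i:i + 6], 2)
        out.append(chr(v + 48 if v < 40 else v + 56))
    return "".join(out)

def from_sixbit(payload: str) -> str:
    """Recover the raw bit string from a 6-bit ASCII payload."""
    bits = []
    for c in payload:
        v = ord(c) - 48
        if v > 40:
            v -= 8
        bits.append(format(v, "06b"))
    return "".join(bits)

def field(value: int, width: int) -> str:
    """Render an integer as a fixed-width (two's-complement) bit field."""
    return format(value & ((1 << width) - 1), f"0{width}b")

def encode_position(mmsi, sog_knots, lon_deg, lat_deg, cog_deg):
    """Pack a subset of a type-1 position report (168 bits total)."""
    bits = (
        field(1, 6)                           # message type 1
        + field(0, 2)                         # repeat indicator
        + field(mmsi, 30)                     # ship identity (MMSI)
        + field(0, 4) + field(128, 8)         # nav status, ROT (not available)
        + field(round(sog_knots * 10), 10)    # speed over ground, 0.1 kn
        + field(0, 1)                         # position accuracy flag
        + field(round(lon_deg * 600000), 28)  # longitude, 1/10000 arc-minute
        + field(round(lat_deg * 600000), 27)  # latitude, 1/10000 arc-minute
        + field(round(cog_deg * 10), 12)      # course over ground, 0.1 deg
    )
    return to_sixbit(bits + "0" * (168 - len(bits)))  # zero-pad the rest

def decode_position(payload):
    """Unpack the same fields from an armored payload."""
    bits = from_sixbit(payload)
    def signed(s):
        v = int(s, 2)
        return v - (1 << len(s)) if s[0] == "1" else v
    return {
        "mmsi": int(bits[8:38], 2),
        "sog": int(bits[50:60], 2) / 10,
        "lon": signed(bits[61:89]) / 600000,
        "lat": signed(bits[89:116]) / 600000,
        "cog": int(bits[116:128], 2) / 10,
    }
```

A full decoder would also handle the remaining fields (rate of turn, heading, timestamp) and the NMEA sentence framing and checksum that wrap the payload on the air interface; the point here is simply that the entire report fits in 168 bits precisely because every field is standardized down to the bit.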
It is quite sensible to ask a further question: should even more information from the ships be openly available? There are good reasons to ask: above all, in an emergency it would make the passengers active participants rather than passive subjects. It would also help to expose poor safety practices that would remain invisible in a closed environment. The technical problem can be stated quite simply: should the information currently collected by the black box be public (although not necessarily in real time)? More radically, it is technically feasible to open up everything that is visible on the bridge. Should that be done?
Unfortunately, I tend to arrive at a pessimistic conclusion in this specific case. Openness would benefit the overall system, but it would not benefit any of the individual players, at least in the early stages. The problem with transparency in this particular area is that the first adopter takes most of the risk. Although radical transparency is a good concept to aim for, shipping safety does not seem like a reasonable platform on which to start experimenting with it.
The authorities cannot be bossed around
In practice, safety is defined and enforced by national or international authorities. In a democratic system it is in principle possible to force the authorities to make good decisions; in practice, it is painfully difficult. Authorities depend on what legislators decide. Legislation in turn is a slow process, subject to massive lobbying from established interests and requiring a significant push from citizens. Judging by the lukewarm reception these issues receive, there does not seem to be any real political push in this direction.
Laws and directives change most rapidly through major accidents, which lead to safety recommendations. Even then, the new directives may or may not be followed adequately, especially if compliance requires significant amounts of money. Waiting for the authorities to act requires patience and (unfortunately) often new accidents. This path does work, but it is not likely to lead to rapid or radical solutions.
Anonymization does not work
To balance data transparency against personal privacy, safety-related information should be anonymized. Unfortunately, this does not work in the Internet age, where all information (whether correct or not) will be on Twitter within minutes of an accident. The most tragic failure of anonymization is the Überlingen air accident of 2002, in which two aircraft collided. The investigation report concluded that it was a system-wide problem and that no single individual was to blame. Nevertheless, a man who lost his family in the accident blamed the air traffic controller, found out his identity and home address, and murdered him.
The Überlingen case is extreme, but in an open system there is no automatic mechanism to protect those initially blamed for an accident. A serious scenario is that after any accident, the people potentially responsible are identified immediately, blamed by the media, have their personal information dug up, and become targets of Internet mobbing. The risk may look small now, but cyber-bullying in South Korea already shows that it is real. How many people would be willing to work under such circumstances?
Data without metadata is nothing
The technical problems are considerable. The AIS parameters are tightly standardized and easily understandable. If more generic information is to be transmitted, its interpretation becomes problematic. Raw data is just rows of numbers; processing, interpretation, and display are what turn it into information. Someone must do this work, must be paid for it, and must be responsible for quality control.
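To make the point concrete, here is a small hypothetical sketch: the same row of numbers is uninterpretable until a schema supplies names and units. All field names, units, and values below are invented for illustration; they do not come from any real ship's data feed.

```python
# A raw telemetry row is just numbers; a schema (metadata) turns it
# into interpretable information. Names, units, and values are invented.

raw_row = [247.0, 14.2, 3.1, 0.87]

# Without this schema, nothing distinguishes a heading from a fuel level.
schema = [
    ("heading", "degrees true"),
    ("speed", "knots"),
    ("rudder_angle", "degrees"),
    ("ballast_fill", "fraction"),
]

def interpret(row, schema):
    """Pair each raw value with its name and unit from the schema."""
    return {name: (value, unit) for (name, unit), value in zip(schema, row)}

record = interpret(raw_row, schema)
print(record["speed"])  # (14.2, 'knots')
```

The schema is exactly the part that someone has to define, maintain, and quality-control; publishing `raw_row` alone, without it, satisfies the letter of transparency while conveying almost nothing.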
Some parameters will be considered trade secrets by the shipping companies (or will at least fall into a gray area). Realistically speaking, a shipping company will either not want to do such an analysis or will want to keep the results secret. It is certainly possible to force a company to make the raw data available; without extra incentives, however, it is hardly realistic to expect the company to release the data in a form that competitors could easily use.
Transparency benefits the unscrupulous
Transparency is an equalizing safety factor when all parties have the same information about one another. If one party stops sharing information, it creates a business advantage for itself (even more so if it begins to distort the information it shares). No idealism can change this fact; surveillance and enforcement are needed, and the enforcement needs to be global. It can be argued that such a global enforcement system already exists for technologies such as nuclear energy; that is true, but nuclear energy was born in completely different historical circumstances than shipping, and was in fact able to start from a clean slate.
Open real-time information also makes piracy easier. More information means more opportunities to plan attacks. Merchant ships near the coast of Somalia will certainly not be willing to participate in experiments in radical transparency.
Terrorism is invoked too easily, but it cannot be ignored. Any transparency model must accept the brutal truth that there are destructive entities. The sinking of a large passenger ship might not even be the worst-case scenario; societies can recover from large losses of life very rapidly, even though the scars are horrible. A more worrisome scenario might be an Exxon Valdez-type massive oil leak event next to a nuclear power plant.
What can we do?
Many people oppose this type of radical transparency, whether with good reason or by knee-jerk reflex. How could they be motivated to at least try it? Even if calculations clearly show that transparency benefits the whole system in the long run, people are irrational and think in the short run. Given that early adopters take a risk, how would they be compensated for it? Shipping has a long history and legacy practices that are difficult to overcome. Radical transparency absolutely should be tested in a suitable environment. However, I am forced to conclude that shipping safety is simply not a sensible environment in which to start.