This weekend, residents of Hawaii woke up to an unpleasant text message from the authorities informing them of an inbound ballistic missile. As we now know, it was a false alarm, although the 38 minutes it took to retract the alert were presumably anxious ones for those who received it. The false alert was all the more concerning because of the already heightened tensions with North Korea, a country that has in fact been launching nuclear-capable ballistic missiles into the Pacific Ocean as part of its nuclear program. Initial root-cause analysis revealed that the Hawaiian alert was triggered through the user interface: the test-warning option sat adjacent to the real-warning option on a pull-down menu, and a trivial mistake selected the wrong one.
What happened in Hawaii is only one of many such events, albeit among the scarier. When Air France Flight 447 went down in the South Atlantic, the aircraft was flown intact, engines running, into the water because the pilots became confused about their airspeed amid conflicting messages from the aircraft's computer systems. A recent fatal Tesla accident occurred during self-driving mode when the car's computer lost situational awareness of its immediate environment. Increasingly, we depend on the human-machine interface for our safety and well-being. Yet, as with HAL in Kubrick's 2001: A Space Odyssey, not every contingency can be programmed for.
It doesn’t take much imagination to surmise that North Korea’s military command learned of the Hawaiian nuclear alert shortly after it reached millions across Oahu and the other islands that make up the state. Given the regime’s paranoia, it is relatively easy to see how it might have interpreted the alert not as an error but as a US false-flag operation to justify a preemptive strike. In such a situation, where the North Koreans believed they must “use or lose” their arsenal, they might well have attacked, with catastrophic results for all.
As more of our technologies become AI-enabled, the potential for common mishaps will go down (e.g., self-driving cars will get into fewer fender-benders), yet the potential for long-tail “black swan” events cannot be discounted. Returning to the events in Hawaii: it has always been too easy to launch nuclear weapons. In the U.S. there is a single point of potential failure, the President, who alone has the authority to order a launch without any check or balance from Congress. With AI increasingly entering the picture, the information stream that might lead to a failed decision flows through the black box of machine learning and opaque algorithms. Where the consequences are high, it is high time that neither machine nor human could commit a catastrophic error through a trivial mistake.