Artificial intelligence (AI) is a hot topic in fields from manufacturing to medicine. Developments over the last ten years have brought AI technology, once a fiction reserved for the movies, to private corporations and even to everyday homes. Examples include:
- 2004 The Defense Advanced Research Projects Agency (DARPA) sponsors a driverless car grand challenge. Technology developed by the participants eventually enables Google to develop a driverless automobile and prompts changes to existing transportation laws.
- 2005 Honda’s ASIMO humanoid robot walks as fast as a human and delivers trays to customers in a restaurant setting. The same technology is now used in military robots.
- 2011 IBM’s Watson wins Jeopardy! against top human champions. It is being trained to provide medical advice to doctors, and IBM claims it can master any domain of knowledge.
- 2012 Google releases its Knowledge Graph, a semantic-search knowledge base that some observers view as a first step toward true artificial intelligence.
- 2013 The BRAIN Initiative, aimed at reverse engineering the human brain, receives a proposed $3 billion in funding from the White House, following an earlier European initiative that committed a billion euros to the same goal.
- 2014 A chatbot convinces 33% of judges that it is human, thereby passing a restricted version of the Turing Test.
Almost every day, headlines showcase the latest advancements in AI. Although many of these advancements are celebrated for increasing efficiency or improving security, they come with failures, too. Some are funny, like when one company’s chatbots were shut down after developing their own language, or when a popular virtual assistant blasted music while the resident was out, prompting German police to break into the apartment.
Others are not. Some failures are annoying, like when a “smart speaker” experienced nearly a 100% failure rate in June 2017. Others are offensive, such as when a smart messaging app suggested a man-in-a-turban emoji as a response to a gun emoji. And others are potentially dangerous, like when autonomous vehicles are involved in accidents, or when a highly touted facial recognition program was thwarted by a mask a week after its release.
With the risks evolving just as fast as the technology itself, both insurers and insureds will be hard-pressed to keep up. Questions of liability, insurance coverage, and product response are becoming increasingly murky. For example, a loss scenario involving a freight train wreck used to be relatively straightforward. If the train failed to brake, resulting in a crash, the liability evaluation would likely focus on the operator, the train manufacturer, and/or the brake manufacturer. A dispute over fault would likely arise, but the possibilities were limited. Add AI, and the same crash, now involving an autonomous freight train, complicates the liability discussion. Was the circuitry at fault? A chip? Was there a fault in the programming? Was there a connectivity issue? Was the train hacked? Did the train choose not to apply the brakes because of the specific circumstances it faced? These become pressing questions in determining which policy will cover the loss.
For instance, if an AI blocks emails that should have been allowed to reach a server, a technology errors and omissions (E&O) policy, which is designed to cover losses resulting from faulty software and other technology products and services, may cover the loss. Similarly, companies may tap their E&O policies where an AI performs as intended but produces poor results because it learned from bad data.
Potential coverage becomes less clear where an AI failure results in physical damage, and even less so when a company’s own losses stem from its use of AI. Returning to the freight train scenario described above, suppose a programming error caused a security flaw in the software operating the autonomous train. A hacker then exploited the flaw, disabling the brakes and causing the train to crash into another train. The crash rendered the train and the rest of the fleet inoperable for several weeks while the network was restored. Besides the physical damage caused by the crash, the company experienced significant business interruption losses, and the manufacturer that used the freight train to transport its products took a huge reputational hit because it could not supply the contracted products. The train company’s property or general liability policy might cover the physical damage and business interruption, but perhaps not if the damage resulted from a cyberattack. Similarly, the company’s cyber policy might cover any data lost because of the attack, but not the property damage or business interruption. Would the manufacturer’s product liability policy respond? Or perhaps the software developer’s errors and omissions policy? Maybe, but perhaps not if the damage was caused by the attacker rather than directly by a programming error.
As with any insurance loss, there’s likely to be a lot of finger-pointing. What’s different here is that AI technology is outpacing changes in insurance policy language, which has the potential to leave significant coverage gaps for insureds. In 2015, AIG introduced its Robotics Shield policy, which it marketed as providing “end to end risk management” for the robotics industry. The insurance market, however, has not yet addressed the impact AI may have on a broader base of insureds, potentially leaving those who utilize AI uncovered.
Companies that depend on AI should evaluate whether scenarios like those described above could affect their business. If so, they should carefully review their insurance coverage to determine whether the losses would be covered under their existing policies; qualified coverage counsel can assist in that evaluation. If their coverage leaves gaps, they may want to consider purchasing a specialized policy.