In an important sense, AI is the reason we humans invented computers in the first place. At first the motivation for work on AI was reflective – a desire to better understand the way we think, by looking in a mirror – and mixed with the Everest motive: “because it’s there”; because we might be able to. Of course, actually building computers, which in the 1940s could hardly be expected to run AI software, was driven by the urgent need to break codes and defeat fascism in WWII.
Now, though, we’re facing a situation where we’ve got computers, and we actually need AI. Our society is deeply interconnected: almost everything we do depends on what other people do; our systems depend on other systems; our rules depend on other rules. And there’s no easy way to reduce this interconnectedness; we’ve set things up this way because it allows us to live richer lives. And now there are so many of us humans that our very lives depend on the resource-use efficiencies this interconnection brings, and will continue to bring.
There’s a risk here, though: it’s not clear that the systems we’ve built are stable. The 2008 financial crisis and the 2009 H1N1 pandemic are only the most recent examples of barely, and only partially, averted disasters of global interconnection, with global effect. World War I, the Spanish Flu, the Great Depression, the decimation of the Americas during colonization, and the Spanish Inquisition are examples of network disasters in earlier, far less interdependent, eras.
At a more personal level, we’re faced with looming networked disasters: failing to notice that your software implements one of millions of patented ideas, or has inadvertently included GPL code, can destroy your livelihood; failing to track, say, the purchase time of a stock lot can lead to a failed tax audit; failing to fully understand a mortgage contract can cost you your house, and with it years’ worth of a modest income; have your dangan lost in China, or gain a felony conviction in the US, and lose any reasonable prospect of a fulfilling career; fail to pay an insurance premium in a country without universal care and lose your health, or possibly your life.
Systems that can lurch into disaster in this way are unstable. Some may be inherently and irreducibly unstable – and those systems we should strive to avoid completely. Others can be kept stable by active control. The problem is that as the systems become faster and more efficient, to our benefit, they also appear to be becoming more unstable, and, since many of them are supported by gigaflop computing and speed-of-light communications, the instability can manifest at super-human speeds, and with super-human complexity.
The term “super-human” is not used here for effect; it’s quite literal. Human beings have limitations, the most important being output bandwidth and memorization speed. These limitations mean that there are problems so complex that human beings – alone or communicating – simply cannot solve them. Even all six billion of us. In other respects, including raw computational power and sensory processing, we humans far out-compute even the largest supercomputers. What we need, to maintain the stability and increase the effectiveness of our systems, is super-human computers, and human super-computers, working together. What we need is a new, AI- and psychology-based field I’ve begun to call ‘human-computer collaboration’.