Thursday 19 November 2009

Society, eGovernment, Web 3.0 and Us

[Image via Wikipedia: location of Sweden within Europe]

Today I took part in a panel discussion with Munir Ismet, Andy Mulholland & Anthony Williams at the 5th Ministerial eGovernment Conference 2009 in Malmö, Sweden. The talk made the case that Web 2.0 crowd-sourcing depends on a very limited resource: human attention and communication, and that only by harnessing the collaborative work of people and intelligent computers can we make the systems that support our societies really work.

In researching the talk, I did a quick calculation: there are 7 billion people, more or less, in the world. We can speak about 4 words per second, and each word has about 5 characters (let’s say 16 bits each). That’s 320 bits per second. So all of us, maxing out our output bandwidth (blogging is slower, and can’t really be done while speaking), have an aggregate output data rate of about 2.25 Tbps. Sounds like a lot, doesn’t it? But it’s only a quarter of the bandwidth of one, single, Dense Wavelength Division Multiplexed fibre optic cable (a fast one).
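If you want to check the arithmetic, here is a minimal sketch of that back-of-the-envelope calculation; the population, speaking rate, word length and bits-per-character figures are just the rough assumptions above, not measurements:

```python
# Back-of-the-envelope estimate of humanity's aggregate "output bandwidth".
# All inputs are the rough assumptions from the post, not measured values.

people = 7e9            # world population, give or take
words_per_second = 4    # brisk speech
chars_per_word = 5      # average word length
bits_per_char = 16      # a generous character encoding

bits_per_person = words_per_second * chars_per_word * bits_per_char   # 320 bps
total_bps = people * bits_per_person                                  # ~2.24e12 bps

print(f"per person: {bits_per_person:.0f} bps")
print(f"everyone at once: {total_bps / 1e12:.2f} Tbps")   # roughly 2.25 Tbps
```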

And yet our systems, from which we derive enormous benefit, saturate the capacity of a very, very large number of optical fibres. We – all of us, all together – cannot possibly monitor all of this, and of course we shouldn’t try. But inflexible computer systems can’t either: at base, inflexible computation outsources flexibility to human beings, and we’re going to run out of people to do that, too.

Here’s the conference programme (http://www.egov2009.se/programme/) and here’s a link to my talk (http://www.slideshare.net/witbrock/talk-on-human-computer-collaboration-from-egov2009-conference).


Tuesday 18 August 2009

Why we need Human-Computer Collaboration (I)

[Image via Wikipedia: EDSAC, one of the first stored-program computers]
In an important sense, AI is the reason we humans invented computers in the first place. At first the motivation for work on AI was reflective – a desire to better understand the way we think, by looking in a mirror – and mixed with the Everest motive: “because it’s there”; because we might be able to. Of course, actually building computers, which in the 1940s could hardly be expected to run AI software, was driven by the urgent need to break codes and defeat fascism in WWII.
Now, though, we’re facing a situation where we’ve got computers, and we actually need AI. Our society is deeply interconnected: almost everything we do depends on what other people do; our systems depend on other systems; our rules depend on other rules. And there’s no easy way to reduce this interconnectedness; we’ve set things up this way because it allows us to live richer lives. And now there are so many of us humans that our very lives depend on the resource-use efficiencies this interconnection brings, and will bring in the future.
There’s a risk here, though: it’s not clear that these systems we’ve built are stable; the 2009 financial crisis and the H1N1 pandemic are only the most recent examples of barely, and only partially, averted disasters of global interconnection, with global effect. World War I, the Spanish flu, the Great Depression, the decimation of the Americas during colonization, and the Spanish Inquisition are examples of network disasters in earlier, far less interdependent, eras.
At a more personal level, we’re faced with looming networked disasters: failing to notice that your software implements one of millions of patented ideas, or has inadvertently included GPL code, can destroy your livelihood; failure to track, say, the purchase time of a stock lot can lead to a failed tax audit; failure to fully understand a mortgage contract can cost one’s house, and along with it years’ worth of a modest income; have your dangan lost in China, or gain a felony conviction in the US, and lose any reasonable prospect of a fulfilling career; fail to pay an insurance premium in a country without universal care and lose your health, or possibly your life.
Systems that can lurch into disaster in this way are unstable. Some may be inherently and irreducibly unstable – and those systems we should strive to avoid completely. Others can be kept stable by active control. The problem is that as the systems become faster and more efficient, to our benefit, they also appear to be becoming more unstable; and since many of them are supported by gigaflop computing and speed-of-light communications, the instability can manifest at super-human speeds, and with super-human complexity.
The term “super-human” is not used here for effect. It’s quite literal. Human beings have limitations, the most important being output bandwidth and memorization speed. These limitations mean that there are problems so complex that human beings – alone or communicating – simply cannot solve them. Even all 6 billion of us. In other respects, including raw computational power and sensory processing, we humans far out-compute even the largest supercomputers. What we need to maintain the stability and increase the effectiveness of our systems is super-human computers, and human super-computers, working together. What we need is a new, AI- and psychology-based field I’ve begun to call ‘human-computer collaboration’.

Wednesday 1 April 2009

Why Inference is Better than Hacking

Large scale knowledge bases are inherently more flexible than dedicated databases and specific software. In traditional software, a task is preconceived, an algorithm to perform that task – and no other – is conceived and implemented, along with a task-specific data representation, and data is collected and maintained in compliance with that representation. With large scale knowledge bases, both the data to be processed, and the means to apply data to solving tasks, generally, are stored in a single logical representation. It is then the responsibility of the inference system, not a programmer, both to identify the steps required to perform the task, and to identify the required data and transform it into the right form for the required processing.

This method of computation is fundamentally more powerful than manual programming, just as the invention of stored-programme computation in the 1940s was fundamentally more powerful than the dedicated calculators and patch-panel setups that preceded it. But, just as stored-programme computing required a huge jump in the complexity (and memory) of early computing devices, knowledge-based computing imposes requirements that are only being satisfied after sixty years of theoretical and engineering advances.

Some of these requirements are physical: to simultaneously search for solution methods, solutions, data transformations, and data, computers must be very powerful, and have very large storage. But many of the requirements have been conceptual: we have needed to assemble enough data, in computer-understandable form, to allow solutions in principle; we have needed to assemble enough background knowledge about tasks and data transformations to allow a solution to be findable; and we have needed to develop reasoning techniques that allow a solution to be found.
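To make the contrast concrete, here is a toy sketch of my own (it has nothing to do with Cyc’s actual representation or inference engine): a hand-written function that can answer exactly one preconceived question, next to a few facts and a rule held in a single relational representation, over which a naive forward-chaining matcher derives the same answer and can also field queries the original programmer never anticipated.

```python
# Toy contrast between task-specific code and declarative knowledge plus inference.
# The facts, rule, and matcher below are purely illustrative, not Cyc.

# Traditional approach: one task, one algorithm, one data layout.
PARENTS = {"bob": ["alice"], "charlie": ["bob"]}   # child -> list of parents

def is_grandparent(person, grandchild):
    """Answers exactly one preconceived question, and nothing else."""
    return any(person in PARENTS.get(p, []) for p in PARENTS.get(grandchild, []))

# Knowledge-based approach: facts and rules share one relational representation.
FACTS = {("parent", "alice", "bob"), ("parent", "bob", "charlie")}
RULES = [  # (head, body): the head holds whenever every body goal holds
    (("grandparent", "?x", "?z"), [("parent", "?x", "?y"), ("parent", "?y", "?z")]),
]

def substitute(term, bindings):
    return tuple(bindings.get(t, t) for t in term)

def unify(goal, fact, bindings):
    """Match a goal (which may contain ?variables) against a ground fact."""
    if len(goal) != len(fact):
        return None
    bindings = dict(bindings)
    for g, f in zip(goal, fact):
        g = bindings.get(g, g)
        if g.startswith("?"):
            bindings[g] = f
        elif g != f:
            return None
    return bindings

def match_all(goals, facts, bindings=None):
    """Yield every set of bindings that satisfies all goals against the fact set."""
    bindings = bindings or {}
    if not goals:
        yield bindings
        return
    for fact in facts:
        b = unify(substitute(goals[0], bindings), fact, bindings)
        if b is not None:
            yield from match_all(goals[1:], facts, b)

def forward_chain(facts, rules):
    """Naively apply rules until no new facts appear."""
    facts = set(facts)
    while True:
        new = {substitute(head, b)
               for head, body in rules
               for b in match_all(body, facts)}
        if new <= facts:
            return facts
        facts |= new

KB = forward_chain(FACTS, RULES)
print(is_grandparent("alice", "charlie"))                         # True, by dedicated code
print(next(match_all([("grandparent", "?who", "charlie")], KB)))  # {'?who': 'alice'}, by inference
# The same KB answers questions the dedicated function was never written for:
print(next(match_all([("parent", "?p", "bob")], KB)))             # {'?p': 'alice'}
```

The point of the sketch is only that the second approach puts the burden of finding and combining the right facts on the inference machinery rather than on a programmer who anticipated the question.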

The Internet generally, and Web 2.0 and the Semantic Web in particular, are providing the seeds of a solution to the data problem, as are specific high-quality KBs such as UMLS (Bodenreider, 2004) and the AKB (Deaton et al., 2005). The laborious hand assembly of the existing Cyc KB (Lenat, 1995) was required to provide an inferentially productive basis for reasoning over these vastly larger amounts of knowledge.