Friday 5 January 2018
Monday 29 December 2014
Too Many Languages; Too Much Code; Too Widely Connected
Saturday 24 December 2011
No Limits to Intelligence
An article earlier in the year in Scientific American suggested that there may be inherent limits on human intelligence, over and above those imposed by the process of birth.
One intriguing suggestion in the article is that functional specialisation in neuroanatomy may not be driven by algorithmic requirements (i.e. by partitioning the task being the “right way” to go about, for example, seeing), but by connectivity limitations: functions cluster together because it’s not possible to maintain enough longer-distance connections. The article also pointed out that, by some measures at least, differences in intelligence even across humans are predicted by the speed of neural communication, which is governed, roughly, by the number of neural links a signal must traverse: long-distance communication, when it is possible, matters.
The main point of the article, however, is that humans are unlikely to be able to evolve to become much more intelligent, but that social connections and technology, outside the brain, may have made that unimportant.
However, the evolutionary limit is a pretty specific thing: it’s a constraint on what a type of organism can become, given what it has been previously. For us as humans, it means where we could get to as a species, given no “non-human” intermediate states, and large, but not astronomical, amounts of time. But we are no longer constrained in the same ways. As a simple example, it seems likely that the size of the human birth canal has been pushed as far as it can go, traded off against the survival costs of giving birth earlier in brain growth. But that constraint is now released: infants can be born safely far earlier, after which their brains could grow as big as they like.
Many of the other constraints (transmission speed, neuron size, axon diameter) the article discusses could also be released if we needed them to be, since we no longer need to respect the path dependencies. For example, we now use myelinated axons for rapid signalling because we started with axons; but in the reachable future we will almost certainly be able to engineer multiplexed metal paths, or even multiplexed optical paths (see the recent experiments on inducing lasing inside cells, under conditions that could probably be made to hold “naturally”). This would simultaneously reduce the limitations on wiring density (since these “neoaxons” could be far thinner than the current hundreds to thousands of nanometres; IC transistors are around 20 nm) and increase transmission speed (by around 2 million times).
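As a quick back-of-the-envelope check of that “around 2 million times” figure, here is a minimal Python sketch. The ~150 m/s conduction velocity for fast myelinated axons is my assumption for illustration, not a number from the article.

# Rough check of the "around 2 million times" speed-up claim.
# Assumption: fast myelinated axons conduct at roughly 100-150 m/s;
# an optical or metal "neoaxon" would signal at close to the speed of light.
SPEED_OF_LIGHT_M_S = 3.0e8      # signal speed in an optical path
MYELINATED_AXON_M_S = 150.0     # assumed fast myelinated axon conduction velocity

speedup = SPEED_OF_LIGHT_M_S / MYELINATED_AXON_M_S
print(f"Transmission speed-up: ~{speedup:,.0f}x")   # ~2,000,000x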
Networking is also badly limited, as you have heard me say, by our IO channel bandwidth. My guess is that advancement will come both by optimising networking and by re-engineering brains, using both biological and non-biological techniques. Imagine, for example, the architecture I’ve sketched at the right, in which a simplified brain, reduced to two sheets of neurons, is provided with a laser optical interconnect instead of axons. The sheets in question (if Wolfram Alpha and I are correct) would be roughly five metres on a side. Assuming that they’re on two sides of a five-metre cube, that gives a worst-case interconnect delay of 58 ns. The worst-case delay in a brain, even if there were a directly connected myelinated axon, is more like 2 ms. This one (somewhat ungainly – we are talking about putting your brain in a 5 m cube) change provides a transmission speed-up of about 35,000 times.
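For concreteness, the same arithmetic as a small Python sketch, taking the 58 ns interconnect delay and the 2 ms axon delay quoted above as given rather than re-deriving them.

# Speed-up of the laser interconnect over a myelinated axon,
# using the figures quoted in the paragraph above.
optical_delay_s = 58e-9   # worst-case laser interconnect delay across the cube (from the text)
axon_delay_s = 2e-3       # worst-case in-brain delay over a myelinated axon (from the text)

speedup = axon_delay_s / optical_delay_s
print(f"Interconnect speed-up: ~{speedup:,.0f}x")   # ~34,483, i.e. roughly 35,000 times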
In short, even for human-like intelligence, it isn’t clear that natural human evolution matters any more. It's certainly happening, but it's so slow that it won't have any appreciable effect compared with the much faster processes of brain augmentation that are now just beginning to occur. The same argument, on a faster timescale, is also why we (and our computer collaborators) are simply going to win against disease-causing organisms, within the next 50 years.
Tuesday 23 November 2010
A Wikipedia that AI systems can safely learn from
Below is a letter I wrote in reply to a fundraising email from Jimmy Wales, for Wikipedia. I heartily support Wikipedia and the wonderful effect it has had, and was involved in Nupedia, from whose ashes Wikipedia rose phoenix-like. However, I think that an inadvertent but serious error was made at Wikipedia’s founding – the adoption of the GFDL instead of a truly free, CC0-style licence. I believe that the Share-Alike (SA) licence unnecessarily interferes with the freedom of Wikipedia users to use the content, and that that is regrettable. In the case of human users, the effect is mitigated by the difficulty of establishing which of their subsequent intellectual work is a derivative work of Wikipedia. For AI systems, though, this SA requirement implies a level of invasion of data privacy (the systems’ own, but also that of the people they interact with) that is wholly unconscionable. The letter suggests a way to remedy this gradually, by building up a truly free portion of Wikipedia. I hope it is adopted.
“Jimmy,
I think this appeal will be effective. However, its effectiveness for me is reduced by the fact that it's not entirely true that "you can use the information in Wikipedia any way you want". You cannot combine it with other information without infecting that combination with a "Share Alike" obligation that you are obliged to impose on others.
If you were able to persuade the Foundation to give creators of new articles a choice of creating them under a pure CC licence, with no SA, and if it were permissible to create parallel articles, without reuse of SA content, under the truly FREE CC0 licence, then Wikipedia would be truly free, as in freedom. And, if that happens, I will make Wikipedia my main object of charity, and will encourage others to do so.
If not, perhaps you could modify the language in the appeal to be more legally accurate. However, even as flawed and unfree as it is, it remains, at present, a wonderful thing, and I will probably continue to donate, a little reluctantly.”
Friday 19 March 2010
Semantic Data
Part of what we’ve been trying to do with the LarKC project is to scale up AI to tackle real problems. One part of that is supporting the storage of vast amounts of inferentially productive knowledge. The SemData initiative is trying to do just that.
CALL FOR PAPERS: Workshop on Semantic Data Management (SemData)
Thursday 19 November 2009
Society, eGovernment, Web 3.0 and Us
Today I took part in a panel discussion with Munir Ismet, Andy Mulholland & Anthony Williams at the 5th Ministerial eGovernment Conference 2009, in Malmö, Sweden. The talk made the case that Web 2.0 crowd-sourcing depends on a very limited resource – human attention and communication – and that only by harnessing the collaborative work of people and intelligent computers can we make the systems that support our societies really work.
In researching the talk, I did a quick calculation. There are 7 billion people, more or less, in the world. We can speak about 4 words per second, and each word has about 5 characters (let’s say 16 bits each). That’s 320 bits per second. So all of us, maxing out our output bandwidth (blogging is slower, and can’t really be done while speaking), have an aggregate output data rate of about 2.25 Tbps. Sounds like a lot, doesn’t it? But it’s only a quarter of the bandwidth of one, single, Dense Wavelength Division Multiplexed fibre-optic cable (a fast one).
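The calculation, spelled out as a small Python sketch, using the same rough figures as above (the per-word and per-character numbers are the stated approximations, not measurements).

# Aggregate human speech output bandwidth, using the rough figures in the text.
PEOPLE = 7e9             # world population, more or less
WORDS_PER_SECOND = 4     # rough speaking rate
CHARS_PER_WORD = 5
BITS_PER_CHAR = 16

per_person_bps = WORDS_PER_SECOND * CHARS_PER_WORD * BITS_PER_CHAR   # 320 bits/s
aggregate_tbps = PEOPLE * per_person_bps / 1e12                      # terabits per second

print(f"Per person: {per_person_bps} bits/s")
print(f"All of humanity, speaking flat out: ~{aggregate_tbps:.2f} Tbps")  # ~2.24 Tbps, rounded to 2.25 above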
And yet our systems, from which we derive enormous benefit, saturate the capacity of a very, very large number of optical fibres. We – all of us, all together – cannot possibly monitor all of this, and of course we shouldn’t try. But inflexible computer systems can’t either – at base, inflexible computation outsources flexibility to human beings, and we’re going to run out of people to do that, too.
Here’s the conference programme (http://www.egov2009.se/programme/) and here’s a link to my talk (http://www.slideshare.net/witbrock/talk-on-human-computer-collaboration-from-egov2009-conference).
Tuesday 18 August 2009
Why we need Human-Computer Collaboration (I)
Now, though, we’re facing a situation where we’ve got computers, and we actually need AI. Our society is deeply interconnected: almost everything we do depends on what other people do; our systems depend on other systems; our rules depend on other rules. And there’s no easy way to reduce this interconnectedness; we’ve set things up this way because it allows us to live richer lives. And there are now so many of us humans that our very lives depend on the resource-use efficiencies this interconnection brings, and will bring in the future.
There’s a risk here, though: it’s not clear that these systems we’ve built are stable; the 2009 financial crisis and the H1N1 pandemic are only the most proximate examples of barely, and only partially, averted disasters of global interconnection, with global effect. World War I, the Spanish Flu, the Great Depression, the decimation of the Americas during colonisation, and the Spanish Inquisition are examples of network disasters in earlier, far less interdependent, eras.
At a more personal level, we’re faced with looming networked disasters: failing to notice that your software implements one of millions of patented ideas, or has inadvertently included GPL code, can destroy your livelihood; failure to track, say, the purchase time of a stock lot can lead to a failed tax audit; failure to fully understand a mortgage contract can cost one’s house, and along with it years’ worth of a modest income; have your dangan lost, in China, or gain a felony conviction in the US, and lose any reasonable prospect of a fulfilling career; fail to pay an insurance premium in a country without universal care and lose your health, or possibly your life.
Systems that can lurch into disaster, in this way, are unstable. Some may be inherently and irreducibly unstable – and those systems we should strive to avoid completely. Others can be kept stable by active control. The problem is that as the systems become faster and more efficient, to our benefit, they also appear to be becoming more unstable, and, since many of the systems are supported by giga-flop computing and speed-of-light communications, the instability can be manifested at super-human speeds, and with super-human complexity.
The term “super-human” is not used here for effect. It’s quite literal. Human beings have limitations, the most important being output bandwidth and memorisation speed. These limitations mean that there are problems so complex that human beings – alone or merely communicating – simply cannot solve them. Even all 6 billion of us. In other respects, including raw computational power and sensory processing, we humans far out-compute even the largest supercomputers. What we need, to maintain the stability and increase the effectiveness of our systems, is super-human computers, and human super-computers, working together. What we need is a new, AI- and psychology-based field I’ve begun to call ‘human-computer collaboration’.