ORCID iDs are used by arXiv to uniquely identify authors. I'm not particularly hard to distinguish, as named entities go, but it still seems like a good idea.

Thoughts about Artificial Intelligence, Technology, Food, Travel, Life, and putting oneself to the fullest possible use.
An article earlier this year in Scientific American suggested that there may be inherent limits on human intelligence, quite apart from those imposed by the process of birth.
One intriguing suggestion in the article is that functional specialisation in neuroanatomy may not be driven by algorithmic requirements (i.e. by partitioning the task being the “right way” to go about, for example, seeing), but by connectivity limitations: functions cluster together because it’s not possible to maintain enough longer-distance connections. The article also pointed out that, by some measures at least, even across humans, intelligence is predicted by the speed of neural communication, controlled, roughly, by the number of neural links traversed by a signal: that long distance communication, when it is possible, is important.
The main point of the article, however, is that humans are unlikely to be able to evolve to become much more intelligent, but that social connections and technology, outside the brain, may have made that unimportant.
However, the evolutionary limit is a pretty specific thing: it’s a constraint on what a type of organism can become, given what it has been previously. For us as humans, it means where we could get to as a species, given no "non-human" intermediate states, and large, but not astronomical, amounts of time. But we are no longer constrained in the same ways. As a simple example, it seems likely that the human birth canal is as large as it can safely be, the result of a trade-off against the survival costs of being born earlier in brain development. But that constraint is now released: infants can be safely born far earlier, after which their brains could grow as big as they like.
Many of the other constraints the article discusses (transmission speed, neuron size, axon diameter) could also be released if we needed them to be, since we no longer need to respect the path dependencies. For example, we now use myelinated axons for rapid signalling because we started with axons; but in the reachable future, we will almost certainly be able to engineer multiplexed metal paths, or even multiplexed optical paths (see the recent experiments on inducing lasing inside cells, under conditions that could probably be made to hold "naturally"). This would simultaneously reduce the limitations on wiring density (since these “neoaxons” could be far thinner than an axon's current hundreds to thousands of nanometres; IC transistors are around 20 nm) and increase transmission speed (by around 2 million times).
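The 2-million-fold figure follows directly from the ratio of signal speeds. A minimal sketch, assuming a fast myelinated axon conducts at roughly 150 m/s (the high end of the observed range) and an optical path signals at the speed of light in vacuum:

```python
# Rough sanity check on the ~2-million-fold transmission speed-up.
AXON_SPEED = 150.0    # m/s, fast myelinated axon (assumed high-end value)
LIGHT_SPEED = 3.0e8   # m/s, speed of light in vacuum

speedup = LIGHT_SPEED / AXON_SPEED
print(f"optical vs. axonal speed-up: {speedup:,.0f}x")  # 2,000,000x
```

A slower (and more typical) 100 m/s axon would make the ratio 3 million; the order of magnitude is robust.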
Networking is also badly limited, as you have heard me say, by our I/O channel bandwidth. My guess is that advancement will come both from optimising networking and from re-engineering brains, using both biological and non-biological techniques. Imagine, for example, the architecture I’ve sketched at the right, in which a simplified brain, reduced to two sheets of neurons, is provided with a laser optical interconnect instead of axons. The sheets in question (if Wolfram Alpha and I are correct) would be roughly five metres on a side. Assuming that they’re on two sides of a five-metre cube, that gives a worst-case interconnect delay of 58 ns. The worst-case delay in a brain, even if there were a directly connected myelinated axon, is more like 2 ms. This one (somewhat ungainly – we are talking about putting your brain in a 5 m cube) change provides a transmission speed-up of about 35,000 times.
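The arithmetic behind that speed-up can be checked directly. In the sketch below, the 58 ns worst-case figure is taken from the text above; the free-space cube diagonal at light speed gives a lower bound on the delay, so the quoted number presumably includes routing or switching overhead:

```python
# Delay arithmetic for the two-sheet "brain in a 5 m cube" architecture.
import math

LIGHT_SPEED = 3.0e8   # m/s
cube_side = 5.0       # m, side of the cube holding the two neuron sheets

# Longest possible straight-line path: opposite corners of the cube.
diagonal = cube_side * math.sqrt(3)
lower_bound_ns = diagonal / LIGHT_SPEED * 1e9
print(f"free-space diagonal delay: {lower_bound_ns:.1f} ns")  # ~28.9 ns

quoted_optical_delay = 58e-9  # s, worst case as given in the text
axonal_delay = 2e-3           # s, worst case over a direct myelinated axon

speedup = axonal_delay / quoted_optical_delay
print(f"speed-up over the axonal worst case: {speedup:,.0f}x")  # ≈ 34,483
```

The ratio comes out at roughly 34,500, matching the "about 35,000 times" in the text.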
In short, even for human-like intelligence, it isn’t clear that natural human evolution matters any more. It's certainly happening, but it's so slow that it won't have any appreciable effect compared with the much faster processes of brain augmentation that are now just beginning to occur. The same argument, on a faster timescale, is also why we (and our computer collaborators) are simply going to win against disease-causing organisms within the next 50 years.
Below is a letter I wrote in reply to a fundraising email from Jimmy Wales, for Wikipedia. I heartily support the wonderful effect of Wikipedia, and was involved in Nupedia, from whose ashes Wikipedia rose phoenix-like. However, I think that an inadvertent but serious error was made at Wikipedia’s founding – the adoption of the GFDL instead of a truly free, CC0-style licence. I believe that the Share-Alike (SA) requirement unnecessarily interferes with the freedom of Wikipedia users to use the content, and that that is regrettable. For human users, the effect is mitigated by the difficulty of establishing which of their subsequent intellectual work is a derivative of Wikipedia. For AI systems, though, the SA requirement implies a level of invasion of data privacy (the systems’ own, but also that of the people they interact with) that is wholly unconscionable. The letter suggests a way to remedy this gradually, by building up a truly free portion of Wikipedia. I hope it is adopted.
“Jimmy,
I think this appeal will be effective. However, its effectiveness for me is reduced by the fact that it's not entirely true that "you can use the information in Wikipedia any way you want". You cannot combine it with other information without infecting that combination with a "Share Alike" obligation that you are obliged to impose on others.
If you were able to persuade the Foundation to give creators of new articles a choice of creating them under a pure CC licence, with no SA, and if it were permissible to create parallel articles, without reuse of SA content, under the truly FREE CC0 licence, then Wikipedia would be truly free, as in freedom. And, if that happens, I will make Wikipedia my main object of charity, and will encourage others to do so.
If not, perhaps you could modify the language in the appeal to be more legally accurate. However, even as flawed and unfree as it is, Wikipedia remains, at present, a wonderful thing, and I will probably continue to donate, a little reluctantly.”
Part of what we’ve been trying to do with the LarKC project is to scale up AI to tackle real problems. One part of that is supporting the storage of vast amounts of inferentially productive knowledge. The SemData initiative is trying to do just that.
CALL FOR PAPERS: Workshop on Semantic Data Management (SemData)
Today I took part in a panel discussion with Munir Ismet, Andy Mulholland & Anthony Williams at the 5th Ministerial eGovernment Conference 2009, in Malmö, Sweden. The talk made the case that Web 2.0 crowd-sourcing depends on a very limited resource – human attention and communication – and that only by harnessing the collaborative work of people and intelligent computers can we make the systems that support our societies really work.
In researching the talk, I did a quick calculation: there are 7 billion people, more or less, in the world. We can speak about 4 words per second, and each word has about 5 characters (let’s say 16 bits each). That’s 320 bits per second. So, all of us, maxing out our output bandwidth (blogging is slower, and can’t really be done while speaking), have an aggregate output data-rate of about 2.25 Tbps. That sounds like a lot, but it’s only a quarter of the bandwidth of one, single, Dense Wavelength Division Multiplexed fibre-optic cable (a fast one).
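The back-of-the-envelope calculation above, reproduced step by step (the per-character and per-word figures are the rough assumptions stated in the text):

```python
# Aggregate human speech bandwidth, using the estimates from the talk.
PEOPLE = 7e9          # world population, roughly
WORDS_PER_SEC = 4     # assumed speaking rate
CHARS_PER_WORD = 5    # assumed average word length
BITS_PER_CHAR = 16    # generous per-character encoding

per_person_bps = WORDS_PER_SEC * CHARS_PER_WORD * BITS_PER_CHAR
total_tbps = PEOPLE * per_person_bps / 1e12  # terabits per second
print(f"per person: {per_person_bps} bps; everyone at once: {total_tbps:.2f} Tbps")
# per person: 320 bps; everyone at once: 2.24 Tbps
```

So all of humanity speaking at once produces about 2.24 Tbps, which rounds to the 2.25 Tbps quoted above.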
And yet our systems, from which we derive enormous benefit, saturate the capacity of a very, very large number of optical fibres – we – all of us, all together – cannot possibly monitor all of this, and of course we shouldn’t try. But inflexible computer systems can’t either – at the base, inflexible computation outsources flexibility to human beings, and we’re going to run out of people to do that, too.
Here’s the conference programme (http://www.egov2009.se/programme/) and here’s a link to my talk (http://www.slideshare.net/witbrock/talk-on-human-computer-collaboration-from-egov2009-conference).