I have a long-standing interest in artificial intelligence, born of early experience, fascination and disappointment with Eliza and her siblings and grandchildren. Many approaches have been taken over the years, but most tend to fall down when it comes to natural-language interaction with human beings per Alan Turing's famous Test. English is a strange and complex construct, full of metaphors and turns of phrase with which computers are not well-equipped to cope. Language expresses thought, but can also contradict it -- we tend to fill in the blanks and leave details unspoken, and we lie sometimes, even to ourselves.
So it seems high time for a novel and potentially more successful approach. The Public Library of Science recently published an excellent essay, entitled Evolution of Adaptive Behaviour in Robots by Means of Darwinian Selection, describing some very interesting experiments. You can follow the link and read it yourself -- the language is academic and assumes quite a bit of biological background, but it's not incomprehensible -- and I'll try to summarize it here.
The idea is to replicate what evolution actually does, mimicking Charles Darwin's well-established principle of natural selection coupled with more recent work in DNA, by using robots set loose in an environment that provides suitable obstacles to "fitness" as defined by the researchers. The robots have to navigate, find "food" (energy), avoid enemies and/or accomplish other goals to survive. So the artificial intelligence at work here has nothing to do with language -- the challenges are complex but physical. It's very much what real, simple creatures like microbes and insects have to do, with very little brainpower as we think of it.
Here's where the technique gets really interesting. The "genome" of these machines is the raw programming code that drives their various subsystems -- and it's initialized with a completely RANDOM set of data. Each generation is allowed to run, the results for each individual are evaluated, and the most successful (i.e., fit) code from each run is put through a selective breeding and mutation process, as happens in real biological DNA. The code is combined and merged (à la sexual reproduction), randomly mutated and expanded here and there, and put into the "next" generation of robots. In most of these experiments, the hardware is not changed, only the code, so basic limitations of the robot design persist; it's as though only the brain is evolving, unlike real organisms, which can physically change over generations. Some of the robots in the experiments described are real, physical robots, while others are simulations -- but the results are similar across the board.
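That loop -- random genomes, score each generation, breed and mutate the winners -- is the core of any genetic algorithm, and it's simple enough to sketch. Here's a toy version in Python; the genome is a bit string and the fitness function is a stand-in (counting 1 bits), not the researchers' actual navigation tasks, and all the names and parameters are my own illustrative choices:

```python
import random

GENOME_LEN = 32      # bits of "wiring" per individual (arbitrary)
POP_SIZE = 20
MUTATION_RATE = 0.02  # per-bit flip probability

def random_genome():
    # Generation zero is completely random data, as in the experiments.
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def fitness(genome):
    # Stand-in task: maximize the number of 1 bits ("one-max").
    # A real experiment would score navigation, foraging, survival, etc.
    return sum(genome)

def crossover(a, b):
    # Single-point crossover, loosely analogous to sexual recombination.
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def mutate(genome):
    # Occasionally flip a bit, analogous to random mutation in DNA.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

def evolve(generations=100):
    population = [random_genome() for _ in range(POP_SIZE)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:POP_SIZE // 2]   # keep the fittest half
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
```

Even this crude sketch reliably converges on a near-perfect genome within a hundred generations, with no human ever writing the "solution" -- which is the whole point.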
And the results are quite impressive, though I hesitate to say surprising -- it's clear from the diversity of successful life on this planet that natural selection and random change are very powerful in combination. Within a few hundred generations, these robots evolved very successful navigation, goal-seeking and environmental awareness algorithms, without a human being ever writing or explicitly tweaking the code.
In one experiment, robots found the maximum speed at which they were able to move -- which was NOT the maximum speed the motors could handle, as a human engineer would be tempted to try, but the fastest they found they could SAFELY go based on the sensory input they could receive and process. They learned how to map the world so they knew where their energy source was, and never went so far from the "nest" that they couldn't get safely back to it before running out of juice. Predator-prey experiments were also done, although the robots started out with different physical properties (faster movement for the prey, better sensors for the predator) that are analogous to real animal adaptations but give evolution a bit of a head start on the question. Still, interesting and novel seek-and-avoid strategies for each role evolved as the generations progressed, with co-evolution in each group as the other population's strategies changed. Another experiment established that related robots (similar ancestry and algorithms) would evolve cooperation to help each other, even at some expense to an individual, while unrelated robots would maximize individual fitness on their own -- very much in line with current thinking about altruistic behavior in social species like humans.
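Notice that nobody told the robots to stay near the nest -- the behavior falls out of how a foraging-style run gets scored. A minimal sketch of that kind of scoring, with entirely made-up names and weights (the paper's actual fitness functions differ), might look like:

```python
# Hypothetical foraging fitness: reward energy gathered during the trial,
# but forfeit most of the credit if the robot ends the run stranded away
# from the nest with no charge left. The 0.1 penalty factor is arbitrary.

def foraging_fitness(energy_collected, ended_at_nest, energy_remaining):
    score = energy_collected
    if not ended_at_nest and energy_remaining <= 0:
        score *= 0.1   # died in the field: most fitness is lost
    return score
```

Under a rule like this, genomes that range too far simply score poorly and drop out of the breeding pool, so "don't outrun your battery" emerges as a side effect of selection rather than an explicit instruction.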
It's fascinating stuff. And if you ask me, this is the kick in the pants artificial intelligence research really needs to develop past its current state -- the opportunity to "naturally" find the best solution within environmental constraints. Most previous attempts have started with a goal state in mind -- and human beings don't really understand how our own brains work, let alone how to make a computer imitate the process. The robot approach seems simpler, but its results are ultimately more complex as it deals with real-world problem-solving scenarios. It also takes the hubris of human engineering, our tendency to jump to what we think is the best solution, out of the equation altogether. It simply allows the process to happen, preserving the most effective solutions as they pop up for the next generation's benefit.
The solutions this approach arrives at are not necessarily clean and perfect designs -- evolution doesn't work that way; it's satisfied with what works and will generally build on what already exists. It never goes back and cleans up the foundation, as there's not really any natural reason for doing that -- what works gets kept, what does not gets discarded (or more often in reality, covered up by a further modification that works around it). The genome, and the "design" expressed from it, becomes what we would think of as needlessly complicated, but the net effect still resembles grace and elegance. Take the human eye as a reference: its retina is wired upside-down, it's limited in its frequency range, and it's interrupted by the optic nerve bundle in exactly the wrong place, but it is very effective for our practical needs as intelligent primates living on Earth.
Blue-sky speculation time -- it would be very interesting to do a large-scale Turing Test experiment using this approach. Start with a web page with text input and output capabilities, and a "memory" stored on a server. Set up some random code that takes an input and produces some output based on it and whatever it has in its memory -- any output will do to start; it's expected that it will spout random gibberish for quite a while. Let random people approach it and attempt to converse with it, and rate its success (to inform the next generation's selection process). Perhaps eventually it will start to make sense -- if it gets to the point where it can say "HI" when a connection is opened, that would be something. If it can carry on a conversation at a minimal Eliza-like level, without anyone having specified how it is to do that, that would be major progress. If it develops multilingual capabilities and a curiosity about the world around it... well, that's just crazy talk. At least for now.
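The skeleton of that experiment is the same loop as before, with human ratings standing in for the fitness function. Here's a rough sketch; the "responder" representation (a lookup table over hashed inputs drawn from a tiny vocabulary) is purely illustrative, and every name and parameter is my own invention:

```python
import random

VOCAB = ["HI", "YES", "NO", "WHY", "TELL", "ME", "MORE"]

def random_responder():
    # Genome: a small table mapping input hash buckets to canned
    # token sequences -- guaranteed gibberish at generation zero.
    return {bucket: [random.choice(VOCAB) for _ in range(3)]
            for bucket in range(16)}

def respond(responder, user_input):
    bucket = hash(user_input) % 16
    return " ".join(responder[bucket])

def next_generation(population, ratings):
    # ratings: one average human score per responder, aligned by index.
    ranked = [p for _, p in sorted(zip(ratings, population),
                                   key=lambda pair: pair[0],
                                   reverse=True)]
    parents = ranked[:len(population) // 2]   # visitors' favorites survive
    children = []
    for _ in range(len(population) - len(parents)):
        a, b = random.sample(parents, 2)
        # Recombine the two parents' tables, then occasionally mutate.
        child = {k: random.choice([a[k], b[k]]) for k in a}
        if random.random() < 0.2:
            child[random.randrange(16)] = [random.choice(VOCAB)
                                           for _ in range(3)]
        children.append(child)
    return parents + children
```

The hard part isn't the loop, of course -- it's that a genome rich enough to ever say something sensible is vastly larger than this, and human raters are a slow, noisy fitness function. But structurally, this is all the experiment requires.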
But the robot experiments, extended to the Turing space, suggest that crazy talk with feedback may lead to slightly-less-crazy talk. It's a hellishly more complex problem, no doubt, and the early generations (which could take years or decades or more) would be very likely to produce NO candidates that score higher than the bottom of the scale. But it's also very likely that all it takes is time, random change and remixing of the most successful individuals, and an environment to provide criteria for selection.
It worked for our ancestors, anyway.