When I was a child, I was fascinated by the idea that someday we would be able to build computers smarter than we are. Later, as a teenager interested in mathematics and programming, I created a little chatterbot (a program with which you can chat). It had some prefabricated patterns of canned responses to various questions, and even a mood system that made it vary those responses based on how nicely the user behaved toward it.
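The design described above can be sketched in a few lines of Python. This is a toy reconstruction, not my original program: all the patterns, responses, and trigger words here are hypothetical examples, but the structure (pattern matching plus a mood counter that selects friendly or grumpy answers) is the idea.

```python
# A toy pattern-matching chatterbot with a simple "mood" system.
# All patterns and responses below are made-up illustrations.
RESPONSES = {
    "hello": {"good": "Hi there! Nice to see you!", "bad": "Oh. It's you again."},
    "how are you": {"good": "I'm great, thanks for asking!", "bad": "As if you cared."},
}
FALLBACK = {"good": "Tell me more!", "bad": "Whatever."}

class Chatterbot:
    def __init__(self):
        self.mood = 0  # positive = friendly, negative = grumpy

    def reply(self, message):
        text = message.lower()
        # Rude or polite words shift the bot's mood.
        if any(word in text for word in ("stupid", "shut up")):
            self.mood -= 1
        if any(word in text for word in ("please", "thanks", "nice")):
            self.mood += 1
        tone = "good" if self.mood >= 0 else "bad"
        # Find the first canned pattern that matches the input.
        for pattern, answers in RESPONSES.items():
            if pattern in text:
                return answers[tone]
        return FALLBACK[tone]
```

A bot like this never understands anything; it only looks up surface patterns, which is exactly why the conclusion I drew from it was misleading.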
It was nowhere near as smart as current chatterbots. Alan Turing, the father of modern computer science, predicted that by the year 2000 computer programs would be able to fool 30% of human judges into thinking they were human after a five-minute chat session, and this indeed happened: a chatterbot named Eugene Goostman convinced 33% of the judges in a chatterbot competition that it was a real person.
Nonetheless, Eugene is, in principle, nothing more than a sophisticated version of my own chatterbot. It possesses no real intelligence of its own; its developers implemented a system of canned responses that can make it sound like a real human attempting to participate in small talk, but it isn’t able to solve any real-world problem.
Thinking about the way my chatterbot (and software in general) was designed, the teenage me came to the following (completely wrong) conclusion: a program can never become more intelligent than the programmer who created it.
Why is the idea wrong? Because it assumes that an intelligent creation has to have an intelligent creator. We know from the animal kingdom that this is not the case. When intelligence is favored by natural or artificial selection, each generation tends to be smarter than the last. This has been directly observed in the selective breeding of dogs, and it most probably happened with humans as well.
Virtually every piece of software you use, from your web browser to the operating system itself, was coded line by line by a programmer, but a growing number of useful algorithms are too complex to be designed by a human being, such as the automated detection of diseases in magnetic resonance images.
Computer scientists came up with an ingenious and yet completely natural solution: let computers learn. There are various approaches to machine learning, but let's take neural networks as an example. Just as your brain was, in a sense, an empty box when you were born, with no knowledge of language and no signs of intelligence, we can design a simulated neural network that has roughly the right "shape" but develops (learns) based on the input it receives. This leads us to idea no. 2:
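The "empty box that learns from input" idea can be shown with the smallest possible example: a single artificial neuron (a perceptron) that starts with zero weights and adjusts itself using nothing but training examples. This is a deliberately tiny sketch, far simpler than the networks used in practice, and every name and number in it is illustrative.

```python
# A single artificial neuron that learns the logical OR function
# purely from examples. It starts "empty" (all weights zero) and
# nudges its weights whenever it answers incorrectly.

def train_perceptron(examples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # connection weights, initially "empty"
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Shift the weights a little toward the correct answer.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Teach the neuron logical OR from four examples; nobody tells it
# the rule explicitly.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
```

Nobody programs the rule into the neuron; it emerges from the data, which is the crucial difference from a chatterbot built on canned responses.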
This would already allow us to create machines as intelligent as humans. Such a machine would, however, have a great advantage:
If we decided to give the artificial being creative freedom, we could do even better. As I. J. Good observed as early as 1965, an artificial intelligence created this way could repeat the process indefinitely:
And that’s where the whole process starts getting quite terrifying. Once this gets started, we won’t have just “cybernetic superhumans”; within a short period of time, there will be artificial beings intelligent beyond our understanding. In terms of intelligence, we won’t be much more than bacteria to them.
The question is: Will it happen? The decision is ours to make. Raymond Kurzweil, an American scientist and leading expert on the topic, expects the Singularity (the term used for the process described above) to happen around 2045. Most of us will still be alive by that time. Are you ready?