Will computers ever be smarter than humans?

by Jakub Marian


When I was a child, I was fascinated by the idea that one day we would be able to construct computers smarter than ourselves. Later, as a teenager interested in mathematics and programming, I created a little chatterbot (a program with which you can chat). It had some prefabricated patterns of canned responses to various questions and even a mood system, which would cause it to vary its responses based on how nicely the user behaved toward it.

It was nowhere near as smart as current chatterbots. Alan Turing, the father of modern computer science, predicted that by the year 2000, specialized computer programs would be able to fool 30% of human judges into believing they were talking to a real person after a five-minute chat session, and this indeed happened: a chatterbot named Eugene Goostman convinced 33% of the judges in a chatterbot competition that it was a real person (you can read an example of a conversation with it here).

Nonetheless, Eugene is, in principle, nothing more than a sophisticated version of my own chatterbot. It possesses no real intelligence of its own; its developers implemented a system of canned responses that can make it sound like a real human attempting to participate in small talk, but it isn’t able to solve any real-world problem.

Thinking about the way my chatterbot (and software in general) was designed, the teenage me came to the following (completely wrong) conclusion:

Idea 1: Computers will never be as smart as humans because they can only do what a programmer “teaches” them to do, and the programmer cannot teach them more than a subset of what he himself knows.

Why is the idea wrong? Because it assumes that an intelligent creation has to have an intelligent creator. We know from the animal kingdom that this is not the case. If intelligence is naturally or artificially selected, each new generation becomes smarter. This has been directly observed in dogs, and it is most probably also the case with humans.

Virtually every piece of software you use, from your web browser to the operating system itself, was coded line by line by a programmer, but there is an increasing number of useful algorithms that are too complex to be developed by a human being, such as the automated detection of diseases based on images obtained by magnetic resonance.

Computer scientists came up with an ingenious and yet completely natural solution: Let computers learn. There are various approaches to computer learning, but let’s take neural networks as an example. Just like your brain was in a certain way an empty box when you were born, without any knowledge of language or signs of intelligence, we can design a simulated neural network that has roughly the right “shape”, but which evolves (learns) based on the input it gets. This leads us to idea no. 2:
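As a toy illustration of this kind of learning, here is a single artificial neuron (a perceptron) in Python that starts out as an "empty box" and learns the logical AND function purely from labelled examples; a real neural network is just many such units connected together. This is a simplified sketch of the learning principle, not a model of an actual brain:

```python
# A single artificial neuron (perceptron) learning the logical AND
# function from examples. It starts with no knowledge (all weights zero)
# and adjusts its weights a little after every mistake.
def train_perceptron(examples, epochs=20, lr=0.1):
    w1 = w2 = bias = 0.0  # the "empty box": no initial knowledge
    for _ in range(epochs):
        for (x1, x2), target in examples:
            output = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
            error = target - output
            # Nudge the weights toward the correct answer.
            w1 += lr * error * x1
            w2 += lr * error * x2
            bias += lr * error
    return lambda x1, x2: 1 if w1 * x1 + w2 * x2 + bias > 0 else 0

# Teach the neuron AND from four labelled examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
neuron = train_perceptron(data)
print([neuron(a, b) for (a, b), _ in data])  # prints [0, 0, 0, 1]
```

Nobody wrote the rule "output 1 only when both inputs are 1" into the program; the neuron arrived at it by itself, from data.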

Idea 2: Create a sufficiently complex neural network (perhaps by basing it on actual data we can obtain from biological systems, such as ourselves) and teach it to become intelligent.

This would already allow us to create machines as intelligent as humans. Such a machine would, however, have a great advantage:

Fact: Since an intelligent neural network would be just a computer simulation, we could run it as fast as the hardware would allow. This means that an artificial brain could (theoretically) learn within seconds as much as a human brain could in a thousand years, without any negative effects of ageing.

If we decided to give the artificial being creative freedom, we could do even better. As I. J. Good noted as early as 1965, an artificial intelligence so created could repeat the process indefinitely:

“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”

And that’s where the whole process starts getting quite terrifying. Once this gets started, we won’t have just “cybernetic superhumans”; within a short period of time, there will be artificial beings intelligent beyond our understanding. In terms of intelligence, we won’t be much more than bacteria to them.

The question is: Will it happen? The decision is ours to make. Raymond Kurzweil, an American scientist and a leading expert on the topic, expects the Singularity (the term used for the process described above) to happen around 2045. Most of us will still be alive by then. Are you ready?

By the way, I have written several educational ebooks. If you get a copy, you can learn new things and support this website at the same time—why don’t you check them out?
