Intelligent Machines: They’re taking over!

If you have read the recent blog posts and news coverage about the rise of AI, you might be forgiven for thinking that the dawn of some futuristic science-fiction film is upon us: that soon robots will be roaming the Earth causing devastation, and some AI will free itself from the bonds of its owner and embark on a destructive campaign!

Well, this might sound exciting, but I must say I have a far more sober view of what the future of AI looks like (at least for the foreseeable future). AI nowadays is entirely explainable: if you understand how it works, you can see how unlikely it is that robots will be taking up arms any day now.


A history of AI

AI was originally a catch-all term covering all the problems we had no clue how to solve. Out of this sea came object-oriented programming, speech recognition, language processing and so on: all offshoots of AI, but ones built on specific, highly optimised algorithms that could never generate a robotic menace.

However, the fact that all of these aspects of what was once AI now have their own research groups at universities, under names that strike no fear, shows how poorly we understand what 'AI' actually is. We imagine AI as some futuristic whole in which, somehow, unknown to the researchers, the machine 'wakes up'. This is a rather sci-fi view: in reality, the steps towards any such machine would be incremental. The scientists would know exactly how the machine worked at every stage, and could therefore guard against the robot going on a rampage.

One may object: but what about 'Neural Nets'? Aren't they a model of the neurons in the brain, and don't they learn? Doesn't that mean they could learn to do bad things?

If we describe the brain in terms of wires (connections) and processors (neurons), then the scale of a computer's wiring is closer to that of a bacterium than to that of a human brain. As a result, Neural Nets are an incredibly crude and simple approximation of how our brains work, and they cannot learn something they have not been trained or built for.

For example, say we wanted to recognise whether or not a given image contained a tree. Then we could set up a neural network as follows: it would take the image as input (x), and its output (y) would be 'is a tree' or 'is not a tree'.

A sample ‘feed-forward’ neural net.

The circles in the middle of the Neural Net represent features that we are interested in, such as ‘is brown’ or ‘has a green smudge’. The arrows correspond to weights: how important each of these features is. Finally, the outputs of those middle circles are combined to decide whether or not the image is that of a tree.
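To make this a little more concrete, here is a minimal sketch (in Python with NumPy) of the forward pass through such a net. The layer sizes, the four-pixel ‘image’ and the random weights are all made-up illustrative values, not anything taken from the figure.

```python
import numpy as np

def sigmoid(z):
    # Squash any real number into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

# A made-up input: a tiny four-pixel "image" flattened into a vector x.
x = np.array([0.9, 0.1, 0.8, 0.3])

# Weights for the arrows into the three middle circles (the "features"),
# and for the arrows from those circles to the single output circle.
# In a real net these would be learned; here they are arbitrary numbers.
W_hidden = np.random.randn(3, 4) * 0.1
W_output = np.random.randn(1, 3) * 0.1

# Forward pass: each middle circle takes a weighted sum of the inputs,
# then the output circle takes a weighted sum of the middle circles.
hidden = sigmoid(W_hidden @ x)   # e.g. "is brown", "has a green smudge", ...
y = sigmoid(W_output @ hidden)   # a 0-to-1 score for "is a tree"

print("is-a-tree score:", y[0])
```

The point is simply that every circle is an ordinary weighted sum followed by a squashing function; there is nothing hidden inside.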

The learning happens when we train the Neural Net. We give the net a bunch of images, each labelled as ‘is a tree’ or ‘is not a tree’. The weights on the arrows are then ‘learned’ by solving an optimisation problem, so that as many images as possible in the training set are classified correctly. Finally, we can use the trained Neural Net on new images it has never seen.
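And the ‘learning’ itself is nothing more mysterious than solving that optimisation problem with gradient descent. The sketch below (again Python/NumPy, with randomly generated stand-in data in place of labelled tree photographs, and an arbitrary rule as the thing to be learned) trains the same kind of two-layer net so that as many training examples as possible are labelled correctly.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Stand-in training set: 200 four-dimensional "images" with 0/1 labels
# ("is not a tree" / "is a tree"). Real data would be labelled photographs.
X = rng.normal(size=(200, 4))
t = (X[:, 0] + X[:, 2] > 0).astype(float)   # an arbitrary rule to be learned

W1 = rng.normal(scale=0.5, size=(3, 4))     # input -> middle-circle weights
W2 = rng.normal(scale=0.5, size=(1, 3))     # middle-circle -> output weights
lr = 0.5                                    # learning rate (step size)

for step in range(2000):
    # Forward pass over the whole training set.
    H = sigmoid(X @ W1.T)                   # middle-circle values, shape (200, 3)
    y = sigmoid(H @ W2.T).ravel()           # predicted "is a tree" scores

    # Gradients of the cross-entropy loss, pushed back through both layers.
    d_out = (y - t)[:, None] / len(t)       # shape (200, 1)
    grad_W2 = d_out.T @ H                   # shape (1, 3)
    d_hidden = (d_out @ W2) * H * (1 - H)   # shape (200, 3)
    grad_W1 = d_hidden.T @ X                # shape (3, 4)

    # Gradient-descent update: nudge the weights to reduce the loss.
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

accuracy = np.mean((y > 0.5) == t)
print(f"training accuracy after optimisation: {accuracy:.2f}")
```

On this toy data the printed training accuracy should climb well above chance; swap the stand-in vectors for real image features and labels and the loop stays exactly the same.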

This is a very simple example of a feed-forward Neural Net (one in which all the arrows point forward), but it illustrates the point: even though we don’t know exactly what the Neural Net is really looking for, or quite how it works inside, it is not quite the black magic that the news implies.



The Real Dangers

Though I do not think we are anywhere near (and perhaps never will be near) a machine that is actually ‘human’, in the sense that it can think for itself, I do believe we are getting closer and closer to an approximation. We have robots that can respond in certain specific environments (take IBM’s Watson or the robot hotels in Japan), and these will slowly become more and more general. However, since the steps will be incremental and implemented by large groups of scientists at each and every stage, I find it hard to believe that a robot will suddenly become incomprehensible to, and uncontrollable by, its creators.

In effect, this futuristic AI will be more like a computer: composed of different parts (speech recognition, emotion recognition, language processing, speech synthesis, dynamics, sensors, etc.) that are each independently understandable and that communicate with each other in well-defined ways. In essence, it will be similar to a PC: though the final product may seem almost magical to someone with no understanding of its inner workings, in actuality each part is understandable and controllable.

This does not mean that, in the wrong hands, a futuristic AI would not be dangerous, but then so is a PC. A malicious programmer can steal an unknowing user’s details, and a programming error caused the Ariane 5 rocket to explode. Yet we put our faith in these computers and programmers every time we take an aeroplane, drive a car, or buy something online, so why should we be so much more afraid of AI?

Instead of focusing on the dangers of AI (many of which, in my opinion, are rather far-fetched), why don’t we focus on what we can achieve with it: safe driverless cars across the country, predictive text that helps disabled people communicate, and facial recognition, to name just a few.
