A-level Computing/AQA/Paper 2/Consequences of uses of computing/Robotics





Robots are becoming an increasingly important part of modern society. They work in factories, fight wars and might one day nurse you in your old age.

Artificial intelligence
A large and important field of computer science is Artificial Intelligence (AI): the study of making machines that behave intelligently. Many robots and computer programs are described as having AI.

Much of the field is about getting machines to perform very specific tasks, for example:
 * recognising faces or other objects in pictures
 * automatically translating written or spoken words from one language to another
 * controlling processes such as landing aeroplanes or optimising a chemical plant or power station
 * vacuuming rooms
 * providing computer opponents in video games
 * building things in factories
You might even have some ideas for how AI could be used in your A2 project next year.

Some AI systems can even learn from experience, meaning that you don't have to explicitly program how they should respond to each and every situation. This kind of AI starts to pose some very big questions for humanity. Is there really a difference between the intelligence of a human being and that of a program? We'll look into this a little below.
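As a rough sketch of what "learning from experience" means, here is a tiny perceptron, a classic 1950s learning algorithm chosen purely for illustration. It is never told the rule for logical AND; it infers the rule by being corrected on examples.

```python
# A minimal sketch of learning from experience: a perceptron that is
# never given the rule for logical AND, but works it out from examples.
# (Illustrative only -- modern AI systems are far more sophisticated.)

def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights from (inputs, target) pairs by error correction."""
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in examples:
            output = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - output
            # Nudge the weights towards the correct answer
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# The truth table for AND, used as the machine's "experience"
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

Nobody wrote `if x1 and x2` anywhere in that program: the correct behaviour emerges from repeated correction, which is exactly what distinguishes learning systems from explicitly programmed ones.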

What are machines good and bad at, in comparison to humans?

 * 1) Machines are good at doing tasks repeatedly (think of car manufacturing robots), as they don't get tired or bored and perform each repetition consistently.
 * 2) Machines are seen to be bad at making judgements they haven't been built to make, such as showing sympathy or inventing things. But if we built machines smart enough, couldn't we build these capabilities in?

What can this tell us about the way that the human mind works?
There are many scientists and philosophers who believe that computers will one day become as intelligent as humans. But there is a question about what 'intelligence' really means. If it is just performing tasks well, then there are computers that can compose music, or sweep a road, or fly a plane, or solve maths equations better than most humans. We can even get computers to display emotions such as sympathy and anger. Does this mean that we can fully recreate how the mind works?

In 1950 Alan Turing, an early pioneer of computer science, proposed a test for machine intelligence. If a human judge could hold text conversations with both a human being and a computer program, and be unable to tell which was which, then the computer could be considered as intelligent as a human. This is known as the Turing Test.



In 1980 the philosopher John Searle posed a thought experiment that some see as proving machines cannot understand what they are doing. The Chinese Room is a sealed room in which a man sits. He does not speak Chinese at all, but is passed Chinese characters through a slot in the door. He has a book listing Chinese characters and their matching responses: on receiving a character, he looks it up in the book and sends back the corresponding reply. At no point does he understand what he is doing; he just follows the instructions. Searle argues that AI is just like this: however complex the code, all it is doing is responding to inputs with set outputs, with no understanding present.
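The Chinese Room can be sketched in a few lines of code: the "man in the room" is just a lookup table. The rule book entries below are invented for illustration; the point is only that matching symbols to responses requires no understanding of what they mean.

```python
# Searle's Chinese Room as code: a lookup table of symbol -> reply.
# The entries are invented for illustration.

RULE_BOOK = {
    "你好": "你好",       # a greeting answered with a greeting
    "你好吗": "我很好",    # "how are you?" answered with "I am fine"
}

def room(symbol):
    """Return whatever reply the rule book dictates, or a stock reply."""
    # "请再说一遍" means "please say that again"
    return RULE_BOOK.get(symbol, "请再说一遍")
```

To an outside observer posting characters through the slot, `room` appears to "speak" Chinese, yet nothing in the program understands a single symbol. Searle's claim is that a vastly larger rule book would still be understanding-free in exactly the same way.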

A similar argument was made by Stanley Jaki in 1969. He compared AI to a drainpipe down which two water droplets roll: at the bottom they combine to form a larger droplet, but at no point does the drainpipe understand what has happened. He argues that human beings possess this understanding, whilst machines do not.

However, many philosophers and scientists see intelligence and understanding as nothing more than complex algorithms responding to stimuli. Is there really a 'me' that 'understands' and what exactly is it? Could our mind be reduced to a set of algorithms?

See also:
 * Problem of other minds
 * Dualism
 * Qualia

What can we learn from machines?
As machines are expendable, they allow us to run experiments on stand-ins for human beings without worrying about anyone's safety. For example, the army uses machines to simulate the damage a human being would receive from the detonation of an Improvised Explosive Device. This helps us design vehicles and clothing better able to protect soldiers.

What are the limitations of using machines as tools?
If you create a machine without emotions and without the ability to acquire them, for example a car manufacturing robot, then there are some important limitations on how it can be used.

If you were working next to a robot in a factory and started to feel unwell, the robot would be very unlikely to have been programmed to feel any sympathy, and would most probably not change its work routine to accommodate your changing circumstances, although a robot could in principle be programmed with such features.



Machines in most cases lack the ability to adapt to new situations: they are stuck with the code they have been given, and unable to see safety problems while carrying out their routine. The first recorded robot-caused death was in 1979, when a robotic arm struck Robert Williams, a worker at a metal casting plant in the USA.

There might also be problems on the horizon if AI produces machines as 'intelligent' as us. In that situation they would have no such limitations; they would be just like us. Isaac Asimov foresaw this problem and defined three laws of robotics to make sure that robots and humans can live together in peace.
 * 1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
 * 2) A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
 * 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
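The key idea in the laws is a priority ordering: the First Law overrides the Second, which overrides the Third. That ordering can be sketched as a toy decision rule. The representation of an action as a dictionary of boolean flags (`harms_human` and so on) is entirely invented for illustration; deciding what actually counts as "harm" is the genuinely hard, unsolved part.

```python
# A toy sketch of Asimov's three laws as a priority ordering.
# Actions are crude dictionaries of invented boolean flags.

def law_violations(action):
    """Return the numbers of the laws an action violates (1 = most serious)."""
    violations = []
    if action.get("harms_human") or action.get("allows_harm_by_inaction"):
        violations.append(1)  # First Law
    if action.get("disobeys_order"):
        violations.append(2)  # Second Law
    if action.get("endangers_self"):
        violations.append(3)  # Third Law
    return violations

def choose(actions):
    """Pick the action whose most serious violation is least serious.

    An action violating no law scores 4 (best); one violating the
    First Law scores 1 (worst), so the priority ordering of the laws
    falls out of a simple max over candidate actions.
    """
    return max(actions, key=lambda a: min(law_violations(a), default=4))
```

For example, given a choice between an action that harms a human and one that merely endangers the robot itself, `choose` picks the latter, because breaking the Third Law is preferable to breaking the First.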