Issues in Interdisciplinarity 2018-19/Subjective and Objective Truth in AI

Objective Truth in AI
Artificial intelligence (AI) is often thought to make objective decisions easier. Here, objectivity refers to conclusions based on critical thinking and scientific evidence, where the conclusion is indisputable and there is only one true answer. Built from formulas and algorithms, AI can process vast amounts of data to reach conclusions that are more accurate, and therefore arguably more objective, than a human could achieve.

An example of this is machine learning and the task of identifying subjects in pictures. Though simple for humans, AI needs repetitive training with massive amounts of data to tell the difference between drinks, or between a table and a stool. A neural network begins with one or more inputs, such as a picture, and processes them into one or more outputs, such as whether the picture shows wine or beer. The network itself consists of many 'neurons' grouped into layers, where one layer interacts with the next through weighted connections: each neuron carries a value, which is multiplied by a weight on its way to the neurons in the following layer. A bias term can also be added at each neuron (distinct from the statistical bias of an estimator, E(θ̂) − θ) and passed through the layers. As a result, inputs can be propagated through the whole network, and the machine is trained to make predictions and draw conclusions that are as accurate as possible. Through this continual training and testing, it can reach decisions on extremely complex problems.
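The forward propagation described above can be sketched in a few lines. This is a minimal illustration, not a production network: the weights, biases and input values below are invented for demonstration, and the sigmoid activation is one common choice among many.

```python
import math

def sigmoid(x):
    """Squash a weighted sum into the range (0, 1)."""
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights, biases):
    """Propagate input values through one fully connected layer.

    Each output neuron sums every input multiplied by its connection
    weight, adds its bias term, then applies the activation function.
    """
    outputs = []
    for neuron_weights, bias in zip(weights, biases):
        total = sum(i * w for i, w in zip(inputs, neuron_weights)) + bias
        outputs.append(sigmoid(total))
    return outputs

# Two input neurons (e.g. image features) feeding a hidden layer,
# then one output neuron (a "wine" vs "beer" score). All numbers
# here are illustrative placeholders.
inputs = [0.8, 0.2]
hidden = layer(inputs, weights=[[0.5, -0.3], [0.9, 0.1]], biases=[0.0, -0.1])
score = layer(hidden, weights=[[1.2, -0.7]], biases=[0.05])[0]
print(round(score, 3))  # a value between 0 and 1
```

In a real network, training would repeatedly adjust the weights and biases to push scores like this one toward the correct labels; the structure of the calculation stays the same.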

As used by Accenture in its teach-and-test framework for AI, the continual connectivity and data processing described above can be tracked, and the decisions or conclusions an AI system reaches can be questioned. The AI can even be coded to justify the decisions it makes. This can provide peace of mind that the AI is reaching human-centred, unbiased and fair conclusions – objectivity.
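The idea of a decision trail that can be questioned later can be sketched generically. This is a hypothetical illustration of an audit log, not Accenture's actual framework: the classifier, its rule and its feature names are all invented placeholders.

```python
# A simple audit trail: every decision records its input, output and a
# human-readable justification, so the conclusion can be questioned later.
audit_log = []

def classify_with_justification(features):
    # Toy rule: call an image "wine" if its redness feature dominates.
    label = "wine" if features["redness"] > features["amber"] else "beer"
    reason = (f"redness={features['redness']} vs amber={features['amber']}; "
              f"chose '{label}'")
    audit_log.append({"input": features, "output": label, "reason": reason})
    return label

classify_with_justification({"redness": 0.9, "amber": 0.3})
for entry in audit_log:
    print(entry["reason"])
```

The point is not the rule itself but the record: each conclusion arrives with the evidence that produced it, which is what makes questioning the system possible.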

Subjective Truth in AI
It is often argued, however, that the supposedly objective decisions made by AI end up becoming subjective because the data sets being used are biased. Here, subjectivity refers to a belief based on personal opinions, experiences and feelings rather than scientific evidence. As human beings we all have our own biases, and no one can be truly objective. Since we create both the AI itself and the data it processes, it follows that AI can never be fully objective.

Gender and ethnicity biases are often unconsciously built into algorithms. A notable example of this is AI facial recognition software identifying black women as men. It is suggested that this stems from the unconscious bias of computer scientists and engineers, the majority of whom are white and male. Similarly, when searching for pictures on Google, the word 'CEO' will bring up pictures of men and the word 'helper' will bring up pictures of women. This is based on biased data sets about what a CEO looks like. Most CEOs are indeed men, but this reflects historical patriarchal structures that are now generally considered wrong.
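One simple way to surface the kind of skew described above is to count how a label is distributed across a demographic attribute before training on the data. The data set below is made up purely for illustration; real audits would run the same check at far larger scale.

```python
from collections import Counter

# Hypothetical (label, gender) annotations from an image data set.
dataset = [
    ("ceo", "man"), ("ceo", "man"), ("ceo", "man"), ("ceo", "woman"),
    ("helper", "woman"), ("helper", "woman"), ("helper", "man"),
]

def attribute_share(data, label):
    """Return the share of each group among examples with this label."""
    counts = Counter(group for lab, group in data if lab == label)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

print(attribute_share(dataset, "ceo"))     # men dominate the 'CEO' label
print(attribute_share(dataset, "helper"))  # women dominate the 'helper' label
```

A model trained on such data will reproduce these proportions in its outputs; the imbalance is measurable before a single model is trained, which is why auditing the data set itself matters.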

As AI becomes increasingly prominent in everyday life (self-driving cars, Google Home devices, advertising and many more applications), ethics needs to be considered. Ethics can be defined as the means of tackling questions of morality, but it is interpreted differently according to one's opinions, beliefs and perspectives. As a result, trying to create ethical AI is likely to cause many problems, especially when its decisions are coupled with potentially biased data.

Interdisciplinary Approach to AI
From a mathematical, objective point of view, AI provides computing and decision-making power that humans will never be able to accomplish on their own, offering greater insight into complex problems. From a subjective, ethical and philosophical standpoint, AI will never be truly objective, and we are likely to run into significant problems where AI 'gets it wrong', such as the 2010 Flash Crash, in its pursuit of 'the truth' or of a logical conclusion.

As an example, AI could be used in recruitment to eradicate unconscious bias in hiring. However, if a machine learning algorithm were used, data about gender, race, disability and so on could lead the AI to decide to hire white, straight, able-bodied men, who according to biased data are the least risky and therefore most cost-effective choice of employee. AI could easily pick up our own biases and amplify them. And because machine learning happens internally, it is a black box: we put data in and get data out, and without auditing the results we could be completely unaware of which data points the AI was using to inform its decisions.
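The black-box problem can be illustrated with a toy scoring rule. In this sketch, assuming entirely invented candidate data, the protected attribute (gender) is never read by the model at all, yet a correlated proxy feature still reproduces the historical bias:

```python
# Hypothetical candidates: gender, a proxy feature correlated with it in
# this toy data (membership of a fictional "club X"), and a score for how
# closely they resemble past hires.
candidates = [
    {"gender": "man",   "club_x": True,  "hired_before": 0.9},
    {"gender": "woman", "club_x": False, "hired_before": 0.3},
]

def score_without_gender(candidate):
    # Gender is deliberately never read here -- yet club_x, which happens
    # to correlate with gender in this data, still drives the score
    # toward the historical hiring pattern.
    return 0.7 * candidate["hired_before"] + 0.3 * (1.0 if candidate["club_x"] else 0.0)

ranked = sorted(candidates, key=score_without_gender, reverse=True)
print([c["gender"] for c in ranked])  # the biased ordering survives
```

This is why simply deleting the sensitive column is not enough: without auditing which features actually drive the ranking, the system can launder historical bias through innocuous-looking proxies.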

AI struggles to be truly objective when presented with problems that have ethical questions tied to them. However, evaluating AI from an interdisciplinary perspective ensures considered thought about the effects of AI and the decisions it has to make. Computer science and electronic engineering obviously play a huge role in creating the technology, but philosophy and the social sciences, such as anthropology, economics and psychology, are also needed in the development of AI to ensure we produce systems that 'think' about the wider effects of their conclusions, making AI both useful and safe for humans to use in the future.