Issues in Interdisciplinarity 2018-19/Evidence in driverless cars

= Evaluation of Evidence Within, Surrounding and In Consequence of Self-Driving Cars =

This article examines how evidence is evaluated in self-driving cars from an interdisciplinary perspective encompassing: how self-driving cars use algorithms to collect and evaluate evidence (section 'within', disciplines: computer science and engineering), how policy-makers deal with risk and the uncertainty of evidence (section 'surrounding', disciplines: politics, statistics and psychology), and the role of evidence as an ethical entity (section 'in consequence', discipline: ethics). A different definition of evidence will be applied to each section in order to show the breadth of meaning of this concept.

Within: Evidence and Bayes Theorem
Evidence, within a driverless vehicle, is defined as the continuous information gathered from the surroundings by cameras, radar and laser sensors. Algorithms form the central body that processes this data to perform reasoned actions. According to the SAE classification of driving automation, a vehicle graded at Level 5 is fully autonomous in all driving modes, navigating entirely without human input.

Convolutional neural networks (CNNs) have been revolutionary in 'training' the algorithms in driverless cars, allowing them to learn automatically from training drives. CNNs map raw pixels from a front-facing camera directly to steering commands.
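The pixel-to-steering idea can be sketched in miniature. The following is a toy illustration, not any production network: a single hand-rolled convolution layer followed by a linear readout, with an invented 8×8 "camera frame" and random weights standing in for values a real network would learn from training drives.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation) of a grayscale image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Rectified linear activation."""
    return np.maximum(0.0, x)

def predict_steering(image, kernel, weights, bias):
    """One conv layer + ReLU, then a linear readout to a steering angle."""
    features = relu(conv2d(image, kernel)).ravel()
    return float(features @ weights + bias)

# Tiny stand-in for a camera frame; all weights are random, not trained.
rng = np.random.default_rng(0)
frame = rng.random((8, 8))
kernel = rng.standard_normal((3, 3)) * 0.1
n_features = (8 - 3 + 1) ** 2  # 6x6 feature map flattened
weights = rng.standard_normal(n_features) * 0.01
angle = predict_steering(frame, kernel, weights, bias=0.0)
print(angle)  # a single scalar steering command
```

A real system stacks many such layers and learns the kernels and weights by comparing predicted steering angles against those recorded from human drivers.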

This system operates largely on the basis of Bayes' theorem. Simply put, Bayes' theorem offers a systematic way to update one's belief in a hypothesis on the basis of the evidence presented. For example, Google's driverless cars combine prior evidence from Google Street View with real-time sensor data interpreted by artificial intelligence software.
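A minimal sketch of such an update, with invented numbers: suppose the car's camera reports a pedestrian ahead, and we know (by assumption here) the detector's sensitivity and false-positive rate.

```python
def bayes_update(prior, sensitivity, false_positive_rate):
    """P(H | E) = P(E | H) * P(H) / P(E), with P(E) by total probability."""
    p_evidence = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_evidence

prior = 0.01        # assumed prior probability of a pedestrian ahead
sensitivity = 0.95  # assumed P(detection | pedestrian present)
fpr = 0.05          # assumed P(detection | no pedestrian)

posterior = bayes_update(prior, sensitivity, fpr)
print(round(posterior, 3))  # ≈ 0.161
```

Even a confident detector only raises the belief to about 16% here, because the hypothesis was rare to begin with; accumulating further frames of evidence drives the posterior up or down, which is exactly how continuous sensor streams refine the vehicle's beliefs.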

Occasionally, the human operator is required to take driving control. A Level 3 vehicle can monitor its environment and drive with full autonomy under certain conditions, but not, for example, if sensors are impaired in challenging weather conditions. Additionally, external data sources can contradict each other; yet if the principles of evidentialism hold, each source is justified in its recommendation to the driverless vehicle provided its evidence supports it. To resolve such conflicts, the algorithm may redirect control to the human driver.
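The handover logic can be illustrated with a simple rule, again as a hypothetical sketch rather than any manufacturer's actual policy: if any critical sensor's confidence falls below a threshold, control reverts to the human.

```python
def control_authority(sensor_confidences, threshold=0.7):
    """Return ('human', degraded_sensors) when any critical sensor's
    confidence drops below the threshold, else ('autonomous', [])."""
    degraded = [name for name, conf in sensor_confidences.items()
                if conf < threshold]
    if degraded:
        return "human", degraded
    return "autonomous", []

# Hypothetical readings: the camera is impaired by heavy rain.
mode, degraded = control_authority({"camera": 0.35, "radar": 0.9, "lidar": 0.8})
print(mode, degraded)  # human ['camera']
```

Real systems must also handle the handover itself gracefully (alerts, grace periods, safe stops), which is precisely where the deskilling concern discussed next becomes relevant.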

However, increasing reliance on automated systems could mean that humans will not maintain the skills to operate cars competently. Consequently, although algorithms arose from computer science, their future role in driverless transportation is also relevant in the social and political disciplines.

Surrounding: Evaluating Evidence in Risk Assessment
The definition of evidence as 'that which justifies belief' illustrates the potential use of evidence in informed policy-making, where the decision is often justified by the assessment of potential risk. The cases of human deaths in crashes of self-driving cars show that their development and implementation can pose safety questions. These questions are investigated through risk assessment, which involves collecting and evaluating evidence on the variety of possible hazardous events and the probability of their occurrence.

Human evidence evaluation in statistics can be seen in the "analytic system" and the "experiential system" utilized in risk assessment. The former applies normative rules (including statistics and formal logic), while the latter relies on emotion (including associations and experiences); even so, the "analytic system" requires the guidance of the "experiential system". Programmers might therefore be considered to use their "experiential systems" to decide, for example, how the algorithm should react to certain situations (see the 'in consequence' section). The algorithm's evaluation of evidence (e.g. sensor data), acting as the "analytic system", thus collaborates with the programmer's "experiential system".

Limitations in obtaining and evaluating evidence
Psychological factors affect the evidence evaluation performed by humans, who in turn make predictions and form policies. The perception of self-driving cars is connected to the emotions felt towards this innovative technology; evidence is therefore important to inform opinions. Another concern is the possible access of third parties to personal information compiled by self-driving cars. The continuous data gathered about the surroundings may be freely used because it is collected in public spaces. This contributes to privacy concerns and negative feelings towards the technology.

Statistics can define observed data as evidence and provide methods to evaluate it. Evidence about fatalities and injuries involving self-driving vehicles is hard to obtain because the vehicles have not yet driven sufficient miles to yield clear statistical conclusions. Since fatalities and injuries are rare events per mile driven, the cars would need to complete hundreds of millions of miles to provide reliable evidence. These limitations in obtaining and evaluating evidence show that it might not yet be feasible to demonstrate safety, and the remaining uncertainty affects policy-making.
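The "hundreds of millions of miles" claim can be checked with a back-of-the-envelope calculation. Assuming fatalities follow a Poisson process (a common simplification) and taking a human-driven benchmark of roughly 1.09 fatalities per 100 million miles, one can ask how many failure-free miles a fleet must log before claiming, with 95% confidence, that its fatality rate is below that benchmark.

```python
import math

human_rate = 1.09 / 1e8  # assumed benchmark: ~1.09 fatalities per 100M miles
confidence = 0.95

# With zero observed fatalities over n miles, a Poisson model gives
# P(0 events) = exp(-rate * n). Require exp(-human_rate * n) <= 1 - confidence,
# i.e. n >= ln(1 / (1 - confidence)) / human_rate.
miles_needed = math.log(1 / (1 - confidence)) / human_rate
print(f"{miles_needed / 1e6:.0f} million miles")  # ~275 million miles
```

Under these assumptions the answer is on the order of 275 million fatality-free miles, far beyond current test fleets, which is why the statistical evidence for safety remains out of reach for now.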

Approaches to uncertain evidence
One approach to dealing with the uncertainty of evidence in policy-making is the precautionary principle. Its meaning can be reduced to adopting measures to avoid harm to human health and the environment, even when that harm is not confirmed by data. For example, the USA's NHTSA safety standards assume that a human driver should always be able to control the actions of a motor vehicle in order to ensure its safety.

However, taken to its extreme, the precautionary principle could lead to refraining from any action at all. A more moderate approach is represented by adaptive regulations, which create new evidence (e.g. through pilot experiments) and review it in order to adapt to the evolution of the technology. In the case of autonomous vehicles, adaptive regulations might become a mediator in the negotiation between risk and progress, as experience and technological change inform safety deliberations.

In Consequence: Evidence as An Ethical Entity
Evidence, in relation to ethics, can be defined as the projected outcomes of driverless cars operating in accident scenarios, which are used to determine how their algorithms should react.

Programming autonomous cars requires addressing dilemmas in which the algorithm must make decisions in no-win situations resembling trolley-problem premises, choosing which of the people involved will be implicated, perhaps harmfully. One concern relating to these decisions is whether autonomous cars should act in the interest of the passengers or of society. Although these are philosophical thought experiments, they help determine how the algorithms will react in accident scenarios where collisions are unavoidable.

There is, however, no evidence to suggest which reaction is the best way for a self-driving car to respond. From a utilitarian economic perspective, the car should maximise total social benefit, so that the accident incurs the least total cost. From an engineering perspective, optimisation of machine functions and decisions outweighs ethical and legal considerations. From a legal standpoint, optimising an algorithmic decision to kill is unjustifiable and indefensible. An interdisciplinary outlook must therefore be applied, as there are many conflicts of interest and little evidence to suggest a clear prioritisation of factors in these trolley problems.
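The utilitarian criterion above can be made concrete as a least-expected-cost rule. The manoeuvres, probabilities and costs below are invented purely for illustration; assigning such costs in the first place is precisely the ethical difficulty the other perspectives object to.

```python
def least_expected_cost(actions):
    """actions: {name: [(probability, cost), ...]} ->
    (name with minimal expected cost, all expected costs)."""
    expected = {name: sum(p * c for p, c in outcomes)
                for name, outcomes in actions.items()}
    return min(expected, key=expected.get), expected

# Hypothetical accident scenario with made-up outcome distributions.
choice, costs = least_expected_cost({
    "swerve_left":  [(0.8, 10), (0.2, 100)],   # E[cost] = 28.0
    "brake_only":   [(0.5, 0),  (0.5, 60)],    # E[cost] = 30.0
    "swerve_right": [(0.9, 5),  (0.1, 300)],   # E[cost] = 34.5
})
print(choice)  # swerve_left
```

The rule is trivial once costs exist; the interdisciplinary conflict lies in whether a "cost of harm" can legitimately be quantified and compared at all.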

Societal cultural values, which differ across nations, shape the normative ethical beliefs of individuals within those societies. Studies across a range of countries have demonstrated varying opinions on the implementation of autonomous cars, revealing differences in ethical considerations. The validity of evidence is dependent on the desired outcome, and desired outcomes will vary. As autonomous cars are relatively untested, there is a lack of real-world evidence to guide a resolution among these varying normative ethical ideas.

Conclusion
This article has analysed how evidence is evaluated both in practical settings and in an abstract, emotional form, relating to the disciplines of computer science, engineering, statistics, psychology, ethics, and politics.