User:Bartekmik/sandbox

Risk assessment as evidence evaluation
The definition of evidence as 'that which justifies belief' illustrates the potential use of evidence in informed policy-making, where decisions are often justified by assessments of potential risk.

Driverless cars and risk assessment
Human deaths in crashes involving self-driving cars show that the development and deployment of this technology raise safety questions. These questions are investigated through risk assessment, which involves collecting and evaluating evidence on the variety of possible hazardous events and the likelihood of their occurrence.

Human evidence evaluation in risk assessment can be described in terms of two systems: the "analytic system" and the "experiential system". The former uses normative rules (including statistics and formal logic), while the latter relies on emotion (including associations and experiences); the "analytic system" nonetheless requires the guidance of the "experiential system". Programmers can accordingly be seen as using their "experiential systems" to decide, for example, how an algorithm should react to certain situations (see 'in consequence' section). The algorithm and the way it evaluates quantified evidence, acting as the "analytic system", thus collaborate with the programmer's "experiential system".

An example of analytic data used to assess the risk of self-driving cars is the Californian disengagement reports, which record the frequency of human interventions during self-driving car tests. According to these data, Waymo's cars required intervention 0.02 times per 1000 miles, which suggests they could potentially cause 4-5 times more crashes than human drivers.
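The 4-5x comparison can be reproduced as back-of-the-envelope arithmetic. This sketch adopts the pessimistic reading that every disengagement would otherwise have been a crash, and assumes a human benchmark of roughly 4 police-reported crashes per million miles; the benchmark figure is an illustrative assumption, not taken from the disengagement reports themselves.

```python
# Pessimistic reading: every disengagement counted as a would-be crash.
disengagements_per_1000_miles = 0.02  # Waymo figure from the reports

# Convert to a per-million-miles rate for comparison.
av_rate_per_million = disengagements_per_1000_miles * 1000  # = 20.0

# Assumed human crash rate (illustrative benchmark, not from the source).
human_rate_per_million = 4.0

ratio = av_rate_per_million / human_rate_per_million
print(f"{ratio:.1f}x the assumed human crash rate")  # 5.0x
```

Under these assumptions the ratio lands at the upper end of the 4-5x range quoted above; a slightly higher assumed human crash rate yields the lower end.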

Psychological factors
Psychological factors affect the evidence evaluation performed by humans, who consequently form opinions, make predictions or create policy. The perception of driverless cars can be affected by the emotional relationship towards this technology, which could represent a barrier to its implementation. Negative feelings towards self-driving cars are also caused by privacy concerns, as connected cars access a great deal of personal information about the driver, which could be accessed by third parties. These fears might lead to a pessimistic judgement of the consequences of the technology. Because emotion can prevail in this way, evidence, and how it is presented, is important in informing human opinions.

Lack of clear statistical evidence
Statistics can define observed data as evidence and evaluate those data. Evidence on the fatalities and injuries caused by self-driving vehicles is hard to obtain, as these vehicles cannot complete sufficient miles in the near future. Because fatalities and injuries occur infrequently relative to miles driven, the cars would need to complete hundreds of millions of miles to demonstrate reliability and deliver clear statistical evidence.
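The "hundreds of millions of miles" claim can be sketched with a standard statistical rule of thumb, the "rule of three": if zero events are observed in n trials, the 95% upper confidence bound on the event rate is approximately 3/n. The human fatality rate used below (about 1.09 deaths per 100 million miles) is an assumed benchmark for illustration only.

```python
# Rule of three: zero events observed in n trials implies a 95% upper
# confidence bound on the event rate of roughly 3 / n. To show, with 95%
# confidence, a fatality rate no worse than a human benchmark, a fleet must
# therefore drive at least 3 / benchmark fatality-free miles.

human_fatality_rate = 1.09 / 100_000_000  # assumed: ~1.09 deaths per 100M miles

miles_needed = 3 / human_fatality_rate  # ≈ 275 million miles
print(f"{miles_needed / 1e6:.0f} million fatality-free miles needed")
```

The result, on the order of 275 million fatality-free miles, is consistent with the "hundreds of millions of miles" figure in the text; demonstrating that self-driving cars are strictly *safer* than humans, rather than merely no worse, would require even more mileage.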

Another barrier to obtaining clear statistical evidence lies in the concepts that frame safety testing. For example, a human intervention recorded in the Californian disengagement reports does not necessarily mean that the car would otherwise have crashed. In addition, the context of testing can inflate the number of disengagements, as engineers often place vehicles in more challenging settings.

Approaches to uncertain evidence
The barriers to obtaining and evaluating evidence suggest that uncertainty about the safety of self-driving cars might persist. Among the ways policy-makers deal with this problem are the precautionary principle and adaptive regulations.

One of the existing approaches to risk in policy-making is the precautionary principle. Its core meaning can be reduced to taking precautionary measures when there is a possible threat to human health or the environment, even if there is scientific uncertainty about the cause-and-effect relationship. An example is the USA NHTSA safety standards, which currently assume that a human driver should always be able to control the actions of a motor vehicle in order to ensure its safety.

However, an extreme precautionary approach could lead to refraining from taking any action at all. Policy-makers may therefore turn to adaptive regulations, which involve the 'review and update [of] policies in light of evolving scientific knowledge and changing technological, economic, social and political conditions.' In the case of autonomous vehicles, adaptive regulations might become a mediator in the negotiation between risk and progress, as experience and technological change inform safety deliberations.