User:Nicola.georgiou/sandbox/Approaches to Knowledge/Seminar group 13/ Evidence

Seminar Group 13 Evidence Contributions

=Introduction=

'''Evidence is broadly defined as anything presented in support of a claim: providing a reason to believe that something is either true or false. '''

Evidence may be primary, in which case the material presented is a primary source, or secondary, meaning that primary material has been relayed through a secondary source. An example of a primary source is an original document such as a letter, whereas an example of a secondary source is a textbook. Primary sources are generally considered more credible, but in research, primary and secondary sources are used in conjunction.

Evidence may be presented in a quantitative form (numbers, statistics) or in a qualitative one (abstract concepts that may involve subjectivity); the two are usually interdependent.

On this page, we will look at different evidence-based practices, at the particularities of the concept itself, at different types of evidence, and at the evolution and use of evidence across disciplines.

= Concepts relating to evidence =

The Law of Evidence
“Evidence, in law, any of the material items or assertions of fact that may be submitted to a competent tribunal as a means of ascertaining the truth of any alleged matter of fact under investigation before it.” The laws of evidence therefore govern both the validity of evidence and the assessment of its weight. This process often turns on the relevancy of the evidence, defined as its “tendency to make the existence of any fact that is of consequence to the determination of the action more probable or less probable than it would be without the evidence."

Burdens of evidence
In accordance with the laws of evidence, evidence is only admissible in court if it meets certain requirements. These include relevance, the absence of unfair prejudice, authentication or self-authentication, and that the evidence is not characterized as hearsay (“Hearsay, in Anglo-American law, testimony that consists of what the witness has heard others say. United States and English courts may refuse to admit testimony that depends for its value upon the truthfulness and accuracy of one who is neither under oath nor available for cross-examination.”). Lastly, the evidence must have suffered no form of falsification or ‘spoiling’, whether intentional or negligent.

Witnesses and privilege
A key component of evidence in the tradition of English common law is the supporting testimony of witnesses. These figures are required to take an Oath or Affirmation to Testify Truthfully, referred to as Rule 603 of the Federal Rules of Evidence. A period of interrogation and evidence gathering then commences, in which the processes of direct examination and cross-examination are conducted. (Examination, in law, is the interrogation of a witness by attorneys or by a judge. In Anglo-American proceedings an examination usually begins with direct examination (called examination-in-chief in England) by the party who called the witness. After direct examination, the attorney for the other party may conduct a cross-examination of the same witness, usually designed to cause him to explain, modify, or possibly contradict the testimony he provided on direct examination.) During this process the witness, in the tradition of English common law, holds a series of privileges, including doctor-patient privilege, attorney-client privilege, spousal privilege and various others, which serve to guard the confidentiality of information and protect the witness from self-incrimination, as per Rule 501 of the Federal Rules of Evidence.

= Evidence-based practices =

EBM - Evidence-based medicine
“Evidence-based medicine (EBM) is the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients.” EBM's philosophical roots extend back to the second half of the 19th century. Evidence-based medicine integrates individual clinical experience with the best clinical evidence found through research. This research targets the patient’s precise condition while taking into account their rights, preferences and predicaments.

Practice
Evidence-based medicine follows a rigorous methodology composed of several steps:

The patient’s problem must first be translated into a clear question. The next step is a thorough search for the information necessary to answer that question. Third, the evidence must be evaluated through different methods so that its usefulness and validity can be determined. Once all of this has been done properly, the last step is to apply the results, together with clinical expertise, to the case and to evaluate the outcome. The hardest part of this process is the collection of the evidence and its validation.

Disadvantages
Evidence-based medicine has some disadvantages. Primarily, the process can be long and time-consuming, with the doctor spending much of their time searching through evidence. Moreover, in certain circumstances a doctor may face a shortage of information, especially younger clinicians with less experience to draw from, which can quickly become frustrating.

Advantages
On the other hand, it is a great way for doctors to learn from senior colleagues and from their research. It can be done at any stage of their career and is a way for them not only to stay up to date but also to integrate medical education with clinical experience. Because this method is time-consuming, doctors need to develop the skill of reading selectively, choosing which study or article is the most useful and trustworthy.

It is a young discipline that will continue to evolve with new medical advancements and more successful trials.

EBP - Evidence-based Policy
Evidence-based policy (EBP) is a particular strand of public policy-making which insists on the importance of evidence. EBP insists that policies should be grounded in solid and objective truth, otherwise known as evidence, in order to accurately determine and answer the needs of the people. In EBP, clear scientific evidence is used to analyse the situation and create the best policy. Its proponents reject ideology-based policy-making, which consists in making policies based on "common sense", as insufficient.

Evidence-based policy’s origins can be linked to evidence-based practice, especially in medicine. In evidence-based medicine, clinical decisions are only made after rigorous scientific testing, through what are called randomised controlled trials, or RCTs. The link between the two originated from the Cochrane Collaboration’s influence. Founded in the UK in 1993, the Cochrane Collaboration kept RCTs up to date in order to produce reviews and support the development of human health and health policy. Evidence-based practice grew so much that in 1999 a sister organisation, the Campbell Collaboration, was created. The Campbell Collaboration similarly analyses evidence to find which is the most useful for the development and understanding of social and public policy.

In evidence-based policy, there are many different methods for determining which evidence is best to use and what policy should be made accordingly. However, they all share similar characteristics. They all belong to a specific scientific framework which evaluates risk and benefit in order to guarantee a net payoff if the policy is implemented. Evidence-based methods tend to be quite exhaustive, and a complete overview and thorough understanding of the situation is necessary before any action is taken. Policy-makers have to make sure all their theories are tested rigorously and can be replicated by a third party. They also place great importance on the impact of the policy they are considering, meaning they must analyse all the direct and indirect effects it might cause.

Evidence-based policy remains a very controversial subject that has received a lot of criticism. It is very difficult to determine whether an issue can best be answered through quantitative research and information. Indeed, the methods and instruments of evidence-based policy have been criticised and are often seen as inadequate when it comes to answering questions with a more human basis, such as social justice or human rights. Evidence-based policy is often seen as insufficient in its research. Scholars such as Paul Cairney, and Cartwright and Hardie, argue that policy-making is a very complicated process which takes into consideration a great number of factors that cannot be addressed simply with the help of evidence-based analysis using RCTs.

Hierarchy of Evidence
The Hierarchy of Evidence (or Levels of Evidence) is a heuristic employed in Evidence-Based Practice, such as Evidence-Based Medicine, to rank the relative validity or strength of recommendations in health care. The most commonly applied metric is the Grading of Recommendations Assessment, Development and Evaluation (GRADE), which is used by systematic reviewers and guideline developers to evaluate the quality of evidence and determine whether an intervention should be recommended. This is one of the formal ways in which evidence is assessed for its validity before being applied in Evidence-Based Practice, suggesting the relative strength of different types of evidence.

GRADE is endorsed by over 100 organisations (such as the World Health Organisation and Canadian Task Force for Preventive Health Care) to form health care recommendations.

Evaluation of Evidence
Quality of Evidence

Initially, quality of evidence is ranked so that randomised controlled trials (RCTs) hold an a-priori high ranking, while observational studies start lower. The heuristic thus emphasises accuracy and precision and prioritises primary forms of evidence. From this initial ranking, evidence can be upgraded or downgraded to reflect issues with the method. Once these considerations have been taken into account, all evidence is given a ranking of either “high”, “moderate”, “low” or “very low”, where the ranking indicates the confidence with which the data presented may be interpreted as accurate. This is then considered by clinicians in forming a recommendation. These rankings reflect some of the limitations or biases that may adversely affect the validity of evidence.
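The upgrade/downgrade logic described above can be sketched in code. This is a deliberately simplified illustration, not an official GRADE tool: the function name, the numeric counting of "serious issues" and "strengthening factors", and the clamping behaviour are all assumptions made for clarity.

```python
# Illustrative sketch of GRADE-style quality ranking (simplified; not an
# official GRADE implementation). Evidence starts at an a-priori level set
# by study design, then moves down for methodological issues (risk of bias,
# imprecision, inconsistency, ...) or up for strengthening factors (large
# effect size, dose-response gradient, ...), clamped to the four levels.

LEVELS = ["very low", "low", "moderate", "high"]

def grade_quality(design, downgrades=0, upgrades=0):
    """Return a GRADE-style quality label.

    design: "rct" (starts at "high") or "observational" (starts at "low")
    downgrades: number of serious methodological concerns
    upgrades: number of factors strengthening confidence
    """
    start = LEVELS.index("high") if design == "rct" else LEVELS.index("low")
    rank = max(0, min(len(LEVELS) - 1, start - downgrades + upgrades))
    return LEVELS[rank]

print(grade_quality("rct"))                        # high
print(grade_quality("rct", downgrades=2))          # low
print(grade_quality("observational", upgrades=1))  # moderate
```

The point of the sketch is that the design only fixes a starting point; the final confidence label is a function of the whole methodological assessment.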

Strength of Recommendation

Recommendations are categorised as either “strong” or “weak”, reflecting the extent to which the desirable effects of an intervention can reasonably be expected to outweigh any negative effects. A strong recommendation is one that most informed patients would choose, and care will be structured with their clinicians accordingly. A weak recommendation suggests that patients’ choices will vary in line with their preferences and values, which clinicians must take into account. As such, the quality of evidence outlined above is only one factor in a GRADE recommendation. GRADE differentiates the quality of evidence from the strength of recommendations, as it also considers factors such as the balance between desirable and undesirable implications, patients’ preferences and cost-effectiveness. A practitioner then makes a recommendation to a patient based on these conclusions. The use of these other factors also demonstrates how evidence alone may not determine the best course of action, and cannot by itself answer real-life medical questions.

= Evidence within disciplines =

Digital Evidence - The new challenges digital evidence presents to the world of Law and Forensics.
The expression "digital evidence" (or electronic evidence) refers to digitally stored data that could be presented in a court case and, if valid, could either prove the innocence of the party facing trial or incriminate them. As a form of evidence it presents new challenges across many different fields.

Uses
Digital evidence is mostly used to fight electronic crimes, such as the propagation of child pornography, blackmailing, electronic harassment, drug trafficking, criminal copyright infringement or any form of credit card fraud. This information, stored in binary form, may be found on a computer hard drive or a mobile phone, and exists in multiple formats: video or audio files, browser history, transaction logs, message histories, e-mails and digital photographs are all examples of digital evidence.

Challenges
Digital evidence, as opposed to physical evidence, presents quite a few new challenges for forensic examiners and law enforcement. First of all, while digital data can be used as evidence when needed, it only gives a partial picture of the actions taken by the accused. Only some traces of the individual's activity are recorded by the digital device and can later be deciphered by specialists. For example, digital activity such as email messages and server logs is retained, whereas mouse clicks and key presses are not. Forensic examiners are thus forced not only to acquire this quantitative data, but also to translate and interpret it into actions. Another issue digital data presents is the difficulty law enforcement agencies have in training professionals capable of both collecting and deciphering it. The field of digital technologies is in constant development, and specialists have a hard time keeping up. They must consequently develop knowledge in both computer science and criminal justice in order to efficiently understand the quantitative data that digital evidence provides, and to interpret it in the context of an actual "physical situation" that may have had consequences for other individuals - as electronic crimes may be just as severe as their physical counterparts.

Lastly, there exists within many court systems a distrust of the new forms of evidence that continue to appear before them. This has led to digital evidence being treated differently depending on the court system in which it is used; these differences include differing definitions of hearsay for electronic data and differing standards of authentication. Digital evidence is often rejected on mere suspicion of illegal modification.

Evidence within Art History - The debate between the physicality and conceptuality of evidence within an artistic field
In Art History we find that evidence can exist in many different forms: paintings, sculptures, buildings, photographs, books, etc. Art History faces the challenge of dealing with unusual forms of evidence, which with the rise of technology now also include video, sound, and digital media.

Nonetheless, we need to make a clear distinction between the piece of art, or the object, and the purpose and intention behind it. Thus, within Art History we can find a distinction between evidence in its physical form and the conceptual nature of evidence. For example, when looking at The Starry Night by the Dutch painter Vincent van Gogh, we can recover information such as the stylistic and compositional tendencies of the moment, the end of the 19th century, and get a good snapshot of the effects of the post-impressionist movement on painting at that time. Simultaneously, we can observe this painting, considered Van Gogh’s magnum opus, as a physical piece of evidence: an aging 73.7 cm × 92.1 cm canvas. Information such as the oil paints and pigments used, Van Gogh’s writing on the back of the painting, and its auction history could reveal previous owners who might have interacted with (and affected) the artist.

One of the biggest issues regarding material evidence in the field of Art History is precisely its materiality. In many cases when studying art history, the original source of evidence is not present, as it is held in another country or in a private collection. Here the physicality of the evidence is sacrificed, as reproductions of the original are used instead. The problem with these reproductions is that in the translation from original to reproduction the size and scale of the original are lost, as well as many details that the medium of reproduction, be it photography, cast making, videotaping, etc., cannot capture. Hence any knowledge that could have been garnered from these aspects is immediately lost.

What makes evidence in the arts as a whole so unique is that, in comparison with most other fields, artistic evidence does not confine itself to logic and thus is based not on quantitative facts but on experience and on one's individual expression of feelings. Therefore, in the analysis of evidence within Art History we do not just test whether something confirms or denies a hypothesis; we try to understand past artists' intentions and their ways of expressing reality through a specific medium. Take Picasso, for example. When he tried to depict the horror of Guernica and the terror of war, he drew on reality but painted what he considered to be his truth. Thus, we can use his mural as evidence of the pain of the war and the atrocities inflicted on the people, through his cubist approach and his specific view of the catastrophe. The question remains whether this is valid evidence of the real event, of Picasso's specific viewpoint, or merely of the cubist movement of the time.

Evidence within Anthropology - The Evolution of Evidence in Anthropology
When looking back at the history of anthropological studies, it is apparent that there has been a shift in the understanding of 'evidence', as well as a reevaluation of what constitutes an ethical approach to acquiring it. Looking at the evidence used by early anthropologists to advance their arguments, it comes as no surprise that this evidence has been the source of intense scrutiny, vehement criticism and general outcry, eventually leading to a change in the anthropological understanding of what constitutes reliable evidence.

Early forms of anthropological evidence
Morton's use of white pepper seeds as evidence to determine the intellectual capacity of arbitrarily defined races in his book Crania Americana was revered as an exact science, which in turn upheld the theories of scientific racism. Only when Darwin's On the Origin of Species was published was Samuel George Morton's legacy reviewed. More recent studies by Stephen Jay Gould draw attention to the extent to which Morton's evidence was not objective but subject to an unconscious bias: the seeds were more or less compressed when poured into different skulls so as to obtain findings in compliance with the argument the physical anthropologist was hoping to make. During the same period, most British anthropologists based their studies on reports and accounts from missionaries, travelers and diplomats. This type of secondary evidence was later coined "paraethnographic" data by Trouillot. This form of evidence too was widely opposed in more recent decades by the community of anthropology scholars, who referred to anthropologists adhering to this method of gathering evidence as "armchair anthropologists."

The critique of these forms of evidence
The critique of these earlier forms of anthropological evidence served to elucidate the core issue surrounding them: a widely ethnocentric approach to evidence. Rather than searching for objective evidence which would then be analyzed, anthropologists such as Morton would select evidence with a certain bias: rather than moving from the evidence towards a conclusion, his ethnocentric standpoint convinced him he already knew the conclusion and only needed the requisite evidence to support it. Similarly, evidence from travelers is likely subject to ethnocentric bias. Travelers, explorers and missionaries were not looking for anthropological evidence; they were mostly interested in recounting what they saw, placing emphasis on the differences between their own cultural practices and those of the peoples they were with, and not shying away from interpreting what they saw in a particular way. Anthropologists who drew on these accounts did not have access to primary evidence; in working with secondary sources, their evidence was subject to alterations.

The development of new approaches to evidence
In more recent years, there has been a reevaluation of what constitutes anthropological evidence, with a particular focus on the ethics of gathering it. The methods of acquiring empirical and quantitative evidence are diverse, varying from participant observation to interviews and video recordings. Furthermore, a strong emphasis is now placed on evaluating positionality in order to produce more objective forms of ethnographic evidence, or at least to allow the reader of anthropological studies to evaluate the bias which may be embedded in the evidence presented.

Quantitative Methods
Quantitative data refers to 'data expressing a certain quantity, amount or range', amounting to a measurement unit. The collection of empirical data with quantitative research methods makes possible the generation of mathematical models for prediction, hypothesis testing and optimization in statistics, game theory, econometrics and other sub-fields of economics. The aim of quantitative research is to produce a reliable and objective data set; it endeavours to produce generalizable findings.

Examples of quantitative methodologies:
 * cross-sectional studies: surveys undertaken at a specific time "t". By combining surveys taken at different times, one can obtain measures reflecting change in society.
 * opinion polls: surveys measuring the diverging opinions of a limited portion of the public on a specific issue (ex: allegiance to a party in an upcoming election)
 * questionnaires: questions given to a specific group of respondents (ex: 18 to 25 year olds) to evaluate behaviours or opinions about a topic (ex: the proportion of young people affected by an eating disorder). The questions are 'closed-response'.
 * social attitude surveys: broader questions about beliefs and behaviours (ex: life satisfaction)
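As a small illustration of how closed-response questionnaire data becomes a quantitative finding, the following sketch computes a sample proportion and a normal-approximation 95% confidence interval. The responses are entirely invented for illustration, and the coding (1 = affected, 0 = not) is an assumption of the example.

```python
import math

# Hypothetical closed-response questionnaire: each entry is one respondent
# aged 18-25; 1 = reports being affected by the issue, 0 = does not.
responses = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0]

n = len(responses)
p = sum(responses) / n           # sample proportion (point estimate)
se = math.sqrt(p * (1 - p) / n)  # standard error of the proportion
margin = 1.96 * se               # ~95% margin (normal approximation)

print(f"proportion = {p:.2f}, 95% CI ≈ [{p - margin:.2f}, {p + margin:.2f}]")
```

The width of the interval shrinks as the sample grows, which is precisely the sense in which quantitative methods rely on a sample standing in for the whole population, a reliance the limitations below call into question.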

Limitations of Quantitative Methods
Quantitative research assumes that a sample of the population studied represents the whole population. This is a very limited view of human beings that can lose the complexity and particularity of each individual, group and situation. For example, a classical assumption of economics is that the individual (or the corporation) is a rational decision-maker always seeking to maximize his or her own profits. This presupposition distorts research and policy-making by perpetuating an economic system of constant absolute gains-seeking. Quantitative methods are limited in their understanding of society as a human web, which poses a real issue of narrowness and bias in research. An example is the UN's World Happiness Report, first issued in 2012. It was seen as an improved, more complex, quantifier of progress than GDP, for it takes into account additional variables: social support, healthy life expectancy, good governance (ex: freedoms, perception of corruption), environmental sustainability, and more 'personal' variables like education and physical/mental health. However, even this "improved" generalized data set has its shortcomings. These criteria do offer a certain comfort of life, but can they add up to a state as complex, nuanced and personal as happiness?

The highest-ranking countries are generally Nordic (Norway, Denmark, Finland, Sweden). Nevertheless, studies show that rates of depression, alcohol and drug consumption, and suicide are high and growing in the same countries that score highly on the WHR, suggesting at the very least that the picture is more complicated than the index would suggest.

‘Alternative’ and growing use of Qualitative Methods
Qualitative methodology is often opposed to quantitative research. It includes 'unstructured, open-ended interviews with economic actors' over a broad range of issues such as 'migration, labour market, technological change', etc. Long and detailed case studies allow social notions such as gender and/or race (intersectionality) to be part of the equation, as in the analysis of the intricate relationship between discrimination and inequality. Jacqueline Scott, in an ESRC project on Gender Inequalities in Production and Reproduction, argues that quantitative methods are 'invaluable when examining the way production (workforce) and reproduction (returning from a maternal leave) interact in terms of a women’s career opportunities'. An examination of 'outcomes for different groups of women, who are in different occupations and who adopt different approaches to juggling work and family responsibilities' is necessary to create inequality-reducing policies.

However, quantitative and qualitative methods are not mutually exclusive. Combined, they broaden research and consequently deepen policy-making’s understanding of a complex world with intricate social, economic, cultural and other relations.

Cartography as a discipline
Cartography is generally thought of not as a discipline but as a field of study. However, it can be argued that it is in fact a discipline. The Cartographic Journal is a peer-reviewed academic journal dedicated to the study, discussion, and analysis of cartography; its first publication in 1964 can therefore be considered a marker of the beginning of cartography as a discipline.

Mercator Projection
Evidence in cartography mainly revolves around maps, which can differ depending on the map projection used. A map projection is the way in which the globe is converted into two dimensions, from a sphere to a flat map. There are several types of map projection, one of the most well known being the Mercator projection, developed by the Flemish geographer Gerardus Mercator. This cylindrical projection has evenly spaced longitudinal lines, while the distance between latitudinal lines grows further away from the equator. Mercator’s intention behind this design was to allow easier navigation for sailors at the time: the straight, perpendicular lines bear true to north, south, east, and west, allowing navigators to plot a straight-line course. Additionally, the projection preserves local shapes at any zoom level, one of the reasons why it is used in online maps such as Google Maps.
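The growing spacing of the latitudinal lines follows directly from the standard spherical Mercator forward equations, x = Rλ and y = R·ln tan(π/4 + φ/2), where λ is longitude and φ is latitude in radians. A minimal sketch (unit-radius sphere assumed for simplicity):

```python
import math

# Spherical Mercator forward projection: meridians map to evenly spaced
# vertical lines, while the y-spacing of parallels grows with latitude,
# which is what stretches landmasses near the poles.
def mercator(lon_deg, lat_deg, radius=1.0):
    lam = math.radians(lon_deg)
    phi = math.radians(lat_deg)
    x = radius * lam
    y = radius * math.log(math.tan(math.pi / 4 + phi / 2))
    return x, y

# Equal 30-degree latitude steps map to ever-larger y steps:
print(mercator(0, 30)[1])  # y at 30°N
print(mercator(0, 60)[1])  # y at 60°N: more than double the y at 30°N
```

Note that y diverges to infinity as latitude approaches ±90°, which is why Mercator maps must be cut off before the poles (and why Antarctica is sometimes cropped out entirely, as discussed below).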

Eurocentrism in the Mercator Projection
The Mercator projection has been accused of providing a very Eurocentric perspective of the world. The increasing spacing of latitudinal lines stretches landmasses closer to the poles, making them appear bigger than they should in relation to other countries. Countries in Europe and North America especially have been “artificially enlarged”, making them appear more important to those who view the map. In some Mercator maps, Antarctica has been cropped out due to its excessive increase in size. Science education author Michael DiSpezio argues that this emphasises the Eurocentrism of the Mercator projection, as the centre of the projection then lies closer to Europe. This distorted representation of the world has therefore had a significant impact in reinforcing the idea of European superiority; the sense of Western superiority is reinforced via the association humans draw between size and power.

Different forms and scales of evidence in biology
Different methods in the two branches of biology.

Two branches can be distinguished in biology: the theoretical and the experimental. Different methods thus exist to establish evidence in this field. Theoretical biology is said to be very quantitative: model building, a method very useful in biology, involves many quantitative approaches, and evidence here is found through logic. Conversely, in experimental biology, Marcel Weber argues that strong evidence is provided because a very large diversity of approaches is presented. This type of evidence can be seen as qualitative.

Link between the quantitative and the qualitative evidence that biology provides.

Stephen H. Jenkins argues that evidence can be found both in comparative studies and in experiments, and both require observation. One can therefore argue that here, at least, evidence is based on observation. For instance, a comparative study may show that high environmental concentrations of a specific chemical compound (bisphenol-A) are toxic to aquatic species and harmful to aquatic communities. Here, repeated observation provides evidence. In addition, Jenkins notes that observation implies data: the accumulation of empirical observations can form a data set. The method of analysis therefore shifts from a qualitative one to a quantitative one. From this point of view, we could conclude that in biology both methods are needed, but one should be used before the other.

'''Biological and scientific evidence in public context. '''

Evidence in biology, and more broadly in science, can be seen at a different scale. Here, Jenkins discusses how scientific research is spread through the media and how difficult it is for the general public to find evidence in this unorganized spread of facts. He argues that even if the media do not share all the methods and assumptions used in a research project, the general public can still find ways to question and interpret these research methods.

Ethics in obtaining evidence in biological research
Progress in certain areas of biology largely depends on being able to carry out studies in vivo (studies where the consequences of an experiment or trial are observed in a whole organism, plant or animal, rather than in a tissue extract or dead organism). Drugs and cells interact differently in a test tube than in a living organism, as there are other physiological and metabolic factors affecting drug efficacy that cannot be observed in vitro; the body might break down the drug before it reaches its target, or might not be able to absorb the drug.

However, for obvious reasons, animals (excluding humans) are unable to provide informed consent, and even among humans who have consented to participate in clinical research, the search for evidence can and does infringe upon human and animal rights. A balance must be found between the benefits of scientific progress and the rights of the individual. The Tuskegee trials (beginning in 1932) were a breach of ethics: two-thirds of the 600 men enrolled had latent syphilis and were given no effective care, suffering the effects of the disease until they died so that it could be studied.

The ethical principles that are common across scientific disciplines have origins in documents like the Nuremberg Code (1947), the Universal Declaration of Human Rights (1948), and the Declaration of Helsinki (1964). Such ethical principles include duty to society, beneficence, informed consent, privacy, and integrity (not falsifying or omitting relevant data). These documents emphasise that patient health is the first priority, and that protecting the rights of subjects takes precedence over the acquisition of knowledge.

Any experiment needs a control group; otherwise it is not worth conducting. However, an issue arises when considering trials of antiretroviral drugs for treating AIDS, such as AZT. If the drug functions as intended, the placebo group misses out on its benefits in order to fulfil the basic requirements of a clinical trial. Whilst this is necessary to obtain accurate results, the consequences for the individual can be fatal. A response to this tension is seen in the original AZT trial, which was terminated five months prematurely because it was considered unethical to give people placebos when the drug might keep them alive longer. Yet this meant that the trial was not completed, and it could not be said unequivocally that AZT prolongs life.

Evidence within medicine: addressing the gender bias
Medical research has historically excluded women from trials, despite the evidence from these studies determining the treatment and medication available to men and women alike. There is a growing consensus that the evidence basis of the medical discipline is fundamentally flawed. Some notable examples are the 1982 Multiple Risk Factor Intervention Trial, conducted to evaluate the impact of certain lifestyle choices on cholesterol and the incidence of coronary heart disease, which enrolled 12,866 men and no women; the Physicians’ Health Study on the effectiveness of aspirin in reducing the risk of heart disease, which included 22,071 men and no women; and the National Institute on Aging’s Baltimore Longitudinal Study of Aging, which excluded women for the first 20 years it ran. The reasoning behind this exclusion can be grouped into three categories:

1. Scientific: a lack of knowledge of female physiology, which may compromise the validity of results if female subjects are included in trials, as well as a concern to avoid harming women’s fertility: women are born with all the eggs they will ever have, whereas men produce sperm continuously.

2. Historical: to maintain the validity of studies, trials must be repeated on the same populations, which have historically been exclusively male.

3. Economic: female participation is restricted by limited research budgets. However, this gender bias is also the direct cause of significant economic costs: of the 10 prescription drugs removed from the US market between 1997 and 2000, 8 were withdrawn because they placed women’s health at greater risk than men’s. The process of drug withdrawal is a costly one, the burden of which is typically borne by pharmaceutical companies and their stakeholders.

= Interdisciplinary uses of evidence: =

Philosophical evidence: “Cogito ergo sum”
Evidence is defined as “one or more reasons for believing that something is or is not true”. The essential purpose of evidence is therefore to rule out doubt about truth, whatever the area of study. Doubt, for that matter, is the very basis of philosophical thinking. One of the best-known questions ever posed in this field is that of existence, addressed by the Cartesian "cogito ergo sum". By this line of reasoning, Descartes aims to prove his existence: by doubting it, he is producing thought, and in order to think, he must exist. Consequently, “cogitat ergo est”: “he thinks, therefore he is”. However, Descartes himself writes in a later work, “The senses deceive from time to time, and it is prudent never to trust wholly those who have deceived us even once”. This admission leads one to question whether his theory is sufficient, and on what tangible grounds and indisputable evidence it rests.

Providing irrefutable evidence of the existence of oneself
The philosopher’s approach is qualitative, as his doubt cannot be put into numbers. It can also be described as primary evidence, since he reflects on his own self. Nevertheless, the Meditations on First Philosophy quoted above indicates that this is not enough to fully validate the Cartesian “cogito”.

Indeed, quantitative biological data would probably give powerful evidence of one’s existence: a healthy living human being should have a resting heart rate of 60 to 100 bpm, so someone in this range is very likely to exist. Another way to complete Descartes’s theory might be to provide secondary evidence, that is, a contribution from an exterior relevant party. This could be a legal proof, such as a birth certificate, or testimony from his acquaintances about their relationships.

In conclusion, Descartes’s philosophical cogito appears insufficient on its own to establish one’s existence with strong and indisputable evidence. Nevertheless, it could be reinforced with additional evidence and data from distinct fields of study such as biology, law, and social relationships. This effectively illustrates that an argument is most strongly supported when approached through a wide range of accurate evidence emanating from various disciplines.

Evidence for extraterrestrial life
The search for evidence of extraterrestrial life is grounded in astrobiology, a highly interdisciplinary field. Evidence from both the sciences (including physics, biology and ecology) and the humanities, such as philosophy, is needed to prove the existence of extraterrestrial life. As the field is so hypothetical, the collection and analysis of evidence is integral.

Most qualitative evidence for the existence of so-called ‘aliens’ is based on exoplanet analysis. The study of exoplanet atmospheres gives an indication of any biosignature gases present, and therefore of both microbiological life and potential intelligent life. Furthermore, a planet needs the right atmosphere and temperature to support the presence of liquid water, which acts as a biosolvent. However, this qualitative evidence is largely based on the experience of terrestrial life and evolution; more grounded evidence could be found if thinking shifts away from an Earth-centred mindset. As well as water, other biosolvents are worth considering, ammonia for instance, which is found on the moon Titan as water-ammonia eutectics.

There have also been attempts to quantify evidence for extraterrestrial life. The Drake Equation strives to give a numerical value, N, representing the number of currently communicating extraterrestrial civilisations. However, there are issues with quantifying such data: the equation has been criticised as evidence because humans only know their own technology, which makes the last three factors of the equation little more than estimates. A newer ‘Seager Equation’ has been proposed that quantifies life in space based on biosignature gases rather than estimates of radio contact between civilisations, and applies to both microbiological life and potential intelligent life.
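The Drake Equation described above is simply a product of seven factors, N = R* · fp · ne · fl · fi · fc · L. A minimal sketch makes its structure, and the dominance of the poorly constrained final factors, easy to see. The factor values used here are purely illustrative assumptions, not measured data:

```python
def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Estimate N, the number of communicating civilisations in the galaxy.

    r_star   -- average rate of star formation (stars per year)
    f_p      -- fraction of stars that host planets
    n_e      -- planets per star that could support life
    f_l      -- fraction of those planets on which life appears
    f_i      -- fraction of life-bearing planets developing intelligence
    f_c      -- fraction of civilisations releasing detectable signals
                (one of the "estimate" terms criticised in the text)
    lifetime -- years such civilisations keep transmitting
    """
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# Illustrative (assumed) values only; the true values of the last
# three factors are unknown, which is the core criticism of N as evidence.
n = drake(1.0, 1.0, 0.2, 0.1, 0.01, 0.01, 10_000)  # 0.02
```

Because the result is a plain product, any uncertainty in the final three factors multiplies straight through to N, which is why estimates of N vary over many orders of magnitude.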

While astrobiology seems a very scientific field, philosophy has proven key in providing evidence for extraterrestrial life where technological limitations mean it is difficult to quantify data at present. The panspermia hypothesis is one such philosophical idea, originating with the Greek philosopher Anaxagoras. Panspermia holds that life spreads through the universe, as ‘seeds’ from one planet lead to life on another. Meteors are the proposed vectors for extremophile bacteria such as Bacillus subtilis to spread through space, and simulations using artificial meteorites have provided evidence that such spores can indeed survive the harsh conditions of space and atmospheric entry, and therefore evidence for panspermia. However, this evidence is based only on simulations on Earth, and so it might not be considered irrefutable evidence for extraterrestrial life.

=Conclusion=
Evidence is manifestly varied in nature: not only is there a general distinction between forms of evidence - primary versus secondary - but there is also a distinction as to what can be considered valid evidence from one discipline to the next. Indeed, certain disciplines place stronger emphasis on quantitative evidence, arguing that it has a more objective quality; others focus on qualitative evidence, defending its worth by its broader reach: not all things are quantifiable. The research in this article illustrates a tendency to multiply forms of evidence, looking across disciplinary boundaries, as a means of developing a more substantiated argument.

= References =