User:JREverest/sandbox/Approaches to Knowledge/2020-21/Seminar group 6/Evidence

Evidence in History
History is “(the study of or a record of) past events considered together”, aiming to help understand the evolution of events and the reasoning behind past customs. Hence, the only way to practise history is to examine records of past events and connect them so that they accurately represent what happened at a specific time. However, interpretation also plays an important part in historical enquiry, because sources are so often missing or misleading.

Ernst Bernheim (1889) proposed a historical methodology comprising seven steps, which aim to deal with ambiguous or absent sources as well as to criticise the sources’ validity. However, what is more commonly appealed to today as a “manual” of historical methodology is Garraghan’s (1947) “A Guide to Historical Method”. These books answer questions about what happens when two sources contradict each other, how to interpret eyewitnesses’ claims and when to consider them, how to treat anonymous sources, and so on. It is essential that there is a complete and sound methodology for evaluating evidence, given that history’s entire foundation revolves around it. The procedure must cover all possible situations in which uncertainty occurs and state which compromise is best. For example, this is how a mixture of Bernheim’s and Garraghan’s methodologies deals with the situations below:

Contradiction between sources
When sources evidently contradict each other, the most appropriate course is to weigh the authority of the claims according to who made them. Experts are usually more reliable than eyewitnesses, which makes them the preferred source. If there is no means of evaluating the sources' authority, historians make their judgement based on which source best fits common sense.

Eyewitnesses
When considering an eyewitness’s account, many variables must be taken into account for the judgement of their claims to be sound. It is important to establish the intentions of the eyewitness, their mode of expression (ironic, literal, etc.), and whether the statement is highly improbable in relation to the context or to human nature.

Importance of Historical Revisionism in Interpretation
Historical revisionism, in the discipline of history, is the challenging of already established, evidence-based beliefs. It revisits the interpretations inscribed in history books and tries to approach them from new positions – this is also what is happening today in the creation of a “new” history of the colonisation era that considers the views of the people who lived on the colonised lands prior to the arrival of Europeans. When historical evidence about a certain event or person is successfully revisited, it creates a completely new take on the issue. An example is Annette Gordon-Reed’s research on the relationship between Thomas Jefferson and Sally Hemings – by reinterpreting the evidence about these two people (she wrote that Jefferson and Hemings, a slave in his household, had a relationship lasting 38 years), Gordon-Reed changed how these figures are seen and their significance within U.S. history. Her account was not a false portrayal of the evidence but rather a new interpretation of it.

On the other hand, C. Behan McCullagh (2000) argues that false portrayal of historical evidence happens for two reasons – either the historian has a personal preference for a specific set of facts (personal bias), or the evidence has been misunderstood accidentally because of society’s cultural bias. He states that the most important requirement for understanding evidence correctly is a developed set of standards of inquiry. However, academics in the discipline allow for a certain level of personal or cultural bias in evidence, since it enables them to explain past proceedings from a more sophisticated perspective and so to give an even better account of previous events.

The Issue of Personal Diary Entries as Evidence in History
The information given in diaries (evidence) is both objectively descriptive and subjectively explanatory, meaning that it gives a very personalised account of the historical event described. Both historically and now, journalling is a common practice that allows people to express in writing how they feel and what they see, and to explain their values and the reasons for certain ways of thinking.

Within the context of history, a diary is perceived as a memoir, a historical account, in which a person has written down the thoughts, emotions and activities they lived through over a certain period. Since history is a social science (with the main aim of exploring past socio-economic activities and how these proceedings shaped society into what it was then and what it has become today), diaries are used as real-time evidence giving direct but subjective explanations of past phenomena (and ensuring interaction between the subjective and direct elements of the writer’s experience). Many historical diaries, moreover, were written about experiences that were negative and humiliating. In history, personal journals are considered primary sources – original perspectives on the event under research from its actual time – and from these entries historians can draw new interpretations and further analysis.

However, the very fact that diaries provide a unique and individual insight into past events also poses a serious problem: when diaries are used as evidence, how can the researcher be sure that the information provided is truthful and sufficiently scientific? How can an academic determine that a written entry meets the standards of social science (i.e., that it does not exclude a particular race’s views and opinions), or ensure that the information given is not simply the product of the writer’s imagination? Although it would be hard for any professional historian to answer these questions, diary entries are used nevertheless, since there are no other materials which, as stated by Irina Paperno (2004), could “reveal the tension between the opposites”. Scholars should therefore foster awareness of the one-sidedness of the material in order to understand the discourses it provides.

The Limitations of Using Extrapolated Data as Evidence
Extrapolation is a method used to make predictions based on the trajectory of patterns in previously gathered data. Since in empirical experiments it would be impossible to test the behaviour of every particle or organism under every variable condition, extrapolation is a useful tool for making informed assumptions about untested ranges of data.

While extrapolation can be a reliable and extremely useful tool, it carries the risk that the extrapolated values rest on patterns rather than on experimental data. There is never absolute certainty that, in the untested ranges, particle or organism behaviour will follow the predicted pattern. Therefore, extrapolated data, while not entirely unfounded, can be dubious information upon which to claim scientific confirmation of a theory.
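This risk can be sketched with a toy calculation (the numbers and function names below are purely illustrative assumptions, not taken from any cited study): a straight line fitted by least squares to observations from a tested range predicts well inside that range, but diverges sharply once we extrapolate beyond it, because the underlying process was never linear to begin with.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

def true_response(x):
    # The (unknown) underlying process is assumed quadratic here.
    return 0.5 * x ** 2

# Observations gathered only over the tested range 0..4.
xs = [0, 1, 2, 3, 4]
ys = [true_response(x) for x in xs]
a, b = fit_line(xs, ys)

def predict(x):
    return a * x + b

# Inside the tested range the linear model looks convincing...
error_inside = abs(predict(2) - true_response(2))
# ...but extrapolated to x = 10, the pattern no longer holds.
error_outside = abs(predict(10) - true_response(10))

print(error_inside, error_outside)  # → 1.0 31.0
```

The fitted pattern is not wrong within the data, only unfounded outside it, which is exactly the gap between pattern-based prediction and experimental confirmation described above.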

An example of extrapolation’s uses and limitations can be found in attempts to overcome ethical barriers in biological experiments. The effects of potentially harmful or toxic chemicals cannot be tested on humans for ethical and legal reasons, so animals (commonly rodents) are used as surrogate test subjects to gather evidence. As intelligent fellow mammals, rats share many biological, psychological, and physiological similarities with humans. Since they are also small, cheap to breed, easy to genetically modify, and share genes linked to human genetic diseases, rats are good test subjects that produce reliable data on the potential effects of new drugs on humans. Rats have been involved in the study of many now prevalent prescription medicines, including oxytocin and drugs that combat high cholesterol. However, there are differences between a rat’s and a human’s metabolic pathways, their reactions to and recovery from drugs vary, and the laboratory techniques used to induce illnesses and injuries can influence the outcome of the data. In such cases, extrapolation proves not to be a fully reliable source of evidence for predicting, from the effects of drugs on rats, their expected effects on humans.

Experiments as Scientific and Social Evidence
Evidence is a proven fact used to demonstrate a statement. In many fields of knowledge, especially the scientific and sociological ones, theorists base their analyses on experiments, which are therefore considered evidence.

The first traces of scientific experiments as part of the scientific method date from Ancient Egypt and Greek Antiquity. With the emergence of the scientific discipline during Antiquity, some philosophers, such as Plato, believed scientific statements could rest on reasoning alone. However, other thinkers, Aristotle being one of the first to take this position, argued that all fields of knowledge should develop and rely on empirical evidence. Around the same time, materialist currents of thought developed in India. The scientific method was then based only on observation and measurement, the latter introduced by Aristotle.

The origins of experimental scientific evidence
The first to bring physical experimentation into the scientific method were Muslim philosophers. They kept the basis of the Greek methods but added experimentation, hypothesis, the explicit statement of a problem, the interpretation of data and the elaboration of a conclusion. Together, these new elements contributed to a scientific method very close to today’s: more precise than in the past, and reusable.

The scientific method was then reborn during the Renaissance with new, even more precise procedures that deepened the scientific approach. The work of Galileo in particular was very valuable, in that he realised that the world around us and its components were too irregular to conform perfectly to the theoretical principles elaborated in the past.

The Industrial Revolution, with the contribution of scientists such as Newton, continued to deepen research methods in the various disciplines and drew on the theoretical and practical advances that had so far contributed to real scientific breakthroughs. Finally, the Digital Revolution and its ever more advanced technology now allow scientists to carry out ever more efficient and radical experiments, to face ever more complex questions, and to understand the functioning of the organisms and physical elements that constitute our environment.

The origins of experimental social evidence
In the context of psychological and sociological research, social experiments are an often-used form of evidence. Social experiments appeared later in history than scientific ones and were used extensively during the 20th century. One of the first social experiments took place in the second half of the 19th century, in 1895. It was conducted by the American psychologist Norman Triplett, who found that cyclists tend to ride faster when racing against others. He replicated the experiment in a laboratory setting and obtained similar results. Most such experiments aimed to understand the social behaviour of individuals, how and why society was constructed the way it was, and the social evolution of human beings.

For the highest possible validity, social experiments must fulfil certain conditions. These include, for example, standardised procedures, replicability, a large and representative sample, and the conduct of a pilot test.

At the same time, the ethical restrictions put in place, while fostering more responsible practice in the field of social experiments, make it hard for social scientists to carry out and put into practice their theoretical statements.

Evidence In Law
Law is a discipline based on evidence and truth. However, when it comes to truth, everyone often has a different perspective, and we quickly come to the conclusion that no one holds a monopoly on truth: similarly to Schrödinger’s cat in physics, truth in law can take different forms and there can be multiple truths. This is why evidence is so important in law to support an argument. Before a court, evidence can take the form of testimony, an exhibit, documentary material or demonstrative evidence.

Testimony:

According to USLegal, “A testimony is a statement made in a legal proceeding or legislative hearing by a witness while under oath”. Here, a testimony is considered acceptable evidence in law only in a defined situation with prerequisites: it is made by a witness and while under oath. However, “testimony in the form of opinions or conclusions cannot be received, except from experts”, showing that law requires certain criteria to be met before a fact qualifies as “evidence” in the courtroom.

Exhibit:

An exhibit is “a document or object (including a photograph) introduced as evidence during a trial (a copy of a paper attached to a pleading, any legal paper filed in a lawsuit, declaration, affidavit, or other document, which is referred to and incorporated into the main document)”. [31] Again, there are strict criteria as to what counts as evidence and what does not. For example, secondary evidence as to the content or intent of writings cannot be received. [32]

Documentary material or Demonstrative evidence:

“The term documentary materials is defined by 44 USCS § 2201 as all books, correspondence, memorandums, documents, papers, pamphlets, works of art, models, pictures, photographs, plats, maps, films, and motion pictures, including, but not limited to, audio, audiovisual, or other electronic or mechanical recordations”. [33] Hence, by creating strict rules, the law ensures that the term “evidence” is specific, which promotes its importance and utility before a court. This is important because, for example, hearsay cannot be received as evidence under this title before a court. Hence, we see that, depending on the discipline, evidence has a certain level of exclusivity and a set of rules that must be satisfied to earn the appellation of evidence.

Eyewitnesses in court: how can memory be altered and create inaccurate testimony?
One of the key kinds of evidence in law is eyewitness evidence. However, it has been argued for over a century that this type of evidence is not always reliable in court. Indeed, it is known that when a memory is revisited, emotions, among other factors, play a big part in its reconstruction, and especially in its alteration. This can lead to inaccurate accusations during a trial. Bartlett’s theory of reconstructive memory, later supported by the research of Loftus and Palmer (1974) on the effect of leading questions on eyewitness testimony, is just one example of how memory can be distorted, putting into question the reliability of eyewitness testimony in court. So should an uncertain process such as revisiting memory, influenced by so many different factors, be able to decide someone's fate for the rest of their life?

In fact, because of wrong testimonies in the United States, about 375 prisoners spent an average of 14 years in prison for accusations later proven wrong by DNA tests. Nevertheless, in the judicial system eyewitnesses are considered very reliable. Most neuroscientists agree “that memory is encoded in the patterns of synaptic connectivity between neurons”. These synaptic connections can be altered and thus shape future testimonies. Furthermore, there is long-term depression (LTD), a phenomenon that reduces the efficiency of neural synapses and can thereby damage memory. Synapses are stimulated whenever someone works with their memory, which can lead to an improvement of the connection, and thus of the memory, but also to its deterioration. Moreover, in other scenarios, when stress occurs – which is very common when individuals witness a crime – hormones such as epinephrine or cortisol are released during the arousal. It has been shown that during a high-arousal stage, “memory encoding may be enhanced or impaired depending on the person's individual stress response.”

Additionally, there are biases that can alter memory, such as the cross-race bias: humans tend to struggle more when identifying someone of a different race, which makes such testimony less reliable, since it is easier for us to recognise a face from our own race. Furthermore, Elizabeth Loftus, a cognitive psychologist and expert on human memory, conducted a study on the interaction between language and memory. She conducted it with 45 students, who had to watch short films about a car accident and answer a few questions after each one. One question asked them to estimate the speed of the cars during the collision. Every video showed cars travelling at the same speed; the only difference was in the question, where Loftus used different words for the collision (hit, smashed, etc.). The results showed variability in the answers, demonstrating that the words used in a question can affect the answer. Even though it is not yet possible to look into someone’s memory, it is essential to acknowledge that human memory is not fully reliable, and it is therefore key that judges balance the weight they give to eyewitness evidence when making their final decision.

Forensic Evidence In Criminology
Evidence can be quite delicate to define: what makes something good enough to prove a point? What is proof based on: facts, knowledge, observation, or a bit of everything? Evidence has become crucial in supporting claims, advancing research, and solving problems. In criminology, evidence can take several forms: forensic evidence, expert evidence, circumstantial evidence, direct evidence, and primary and secondary evidence. Criminology is a discipline that encompasses the scientific study of crime and criminal behaviour; evidence is therefore a pillar of this discipline. By focusing specifically on forensic evidence, obtained through scientific methods such as toxicology, ballistics, blood tests, fingerprints, anthropometry, maturation, and DNA tests, we can understand part of the role of evidence in this discipline.

Forensic evidence is critical: it helps determine the innocence or guilt of a person in a court case. Some evidence can be indisputable and can establish specific facts with certainty. Other pieces of evidence can be insufficient, and arguments cannot be based solely on the information they provide. In the court of law, the analysis of forensic evidence is used to establish the facts of a situation. However, evidence tampering and faulty results also exist, so it is up to the professionals involved to examine the evidence and determine whether it can be used as real proof or fact.

In the past 40 years, forensic evidence has advanced greatly with the development of technology and new techniques. Many new tools are still being developed today, such as drones that allow 3D representations of crime scenes or accidents, pointing to a future with ever more accurate evidence.

However, forensic evidence has its limits. It involves two disciplines, law and science, which use different approaches and constraints when talking about evidence. When forensic evidence is put forward, it is treated as the truth because it is scientific; however, a piece of scientific evidence does not necessarily establish the truth, since it is possible that, for example, a DNA test is wrong. In 2005, Mitt Romney approached forensic evidence without taking into account the constraints of the different disciplines. He looked at it only from within the legal system and assumed it was the truth beyond any doubt. He therefore introduced a bill to reintroduce the death penalty in Massachusetts, considering that scientific evidence would prevent every wrong conviction, and did not allow for any margin of error. Forensic evidence is thus a very good improvement for the legal system, but it is always important to examine the different disciplines and approaches involved in order to fully benefit from it rather than make it counterproductive.

Psychology - a 'real' science?
Psychology is defined as ‘the scientific study of behaviour and mental processes’, yet despite this definition, whether it qualifies as a science has been disputed since the discipline's conception. Much of this debate relates to research methods and, by extension, evidence. As it follows the empirical method, a methodology characterised by the use of directly measurable or observable data, psychology is technically considered a science. However, many scholars and members of the public alike argue that the object of psychological study in its broadest sense – the mind – is too abstract to be observed and measured in the same ways as other phenomena, and thus cannot be considered in the same vein as the natural sciences. Such individuals argue that psychological study at times evokes and attempts to answer metaphysical and ontological questions, which simply cannot be supported by concrete evidence. The conversation around psychology therefore provides an insight into what both individuals and entire disciplines believe constitutes evidence, and into the role of evidence in the perceived validity of knowledge.

Although dictionary definitions of evidence do not stipulate what type of information can support an idea or theory, which types of data have more validity, and thus which evidence is more reliable, is widely contested within academia. This is most commonly seen in the quantitative-versus-qualitative debate. Broadly speaking, those in favour of quantitative data argue that numerical data provides a means to draw conclusions from large data sets which otherwise could not be analysed, whilst those in support of qualitative data argue that conclusions should only be drawn from detailed analysis, otherwise researchers run the risk of oversimplification. The former is often considered more rigorously scientific and its evidence more valid, whilst the latter can be treated as unscientific and the subsequent evidence ultimately less legitimate.

Whilst this is often considered an interdisciplinary debate between the arts, the social sciences and the natural sciences, psychology provides a microcosm in which such polarisation can be observed, owing to the variable nature of the discipline. Unlike other disciplines, psychological research differs markedly in method depending on the needs and nature of the sub-field, or even of the specific research question. Neuropsychology or biological psychology may call for quantitative research to provide evidence for a theory, whereas counselling psychology or personality psychology may call for more qualitative methods. Further, some approaches within psychology, namely the humanistic approach, reject the scientific method entirely. Most famously associated with Carl Rogers and Abraham Maslow, psychologists who follow this philosophy instead stress the precedence an individual's subjective perception and understanding should take over empirical methods of enquiry. This interpretative approach to evidence, and the consequent construction of knowledge, differs significantly from the positivist approach generally taken by modern psychologists. In this way it can be ascertained that evidence is not rigidly defined in actual usage but is instead a subjective entity defined at a personal, departmental and disciplinary level.

Confirmation bias - Accuracy of evidence in psychological research
Evidence can be described as a means of assessing the validity and credibility of information. Disciplines in the human sciences, such as psychology, use research to interpret events, observations and thoughts, with the common purpose of understanding societal occurrences. Primary research, such as research studies in psychology, is considered a form of evidence within the discipline. This evidence is then used to build models and predictions that deal with complex real-life ideas. This can be demonstrated by looking at the multi-store memory model: research on the serial position curve in studies such as Glanzer and Cunitz (1966) was the driving force behind the multi-store approach to memory, which is used to explain brain functions and predict how the human brain works. We can therefore observe that studies such as Glanzer and Cunitz's are used as evidence to reach conclusions in psychology; however, some may argue that issues arise from relying so heavily on human-controlled evidence.

Confirmation bias, a process by which researchers involuntarily misinterpret results by adapting them to their personal beliefs or ideas, is a phenomenon which has been shown to appear in many psychological research studies. Peter Wason was one of the first researchers to explore the topic of bias in the 1960s, using his “2-4-6” experiment to show this. His research consisted of asking individuals to identify the rule behind a number sequence he proposed. It was observed that participants answered with sequences which confirmed their personal hypothesis of what they thought Wason's rule was, rather than trying to falsify their hypothesis by using opposing or different sequences. Some argue confirmation bias is a shortcut our brain uses to process new evidence, as it is easier to confirm what we believe is right than to process new, complex evidence. Others argue it is a matter of unconsciously confirming one’s own hypothesis while trying to falsify any alternative ones. However, if confirmation bias is identified in psychological research, what does this imply about the evidence used to make predictions and assumptions?
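The logic of the 2-4-6 task can be sketched in a few lines of code. The hidden rule below (“any ascending sequence”) is the one Wason used; the particular hypothesis and test sequences are illustrative assumptions about how a typical participant behaves, not Wason's exact materials.

```python
def hidden_rule(seq):
    """Wason's actual rule: the numbers simply increase."""
    return all(a < b for a, b in zip(seq, seq[1:]))

def participant_hypothesis(seq):
    """A typical participant's guess: numbers increase by 2."""
    return all(b - a == 2 for a, b in zip(seq, seq[1:]))

# Confirmatory tests: sequences chosen to fit the participant's own hypothesis.
confirming = [(2, 4, 6), (10, 12, 14), (1, 3, 5)]
# A falsifying test: fits the hidden rule but NOT the hypothesis.
falsifying = (1, 2, 3)

# Every confirming test passes under BOTH rules, so it cannot tell them apart.
uninformative = all(hidden_rule(s) == participant_hypothesis(s) for s in confirming)

# The falsifying test separates the two rules.
discriminates = hidden_rule(falsifying) != participant_hypothesis(falsifying)

print(uninformative, discriminates)  # → True True
```

The point of the sketch is that confirming tests, however many are run, return the same verdict under both rules and so carry no information; only a sequence designed to falsify the participant's hypothesis can distinguish it from the hidden rule.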

It is argued that confirmation bias can lead to a distortion of evidence. If a conclusion is reached precipitately, it may foreclose the opportunity for other results to arise or for further exploration, possibly leading to misguided conclusions. Even when an effort is made to control biased collection of evidence through methods such as random allocation, systematic errors can be introduced much earlier, for instance in the design of the experiment or the execution of the proposed method. Applying these thoughts to research within psychology: impulsive and premature conclusions can lead to misguided evidence, causing wrong predictions and models to be established within the discipline. This creates scepticism about the accuracy of evidence and is something we should consider when exploring psychological research. On a broader level, other research-based disciplines that also base conclusions on evidence may face the same questioning of the validity of their evidence, given the possibility of confirmation bias, and might therefore have to consider whether evidence should be relied on so heavily to form conclusions.

Use of evidence in Management
Management is an academic discipline within the social sciences. Its focuses include setting up strategies for an organisation, based on its various available resources, to achieve specific goals. Universities offer various degrees within management, such as the Bachelor of Business Administration (BBA), Master of Management (MScM or MIM), Doctor of Management (DM) and Doctor of Business Administration (DBA).

Evidence-based management
In recent years the evidence-based management (EBMgt) movement has been gaining popularity as part of a general movement for evidence-based practice, which advocates that practice should draw on scientific evidence. Evidence-based management has three pillars and strives for decision-making and organisational strategy based on the best available evidence, where evidence means quantitative, qualitative or descriptive information. Its main principles include the use of published peer-reviewed research evidence, managerial judgement grounded in experience, and the values and preferences of the people concerned. Evidence drawn from scientific research, usually published in academic management and social science journals, provides a critical evaluation of specific management practices; this may include information on whether financial incentives improve employees' commitment and motivation, or an evaluation of merger success rates. Evidence may also come from organisational data, ranging from quantitative data such as financial figures and market share to qualitative information such as perceptions of an organisation's culture. Experiential evidence comes from practitioners and is characterised by reflections, accumulated over time, on relatively similar actions. Other forms of evidence include stakeholder evidence, which reflects the concerns and values of individuals or groups that may be affected by the organisation's actions.

Evidence-based practice in interdisciplinary healthcare
Evidence-based practice

Evidence-based practice in healthcare focuses on using the best research evidence available, along with clinical expertise and consideration of the individual patient, to deliver patient care. The term evidence-based medicine was first introduced in 1991 and is defined as “the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients.” The term evidence-based medicine then evolved into evidence-based practice, in order to emphasise the need to make use of medical evidence in actual clinical practice. In 2005, the Sicily statement on evidence-based practice was published, which defined evidence-based practice as requiring that “decisions about health care are based on the best available, current, valid and relevant evidence.”

Interdisciplinary healthcare

Interdisciplinary teams in healthcare are those comprising a range of healthcare professionals from different backgrounds, working together to deliver high-quality patient care. Complex patient problems require multiple disciplines to share information with each other, and an interdisciplinary approach in healthcare can lead to more effective services and improved problem-solving capability. Studies show that when teams of people from different disciplines work together, they can provide better patient care outcomes, which can be seen as the main reason for adopting an interdisciplinary approach in healthcare. It is therefore important to understand how to adopt the best interdisciplinary approach.

We can bring together existing knowledge from health and management to better understand how professions can collaborate. In business management it is known that as the number of people in a team increases, more specialised teams need to be created to make efficient use of skills and people. Each team is then led by a different manager who is focused on their own team rather than on the organisation as a whole. This can lead to teams duplicating their efforts or not working in the most efficient way. There is the idea of a “silo”, or department, within which information is freely shared but not with others. This means that different departments make decisions based solely on the information available to them, which may not provide the complete picture. Healthcare professionals often find themselves working in “uniprofessional silos”, which makes sharing information between professions difficult.

Conflict in generation and use of evidence

In an interdisciplinary approach, aside from the lack of evidence-sharing, there is also often conflict in how evidence is used across disciplines. There is disagreement on how to use evidence to generate guidelines for the best evidence-based practice, and on how those guidelines should be adopted. The Institute of Medicine emphasises the need for evidence, and for this information to be used to provide guidance on how to translate evidence into practice, how to deliver interdisciplinary evidence-based education, and how to guide practice itself. The Council for Training in Evidence-Based Behavioral Practice (EBBP) was set up to support collaboration across health disciplines, and includes experts from medicine, nursing, psychology, social work, public health and library sciences. The inclusion of both scientists and non-scientists in this council brings the added dimension of being trans-disciplinary. The EBBP model describes how evidence in healthcare can be provided by primary researchers, who generate the evidence in the first instance. Evidence synthesisers, such as the Cochrane Collaboration, can then use research studies to generate systematic reviews. Information from systematic reviews can in turn be used, for example by the US Preventive Services Task Force, to inform preventive services or to produce guidelines and recommendations. The EBBP model was developed after a review of existing evidence-based models from different disciplines, and states that clinical decisions need to be made based on the best available evidence.

Case study of homeopathy

Homeopathy offers an example of how conflict arises from the ways in which evidence is generated and used. Homeopathy is a holistic approach to medicine. It relies mainly on natural sources and is therefore classed as a herbal medicine, also known as a Complementary and Alternative Medicine (CAM), by the World Health Organisation. Despite homeopathy’s rising popularity, its efficacy is often challenged, as its core concepts contradict those of modern medicine, which takes an allopathic approach. Indeed, homeopathy is based on the principle of “like cures like”: a substance that causes harmful symptoms in large quantities could also be used to treat those same symptoms when administered in a very small dose. The whole principle of homeopathy therefore rests on the high dilution of a base substance.

There is an ongoing debate about whether or not to trust homeopathy, and the question of whether it is an effective method is often raised. At the heart of the debate is the role of evidence and, more particularly, the question of what counts as evidence in medicine.

Evidence-based medicine depends strongly on statistical methods and clinical experiments to prove a drug’s efficacy, with the results then applied to a whole population. It also relies on, and privileges, one form of experimentation: the randomised controlled trial (RCT). In this context, homeopathy lacks statistical relevance. This is mostly because homeopathy is applied differently depending on the homeopath and the patient, so the treatment can vary between patients presenting the same symptoms. The core difference between evidence-based medicine and homeopathy is thus the standardisation versus the individualisation of treatments, which explains the difficulty of gathering quantitative data for homeopathy and other CAM. Evidence produced regarding CAM is often questioned for its reliability, in relation to the methodology used to produce it. Indeed, a wide variety of methodologies are used in assessing data for CAM, and these are not comparable with one another, as their boundaries are frequently unclear or they are not recognised research methodologies in conventional medicine. It is also worth noting that the data are not always published officially, owing to differing policies in various countries, and this generates further debate.

Moreover, in evidence-based medicine, the application of clinical results to a whole population (the generalisation of health care) is also questioned, as the example of rare diseases shows. These illnesses usually have little to no clinical research behind them and therefore require a more qualitative approach to treatment, demonstrating the need to overcome the barriers imposed by relying solely on quantitative methods and to admit qualitative methods as evidence as well. This reinforces the argument, made by proponents of homeopathy, for a more qualitative approach in evidence-based medicine.