
Evidence
Evidence is defined in the Cambridge Dictionary as “one or more reasons for believing that something is or is not true”. The evidence itself and the methods of its collection, interpretation and verification differ across subjects, but the role of evidence is similar: it is required in the feedback loop or “circuit of knowledge”, for collective understanding and action, and to communicate ideas and persuade. Hence, the reliability of evidence is a question of importance. Evidence has to be collected and then interpreted, and there is room for bias and inaccuracy in this process. The methods and sources for obtaining evidence can be flawed, limited or even incomplete; researchers can be biased in their choice of evidence and in its interpretation and presentation; and there is always a possibility of unintentional bias and mistakes. However, evidence also provides common ground for disciplines to produce knowledge, and in some cases creates room for improvement and advancement. Because of this duality in the nature of evidence, it is crucial to evaluate the evidence presented to us.

Strengths
Evidence can be used as a tool to produce knowledge claims in different disciplines. In the sciences, evidence is employed as part of the [https://en.wikipedia.org/wiki/Scientific_method#:~:text=The%20process%20of%20the%20scientific,observations%20based%20on%20those%20predictions.&text=Scientists%20then%20test%20hypotheses%20by%20conducting%20experiments%20or%20studies. scientific method]. The scientific method often starts with a hypothesis, synthesised from asking questions and doing background research. This hypothesis is then tested by an experiment, and the results of the experiment are regarded as evidence. Evidence here can be understood more specifically as scientific evidence. Scientific evidence allows a scientist to confirm or dismiss a proposed conjecture, which provides more knowledge in the discipline.

Similarly, in History, evidence not only generates knowledge but also provides the starting point of historical inquiry. Historical evidence comes in different forms, such as written documents and material objects. It improves our understanding of events that happened in the past, and to some historians this knowledge can be used practically to improve the present and plan for a better future.

As Kelly suggests, evidence acts as a ‘neutral arbiter’, as it provides an objective view. In both Science and History, evidence can also enhance evaluation within the discipline. For example, in Science, when the results of an experiment do not support the initial hypothesis, scientists can reflect upon the evidence they have and improve the existing methodologies. Likewise in History, different accounts in written documents can provide historians with different lenses through which to analyse and critically evaluate historical events.

Limitations
Despite the role evidence plays as a means of producing knowledge in different disciplines, it is also important to be aware of its limitations. It is easy to use evidence to dismiss potential new knowledge: that is, to use evidence negatively by simply claiming that because there is no evidence, the claim is not possible. The philosopher John Locke named this fallacy argumentum ad ignorantiam, whereby ‘a proposition is true simply on the basis that it has not been proven false or that it is false simply because it has not been proven true.'

Another misuse of evidence can be seen in fake news, which can be understood as "disinformation that is deliberately created to look like actual news in order to have a political effect." Misleading news is broadcast to provide fake evidence that furthers someone's hidden political agenda. Such evidence, especially when it comes from a discipline that requires high expertise to understand, like the natural sciences, can cloud one’s judgement. Hence, it is necessary to evaluate and be aware of the credibility of evidence.

Psychologists have also identified that humans have a tendency towards confirmation bias, understood as the "tendency to give more weight to information that confirms one of our pre-existing beliefs." Confirmation bias is particularly present when dealing with abstract problems, as demonstrated by the empirical work of Wason and Johnson-Laird (1972). This indicates the possibility of choosing evidence that is inherently biased, again demonstrating that the limitations of evidence play a key role in confirming, producing and evaluating knowledge.

Another phenomenon identified by psychologists that can affect the reliability of evidence is cognitive dissonance. Cognitive dissonance refers to a situation in which one holds conflicting beliefs, ideas or values; the clash causes considerable psychological discomfort, and so one alters one side of these beliefs, ideas or values to reduce the feeling of unease. One example is a study carried out by N.T. Feather on smokers and non-smokers, in which Feather argues that a heavy smoker might be in a state of cognitive dissonance when coming across information about the correlation between smoking and lung cancer. As a result, the smoker is likely to reject the evidence presented to him or to find other ways to reduce the dissonance. This shows another limitation of evidence: it can be dismissed easily because of the subjective opinions an individual already holds. Therefore, it is essential not only to use credible evidence but also to evaluate it critically in order to make informed decisions or judgements.

Use of Translation in Evidence
Translation is derived from the Latin word translatio, which means “to bring across”. In the context of document translation, this means to render a document from its original language into a different language. Translation is a common vector or tool that aids the transfer and exchange of knowledge between communities, but it can simultaneously impede it too. Consequently, it can affect the validity, value and reliability of evidence that involves or utilises translation.

Strengths of Using Translation in Evidence
Although the history of translation dates back to as early as 3000 B.C., translation was only recognised as a discipline in the 1960s, when contemporary studies in the field began to emerge. As a result of its gradual expansion as a discipline and the ever-growing collaboration between various communities, translation is frequently used across a multitude of disciplines to help minimise the knowledge gaps present in different fields. It is therefore no surprise that translation is used within evidence: evidence can originate in a specific region and therefore be written in that region-specific (and even era-specific) language. If the evidence is relevant to an investigation, it will need to be translated in order to be studied effectively.

For example, Egyptian hieroglyphs form an ancient writing system that dates back to around 3300-3100 B.C. This system is composed of ‘iconic’ symbols with intricate meanings attached to each character, usually found engraved on limestone blocks. It is often studied by historians because the hieroglyphs depict and record historical events and stories of the time in which they were written, classifying them as primary sources and supporting their validity as evidence. In history, primary sources are extremely valuable as they are “first-hand accounts” of an event or subject. For the hieroglyphs to be decipherable, they must be translated. This demonstrates the need for translation within evidence, as it gives historians access to the time and atmosphere in which the script was written. To some extent, translation constructs a temporary reality of that time and atmosphere for the historians, immersing them in it and extending their understanding of the source and the contents of the writing. Here it is shown that translated evidence is valuable within history as a discipline.

To some degree, this is also applicable in religious studies. For example, biblical scriptures were originally written in three languages (Greek, Aramaic and Hebrew). Over the years, they have been translated into approximately 3,415 languages, allowing a plethora of communities to have access to the ancient holy scripts. It is thus established that translation has indisputable ties with religious studies. Moreover, these translated scriptures are arguably the main form of evidence within religious studies about the existence of a creator, or a God. With more of the documents being translated into many different languages, the knowledge within them is now accessible to many more individuals, experts and public alike. To the general public especially, the mere ability to access such information can in itself appeal to the reader’s ethos and pathos, as translation can introduce an illusory sense of trust which allows a bond to form between the evidence and the reader. This is particularly helpful to an evangelist's goal of spreading the word of God in hopes of converting listeners, with the Bible often used as evidence in their practices.

Limitations of Using Translation in Evidence
Translation itself has been argued to be a “subsidiary art”. In the context of evidence, therefore, translation is both subjective and qualitative, as it requires interpretation to an extent that is heavily reliant on the translator. Translation is not as simple as converting everything on a word-for-word basis; it refers not only to “interlingual exchanges” but also to “meaningful exchanges” that encapsulate the “full sense of the word”. The translator must therefore ensure that the original meaning behind each piece of information is not tainted by mistranslation. Mistranslation can occur because words in different languages may have more than one meaning, and without care it can cause damaging ripple effects across disciplines. Translation thus has the ability to obscure the original meaning and consequently affect the validity and value of a piece of evidence.

A limitation in translation occurs with the translation of idioms in historical texts. Idioms are figurative expressions with non-literal interpretations: the words in the phrase collectively carry an allegorical meaning, and a literal translation will not deliver the meaning of the expression. Idioms are present in many languages, their allegorical understanding is culture-specific, and they are studied within pragmatics to understand the cultural context around the expression. Literally translating an idiom distorts the expression and does not convey the evidence that the idiom actually expresses. This cultural gap, and the inability to translate an idiom directly into other languages, results in misinterpretations when the idiom is used as evidence in another language. Analysing inaccurate interpretations of evidence creates discrepancies in the information, data and evidence that analysts have, and hence the knowledge gained and shared from the evidence becomes unreliable.

Translators adopt techniques for translating idioms by finding equivalent phrases or expressions that best convey the allegorical meaning while ensuring that no information is omitted in the target language. However, finding an equivalent expression depends on the translator's perception, and may vary with the translator's prior knowledge, unconscious biases or preferences. The equivalent expression may take a similar sentence structure or a dissimilar one, may be paraphrased, or the idiom may be omitted from the translation entirely to prevent sharing inaccurate evidence. Additionally, there are cases in which the original text presents words that are not idioms, but can only be precisely understood as an idiom in the target language. This disparity in literal meanings, a result of cultural differences between languages, creates uncertainty in translation and leads to the sharing of unreliable evidence.

Market Research
Market research is the process of collecting and analysing data about the target market, consumer preferences and competitors. There are different types of market research, such as quantitative, qualitative, primary and secondary research. Facts are objective and describe reality as it is, but researchers in business normally collect evidence while their personality, perception of reality, context and limitations influence their understanding of the objective reality they are investigating. Evidence supports the points of view and perceptions of the people who collect it. Firms are interested in knowing the exact condition of the market they operate in, but have to formulate their strategies based on biased evidence of limited scope. Ideal market research would therefore aim to have as little bias as possible and consider as many factors as possible. Different methods of evidence accumulation appear to carry different kinds of bias.

Qualitative and Quantitative Evidence in Marketing
Qualitative research and analysis in business tends to be factor-analytic: it involves large amounts of qualitative data collected through interviews, observation, polls, focus groups and so on, and derives key factors from that data. Qualitative evidence can be affected by the researcher’s choice of key factors. Quantitative research measures quantitative data, makes estimates based on it and verifies assumptions. It involves statistical and mathematical methods of analysis, large samples and clearer methodology and structure. Qualitative analysis seeks to interpret and explain issues, looking at them from a broad perspective, while quantitative analysis allows researchers to generalise findings to larger populations, makes findings more precise, and shows measurable differences between factors, which facilitates decision-making. The complex nature and sheer amount of evidence that marketing research teams need to formulate successful strategies require a combination of quantitative and qualitative approaches. These are believed to be most efficient when one approach addresses questions raised by the other, and vice versa.

Quantitative evidence is perceived as more reliable because it can be measured exactly and repeatedly, while the interpretative nature of qualitative evidence makes it seem less reliable to many scholars and firms’ analysts. On the other hand, quantitative evidence simplifies reality, studies a very narrow and subjectively chosen segment of it, and may not account for the complexity of the interrelations between the different factors that influence the market. The objectivity of qualitative evidence is sometimes doubted, as the influence of the researcher’s personal circumstances and beliefs has been found to be stronger on qualitative evidence than on quantitative. But quantitative research is not exempt from bias either, in the choice of scope, the interpretation of statistics, and the limited response options of quantitative surveys. The data comparisons involved in reflexive data collection and analysis lessen the researcher bias of qualitative research.

Applications of the Approaches
The first attempts to gather data for systematic marketing analysis occurred in the first decades of the 20th century and involved quantitative questionnaires, basic non-representative interviews and some statistical methods. Some methods of social research used by political analysts became widespread in market research. Owing to the shortcomings of quantitative evidence, the need to better understand consumer preferences and choices was recognised, and qualitative research, such as focus groups, was added to the quantitative toolkit. In the 1950s, consumer behaviour became a focus of study, and various psychoanalytic concepts and methods of evidence collection were applied to marketing, as researchers realised that people do not necessarily buy what they say they want, and that their actions cannot always be predicted from questionnaire responses or focus-group findings. Market research was becoming broader and more interdisciplinary: insights not only from psychology but also from economics, sociology, anthropology and management science were helping researchers construct a fuller picture of market dynamics. David Ogilvy said that “The trouble with market research is that people don't think how they feel, they don't say what they think and they don't do what they say.” It was found that consumer tastes could be defined not only through direct questions (which did not always yield accurate and helpful answers) but also by understanding consumers' cultural backgrounds and the political and economic situations in their countries. Researchers tried to collect a multitude of evidence from diverse sources and by different methods, in order to eliminate researcher bias, confirmation bias, over-optimistic perceptions of their own or preferred firm’s prospects and professionalism, and other limitations of their own perceptions of the market.
With advancements in IT, market research began to rely on web analytics, Big Data, analysis of quantitative data that reflects the interests of different demographic groups, interests that they themselves aren't necessarily able or willing to formulate. Quantitative and qualitative evidence became strongly interconnected and are widely used in modern market research to increase the reliability of findings and success rates of products, marketing campaigns and strategies.

Evidence in Sustainability
Sustainability is a state of being in which society is able to satisfy the needs of the present generation without compromising future generations’ capability to do the same. With the emergence of the environmental movement in the 1970s, sustainability became a discipline of study and research in its own right. It also became part of a worldwide agenda to develop social and economic systems that are sustainable. Sustainability is broadly categorised into environmental, social and economic sustainability. Working towards sustainability is important, as it promotes the well-being of the community as a whole and ensures a distribution of well-being to each individual. Especially as we face the crisis of limited resources and unlimited wants (the concept of scarcity, where supply is limited while demand is high), sustainability is important in balancing the satisfaction of our needs against the survival and needs of future generations.

The United Nations' Sustainable Development Goals


Being sustainable is a state that a society is in, but the concept of sustainability itself is a perspective developed by social beings through their perception of what is needed to be sustainable, given current knowledge of what is important, such as the economy, the environment and society. Because what should be preserved for future generations is subjective to each society, the United Nations has developed the Sustainable Development Goals, which provide a standard measurement of sustainability. These goals act as criteria: if they are met in a particular community, this acts as evidence that the community is sustainable. The Sustainable Development Goals are a way to quantify a community’s progress, and they act as evidence for identifying a place as sustainable because they verify that what is considered the standard of sustainability is present there. The goals include, but are not limited to, tackling poverty, hunger and inequality, and promoting equality, economic growth, clean water, sanitation, climate action and natural life.

The UN ultimately aimed to attain ‘international cooperation’ to tackle global issues involving the economy, environment, society and cultures, while maintaining equality throughout. While this aim was the basis for the formation of the UN in 1945, research over the years has evolved perspectives on what would improve life globally. As knowledge spread about the importance of economic, social and environmental promotion for the overall enhancement of well-being, the UN recognised sustainable development as the approach to adopt in order to reach its aim. Meeting the sustainable development goals in each developing and developed country provides evidence that lives have been improved globally. The UN’s approach to what improves lives globally shifted with the introduction of new knowledge at a time when it was able to notice and experience a problem. The UN became aware that the existing system was failing to deliver improvement in lives: with growing technology and industry, economies grew, but environmental care suffered as pollution levels rose rapidly and the mortality of land and water animal life increased, which consequently harmed human life. For example, the increased burning of fossil fuels for energy releases greenhouse gases such as carbon dioxide and nitrous oxide, producing a greenhouse effect in which heat remains trapped in the atmosphere and temperatures rise. This drives extreme climate conditions, causes respiratory problems through air pollution, and affects the food supply. The environmental influence on humans and their lives was observable. This experience increased awareness of the problem, and the UN first recognised the fundamental concept of sustainable development in 1972 at the UN Conference on the Human Environment in Stockholm.
In 1992, the UN held a conference on environment and development in Rio de Janeiro, where representatives of 178 national governments agreed on an agenda for sustainable development; the 17 Sustainable Development Goals that grew out of this work were adopted in 2015. The standard of preventing environmental degradation and enhancing human life in the present and future has thus been formed through an official body, so that meeting the 17 development goals across countries verifies and provides evidence of progress in the enhancement of life and the improvement of survival.

Alternative Measurements of Sustainability
While a country's high score on a Sustainable Development Goals (SDG) index is evidence of sustainability and enhanced well-being, relying on only one tool or form of evidence to support this claim may not be reliable, as it may omit variables that were not accounted for. To address this, different tools for measuring sustainability can be used so that the evidence of sustainability is quantified holistically. These measures include, but are not limited to, the Sustainable Development Index (SDI), the Happy Planet Index (HPI) and the Index of Sustainable Economic Welfare (ISEW).

The SDI is built on the Human Development Index (HDI), an index that measures a country’s human development in terms of life expectancy, income per capita and education. The SDI builds on this by applying a ‘sufficiency threshold’ to income, highlighting the fundamental amount of income necessary for sustainability, and then dividing the result by a measure of ecological impact based on carbon dioxide emissions and ‘material footprint’, both measured per capita. A high SDI indicates that a country has strong sustainability, advancing human development while remaining ecologically efficient.
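As a concrete illustration, the HDI that the SDI builds on is the geometric mean of three normalised dimension indices. The sketch below follows the UNDP's published goalposts for each dimension; it is a minimal sketch of the aggregation only, and the SDI's own sufficiency threshold and ecological divisor are omitted.

```python
import math

def dimension_index(value, lo, hi):
    """Normalise a raw indicator to [0, 1] between the goalposts lo and hi."""
    return (value - lo) / (hi - lo)

def hdi(life_expectancy, mean_schooling, expected_schooling, gni_per_capita):
    """Human Development Index: the geometric mean of the health, education
    and income dimension indices. UNDP goalposts: life expectancy 20-85
    years, mean schooling 0-15 years, expected schooling 0-18 years,
    GNI per capita $100-$75,000 on a logarithmic scale."""
    health = dimension_index(life_expectancy, 20, 85)
    education = (dimension_index(mean_schooling, 0, 15)
                 + dimension_index(expected_schooling, 0, 18)) / 2
    income = dimension_index(math.log(gni_per_capita),
                             math.log(100), math.log(75000))
    return (health * education * income) ** (1 / 3)
```

Because the geometric mean penalises imbalance, a country cannot compensate for a very low score in one dimension with a high score in another, which is part of why the HDI (and the SDI built on it) is read as a composite rather than a simple average.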

The HPI takes an alternative approach to the HDI, and revolves around three dimensions: ‘experienced well-being’, ‘life expectancy’ and ‘ecological/environmental footprint’. A high HPI indicates that a country is succeeding at sustainability; causes of a low score may include overconsumption of resources and a depleted supply of resources from the government to citizens. The HPI promotes welfarism and commensurability, indicating that past actions lead to the present and present actions have consequences in the future, a form of consequentialism.
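In simplified form, the HPI can be read as ‘happy life years’ achieved per unit of ecological footprint. The sketch below is illustrative only: the 0-10 well-being scaling is an assumption of this sketch, and the published index also applies inequality and statistical adjustments that are omitted here.

```python
def happy_planet_index(wellbeing, life_expectancy, footprint):
    """Simplified HPI: experienced well-being (0-10 ladder scale, scaled
    to [0, 1]) multiplied by life expectancy, then divided by ecological
    footprint in global hectares per capita. Adjustments omitted."""
    happy_life_years = (wellbeing / 10) * life_expectancy
    return happy_life_years / footprint
```

The division makes the trade-off explicit: two countries with identical well-being and life expectancy score differently if one achieves them with a smaller ecological footprint.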

The Index of Sustainable Economic Welfare (ISEW) evolved from the Measure of Economic Welfare (MEW), which revolved around the value of Gross Domestic Product, leisure time, unpaid work and environmental damage. The ISEW goes beyond this by considering further harmful consequences of economic growth in a country, while excluding government spending on defence. A similar economic index of sustainability is the Index of Economic Well-being (IEW), which takes a utilitarian approach, where an action is considered ethical if it is for the good of more people, and includes dimensions such as ‘consumption, sustainable accumulation, inequality, social risk, and carbon dioxide emissions’. A utilitarian approach is a form of consequentialism: ultimately, the consequences of the actions serve the greater good of sustainable living, and this justifies decisions that conform to methods that increase the IEW (or other indexes).

The different indexes and goals allow us to quantify a country’s human development with respect to sustainability, and the indexes themselves are evidence of sustainable human development. Nations can track one index over time, and can also compare multiple indexes together, providing quantitative analysis across multiple parameters. Being able to compare different indexes with different parameters allows a country to see which variable in the system it can further improve to advance sustainable living. This collective information is not just evidence of the end goal of sustainability, but evidence of the process towards sustainability, as it quantifies which areas of the system need further work. This allows governments to analyse feasibility and to allocate resources effectively where the evidence indicates that a particular allocation will promote sustainable human development.

Evidence in Advertising
Advertisements are used by governments, corporations and individuals to share information or promote a ‘product, service or event’ publicly to an audience. Advertising emerged as a discipline in the 1920s, when the study of how advertising techniques can influence an audience began. Advertising is also studied within language disciplines to understand persuasive communication through the persuasive techniques of language. Advertising techniques were first introduced through tobacco advertisements by Edward Bernays, who is known as the founder of modern ‘Madison Avenue’ advertising. Many advertisements use persuasive techniques to persuade their target audience, and these techniques are perceived as evidence by the audience, providing them with reason and motive to purchase a product, experience a service or attend an event.

Techniques in Advertising
One persuasive technique often used is the ‘bandwagon’ technique, which suggests that everyone in the community has purchased a product and that the individual should too. This has a psychological origin: people feel a strong pull towards holding the same or similar perceptions to the people in their social group, a process that is part of social identity. Gustave Le Bon identified the concept of the ‘collective mind’ (and its primitive form, ‘the group mind’) in 1896, describing how being in a group has the power to influence and shape an individual’s perspectives and behaviour. An individual feels the importance of membership in a group, especially after developing relationships with its members, and hence follows the group's norms, behaviours and perspectives. This technique applies ‘pathos’, which appeals to the audience’s emotions.

Another persuasive technique is the ‘testimonial’, which includes celebrity endorsements and promotion by an expert in the relevant field, such as a dentist promoting a particular toothpaste for sensitive teeth. This technique is often integrated with the ‘plain folks’ technique, which tells the audience that a particular product has been designed especially for them and that it is ordinary and ‘normal’ to have it. These techniques apply ‘ethos’, which appeals to the audience through emphasis on the ‘credibility and authority’ of the endorser or expert.

Furthermore, ‘facts and figures’ is a technique that presents statistics, facts and diagrammatic or graphical representations of information. This technique applies ‘logos’, which appeals to the audience’s sense of logic. It pushes an individual towards the logical reasoning that, for example, a high number of people in the population have purchased the product and are satisfied with it.

The technique of ‘name-calling’ is one in which an advertisement specifically calls out another brand or product similar to the one being promoted, undermining it while promoting its own as better. This technique highlights the confidence of the product’s producer and persuades audiences to believe that the producer is confident for a reason, and that they have a direct comparison of similar products from which to choose the most efficient. In addition, ‘snob appeal’ is a technique that uses the reverse mechanism of the bandwagon effect: it intends to make the audience feel set apart from other people in society, helping them feel unique and develop a sense of individuality in identity, known as self-identity, separate from a social group.

Aristotle's Modes of Persuasion
Ethos, logos and pathos are Aristotle’s ‘three modes of persuasion’, rhetorical appeals that are often used in advertising to appeal to the audience in different ways. An audience follows, and is persuaded by, advertisements throughout daily life, and advertisements provide different forms of evidence to persuade individuals and make them believe. Evidence takes the form of, for example, the bandwagon effect, which represents evidence that the product is very useful to many people in the community, including the individual; facts and figures, which provide statistical evidence supporting a product’s popularity; or a testimonial, in which an expert with authority explains the benefits of a certain product. All the evidence that leads individuals to reason about their decisions rests on ethos, logos and pathos.

Ultimately, individuals are persuaded by this evidence because, as social beings, each individual's perception and interpretation of evidence is specific to them. If an advertisement can convince an individual that the evidence itself is an objective enough reason to follow what the advertisement is promoting, then it qualifies as evidence to that individual, as it is deemed reliable. However, while individuals analyse the evidence in advertisements in order to make objective and rational decisions, perceiving and interpreting that evidence is itself a subjective process.

Evidence in Psychology
Psychology is most aptly defined by William James, the USA’s leading psychologist in the late nineteenth century, as “a science of our mental phenomena or states of consciousness, such as thoughts, feelings, desires, volitions and so forth.” The emergence of psychology as a subject of study was a result of advancement in philosophical thought, which can be traced back to the ancient civilisations of China, Egypt, Greece, India and Persia. It was formally established as a discipline only in 1879, when Wilhelm Wundt, a German physiologist and philosopher, founded the first psychology laboratory in Leipzig, Germany. However, evidence had been collected since the advent of psychological thought. Formally, the strand of psychology that deals with evidence collection using experimental methods that attempt to validate theorised psychological processes is known as Experimental Psychology. There has been intense debate regarding the founder of Experimental Psychology. Some credit Gustav Theodor Fechner, a German psychologist, philosopher and physicist, with deriving the first methods of quantitative measurement of the mind in the nineteenth century; some believe that Ibn al-Haytham’s Kitab al-Manazir, published in the first half of the eleventh century, introduced new methods of measurement in psychology; and others still acknowledge Wilhelm Wundt as the founder, as his research yielded the most successful results and laid the groundwork for the discipline. Another form of evidence collection is qualitative inquiry. Although the first qualitative experiments can be traced back to Charles Darwin, an English naturalist, geologist and biologist, who attempted to investigate the connection between emotions and moral sense in the 1870s, and to Sigmund Freud in his study of psychopathology in the 1890s, qualitative inquiry was not recognised as a formal methodology until the 1980s.

Psychology as a whole has three key approaches to understanding behaviour – biological, cognitive and sociocultural. Each approach gives a different weight to qualitative and quantitative methods in establishing theories as objective.

Research Methods: Biological Approach
The biological approach dissects the effect of biological phenomena on psychology. The research methods employed in this branch of experimental psychology face a constant struggle with ecological validity and establishment of a causal relationship.

True Experiments
A simple experiment, also referred to as a ‘true experiment’, entails at least one variable being manipulated (independent variable) and at least one variable being measured (dependent variable), while all extraneous variables are constrained and controlled. Without defining the control variables, the experiment cannot establish a causal relationship between the independent and dependent variables, as the change could be attributed to the environment instead of the independent variable.
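The logic described above – randomly assigning participants to conditions, manipulating an independent variable and measuring a dependent one – can be sketched as a small simulation. This is a hypothetical illustration: the reaction-time scenario, sample sizes and effect size are invented assumptions, not drawn from any study discussed here.

```python
import random
import statistics

random.seed(42)  # fixed seed so the simulated experiment is reproducible

# Hypothetical participants: baseline reaction times in milliseconds
participants = [random.gauss(300, 20) for _ in range(100)]

# Random assignment spreads extraneous participant differences
# evenly across the two conditions
random.shuffle(participants)
control, treatment = participants[:50], participants[50:]

# Assume the manipulated independent variable speeds reactions by ~15 ms
treatment = [t - 15 for t in treatment]

# The dependent variable is the group mean reaction time; the difference
# between group means estimates the effect of the manipulation
effect = statistics.mean(control) - statistics.mean(treatment)
print(f"Observed effect: {effect:.1f} ms")
```

Because assignment is random, the observed effect approximates the true 15 ms manipulation plus sampling noise; without the random assignment, any pre-existing group difference would be confounded with the treatment.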

Although the tightly constrained nature of these experiments, along with the random assignment of ‘subjects of study’ to different treatment conditions, allows them to be easily replicated by others, improving the reliability of results, the results are seldom ecologically valid. The real-world effect of the biological environment on psychology is uncontrollable and multi-variable, unlike the hyper-controlled environment created during experiments. This creates a ‘real-world or lab’ dilemma: laboratory results cannot be generalised, while precise data about human behaviour cannot be collected in naturalistic settings.

To combat this, multiple researchers cite the similarities between experimentation in psychology and the natural sciences – “if two chemicals are going to react, they’ll react when combined in a test tube just as well as outside it.” They assert that evidence is objective in true experiments and should allow generalisability, as context should not matter. However, because humans are a self-reflecting, cognitive species, the context in which they perform tasks always matters. Recalling an answer during a school examination and as a participant in an experiment will result in different experiences and possibly differing outcomes. Because an inherent ability like this cannot be controlled in any experimental setting, psychology relies on artificial experiments that isolate behaviours or thinking patterns. This source of error variance cannot be curbed, and it limits the ecological validity of the results produced.

Therefore, due to this emphasis on quantitative-focused methodology, researchers achieve reliability and validity, but their hypotheses lack the generalisability needed to become objective theories.

Natural Experiments
Natural experiments entail the observation of the independent variable in a real-world environment, where no variables are controlled by the researcher, and the quantitative measurement of the dependent variable. They are best suited to scenarios where traditional experimental or observational designs may be inadequate and/or impractical. Observations, a research method used to collect sensory data pertaining to physical behaviour, and longitudinal studies, a research method that observes behavioural patterns over an extended period of time, are usually used to collect well-rounded data in natural experiments.

Longitudinal studies prove advantageous over true experiments due to their relatively generalised nature and long observation periods. Statistical data from these studies is imperative in the discipline of developmental psychology. However, they usually entail grouping of participants with similar characteristics (as chosen and defined by the researcher), leaving room for between-group and within-subject variation, which could potentially affect experiment findings.

Both methods are qualitative in nature and therefore not free from potential limitations such as selection biases and coincidental associations. In such scenarios, natural experiments prove instrumental in validating findings generated from naturalistic studies, as they allow for the statistical control of dependent variables. But because this research method is uncontrolled, its results gain ecological validity yet fail to objectively establish a causal relationship between the independent and dependent variables.

Therefore, due to their equal emphasis on qualitative and quantitative measurements, natural experiments prove ecologically valid but lack replicability and reliability.

Quasi-Experiments
In quasi-experiments, independent variables are identified rather than manipulated, extraneous variables are controlled to some extent, and the dependent variable is statistically controlled. Here, the independent variables are inherent characteristics of an individual, such as ethnicity, height or IQ.

When compared to natural and true experiments, a stark difference stands out in the selection and assignment of participants. Unlike natural experiments, quasi-experiments do not choose their participants randomly, but select those who already exhibit the independent variable. However, the two are similar in that observation and longitudinal studies are also employed as research methods, and in that there is minimal control over extraneous variables. Unlike in true experiments, participants cannot be assigned to random groups; rather, they are grouped according to the characteristic chosen and defined by the researcher. This design is usually used in scenarios where random assignment is impractical and/or impossible.

Due to this deliberate assignment of participants, it becomes difficult to compare test groups within the experiment, making results less ecologically valid and a correlation hard to establish. Moreover, although quasi-experiments cannot control most extraneous variables, as each participant’s history and therefore schema cannot be controlled or changed, researchers can isolate common factors like age, sex and mental health history. So, if the research uses a matched-pairs design, the findings can be compared with each other and tangible conclusions drawn.

However, this could also lead to individuals self-selecting, either because they hold themselves in higher esteem when it comes to a performative dependent variable, or because the researcher created pairs according to his/her perception of who is similar to whom. It is therefore extremely important to conduct pre-test exercises to gauge how similar the selected group members are to each other before beginning the actual experiment.

Correlations Research
Correlations research is a quantitative measurement of the correlation between any two variables. It is usually conducted after observations of a certain phenomenon lead to a hypothesis. To test this premature hypothesis, researchers conduct a quantitative analysis of the relationship between the two variables. Such studies work as an initial checkpoint for a hypothesis: if a strong correlation is established, further studies can narrow down the results and attempt to decipher a causal relationship through different research methods. Compared to true experiments, correlations studies collect a greater variety and depth of data. In cases where the independent variable cannot be manipulated due to ethical or practical concerns, and the scope of the investigation is too wide for natural or quasi-experiments, correlations research proves especially useful. Moreover, because this research method involves only two variables and makes no provision for controlling extraneous variables, it offers greater external validity, as none of the variables involved are controlled in any manner.
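As a sketch of the quantitative step in correlations research, the strength of a linear relationship between two variables is commonly summarised by Pearson’s correlation coefficient r. The paired data below (sleep hours and memory-test scores) are invented purely for illustration:

```python
import statistics

# Hypothetical paired observations: hours of sleep and a memory-test score
sleep = [5.0, 6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0]
score = [55, 60, 62, 68, 70, 74, 73, 78]

def pearson_r(x, y):
    """Pearson's r: the covariance of the two variables divided by the
    product of their standard deviations (here via sums of squared
    deviations, so the n-1 factors cancel)."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson_r(sleep, score)
print(f"r = {r:.2f}")  # a value close to +1 indicates a strong positive correlation
```

A strong r would justify follow-up studies, but, as the section notes, it establishes only association, never causation.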

Research Methods: Cognitive Approach
The cognitive approach dissects the processing of information in the human brain, as well as schema formation and the role of emotions. The reliability of cognitive processes such as thinking and memory is constantly questioned, researched and debated. The research methods used to collect evidence are mostly qualitative in nature.

Interviews & Questionnaires
Interviews, a qualitative research method, refer to a conversation, conducted in small groups or one-on-one, that aims to obtain information. Because they are precise and detailed, they are best suited to case-study experiments where the number of participants is small. Although a smaller number of participants leads to weaker conclusions, information collected in interviews provides valuable first-hand insight into human behaviour, as interviews allow non-verbal responses, e.g. body language and facial expressions, to be recorded. When conducted in small groups, the data collected can be more valuable, as responses from certain participants may elicit enriched responses when participants build on each other’s insights and perspectives. However, the moderator must be skilled, making sure every participant contributes equally, and participants are prone to conforming to majority beliefs instead of voicing their personal, differing stance. When conducted one-on-one, interviews can be structured or unstructured. Structured interviews have a precise set of questions, resulting in relatively objective data that is easier to process, and the interviewer does not need to be highly skilled. Unstructured interviews pose open-ended questions, eliciting subjective answers; they require highly skilled interviewers, as the quality of findings may depend on the interviewer’s ability to evoke valuable input from the participants.

Questionnaires, a quantitative research method, refer to a survey, either open-ended or close-ended, that also poses questions aiming to extract information and establish patterns. Open-ended questions prompt participants to answer with a context/direction of their choice. Close-ended questions provide multiple response options to a particular question. One limitation of this method is participant unreliability – responses may be distorted, either because of an inherent desire to please the researcher or because participants choose to falsify or exaggerate their answers.

Often, questionnaires are complemented with interviews in order to collect quantitative and qualitative sets of data from representative samples that can be used to rationalise the findings to the general population.

In qualitative research methods especially, the bias of each question, implicit or otherwise, can play a key role in shaping the outcome of the experiment. Researchers must therefore make sure that question construction (language and diction) along with the register of conversation (informal/formal) does not reflect the agenda of the experiment, as misleading questions and register can prompt the participant to answer in a certain way. Moreover, qualitative methods are always open to bias because they collect subjective data. It is nearly impossible for the researcher to analyse and interpret such data without any bias, as each human has a differing cultural, social and educational background that inevitably affects the way one approaches an experiment or data set. Therefore, evidence subject to unconscious human bias is less ecologically valid and needs to be backed by quantitative proof.

Field Experiments
Field experiments refer to psychological experiments conducted outside laboratory settings, in the real environment, to increase the applicability of results to a problem and to establish the effect of the independent variable on the dependent variable in the natural environment. Unlike field research, which emphasises observation and qualitative inquiry, field experiments entail the manipulation of the independent variable in order to establish a direct causal relationship between the independent and dependent variables.

Field experiments are designed using random assignment or a pre-post test design. Experiments using random assignment randomly divide the participants into groups, with some receiving the treatment and some a placebo. In a pre-post test design, the researcher measures changes in certain parameters before and after participants’ interaction with the experimental stimulus. The results of such experiments have greater generalisability than survey research, as the experimental design assesses the causal relationship.
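A minimal sketch of the pre-post logic described above: each participant serves as their own baseline, so the quantity of interest is the within-subject change after exposure to the stimulus. The anxiety-score scenario and all values below are invented for illustration:

```python
import statistics

# Hypothetical anxiety scores for the same eight participants,
# measured before and after exposure to the experimental stimulus
pre  = [14, 18, 11, 20, 16, 13, 17, 15]
post = [10, 15, 10, 16, 12, 11, 14, 12]

# In a pre-post test design the comparison is within subjects:
# the per-participant change, not a between-group difference
changes = [b - a for a, b in zip(pre, post)]
mean_change = statistics.mean(changes)
print(f"Mean change: {mean_change:+.2f}")  # prints Mean change: -3.00
```

Because the baseline is per participant, stable individual differences cancel out, though without a control group the change could still reflect factors other than the stimulus.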

In addition, field experiments potentially reduce participant unreliability – because the experiment is conducted in a natural environment the participant is familiar with, the desire to please the researcher by modifying responses is reduced. The real-life setting also exposes researchers to new behaviours or issues not present in a laboratory setting. However, multiple scientists argue that “causal knowledge requires controlled variation.” The important decisions concerning control variables and environments afforded by laboratory settings are difficult to replicate in natural environments. Overall, empirical methods such as field experiments and quantitative-focused methods such as laboratory experiments are not alternatives but complements that yield the best results when used together.

Research Methods: Sociocultural Approach
The sociocultural approach to psychology dissects the role that social and cultural structures play in shaping behaviour and thinking. A combination of quantitative and qualitative research methods is employed to better understand intergroup relations and cultures.

Cultures are studied either by an outsider or an insider, i.e. etically or emically respectively. The emic approach entails research conducted by an insider, who has had many years of first-hand experience as part of the culture being investigated. This equips them to easily identify or isolate variables of study and create suitable quantitative or qualitative markers for each; however, it also prevents them from being objective about the research.

The etic approach entails research conducted by an outsider, i.e. by someone who does not identify with the culture’s norms or practices. Malhotra et al. (1996) suggested that the etic approach assumes a universalist stance, which emphasises that psychological processes and ways of thinking are similar across all human cultures, whereas the emic approach assumes a relativist stance that emphasises psychological processes specific to cultural groups.

Quasi-Experiments
Quasi-experiments (see also 5.1.3) are most commonly conducted in real-life settings and involve an investigation into independent variables that are inherent characteristics of individuals, such as ethnicity, height or IQ. Random assignment and selection are not possible in this research method, as the variables chosen are innate.

Due to the natural setting of these experiments, extraneous variables are hard to control (hence the need for a pre-post test design), and quasi-experiments therefore have low potential replicability. Yet they are credited with relatively more control over extraneous variables, as researchers can differentiate between participants on the basis of age group, gender or race and ensure uniformity in such factors before commencing the experiment.

One limitation of psychological experiments, except where the participant is unaware of the study, is social desirability bias, where a participant may alter his/her/their responses in order to please or impress the researcher. Moreover, the mere awareness of being an object of study in an experiment may influence participants’ performance. The researcher must therefore explain the study, along with the tasks to be performed during it, using unbiased, uncomplicated and objective language in order to avoid misunderstandings or misdirection.

Specifically, in sociocultural psychology, quasi-experiments are used in cross-cultural research, where participants’ cultures represent the independent variable and their behaviour the dependent variable. Cross-cultural research is a study into the correlation between behaviour and one’s culture, whether certain behaviours are bound to a particular culture or cross-cultural (similar across multiple cultures).

In cross-cultural studies, one possible research design is to subject the ‘objects of study’ to differing experimental stimuli until equal levels of post-test results are achieved among groups that may initially have differed in those aspects. So, rather than investigating one static paradigm in multiple cultures, the investigator looks at several variations of that paradigm within one culture.

Therefore, quasi-experiments in sociocultural psychology allow for a combination of qualitative and quantitative research methods that lead to successful outcomes.

Correlation Research with Self-report Questionnaires
Correlation research is used to determine the relationship between two variables i.e. if they are co-variables. It is not used to identify a causal relationship as no variables involved are manipulated, and the change could be attributed to any of the multiple factors involved. However, correlation studies do provide an initial checkpoint for a hypothesis, and confirm whether further experimental methods are required to test the relationship.

The correlation studies method is used in scenarios where it is unethical, impractical or impossible to manipulate the independent variable – it simply aims to establish the connection between two variables of study. Moreover, it is often assumed that this research method involves only quantitative variables; however, it is important to note that correlation research always investigates variables that cannot be manipulated, irrespective of their nature (quantitative or qualitative).

In sociocultural psychology, correlation studies are mostly used in conjunction with self-report questionnaires, which help quantify the data collected. Self-report questionnaires are an uncomplicated, relatively quick and easily duplicable method of collecting data. They usually entail a large number of randomly selected participants, as this increases the applicability of the findings. They involve simply posing questions to participants/respondents and recording their answers; this can occur over the phone, in person, in a written format or over the Internet. Questions are usually about the respondents’ feelings or behaviour, measured quantitatively through various markers determined and defined by the researcher.
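As an illustrative sketch, quantifying self-report responses often amounts to mapping answer options onto a numeric scale defined by the researcher and aggregating per respondent. The 5-point scale, items and responses below are invented assumptions:

```python
# Hypothetical 5-point Likert markers defined by the researcher
SCALE = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
         "agree": 4, "strongly agree": 5}

# Invented responses from three participants to three questionnaire items
responses = [
    ["agree", "strongly agree", "neutral"],
    ["disagree", "neutral", "agree"],
    ["strongly agree", "agree", "agree"],
]

# Each respondent's score is the mean of their item scores,
# yielding one quantitative marker per respondent
scores = [sum(SCALE[answer] for answer in answers) / len(answers)
          for answers in responses]
print(scores)
```

The resulting per-respondent scores are the kind of quantitative markers that a correlation study could then relate to a second variable.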

The reliability of data collected using questionnaires depends on participants’ ability to understand the questions and answer honestly, but social desirability bias is prevalent. Although difficult to eradicate completely, it can be lessened by using objective language in questionnaires and by withholding from participants the specific details of the study.

For example, a study conducted by Berry et al. (2006) on the relationship between acculturation methods and the extent of adaptation and assimilation of immigrant youth used a correlations study involving a self-report questionnaire. Although the study discovered a correlation between acculturation methods that involved engagement with the host culture and successful integration, it could not conclude a causal relationship between the two. Moreover, data was collected through responses on a self-report questionnaire even though a large number of participants did not have high language proficiency.

Ethical Framework in Experimental Psychology
Ethics is a human-made construction of moral principles that aims to objectively guide individuals in how they conduct an activity. Specific to Psychology, ethics is a code of guidelines that psychologists follow when conducting experimental research involving participants. Gaining evidence of psychological phenomena is important to further advance knowledge of human behaviour; however, psychological research involves ‘living beings’ (inclusive of humans and animals) who hold their own rights to choose and decide. Ethics in psychology therefore aims to sustain these rights through ethical principles/guidelines when a researcher conducts a study, and when a researcher reports the study’s findings.

Ethical Framework of Conducting a Study
Researchers gain evidence of psychological phenomena through psychological experiments, and this evidence collection and processing occurs on the basis of an ethical framework of collectively agreed-upon moral principles that adhere to a committee’s perception of what is objectively ethical. When conducting a study, one of the most important ethical guidelines is to obtain the participant’s informed consent, wherein the decision to consent to participation should be made after being informed of the experiment’s aims, what participants will undergo/experience, and how their personal and experimental data will be used/applied. A further guideline is informing participants of their right to withdraw from participation at any time during the research study. Another is that all participants must be protected from ‘undue stress’ and from physical and mental harm, including both short-term and long-term effects of the research. If a study is likely to cause such harm, the researcher must refrain from conducting it, and if harm occurs through unforeseeable circumstances, the researcher must take measures to reverse or prevent long-term consequences, such as therapeutic intervention or psychological care.

In addition, a participant’s confidentiality and/or anonymity must be protected. A participant’s confidentiality is where the participant has provided their personal details to the researcher, and the researcher must not share any of this information as per a ‘research agreement’ which is often included in an informed consent form. A participant’s anonymity is where the participant does not provide any personal details, so even the researcher does not have the information of the individual that participated in the study. This ethical principle adheres to a participant’s right to privacy through protecting their information.

Furthermore, the minimisation of deception is an ethical principle followed throughout psychological research. At times, researchers ‘deceive’ participants by concealing the real aims of the study to prevent participation bias or social desirability bias. This deception should be minimised as far as feasible. When deception is used, participants should be informed of it after the study, under an additional ethical guideline called debriefing. Debriefing involves revealing the real aims of the study, explaining its implications, explaining how participants’ data and information will be used, and explaining the need for any deception, all after the study; researchers must also provide participants with an opportunity to withdraw from participation after being debriefed.

Ethical Framework of Reporting Findings
As a researcher reports their findings and results, one ethical guideline that must be followed is the avoidance of data fabrication. Researchers must not fabricate data or results, and if an error is identified in an already published article, they must take appropriate measures such as retracting the article or publishing an erratum listing and explaining the error. Additionally, researchers must not plagiarise. Furthermore, they must assign appropriate authorship through publication credit that reflects each author’s substantive contribution to the study. As researchers share their results, they must also share their process, including the research data, so that other researchers can replicate the study to verify the results. Research data includes raw data – a numerical matrix if the research is quantitative, or a transcript of the participants’ responses or behaviour if it is qualitative – as well as processed data, which includes any calculations of statistical tests to identify significance, the inferences, and the conclusion.

Furthermore, researchers must follow appropriate guidelines while sharing results involving ‘sensitive and personal’ information/data. Through a research study, psychologists may gain sensitive information such as genetic/biological details or a history of mental disorders. Through genetic research, information about ‘misattributed parentage’, ‘health status’ or the existence of an unknown family member might be discovered. Additionally, in technology-related studies where a participant’s brain is scanned, a researcher could identify a health issue such as a brain tumour. A study could also identify a mental disorder previously unknown to the participant, or a high risk of developing a disorder in the future. This may affect the participant’s self-esteem and the perception of the participant within a family dynamic. Psychologists must handle this information with sensitivity, monitor the psychological state of the participant when there is a high probability of detrimental consequences, and provide psychological counselling/therapy if needed.

Another aspect of reporting that researchers must consider is the implications of the results for society. A study may produce findings that defy a social norm or otherwise have a severe impact on society through the way individuals perceive knowledge. Psychologists should accordingly assess whether to publish their results in a journal specific to the scientific community, or in one targeted at a public audience. Moreover, there is always a possibility of inaccurate or imprecise data collection, or of bias throughout the study, affecting the results. To handle this, the ethical framework encourages psychologists to identify which information is strictly necessary to publish as evidence of their findings about a psychological phenomenon, to include and acknowledge all the limitations of the study, and to report the results with accuracy and precision in the publication. This ethical framework reinforces the evidence’s reliability by providing the information needed to replicate the study and verify the results.

Forensic Science
It is largely undisputed that forensic science is a discipline centered around evidence – particularly its production and analysis. However, the exact definition of forensic science is difficult to determine; even forensic scientists struggle to agree on one. Most commonly, forensic science is understood as the application of scientific approaches – particularly from Chemistry, Biology and Medicine – to produce evidence that aids criminal investigations, court cases and ultimately the legal justice system. However, many specialists in the field find this common definition lacking, in that it cannot encompass the full scope of forensic analysis. Perhaps there is no better example of this than the perspective of Paul Kirk and Edmond Locard, two of the first ever forensic scientists: they believed that the process of comparing and interpreting data, rather than the scientific methods used to extract such data, is what truly defines the study of forensic science. Forensic science also entails the use of collected evidence to deduce, explore and recreate probable scenarios while simultaneously eliminating improbable ones. Consolidating the above perspectives, it can be said that evidence in forensic science is produced by applying scientific methodology in investigations and then interpreting the information gained, attaching meaningful deductions and conclusions to the collected data.

Types of Evidence
Evidence collected for analysis includes any physical remnants from a crime, which can be of any size. It can be separated into transient evidence and tangible evidence. Transient evidence is legally classified as evidence that is easily perishable, changed or lost, making it extremely susceptible to losing evidentiary value. This includes biological evidence (such as body fluids, hair, nails and skin tissue) from which DNA may be analysed, latent print evidence (such as fingerprints and footprints), residue from weapons and firearms (such as gunshot residue) and trace evidence (such as organic material, glass fragments and fibres). Tangible evidence is the opposite: comparatively larger items that are not easily perishable. This can include weapons, firearms, bullets, cartridges, drugs, paraphernalia, documents and digital devices.

Criticisms on the Reliability of Evidence Gained
The evidence collected in Forensic Science is often heavily relied on in court and in the prosecution of criminals. It is an extremely crucial component of the criminal justice system, hence assurance of the reliability of the evidence produced is absolutely imperative. Currently, Forensic Science holds immense credibility both in the public’s perspective and in the courtroom. Law professors Michael Saks and David Faigman of Arizona State University critically explore the shortcomings of Forensic Science as a producer of evidence in the context of Law in their article “Failed Forensics: How Forensic Science Lost its Way and How it Might Yet Find it”.

The article targets a specific subset of Forensic Science – fields in the discipline that are paradoxically titled “nonscience forensic sciences” by the authors. These nonscience fields (or soft forensic sciences, as the National Institute of Justice refers to them) include the ‘identification sciences’, which “involve pattern matching in an effort to associate a crime scene mark or object with its source”. The identification sciences in question include well-known and heavily relied-on areas of forensic analysis and evidence such as fingerprints, handwriting, bitemarks, voiceprints, tool marks, firearms, tire prints, shoe prints and gunshot residue. Heavy criticism is expressed towards the apparent absence of scientific practices to ensure validity (such as hypothesis testing, falsification and empirical testing) and an overall lack of rigour in pursuing accuracy and objectivity in the processes of research and evidence production, which is very atypical of the Physical Science disciplines. Empirical evidence in Forensic Science has also been criticised for being methodologically weak. Rather, hypotheses and suppositions in the soft forensic sciences seem to have achieved validity purely through time, with no forms of legitimacy beyond proclaimed past successes and “experience”. Methodologies in individualisation are also largely hidden, destroying their reproducibility – something very important in the Physical Sciences for testing and verifying the methods by which evidence is reached. Research in the Natural Sciences also recognises and particularly values the identification and quantification of errors in the research process, so as to produce a conclusion that can be considered within a tangible range (due to margins of error) and evaluated for reliability and future improvement.

Once again, however, Forensic Science deviates from the norms of the Physical Sciences, with many claims of theoretical error rates of zero – something that would be appalling to scientists of other disciplines, as error is viewed as inherent and inevitable within the limitations of current technology. Putting aside assertions of zero theoretical error, the lack of a tangible margin of error in forensic practices also undermines its purpose as a means of producing evidence. Examples include false positives and false negatives, and the likelihood that two prints are alike, the probabilities of which are completely unknown, unlike in disciplines such as Medicine. Furthermore, statistical data points to errors in forensic testing as the second leading cause of false convictions, which is inconsistent with the claims of forensic scientists. Even DNA typing – which is credited with having revolutionised the criminal justice system and praised by Saks and Faigman as one of the most significant developments and truly scientific processes in Forensic Science – is not free from errors, which include, “among others, errors in collecting biological material, mislabelling of evidence, accidental contamination of samples, and, sometimes, intentional fraud.”

This analysis challenges whether the general definition of Forensic Science (applying scientific methods to legal issues) truly applies to the whole discipline. As demonstrated above, the Soft Forensic Sciences lack, or contradict, many of the beliefs and practices of the physical sciences. Though the term “science” appears in its legally recognised title, the field lacks origins in basic science and has largely developed independently of mainstream science. Consider a remark by a Utah Court of Appeals judge: “In essence, we have adopted a cultural assumption that a government representative’s assertion that a defendant’s fingerprint was found at a crime scene is an infallible fact, and not merely the examiner’s opinion.” This perspective draws attention to how, ultimately, individualisation and identification science rests on forensic scientists’ subjective judgements about the rarity and uniqueness of the evidence. Overall, it raises questions about the great credibility assigned to evidence produced by the Forensic Sciences, and about whether the lack of policing or questioning of forensic evidence in legal cases is justified.

Preventive Measures and Precautions to Ensure Reliability
When analyzing evidence, any transfer of DNA from first responders, the police or lab analysts contaminates the evidence and could falsely place an individual at the crime scene, implicating them in the crime. Because such a result is false, the evidence becomes unreliable and inadmissible in court. While measures are taken to prevent contamination, it is equally important to know how to interpret evidence in cases where the crime scene has been contaminated. Three types of exogenous DNA are irrelevant to the crime scene and arise through contamination. Even with preventive measures, analysis takes place in a laboratory staffed by humans, all of whom carry DNA. The first type of exogenous DNA is that of the lab analysts themselves; the precaution here is to retain a DNA profile of all laboratory personnel so that results can be cross-checked in case of contamination. The second arises from a reaction with the allelic ladder, an artificial sequence of alleles common in humans; even a small amount of allelic-ladder DNA can be amplified enough to appear as peaks. The precaution against allelic-ladder contamination is for analysts to check whether extra peaks are present in the pattern. The third source is other DNA samples in the laboratory: multiple samples are processed as a batch, and identifying a DNA sequence and genotype involves many steps in which each sample is handled repeatedly. While contaminated evidence may be unreliable, scientists and analysts can identify contamination through extra peaks in the amplification pattern, and are even capable of distinguishing a contaminating DNA sample from the real DNA profile. However, this is an intricate and time-consuming process.
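The extra-peak screen described above can be sketched in a few lines. The locus name, allele values and profiles below are invented for illustration – real casework uses validated software, many loci and peak-height data – but the logic of the check is the same:

```python
# Minimal sketch (invented profiles and locus name) of the contamination
# screen described above: flag alleles in an evidence sample that the
# expected contributor's profile cannot explain, then cross-check them
# against staff elimination profiles.

def extra_alleles(observed, reference):
    """Alleles observed at each locus that the reference profile cannot explain."""
    return {locus: sorted(set(peaks) - set(reference.get(locus, ())))
            for locus, peaks in observed.items()
            if set(peaks) - set(reference.get(locus, ()))}

# Hypothetical single-locus data: the evidence shows three peaks, but the
# suspect's reference profile has only two alleles at that locus.
evidence = {"D3S1358": [14, 16, 18]}
suspect  = {"D3S1358": [14, 16]}
analyst  = {"D3S1358": [18, 18]}   # staff elimination profile

unexplained = extra_alleles(evidence, suspect)
print(unexplained)  # {'D3S1358': [18]}

# Does a lab analyst's elimination profile account for the extra peak?
for locus, alleles in unexplained.items():
    if set(alleles) <= set(analyst.get(locus, ())):
        print(f"{locus}: extra peak(s) {alleles} match a staff profile "
              "-- possible laboratory contamination")
```

This is why laboratories retain elimination profiles of all personnel: an unexplained peak can be resolved quickly instead of invalidating the sample outright.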
Instead, for the evidence to remain reliable, strict preventive measures and precautions are taken, such as wearing gloves, refraining from touching surfaces or objects while wearing them, preparing a ‘negative control reaction’ for every reaction so that peak patterns can be compared, and sterilizing the workstation and instruments before every new sample. The reliability of forensic evidence rests on this standard of maintained protocols and contamination-prevention measures, which allows the criminal justice system to obtain objectively processed evidence that establishes whose DNA was found, so that investigators can then ask why that individual’s DNA was present at the crime scene.

Reliability of Those Using the Evidence
While forensic evidence is the result of scientific analysis, its reliability in implicating a suspect can depend on the individuals using the evidence to draw conclusions. In the initial stage of collection, it is humans – social beings – who decide what should or should not be collected and analyzed as evidence. This decision-making is perceptual and may therefore produce misleading or incomplete information. The system of evidence use likewise involves social beings deciding what to conclude from the evidence. They may hold a confirmation bias, the tendency to seek information that confirms or supports pre-existing beliefs: if they already suspect an individual of a crime, they tend to seek evidence or information confirming that the suspect is the assailant. While the scientific process of evidence analysis is unbiased, the people who conduct it are naturally biased social beings. Once an individual holds a piece of evidence, it is up to them to develop an explanation for it; this is a perceptual process, and while an explanation can be right or wrong, the explanation developed can differ from person to person. For evidence to be admissible in court, it must be ‘probative’, meaning that it can support one side of the case or the other. Evidence analysis yields data – an interpretation presenting factual findings – whereas evaluation of that interpretation concerns the significance of the findings in the specific context of the case. This evaluation assesses the probative value of the interpretation, determining whether it can support an explanation.
Evidence being ‘consistent’ with one explanation is not sufficient to provide valuable information in court. Rather, it is important to take all factors into consideration, including all collected and analyzed evidence, and to identify which of the possible explanations the evidence is consistent with has the highest probability of having actually happened.
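This weighing of competing explanations is often formalised in forensic statistics as a likelihood ratio. The sketch below, with invented numbers, shows the idea: the same evidence can be ‘consistent with’ two hypotheses while still supporting one far more strongly than the other:

```python
# A sketch of the likelihood-ratio framework used in forensic statistics.
# All probabilities below are invented for illustration.

def likelihood_ratio(p_e_given_h1, p_e_given_h2):
    """How much more probable the evidence is under H1 than under H2."""
    return p_e_given_h1 / p_e_given_h2

def posterior_odds(prior_odds, lr):
    """Bayes' rule in odds form: posterior odds = prior odds x LR."""
    return prior_odds * lr

# H1: the suspect left the trace; H2: an unrelated person did.
# Suppose the matching profile occurs in 1 in 10,000 people (hypothetical).
lr = likelihood_ratio(1.0, 1 / 10_000)
print(f"LR = {lr:g}")  # the evidence is 10,000x more probable under H1

# The LR only scales the prior odds; it does not by itself prove guilt.
print(f"posterior odds = {posterior_odds(prior_odds=0.01, lr=lr):g}")
```

Note that a large likelihood ratio quantifies how strongly the evidence favours one explanation, but the final assessment still depends on the prior odds set by all the other evidence in the case.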

Forensic Evidence in Wrongful Convictions
DNA fingerprinting was first used in a criminal investigation in 1986, in the case of the rapes and murders of Lynda Mann and Dawn Ashworth in Narborough, Leicestershire. Just a few years prior, in 1984, British geneticist Dr Alec Jeffreys had discovered that differences in genetic material could be used to accurately identify individuals. Richard Buckland, a 17-year-old man with learning difficulties who was in custody for the crimes, was exonerated when genetic fingerprinting proved that both crimes had been committed by the same man – but that man was not Buckland. An investigation by the Leicestershire Constabulary resulted in the arrest of Colin Pitchfork in 1987. With his confession, later confirmed by DNA evidence, Pitchfork was sentenced to life imprisonment the following year.

Since then, DNA evidence has been used extensively across the world to both convict and exonerate criminal suspects. To date, in the United States, 375 people, of whom 21 had been on death row, have been exonerated by DNA evidence. On average, each of the people exonerated had served 14 years in prison.

Owing to the proven innocence of those involved, DNA exonerations provide a unique insight into the factors behind wrongful convictions, such as ineffective assistance of legal counsel, racial biases, misapplication of forensic science and eyewitness misidentifications:
 * A 2010 study found that of the first 255 cases of DNA exoneration, 54 of the exonerees had filed claims of ineffective counsel. Of these 54 appeals, 44 were rejected by the courts; however, in 7 cases the courts established that the defendants had received inadequate legal assistance.
 * A May 2020 study of data on DNA exonerations of individuals convicted in the 1980s and 90s examined racial differences in wrongful convictions. The findings suggested that the wrongful conviction rate of black convicts for rape is over two and a half times higher than that of white convicts.
 * 45% of wrongful convictions in the United States involved the misapplication of forensic science. In 2015 the FBI revealed that the likelihood of a match between hair samples and suspects’ hair had been overstated by its microscopy experts in 95% of the reviewed cases.
 * Eyewitness misidentification has been found to be the primary factor in wrongful convictions, with many estimates finding that this is true in 69% of cases.

DNA exonerations have had a substantial influence on the criminal justice system in the United States. A 1996 study carried out by the National Institute of Justice (NIJ), Convicted by Juries, Exonerated by Science: Case Studies in the Use of DNA Evidence to Establish Innocence After Trial, revealed repeated instances of convict-requested DNA testing resulting in overturned convictions. Consequently, the Attorney General at the time, Janet Reno, tasked the NIJ with establishing a National Commission on the Future of DNA Evidence. One of the reports published by the Commission, Post-conviction DNA Testing: Recommendations for Handling Requests, initiated radical changes to many states’ legislation on post-conviction testing. Today, all fifty states have some form of legislation enabling access to post-conviction DNA testing. At the federal level, the passing of The Justice for All Act by Congress in 2004 permitted all federal inmates to request testing.

Furthermore, DNA exonerations, especially those of inmates on death row, have also contributed to a decrease in favourable public opinion of the death penalty. In some cases, the overturning of wrongful convictions has led states to temporarily suspend the death penalty, such as New Jersey in January 2006. Given that successful exonerations represent a minute fraction of all estimated wrongful convictions, many people believe it is likely a wrongfully convicted individual has already been executed.

An Introduction to the Case Study
In 1984, Jennifer Thompson-Cannino was sexually assaulted by a male assailant who broke into her home one night; later that same night, he broke into another woman’s home in the same neighborhood and assaulted her as well. A man named Ronald Cotton was convicted of both crimes by a jury. The police found that Cotton owned a torch similar to the one the assailant carried, and a piece of rubber found at the victim’s house matched the rubber of Cotton’s shoes. In addition, the first victim – an eyewitness to the events – identified Cotton as the assailant. However, after advances in DNA technology made it possible to test the semen collected after the rape, it was established that the semen did not belong to Cotton but to another inmate serving time for a similar charge. Cotton was exonerated of all charges.

The victim identified Cotton as the assailant when he was not. The proceedings leading to the arrest began when the victim gave details of what she remembered about the assailant, including his facial features, his clothes and any distinguishing features. After a composite sketch was produced and shared, tips came in from locals; one concerned Cotton, who had a prior record for breaking and entering, worked in the same neighborhood and resembled the person in the sketch. The police asked the victim to identify the assailant from a photo line-up of six individuals, one of whom was Cotton. The victim studied the line-up in detail for approximately five minutes, then picked out the photo of Cotton and stated that he was the assailant. The police made sure to ask whether she was certain of her choice, and she replied that she was. Cotton provided an alibi that was later found to be false; he claimed this was because he had confused which weekend it had been. The police then conducted an in-person line-up, and while Jennifer was torn between two people, she finally chose Cotton, after which the police told her that he was the same individual she had picked out of the photo line-up. At trial, in front of a jury of her peers, Jennifer was asked to identify the assailant, and she pointed at Cotton. Cotton was sentenced to life in North Carolina’s central prison. There, Cotton met a fellow inmate, Bobby Pool, noticed that Pool resembled the composite sketch, and overheard Pool confessing to other inmates that he had committed the crimes against Jennifer and the other victim. Cotton appealed his case in court, and Jennifer was asked whether Pool was the assailant; she did not recognize him and continued to testify that the assailant was Cotton.
Seven years after the appeal, the DNA from the semen was tested and proved that Pool, not Cotton, was indeed the assailant. Yet even while scientific evidence proved this fact, Jennifer could not believe that Pool was the assailant.

Causes for Unreliability of Evidence: A Psychological Examination
Jennifer’s memory was not reliable. Gary Wells studies eyewitness testimony and its reliability, and in Jennifer’s case – as in many other cases where innocent people are convicted of crimes they did not commit – one common factor is that the actual assailant was not in the original photo line-up. In Jennifer’s case, Pool was in neither the photo line-up nor the in-person identification line-up. In such situations, eyewitnesses assume that the culprit is one of the people in the line-up and disregard the possibility that someone outside the line-up could be the assailant. Research shows that eyewitnesses choose the individual who looks most similar to the person they remember. Additionally, Jennifer spent five minutes studying the photos side by side and comparing them, when in fact recognition memory is quite spontaneous and near-instant (approximately 10 to 15 seconds). This means her mind was doing something other than recognition during that time: she was not comparing each individual photo to the person in her memory, but comparing the photos with each other to find the one most similar to her memory. Additionally, after the in-person line-up, Jennifer was told that she had picked the same person as in the photo line-up; such reinforcement increases the perceived certainty of a memory. Elizabeth Loftus’s research showed that if an individual identifies a person as the culprit when the actual culprit is not present, and is later asked to identify again with the actual culprit present, the individual will continue to choose their original pick. Jennifer’s testimony and inability to recognize Pool are products of confirmation bias and congruence bias. These are cognitive biases that result from heuristics – mental shortcuts or simplified strategies that aid decision-making by making judgements efficient in time and choice.
Confirmation bias is the tendency to recall information in a way that confirms or supports preexisting cognitions, and congruence bias is the tendency to ‘over-rely’ on one’s original cognitions and hypotheses (the most congruent information) while neglecting alternative or new information and hypotheses, or acknowledging only information that positively supports the original cognitions. Loftus and Palmer’s research showed that if an individual receives new information after experiencing an event, this information merges with the original memory, alters it, and actually reconstructs the memory itself. When incorrect details are added to a memory, this is called confabulation, and it should be considered when evaluating the reliability of eyewitness testimony as evidence in law.
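The relative-judgement process described above – picking whoever in the line-up looks most like the remembered face, rather than asking whether anyone matches at all – can be illustrated with a small sketch. The similarity scores and names below are invented for demonstration:

```python
# Illustrative sketch (invented similarity scores) of relative judgement in
# line-ups: a witness who picks the member *most similar* to their memory
# will always pick someone, even when the culprit is absent.

def relative_judgement(similarities):
    """Pick the line-up member with the highest similarity to memory."""
    return max(similarities, key=similarities.get)

# Similarity of each line-up member to the witness's memory (hypothetical):
culprit_present = {"filler_1": 0.40, "culprit": 0.90, "filler_2": 0.35}
culprit_absent  = {"filler_1": 0.40, "lookalike": 0.70, "filler_2": 0.35}

print(relative_judgement(culprit_present))  # 'culprit'
print(relative_judgement(culprit_absent))   # 'lookalike' -- an innocent person

# An "absolute judgement" rule with a threshold can instead reject the line-up:
def absolute_judgement(similarities, threshold=0.85):
    best = relative_judgement(similarities)
    return best if similarities[best] >= threshold else None

print(absolute_judgement(culprit_absent))   # None -- no identification made
```

The sketch shows why instructing witnesses that the assailant ‘may or may not’ be present matters: it nudges them from the first rule towards the second.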

An Integrated Biological and Cognitive Examination
Flashbulb memories, as identified by Roger Brown and James Kulik in 1977, are vivid memories that develop when an individual experiences an element of surprise together with a high level of ‘personal consequentiality’ – emotional arousal tied to that particular situation. This means that an eyewitness to a crime will retain vivid memories of the event if they experienced surprise or shock and felt highly emotionally aroused. The mechanism by which an individual maintains flashbulb memories was identified as overt and covert rehearsal: in the former, the individual shares their experience and reiterates what they remember; in the latter, the individual privately recalls the memory. This rehearsal aids memory retention and long-term retrieval. However, it is known from Loftus, Miller and Burns’ research in 1978 that auditory or verbal information heard after an event may be integrated into the visual memory of the event, reconstructing the memory itself – a consequence of overt rehearsal. In 2007, Sharot et al. found that there is a neural basis for the production of flashbulb memories: the amygdala selectively activates when such a memory is recalled, and it is the individual’s personal involvement in the event that produces the vivid images that make the memory a flashbulb memory. They had also found, in 2004, that a neural basis is not the only requirement, as the amygdala activated even when the memory was neither surprising nor personal; they concluded that flashbulb memories are produced whenever an individual is emotionally affected – as a result of processing heavily emotional experiences – even if the memories are not necessarily ‘objectively accurate’.
Turning to the accuracy of flashbulb memories, Talarico and Rubin conducted a study after the 9/11 attacks to compare the accuracy of flashbulb memories with that of ordinary memories. Over an extended period, they collected qualitative information about participants’ memories of the 9/11 experience and of an everyday event. Their findings show that the accuracy of both the flashbulb and the ordinary memory declined over time, but vividness and perceived accuracy (belief in accuracy) declined only for the ordinary memory; emotional arousal correlated with perceived accuracy over time but did not correlate with the actual accuracy of the memory. This demonstrates that vivid flashbulb memories are strong only in their perceived accuracy, and can easily be influenced, manipulated or reconstructed. The surprise and personal consequentiality arising from the emotional arousal of an event do not indicate that the memory is reliable or accurate. In Jennifer’s case, she experienced the crime herself and had a high emotional response, yet even though her memories were vivid, they were not accurate.

Process of Evolving Towards Reliable Evidence
The eyewitness testimony evidence was unreliable in this case, but this does not mean it should be removed as admissible evidence in law. Instead, the procedure for collecting and obtaining this evidence should be enhanced. Forensic processing is a scientific method of evidence analysis, while a social science such as psychology guides the process of collecting and interpreting the evidence itself. Rather than deeming eyewitness testimony inadmissible, the process has been enhanced through official court regulations – a change that began with research from the 1960s on eyewitness testimony reliability. With the introduction of DNA identification technology and a DNA database, many innocent people were exonerated, which caused a shift in ideology in the courts and a shift in what was accepted as reliable: a social process produced change through a new, enhanced definition of reliability. Social scientists have found ways to obtain reliable eyewitness testimony as evidence. One set of reforms to assailant identification includes:
 * the eyewitness should see only one suspect at a time in a line-up;
 * the ‘filler’ individuals posing as suspects should share the assailant’s characteristics, such as age and race;
 * the eyewitness must be told that the actual assailant ‘may or may not’ be present in the line-up;
 * an unbiased instructor should present the suspects;
 * the eyewitness’s confidence in their identification should be recorded at the time; and
 * a ‘show-up’ of an apprehended suspect found shortly after the crime should not be presented to the eyewitness.
These measures allow the evidence to remain reliable and admissible in court. They were developed through a social process, and these conditions are now accepted by many courts and people as the standard way of producing, collecting and interpreting eyewitness testimony evidence.

The forensic DNA evidence in this case is what exonerated Cotton, 11 years later. DNA is admissible in court not only because it is scientific data, but because its collection and analysis process has been enhanced and developed through research. It took 11 years for the DNA to prove that Cotton was innocent, and this is not because the DNA was unavailable: while DNA fingerprinting was discovered in 1984 by Dr Alec Jeffreys, developing the understanding of it and studying its sequencing took time, and expanding the DNA database with convicts’ DNA samples was a time-consuming process as well. Furthermore, forensic evidence is admissible because it is reliable, and this reliability comes from the multiple precautions taken during evidence collection to prevent contamination of the DNA or of other forensic evidence such as fingerprints. When detectives inspect the crime scene, or when first responders arrive to triage the situation, without proper precautions their DNA could easily be deposited at the crime scene by a single sneeze or the loss of one strand of hair. Research by the National Criminal Justice Reference Service (NCJRS) identified the consequence of contaminated evidence – evidence having to be disregarded as inadmissible – and produced guidance for evidence collection and processing.
These guidelines include, but are not limited to: wearing gloves and foot coverings at the crime scene; refraining from speaking, sneezing or coughing near evidence; refraining from touching one’s face while collecting evidence; recording the DNA of the analysts and people on the scene for cross-referencing in case of crime scene contamination; air-drying the evidence before packaging it; using new, disposable instruments or tools when handling each new sample or specimen; and packaging and transporting the evidence in paper bags without any staple pins (in contrast to plastic bags, which can retain moisture, allowing bacteria to grow and compromising the reliability of the evidence). These guidelines also matter for preserving evidence: in the Cotton case, the DNA was preserved for 11 years before being admitted in court, and such forensic evidence has to be packaged and stored appropriately – in paper, in a non-moisture-retaining environment – to avoid degradation. The recognition by an official body that simply collecting forensic samples is not enough, and that the process of collection and analysis should follow guidelines allowing the evidence to remain reliable, is itself a social process of identifying a widespread problem and implementing a universal solution. These guidelines, developed through research into the causes and consequences of contamination, are now considered the standard process, providing objective grounds for concluding that following them in the collection and interpretation of evidence produces reliable forensic DNA testing that is admissible in court.

Importance of Reliable Evidence in Justice and Ethics
Justice is the expression of an ethical principle that all individuals must be treated fairly and equally, and applying justice is itself ethical. Justice has deep roots: it was once considered the most important of the four Cardinal Virtues recognized by Plato and later taken up in religious theology, and in the modern age it has come to be viewed as ‘the first virtue of social institutions’, as outlined by John Rawls. Yet when justice is not applied fairly there is a breach of moral principles – especially when the unfairness results from a problem in the system. Ethics is a construction of objective moral principles, ultimately formed by humankind; hence, if humans have limited knowledge while developing these principles, justice and ethics are consequently limited too. To overcome this limitation, as humans – who are ultimately social beings – produce and share more knowledge, systems should be adapted to maintain objectivity in ethical principles. If society experiences a problem, recognizing the problem is not sufficient to sustain an ethical community; applying solutions integrated from multiple areas of study is what enhances the system itself. For example, the knowledge of how eyewitness testimony and forensic DNA samples should be collected and processed in proven, reliable ways should be integrated into court regulations and law enforcement practice, upholding the ethical principle on which justice is based – especially now that we know implementing this change will continue the construction of objectivity and a standard of evidential reliability in justice.