User talk:SteRos7/sandbox/Approaches to Knowledge/Seminar 1/History

I seem to have had trouble using hyperlinks to Wikipedia pages. For many of them I was told that a wiki page for the topic/person simply didn't exist, even though it does. I would appreciate any corrections anyone can make, as this might be a technical issue with my laptop. Xrcaatnp (discuss • contribs) 08:31, 16 October 2020 (UTC)
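A possible cause, in case it helps: on Wikibooks, a plain [[Page name]] link points to a local Wikibooks page of that title, not to Wikipedia, so it can be reported as non-existent even when the Wikipedia article exists. Linking to Wikipedia from a sister project needs the w: interwiki prefix. A minimal wikitext sketch (the target page is just an example taken from the discussion below):

[[w:Blending inheritance|blending inheritance]]

A link written this way renders normally but resolves on the English Wikipedia rather than on Wikibooks.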

History of Biology
I was wondering whether it might be useful to include an explanation of why Darwin’s theory of natural selection was not originally taken up. Perhaps there could be a short sentence on how, without an understanding of genetics, he was at the time unable to explain how variation was maintained within a population; under the then-dominant idea of blending inheritance, variation should have been diluted away with each generation. There could even simply be a link to the wiki-page on ‘blending inheritance’... What do you think? Xrcaatnp (discuss • contribs) 14:02, 17 October 2020 (UTC)

It's really interesting to contrast biology and epistemology (which I wrote about) because they show quite different approaches to historical knowledge: biology has more or less completely refuted much of its own historical knowledge, whereas historical knowledge is still very relevant in epistemology and in many other philosophical disciplines. Something I want to think about is what this has to do with the role and nature of evidence in these two disciplines. Pacific23 10:09, 20 October 2020 (UTC)

Artificial Intelligence: A trendy discipline
I think the section on public perception of AI is really interesting. AI systems are very complex, and often even their creators don't fully understand how they work, yet people often assume that specific behaviours can be explicitly programmed. This might be due to the popularity of Isaac Asimov's laws of robotics. Fantastical representations of AI can be very dangerous in the way they shape public perception. Fears brought about by films like The Terminator could make people reluctant to trust new technology such as self-driving cars and significantly slow progress in AI. On the other hand, certain aspects of AI might be thought to be safer than they actually are: an AI judge might seem more objective than a human judge, but AI can easily be just as biased as humans. A large part of AI research is dedicated to making AI safe, such as the work of OpenAI or the Centre for the Study of Existential Risk (CSER). Does AI research need to devote efforts to improving the public's perception and understanding of AI?