Invited Lecture by a Visiting Scholar in Philosophy of Science - Florian Boge (TU Dortmund University)

Category: Colloquium | Date: 2024-08-28 11:20 | Views: 151
Attachment: [포스터]2024-Summer_Special_Lectures_on_Philosophy_of_Science__3_.pdf
Theme: Machine Learning, Black Box, and Understanding
Speaker: Prof. Dr. Florian Boge (TU Dortmund University)
Moderator: Hyundeuk Cheon (Seoul National University)

Program

1. Special Lectures
Date: Wednesday, August 28, 2024, 3:00-6:00 PM
Venue: Room 302, Sinyang Academic Information Center, College of Humanities, Seoul National University

Lecture 1. What is Special About Deep Learning Opacity?
Deep Learning systems, also called Deep Neural Networks (DNNs), are the state of the art in Artificial Intelligence (AI). It is well known that these systems are in some sense “black-boxed” or opaque, roughly meaning that it is not easy to understand details about their functioning on various levels and in various respects. However, similar things have long been known to be true of more traditional scientific devices, such as simulation models. Why, then, is there such a big fuss about Deep Learning opacity, and is there anything special about it? In this lecture, I will elaborate on an independent dimension of the opacity of DNNs, one unlike the opacity associated with computer simulations. As I will show, it is this second dimension that makes DNNs special devices, at least within scientific research.

Lecture 2. Re-Assessing Machine Cognition in the Age of Deep Learning
How seriously should we take the “I” in AI? Do ChatGPT and co. literally understand our prompts? This question has long puzzled philosophers and scientists alike, with verdicts ranging from outright enthusiasm to profound pessimism. In this lecture, I will re-address the issue from two vantage points. First, I will suggest that Searle’s classic “Chinese Room Argument” can be revived in the age of Deep Learning, but in ways quite different from those Searle himself envisioned. Combining a more careful approach to Deep Learning theory with a slight alteration of the original scenario, which I call “The Chinese Library”, I will show that, insofar as Searle’s arguments were applicable in the 1980s, they are still applicable today. Second, I will suggest a close connection between understanding and the possession of concepts and, based on evidence from the technical literature, argue that we should not assume that present-day DNNs have concepts – and hence, that they understand anything.

2. Philosophy of Science Mini-Workshop
Date: Thursday, August 29, 2024, 1:00-3:30 PM
Venue: Room 309, Building 7, College of Humanities, Seoul National University

Lecture 3. Understanding (and) Machine Learning's Black Box Explanation Problems
Practitioners in eXplainable Artificial Intelligence (XAI) view themselves as addressing a range of problems they call ‘black box explanation problems’ (Guidotti et al., 2018): problems related either to rendering a Machine Learning (ML) model transparent or to rendering its outputs transparent. Many (Páez, 2019; Langer et al., 2021; Zednik, 2021) have argued that standards of explanation in XAI vary with the stakeholder. Buchholz (2023) extends this idea into a means-ends approach: different stakeholders use different instruments of XAI to render different aspects of ML transparent, and with different goals in mind. In my talk, I shall argue for a more unified view within the context of scientific application. In particular, I suggest that we need to antecedently distinguish between two sets of aims in deploying XAI methods: proximate and ultimate aims. While the proximate aim of deploying XAI methods within the context of a scientific application may be to render either the model or its outputs understandable, the ultimate aim is to increase one’s understanding of a given subject matter.
Furthermore, building on the literature on objectual understanding (Elgin, 2017; Dellsén, 2019), and following a number of suggestions from other philosophers of science (Sullivan, 2019; Knüsel & Baumberger, 2020; Meskhidze, 2021; Räz & Beisbart, 2022), I ask whether the ultimate aim cannot also be pursued by means of ML but without any explanations.

Early-Career Researcher Presentations

Hosted by: Institute for Data Innovation in Science, Institute of Philosophy, and the ELSI Center of the AI Institute, Seoul National University