Machine Learning, Interpretability, and Drone Strikes
4:00 - 6:00 pm
Room 3401 (Lift no. 17-18), Academic Building

Abstract:

While automated weapons systems have become a mainstream topic in military ethics, I argue that novel problems concerning the interpretability of algorithms used in present and future systems raise understudied ethical issues, especially in the context of recent US military research, foreign policy, and extant operationalizations of 'terrorist'. Using contemporary US drone strike methodology as a case study, I argue for an account of algorithmic interpretability that is most appropriate for enhancing our ability to assign proportionate moral responsibility to the numerous actors involved in developing, approving, and utilizing automated weapons systems.


Biography:

Adrian K. Yee (PhD, Toronto) is a Research Assistant Professor in the Department of Philosophy at Lingnan University, co-director of the MA program 'Artificial Intelligence & the Future', and research fellow and treasurer at the Hong Kong Catastrophic Risk Centre. He works in the philosophy of artificial intelligence, economics, and political philosophy, and has published in Philosophy of Science, European Journal for Philosophy of Science, Studies in History & Philosophy of Science, and Journal of Economic Methodology on topics ranging from the use of physics models in economics to universal basic income studies, artificial intelligence in misinformation studies, and social well-being. He is currently working on ethical issues in automated weapons systems, attention economics, and poverty studies.

When
4:00 - 6:00 pm
Where
Room 3401 (Lift no. 17-18), Academic Building
Language
English
More Information

Zoom Link: https://hkust.zoom.us/j/92597835225?pwd=ZTF1Skd6YWJGeUk2Njg4clJFb1Zpdz09

Meeting ID: 925 9783 5225

Passcode: 178505

Speakers / Performers:
Prof. Adrian K. Yee
Lingnan University