Fri Oct 25 (City Center Campus, B1096): Artificial Intelligence
Pertti Huuskonen: Death to Black Boxes! Call for explainable AI
AI (Artificial Intelligence) systems are currently very popular. They enable computers to do things that we once thought only humans could do. For instance, in the medical diagnosis of cancer patients, machine learning systems can now identify cancers better than human doctors. AI has been developed since the 1950s, with relatively slow progress; it has proved very difficult to transfer “intelligence” into machines. Nevertheless, AI systems built over the years have achieved significant results in limited domains (e.g. document search, movie recommendations). Such systems are powerful, yet fall short in one key respect: they are black boxes. They don’t know what they know, and they cannot explain how they function.
People can usually identify concepts they are familiar with, explain how things work, or estimate the extent of their knowledge. Software systems (including AI) cannot. Much effort in AI research now focuses on building systems that can explain how they work, what data they use, and why their processing should be trusted. This talk outlines some key issues in the emerging field of Explainable AI, with examples of current research and of emerging possibilities for interacting with machines.
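To make the idea of an explanation more concrete, the following is a minimal sketch (not part of the talk) of one widely used post-hoc explainability technique, permutation feature importance: shuffle each input feature in turn and measure how much the model’s accuracy drops, which reveals which inputs the “black box” actually relies on. The dataset and library choices (scikit-learn’s breast cancer data and a random forest) are illustrative assumptions.

# Minimal sketch: permutation feature importance as a simple post-hoc explanation.
# Dataset and model choices are illustrative, not taken from the talk.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# A classification task reminiscent of the diagnosis example above.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record how much test accuracy degrades.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# The features whose shuffling hurts most are the ones the model relies on --
# a first step toward saying *what* a black box uses, if not *why* it decides as it does.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")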
Riku Roihankorpi: Hypernormality and the (Post)human Ethics of Bots
According to Lennard Davis’s seminal work The End of Normal (2013), the era of dismodern ethics – the era of modernity in the West that struggles to come to terms with a margin of disability its own politics has induced – still envisions ethics as the site of welcoming, sensitivity, and diversity. The problems arise with this politics of dismodernity, which is based on diagnoses (of what/whom to welcome, what to sensitize oneself to), inclusion by choice, and the concept of the normal.
Extending from Sir Francis Galton’s Darwin-based eugenics to the hybrid cultures of the present – where the body and the intelligence of technology have become unavoidable questions – the politics of dismodernity advocates a conception of the human that must remain immune to the abnormal body, and thus to the body of technology. This appears to be the case even with technologies that rely on human languages and communication – the learning machines and algorithmic assistants known as bots. Consequently, this immunity is likely to reduce the ethical questions concerning bots to political diagnoses of what counts as human, as relevant to humanity, as normal, and as hypernormal (a desired, new normal) in the digital lifeworld.
Mary Nurminen: Users of machine translation
Machine translation (MT) is one of the longest-established forms of artificial intelligence, and its use is very widespread: Google Translate alone had an estimated 500 million users in 2016. I use it, you use it, our parents and even some grandparents use it, mostly as a personal tool for getting a basic understanding of texts in languages we don’t understand.
However, there are also professional groups who use it on a regular basis. This presentation focuses on one of those groups: people who help inventors and research & development teams create and manage patents. Patent professionals have been reading other-language patents via raw, unedited MT for approximately a decade. It is used in their decision-making processes and is considered a fully legitimate method for understanding patents. This presentation describes how MT is used in decision-making and what factors make it workable in the patent environment. Many of these factors may lead us to a better understanding of how people use and comprehend raw MT in general.