A.I. Event Reports

The project was able to support the attendance of a number of early career researchers at each workshop. The following reports were submitted by those attendees as a condition of their bursaries.

Olivia Belton

The British Academy-funded workshop Artificial Intelligence and the Posthumanities, held at Royal Holloway’s central London campus on 29 March 2019, brought together researchers from industry and academia to discuss how posthumanism can usefully inform our understanding of artificial intelligence. The organiser, Royal Holloway’s Danielle Sands, noted that public discourse around artificial intelligence tends towards the apocalyptic, and that the potential of new technology often provokes a socially conservative backlash. The goal of the workshop was to think critically about the challenges AI poses to society, but also to consider the positive social and cultural possibilities it brings.

The event was organised into panels, and the convivial atmosphere fostered collaborative dialogue among all the participants throughout the day. The first presenter was Andreas Vlachos, from the University of Cambridge, who discussed applications of artificially intelligent fact-checking. He argued that these programs must transparently justify their decision-making – that is, explain why they have classified something as true or false. A common theme of the workshop was the difficulty of widening artificially intelligent systems from an initial specific application to a more generalised purpose. For example, a political fact-checker cannot easily transfer its knowledge to healthcare-related content. However, fact-checking algorithms could potentially improve upon the speed of human fact-checkers, which is vitally important when disinformation spreads so quickly online. The second presenter was Chris Dyer, who works on teaching computers human language. While language programmes can impressively mimic human speech, they run into problems similar to those of algorithmic fact-checking: they need a huge amount of data in order to learn, and they struggle to move from specific tasks to more generalised intelligence.

The second panel focused on how posthumanism could be usefully applied to the arts and humanities. Joanna Zylinska, based at Goldsmiths, University of London, discussed the public interest in AI-created art, which troubles notions of human creativity and the individuality of artistic expression. Matt Hayler, a senior lecturer in Contemporary Literature at the University of Birmingham, discussed differing definitions of posthumanism and transhumanism. The terms used to describe posthumanist and transhumanist ideas are often poorly defined and overlapping, and Hayler set out a more precise lexicon for understanding posthuman ideas within popular discourses. Finally, Olga Goriunova, from Royal Holloway, presented on the concept of digital subjects. As humans become more digitally mediated via various networks, their digital profiles come to be regarded as synonymous with the person (at least under the logics of surveillance capitalism).

Following lunch, the workshop resumed with a consideration of the philosophy of the posthuman age. Constantine Sandis, from the University of Hertfordshire, discussed principles of AI intelligibility and our right to explanation. He argued that it is not sufficient for algorithms to be transparent, as this does little to elucidate their purposes and mechanisms to non-specialists. Instead, he argued that algorithms should be broadly intelligible in terms of their effects. After this, Henry Shevlin, a postdoctoral associate at the Leverhulme Centre for the Future of Intelligence, discussed his research into animal consciousness as a way to understand potential AI consciousness. Shevlin argued that perceptions of what counts as a ‘conscious’ being are in fact closely associated with general intelligence.

The final panel consisted of David Roden from the Open University and Daniel Allington, who works in Social and Cultural Artificial Intelligence at King’s College London. Roden discussed his work in speculative posthumanism, imagining beings that could alter themselves at will. He put forth the provocative argument that such a being would be radically alien to us and, because of its constant alteration, could not be understood by conventional theories of mind. Allington, who works on artificial intelligence but has a background in linguistics and history, detailed his project on the algorithmic recognition of hate speech. While these programmes are potentially very useful, they have serious limitations. For example, such algorithms can detect ‘bad words’ such as racial slurs but do not understand subtext or context, which can radically alter whether something is classified as hate speech. Allington stressed the need for computer scientists to collaborate with humanities scholars in order to improve these technologies.

Overall, the workshop fostered a rich interdisciplinary dialogue. We identified several recurring concerns, such as the limitations of artificial intelligence’s highly specialised abilities and lack of generality. The workshop clearly demonstrated the value of fostering communication between specialists in different disciplines, as many interesting connections were made between seemingly disparate topics. Furthermore, the workshop provoked a great deal of debate about how we can understand the human in relation to technology, and what risks and possibilities there are for a future where artificial intelligence plays a greater role in people’s lives.