Wednesday, September 24, 2025

What is Even Real Anymore? – The Case for Personal Agency Being at the Forefront of What it Means to be Literate #ECIL2025

Hello, this is Pam McKinney live-blogging from day 3 of the ECIL conference. David White, Dean of Academic Strategy at the University of the Arts London, gave the keynote presentation this morning. The University of the Arts London is the largest creative arts-focused institution in Europe, and it is launching online master's courses. He chairs the university's AI group, which brings together people in various roles from across the university to discuss AI and its implications for the institution. He expressed ambivalence towards AI, and enjoys creating his own art as "an antidote to email". He put forward the idea that the mainstream large language models have stolen the web, and that any framework for IL must include the idea that it's OK to refuse to use AI on ethical grounds, as it is a technology of plagiarism: not just that people can use it to plagiarise, but that the whole basis of the technology is plagiaristic.

Framing is really important: how we conceptualise AI is as important as how we use it. One view is that being human is a finite thing, and as technology gradually gets better at things we thought only humans could do, being human gradually shrinks. But it's better to think of being human as constantly expanding and changing. AI forces us to reflect on the value of what we try to do, and in particular on what it means to learn: if it isn't hard work and doesn't require effort, then it probably isn't learning. Learning is hard! So if a technology appears that seems to make everything easy and effortless, then this is not learning. David is interested in IL as a "surface" of learning, and in what AI means for learning.

The nature of the discourse around AI is the same as it has been around other technologies. Humans love thinking about what it means to be human, and AI is just the latest thing that allows us to ask that question. We are fascinated by the boundary between living and not living (e.g. Frankenstein's monster, a golem, the Turing test, AI). AI is a technology of cultural production, and previous ones include language, printing, libraries, the web and Wikipedia. The conversation around AI is similar to the conversation we had about Wikipedia 20 years ago, so we've sort of been here before. There is often a moral panic around new technologies: every time we have a new technology of cultural production, we experience a rupture with defined stages, e.g. a "kill or save" polarisation in which AI will either kill us or save us. It is assumed that new technology will create jobs or make jobs disappear, and this decentres educational institutions. There are two schools of thought about literacy: the teleological, which is output-based, where everyone should become a useful, productive, skilled worker (i.e. AI is everywhere, we just need to use it); and the ontological, which is about developing the attributes of a person (AI must be critically engaged with).

David is interested in metaphors and myths. There is a proliferation of metaphors about AI, which demonstrates how difficult it is for us to explain exactly what we mean by AI. The digital environment has been described as a "place" where people connect with others, but AI is never a place; it doesn't bring us together, it separates us. Generally, we think of AI as a person rather than a place. Asking people to declare the metaphor they use for AI is an important first step to discussing the value of AI. There's a relief and enjoyment when we realise that AI is a bit stupid. It would be better if we could get large language models to stop referring to themselves as "I", as this encourages us to think of the AI as a person. Its apparent intelligence comes from its ability to hide its own mechanisms, e.g. the fact that it has been trained by humans. AI can be useful, but we need to see it as a machine. Some people think of AI as a "phantom", magical or sacred: it is omniscient, omnipresent, unknowable, powerful and has the promise of immortality. Let's discuss this before getting into the question of what AI can do. Free will is hard work, so it can be a relief to offload some of this thinking and learning onto AI, but really the human always has to be at the end of the process; we need to take control.

David posed us some questions: questions that he had asked ChatGPT, and also his team. He asked "what do you think the cockney phrase 'that's a right old bucket of frogs' means?", and the audience was able to spot the AI answer from the three options given on the screen.

David presented a list of alternative terms for "information literacy", and his favourite is "always think about it literacy". Rapanta et al. (2025) pose three new skills for the generative AI context: interrogation, adaptation and epistemological reasoning. Really the question for us is: which part of a larger process can AI be applied to? If everything you do is done in the AI, then that is not good. Figuring out when not to use AI is an important literacy. There is a hierarchy in the practice of academic writing, from purely skills-based approaches (e.g. referencing) through to making a contribution to a subject, and a hierarchy of information seeking, from discovering facts to creating meaning and new knowledge. So we need to think about at what level we apply AI. Using AI at the bottom of the skills hierarchy can mean that we lack the skills basis to move up the hierarchy: AI can disrupt skills building in a very unhelpful way.

There is a concept of "workslop": the AI has produced a lot of content, but it's all rubbish and takes a lot more work to make it make sense and to improve the quality. The expectation is that AI will help, but there is a risk of "poverty of meaning". There is also a problem of "hyponiscience", which is the false assumption that you have access to all knowledge: AIs do NOT have access to the sum of all knowledge. AI is good at taking a guess at something that doesn't exist; it is good at inference, but it is not good at reasoning.

AI allows us to be unintentionally productive: you put a prompt in and get loads of content back. In terms of IL and AI, we just need to be intentional and think about what we're doing.


Photo: Frogs in a toy shop window in Bamberg (Pam McKinney)
