Monday, March 25, 2024

#LILAC24 Keynote: Artificial intelligence panel discussion

Pam McKinney here, live-blogging from the first day of the LILAC conference in Leeds. The first keynote of the conference is a panel discussion about artificial intelligence with Erin Nephin, Sam Thomas, Josh Rodda, Masud Khokhar and Martin Wheatley.

Josh Rodda, a learning development librarian at the University of Nottingham, was asked to put together some guidance for students on how to use AI tools such as ChatGPT; this is now available as an online guide and forms the basis of a teaching session. His position is that of an "AI sceptic", as there are serious questions about the ethics of AI and how it can be used effectively. Martin Wheatley works as head of digital education and innovation for a group of Further Education colleges in Leeds, a role that includes the information and digital literacy offer for students. There is a lot of interest in AI from teachers and students, so Martin has done a lot of experimentation with AI to inform himself; AI is now a strategic priority for his organisation. Masud Khokhar is university librarian at the University of Leeds: he is an AI optimist, but acknowledges the challenges of using AI.
There is a need to debunk some of the misinformation shared about AI, and to identify best practice. Erin Nephin works here at Leeds Beckett University, and described their pragmatic approach to the use of generative AI: AI is part of our everyday life, and it's not always easy to identify AI in systems and services. They advocate a mindful approach, for example using AI ethically and using it to support scholarship rather than replace it. She also mentioned the environmental concerns around AI, the ethical concerns around how AI is trained, and the costs of AI.
The first audience question was "Is this a new issue for information literacy, or can our existing IL practices address AI?" Martin Wheatley said that AI use in FE is very sporadic, and that the use of AI depends on the various accrediting bodies for courses. He sees AI as an opportunity to ensure that IL teaching is taken up across the different years of courses, and to bring IL teaching earlier into the curriculum, particularly content around evaluating information.
Josh spoke about the difference in speed and quantity of AI-generated information, and the implicit bias of the white Western perspective of much of the data used to train AI. These biases exist in literature and academic sources too, so some of the same approaches can be used to address them: encouraging students to be critical of sources and to reflect on those biases. Masud Khokhar spoke about the "hype cycle" of AI adoption, and the role of information literacy in addressing AI. It's important to acknowledge that we are all learning together about AI, but we can apply the principles of information literacy to this new technology. We need to be more comfortable working in a "messy" world, and work with others to try to find a path.

Josh spoke about the value of AI as a conversation partner, and as a tool to support engagement with learning materials that might otherwise be impenetrable. Masud reflected on the cost of AI, and whether the library should subscribe to a generative AI service on behalf of students, as otherwise there is a problem of unequal access if some students can afford it and some can't. Large language models are not environmentally sustainable because of the power required to run them, but they have made it possible to understand what AI can do for us. The future is in small models that can run locally and are not so resource-intensive. Martin Wheatley spoke about the need for people to be digitally literate, and said this is important to address before thinking about AI literacy: for example, some teaching staff are still printing materials rather than using Google Classroom, and some students don't have computers at home.

A question was posed that focused on how to involve all staff at an institution in a conversation about AI and information/digital literacy. Martin Wheatley spoke about this issue in the context of very vocational courses, e.g. bricklaying: even on these courses it is important that students and staff can engage with digital information, and by extension AI, so you need to work up to AI by laying a digital literacy foundation. Masud spoke about the need to have open discussions about how technology of all kinds can improve teaching and improve the experience of staff and students. With any programme of change you need to bring staff through a process of change and transition; this is vital, and you need to recognise that staff are at very different levels of ability with technology. Josh spoke about the need to acknowledge disciplinary differences in the use of AI: a computer science lecturer might be incredibly comfortable with AI, while a law academic is concerned about the ethics and accuracy of information, and an English academic is sceptical about it altogether. AI development has to be tailored. Erin spoke about the need to remind people about the pervasive nature of AI in our lives, as a route through to supporting AI literacy.

The next question was about the potential of AI to disrupt traditional research practices. Masud spoke about the disruption to the process of research: how to improve the experience of researchers and improve the accessibility and reach of research. AI can really help make sense of unstructured data, e.g. transcribing audio data, providing metadata summaries, translating transcriptions into other languages, and connecting data with relevant images. This could be very powerful, but researchers need to be taught how to do it. This is not helped by structures in HE support services where IL experts are siloed in particular departments (e.g. the library).

Erin spoke about how AI is best used for "high-time, low-stakes" activities, e.g. asking AI to help you write a CV or prepare for job interviews. But at the same time it's important that students are able to develop summarising and writing skills themselves, rather than turning immediately to AI for academic work. Josh spoke about the need for people to be able to make informed decisions about whether to use AI in a particular situation, and the need to consider assessment design.

Masud finished by encouraging us to think about how AI can augment us: we need to adapt, and think carefully about how we can complement AI with human expertise. Martin spoke about the excitement of AI, and the need for IL professionals to be in this space and rethink how we push the critical thinking agenda in education.
