Pam McKinney continuing live blogging from the second day of the LILAC conference. Maha Bali is based in Cairo, and is a professor of practice in teaching and learning at the American University in Cairo. She advocates for care, compassion and kindness when working with learners, champions taking a critical view of learning technologies, and has been integral to conversations about openness, inclusion and equality in higher education. Maha ran some Mentimeter polls to understand how the audience was feeling and what we had learned so far from the conference. We then took part in polls about AI: how familiar it is to us, and whether we see it as a seismic shift (3.8 out of 5), a fad (2.1 out of 5) or an opportunity (4 out of 5).
Maha then began her presentation and reminded us that AI shouldn't be assumed to be inevitable, beneficial or transformative: we need to acknowledge the risks of AI and stay vigilant. Maha described a visually impaired student who uses BeMyAI to generate textual descriptions of pictures, and who found it hugely useful for making information more accessible and for understanding visual content. AI is not a neutral technology, so it's important to think about social justice and take a critical stance towards it. Learners in different situations have different needs when it comes to being critical towards AI, e.g. university students vs school students. AI is very oriented towards white Western knowledge, and less good with knowledge from the global South. AI is a shock, so we need to reflect on its impacts and adapt creatively rather than resorting to knee-jerk reactions. We took part in a Mentimeter poll to share our metaphors of AI (see photo!), and it was clear that there was a lot of ambiguity in our views: people could see the benefits but also the potential challenges, e.g. "double-edged sword" and "a rose with thorns". Maha co-authored the paper "Assistant, Parrot, or Colonizing Loudspeaker?", which explores metaphors of AI and how they are used. The audience was then invited to analyse some metaphors using a scale developed in the paper, rating the extent to which each metaphor is critical or positive and the extent to which it is human or inanimate. Different metaphors give you a different impression of what AI can do, and of whether it is negative or positive.
Then Maha introduced the metaphor of cake for AI. There are situations where we accept that you don't bake from scratch: sometimes you buy the cake. It's similar with AI: what are you actually teaching, what do students need to be able to do from scratch, and what can they use AI for that isn't vital to their understanding? For example, it might be acceptable for students to use AI to create titles or to help with engineering modelling, where there is still critical human input. Assessment design is very important: if students are all using AI to write their assignments then maybe the assessment design needs to change.
Every AI is racist, sexist and ableist, and riddled with assumptions. Critical AI literacy helps people understand how AI works: not in detail, but the basics of how AI is trained from data. Students need to recognise inequalities and biases in the use of AI, and to understand that the output AI produces can be inaccurate. There are also ethical issues in the design of AI, including unethical employment practices, and training data is taken without the consent of the people who actually created it. AI often cites sources inaccurately, and it cannot judge the quality of the sources it draws on. Crafting good prompts is difficult, and students need to understand this as part of critical AI literacy, as is knowing when it's appropriate to use AI at all and how to adjust its output. Maha recommended the AI Pedagogy Project site as a source of activities to do with students.
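To make "trained from data" concrete, here is a toy sketch in Python (purely illustrative, and nothing like the scale or sophistication of a real chatbot): a tiny bigram model that learns which word follows which by counting, so whatever skew exists in the training text comes straight back out in the generated text. The miniature corpus is invented for illustration.

```python
import random
from collections import defaultdict, Counter

# Toy "training": count which word follows which in a tiny corpus.
# Real models are vastly more complex, but the principle holds:
# the model learns the statistics of whatever data it is fed.
corpus = (
    "the doctor said he would help . "
    "the doctor said he was busy . "
    "the nurse said she would help ."
).split()

bigrams = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    bigrams[current_word][next_word] += 1

def generate(word, steps=6):
    """Sample a continuation according to the learned counts."""
    out = [word]
    for _ in range(steps):
        followers = bigrams.get(word)
        if not followers:
            break
        word = random.choices(list(followers),
                              weights=list(followers.values()))[0]
        out.append(word)
    return " ".join(out)

# "doctor" is followed by "he" twice and "nurse" by "she" once,
# so generated text mirrors that skew: bias in, bias out.
print(generate("the"))
```

Scaled up to billions of words scraped from the web, the same mechanism reproduces the inequalities of whoever wrote that web.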
Maha spoke a little about some AI tools: ChatGPT is not the only one available! She asked Google Gemini to create a table of text and pictures about prominent Egyptians; it found information about Muhammad Ali, but provided a picture of Muhammad Ali the boxer instead of the Egyptian ruler.
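If you want to try this kind of spot-check yourself, here is a minimal sketch using Google's google-generativeai Python package; the model name and the prompt wording are assumptions for illustration, and you would need your own API key.

```python
# A minimal sketch, assuming the google-generativeai package is
# installed and GOOGLE_API_KEY is set; the model name is an assumption.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")

# The same kind of request Maha described. The output still needs a
# human check, e.g. which "Muhammad Ali" the tool has actually picked.
response = model.generate_content(
    "Create a table of prominent Egyptians with a one-line "
    "description of each. Note any names that are easily confused "
    "with other famous people."
)
print(response.text)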
One resource that Maha uses with students is Google's Quick, Draw!, which asks users to draw objects and then "learns" how people draw them. It exists in multiple languages, but cultural differences are a challenge: in the Western world a hospital is marked with a cross, whereas in the Arab world a hospital typically carries a crescent, and the tool's expectations are very Western-oriented.
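The drawings Quick, Draw! collects are openly published, and each record includes a country code, so you can check for yourself whose drawings dominate a category. A minimal sketch, assuming "hospital" is among the released categories (the URL follows Google's published pattern for the simplified dataset):

```python
# A minimal sketch: tally country codes in one Quick, Draw! category
# to see whose drawings dominate the data. Assumes "hospital" is among
# the released categories; swap in another word if it is not.
import json
from collections import Counter
from urllib.request import urlopen

URL = ("https://storage.googleapis.com/quickdraw_dataset/"
       "full/simplified/hospital.ndjson")

countries = Counter()
with urlopen(URL) as f:
    for i, line in enumerate(f):
        if i >= 20000:          # sample a slice; the files are large
            break
        record = json.loads(line)
        countries[record["countrycode"]] += 1

# If most drawings come from a handful of Western countries, the
# "learned" idea of a hospital will reflect those conventions.
for code, count in countries.most_common(10):
    print(code, count)
```

If the tally is dominated by a few Western countries, that goes a long way towards explaining why the tool expects a cross rather than a crescent.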
Maha recommended a paper that discusses the concept of "botshit", which it defines as follows: "This means chatbots can produce coherent sounding but inaccurate or fabricated content, referred to as 'hallucinations'. When humans use this untruthful content for tasks, it becomes what we call 'botshit'."
It's important to discuss when and why it is suitable to use AI. If the topic is unfamiliar and the answers are nuanced, it's really difficult to judge the quality of what the AI has produced. A good strategy is to encourage students to use AI and then have a critical discussion with them about that use.