This is Sheila. It's the 2nd day of the ECIL conference in Bamberg, Germany, and Ute Schmid gave a keynote on AI Literacy – Why Basic Understanding of AI Methods is Relevant for Safe, Efficient, and Reflected Use of AI-Tools.
As usual, this is my impression of her talk as it was delivered, so mistakes are mine. To start with, I'd like to say that this was a wonderful example of a lecture from an informed expert who is really good at explaining complex things. This was evidenced when Schmid said she would have to skip over something to keep to time and the audience immediately made "oh no" noises indicating that they wanted more!
She started by defining AI Literacy as knowing and understanding core concepts, being able to evaluate AI applications, and reflecting on the effects of AI adoption in society and in your own setting. Schmid introduced the Dagstuhl triangle, which asks three questions: how does it work? how do I use it? what are the effects? Using the example of a car, which has been around for a long time, the basics of how cars work, and of their effects, have become common knowledge, but that isn't yet the case with AI. She said that a lot of workshops focus only on the "how do I use it?" perspective, which is narrow.
Schmid went on to talk about the origins of the name Artificial Intelligence (1956) as part of informatics, based on the assumption that many/all aspects of human intelligence can be formalized by algorithms and simulated by computer programmes.
She emphasised that most computer applications are not based on AI methods, though they may have AI add-ons (e.g. email using a spam filter). When you use AI you give up correctness and completeness. Why, then, are we using AI methods in critical areas like medicine? You use AI in situations where the problem cannot be solved by other types of computation. This leads to heuristic methods and approximation. Next there are the knowledge-based methods (which I know have been around for years!). Then, finally, for problems that cannot be described explicitly, there is machine learning, where explicit algorithms are replaced by black box models, which are created (I think inductively) from data.
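To make that last distinction concrete, here is a little sketch of my own (not from the talk; the word list and training examples are all invented) contrasting a knowledge-based spam rule, written explicitly by a human, with a tiny learned model induced from labelled data:

    from collections import Counter

    # Knowledge-based approach: an expert writes an explicit, inspectable rule.
    SPAM_WORDS = {"winner", "free", "prize"}  # hand-crafted word list (invented)

    def rule_based_is_spam(message: str) -> bool:
        words = set(message.lower().split())
        return len(words & SPAM_WORDS) >= 2  # the rule itself is the knowledge

    # Machine learning approach: no explicit rule; a model is induced from data.
    def train_word_scores(examples):
        """Count how often each word appears in spam vs. non-spam messages."""
        spam_counts, ham_counts = Counter(), Counter()
        for message, is_spam in examples:
            (spam_counts if is_spam else ham_counts).update(message.lower().split())
        return spam_counts, ham_counts

    def learned_is_spam(message, spam_counts, ham_counts):
        """Score words by how much more often they occurred in spam."""
        score = sum(spam_counts[w] - ham_counts[w] for w in message.lower().split())
        return score > 0  # approximate: correctness is no longer guaranteed

    # Tiny invented training set (real systems learn from millions of examples)
    examples = [("free prize winner click now", True),
                ("meeting agenda attached", False),
                ("you are a winner claim your free prize", True),
                ("lunch tomorrow?", False)]
    spam_counts, ham_counts = train_word_scores(examples)
    print(learned_is_spam("free prize inside", spam_counts, ham_counts))  # True

The point of the sketch is that in the second approach nobody wrote down what makes a message spam; the behaviour was induced from the data, so it can be wrong in ways nobody anticipated.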
Schmid pointed out that learned models can be useful even if not always correct, e.g. if a request for a "kitten on red sofa" produces 9 images that are incorrect and 1 that is correct (so you use the 1). She went on to describe the difference between discriminative and generative AI (I won't try to reproduce this!), with the latter bringing the breakthrough to today's generative AI tools. Schmid discussed the "anthropomorphization trap". She said that AI systems are not "self learning" in the way that is often talked about: the big million-item training sets are labeled by humans, and humans are still used in bulk to label data to improve AI systems. Also, AI systems that are good in one domain are not automatically going to be good in another. There is also the misapprehension that AI systems understand what we say (named the Eliza effect, after the first functioning chatbot).
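For anyone who, like me, finds the discriminative/generative distinction slippery, here is a bare-bones sketch of my own (the data and numbers are invented; this is not Schmid's example): a discriminative model only learns a boundary between classes, while a generative model learns the distribution of the data itself, and so can also produce new examples:

    import random
    import statistics

    # Invented 1-D data: one measurable feature for two classes.
    cats = [random.gauss(2.0, 0.5) for _ in range(100)]
    dogs = [random.gauss(5.0, 0.5) for _ in range(100)]

    # Discriminative view: learn just the boundary that separates the classes.
    boundary = (statistics.mean(cats) + statistics.mean(dogs)) / 2
    def classify(x):
        return "cat" if x < boundary else "dog"

    # Generative view: model each class's distribution itself...
    cat_mu, cat_sigma = statistics.mean(cats), statistics.stdev(cats)
    # ...which means we can also *generate* a new, never-seen example.
    new_cat_like_value = random.gauss(cat_mu, cat_sigma)

    print(classify(3.0), classify(5.2))  # -> cat dog
    print(round(new_cat_like_value, 2))  # a freshly "generated" cat-like value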
Schmid then talked about how amazing human learning is, and how you can learn from very few examples. Although the problems of this "inductive bias" (e.g. over-generalisation) occur in humans as well as in AI systems, humans are normally quicker at spotting and correcting them. She explained how neural networks work and how they developed - I won't try and reproduce her excellent explanation, but I might insert the slide here later. Schmid explained how the earlier neural networks were good with tabular data but not with images, and this is where convolutional approaches improved things. She used a diagram from this source https://medium.com/thedeephub/convolutional-neural-networks-a-comprehensive-guide-5cc0b5eae175
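Since I'm not reproducing the slide, here is the smallest sketch of the core idea I can manage (my own illustration; the weight values are invented, whereas in a real network they would be learned from data): a network is just layers of weighted sums, each pushed through a non-linearity.

    import math

    def sigmoid(x):
        # A squashing non-linearity: turns any weighted sum into a value in (0, 1)
        return 1.0 / (1.0 + math.exp(-x))

    def layer(inputs, weights, biases):
        """One layer: each neuron computes a weighted sum of all the inputs."""
        return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
                for ws, b in zip(weights, biases)]

    # A tiny 2-input network with one hidden layer of 2 neurons. The numbers
    # here are made up; "learning" means adjusting them to fit training data.
    hidden = layer([0.5, -1.0], weights=[[0.8, 0.2], [-0.4, 0.9]], biases=[0.0, 0.1])
    output = layer(hidden, weights=[[1.5, -2.0]], biases=[-0.3])
    print(output)  # a one-element list holding a probability-like score

Convolutional networks build on the same idea, but instead of connecting every input to every neuron they slide small shared filters across an image, which is what made them work so much better on pictures.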
Schmid turned to generative AI and understanding. She mentioned the "Winograd Challenge": what does "it" refer to in the sentence "The trophy doesn't fit into the brown suitcase because it is too small"? For a human this is easy, because of our commonsense knowledge, whereas without intervention a gen AI is likely to give an incorrect answer. The word "intelligence" is used because we originally had to fall back on terms that apply to humans, but it is something that leads to anthropomorphism and to thinking that if a system is "intelligent" it must have this very high cognitive ability (which it doesn't). She mentioned the problem of loading a dishwasher, which has still not been solved in robotics.
Schmid reflected on whether there can be artificial general intelligence. Although AI systems can simulate aspects of intelligence, general intelligence would also require intentionality, critical ability, self-awareness, metacognition and "qualia", which Schmid felt AI systems could not have.
Schmid talked about Explainable AI (XAI) with an example of how an image could get misclassified. XAI focuses on explaining the output of AI, paying attention to who wants the explanation and why they want it. She talked a little about trustworthy AI (a quick google found this article https://link.springer.com/chapter/10.1007/978-3-031-45304-5_10 ) and issues around the combination of AI and humans. She cited a systematic review https://www.nature.com/articles/s41562-024-02024-1 which highlights the dangers of assuming that combining human and AI is the best solution. Deskilling leads to loss of critical abilities and means that we could lose the ability to judge the quality of our own content as well as the ability to judge the quality of AI output. Schmid saw grave dangers ahead if we continue down a path of overtrusting gen AI and pointed to this article https://arxiv.org/abs/2506.08872 ("Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task")
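To give a flavour of what an XAI method can look like, here is a deliberately toy occlusion-style sketch of my own (not from the talk; the "model" is a stand-in I invented): hide one part of the input at a time and see how much the model's output drops - the biggest drops mark what the model actually relied on, which is just what you need when diagnosing a misclassification.

    def black_box_score(pixels):
        """Stand-in for an opaque image classifier returning a 'cat-ness' score."""
        # Invented: this pretend model mostly reacts to pixels 2 and 3.
        weights = [0.05, 0.1, 0.9, 0.8, 0.05]
        return sum(w * p for w, p in zip(weights, pixels))

    def occlusion_importance(pixels):
        """Mask one input at a time and record how much the score drops."""
        base = black_box_score(pixels)
        importance = []
        for i in range(len(pixels)):
            occluded = pixels[:i] + [0.0] + pixels[i + 1:]
            importance.append(base - black_box_score(occluded))
        return importance

    image = [1.0, 1.0, 1.0, 1.0, 1.0]  # a toy 5-"pixel" image
    print(occlusion_importance(image))
    # -> [0.05, 0.1, 0.9, 0.8, 0.05]: pixels 2 and 3 drove the decision, so if
    #    the label is wrong you can see what the model was (wrongly) looking at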
She called for new interfaces, supporting understanding with tailored feedback, supporting resilience against disinformation (e.g. to uncover manipulative argumentative structures), and giving "targeted support for human control and oversight and calibrated trust". From the questions, I will pick out her advice that "you should not teach at the tool level alone" (e.g. just how to use ChatGPT, how to do prompts) but need all 3 parts of the Dagstuhl triangle. Altogether I found this talk a high quality learning experience (much better than a talk from an AI!)