Wednesday, October 11, 2023

#ECIL2023 Artificial Intelligence and Information Literacy: Hazards and Opportunities

photo by Pam McKinney of Krakow

Michael Flierl from Ohio State University began by saying that ChatGPT has some impressive features and has only become more powerful. [This is a delayed-due-to-poor-wifi post from day 2 of ECIL] Flierl presented a summary of Microsoft's AutoGen, in which a network of AI agents interact with each other to answer a user query. ChatGPT can now interpret image information: it can analyse images provided by users and use that information to respond to their queries.
AI is a dual-use technology: the capacity to be used for good and ill is built into the technology itself. Some researchers used AI for drug discovery, training it to drive down drug toxicity. However, the same system could be repurposed to maximise toxicity, and overnight the AI generated thousands of very deadly compounds. There are also concerns that AI can be used to create convincing fake news and inundate social media platforms with machine-generated misinformation.
There is lots of interest in the interaction between the ACRL Framework and AI, but AI has the potential to fundamentally disrupt society, so perhaps we need instead to adapt IL to AI. There is a likelihood that AIs will achieve sentience before the end of this century, and we need to engage with some tough questions about the future of humanity. There are scenarios in which humans reach a point where we cannot control an AI. Should IL professionals advocate for the control of advanced AI? AI systems are already being used to streamline hiring practices in ways that perpetuate existing biases such as sexism and racism.
There is also a black box problem: even researchers do not understand in detail how these AI systems work. GPT-4 has a tendency to "hallucinate" and produce information that is clearly false. Social media has been shown to negatively influence teenagers' mental health, but this has been largely ignored by social media companies. State actors (e.g. China, but not only China) have been using AI to monitor individuals and create a surveillance state.
However, there is some good news. AI could transform learning, tailoring it for the best student outcomes. It is also possible for LIS professionals to advocate for new AI systems that are safe, reliable, transparent, controllable, explainable and predictable.
Photo by Pam McKinney