
Monday, October 30, 2023

Global AI Initiatives: From Theory to Practice @asist_org #ASIST23

[AI-generated picture of a woman's head]

I'm liveblogging Global AI Initiatives: From Theory to Practice, a session at the ASIS&T conference that I'm currently attending. These are my immediate impressions of the session.
Andrew Cox (University of Sheffield, UK) presented on AI in Libraries, starting by identifying that there are a lot of strategy documents concerning AI, so AI has to be acknowledged as a strategic priority for libraries. There are recurrent themes such as regulation, ethical application and developing human capital. He referenced a paper that identified typical national strategic approaches: some countries focus on control, some more on what the market decides, and others on development led by the state. At the moment AI is not mentioned much in library or institutional strategies (Cox referred to a recent study of the UK and China).
However there are many applications of AI in libraries: examples are using AI to create metadata for large collections, in writing documents, and in promoting AI literacy. This raises the question of whether libraries have AI capability. You need material, human and intangible resources (the latter being things like willingness to take risks and the ability to change).
National and research libraries tend to have these capabilities/resources, whereas it is more doubtful with other types of libraries. Three ways that libraries can contribute are as follows. First, national library projects can be beacons of responsible AI (if they undertake required steps such as deciding priorities, respecting the rights of those represented in the collections, and sharing the code and training materials they produce). Second, librarians can contribute to institutional capability (applying their knowledge and skills in organising and finding data). Third, libraries can contribute to developing AI literacy: some frameworks are being developed, but AI literacy can be hard to define and achieve (because AI can be hidden, is constantly changing, etc.). (My thought at this is that information literacy frameworks should be a starting point!)

Jesse Dinneen (Humboldt-Universität zu Berlin, Germany) talked about Global AI initiatives: from theory to practice: European practice. He noted that GLAMR [Galleries, Libraries, Archives, Museums and Records] institutions have been quick to leverage AI (digitisation, data analysis etc.) and European universities have also been quick to respond (developing guidelines for use in academia, incorporating AI into courses etc.). This leads to twin challenges: issues of ethics, and issues of regulation. There has not been much GLAMR-specific research on AI risks. Whereas numerous guidelines etc. are emerging, they mostly haven't been tested, so it isn't clear which principles, guidelines etc. are effective or feasible. Bringing together stakeholders/experts is a good start, but still hasn't addressed what works in practice.
Dinneen identified that since AI issues are about technology, information and people, LIS and GLAMR should be well positioned to help. In terms of AI regulation, there are different initiatives in different countries. There is an EU AI Act in process, derived from seven ethical principles drawn up by the European Commission. The Act, for example, distinguishes between different risk levels of applications. He spoke about some problems in the industry, such as assuming people's literacy (e.g. the ability to engage with user manuals and instructions) and throwing AI into many products. In the EU there should be opportunities for research with the documentation that emerges from the EU Act, which could be used as a guide for those outside the EU.

Dania Bilal (University of Tennessee-Knoxville, USA) talked about iSchool leaders' vision of Information Science curricula in the age of AI. She was talking about members of the iSchools Association. She examined the 54 North American iSchools for AI and related content, searching for the occurrence of mentions of AI (or related topics such as machine learning). 39% did not offer courses (i.e. a module or class) related to AI, while 9 iSchools had AI certificates and concentration programmes. As a next step, iSchool leaders will be asked why AI has not been integrated on a larger scale, what vision they have for the topic, and how well they are preparing future professionals.

George Hope Chidziwisano (University of Tennessee-Knoxville, USA) talked about AI initiatives in Africa. He highlighted biases in AI systems, such as difficulty in understanding bilingual speakers. Human-centred approaches have been proposed, stressing that diverse populations must be involved in AI development. One example of bias was that ChatGPT only included Egypt as representative of African information. Chidziwisano gave the example of asking ChatGPT about Nsanje in Malawi, and pointed out the major inaccuracies in the "information" provided. He noted that data could be collected using the resources and infrastructures actually used in African countries (rather than only the tools and infrastructures used in Western countries). Chidziwisano gave an example of using audio data from chickens in Malawi to predict poultry disease in other countries, noting that it was important to collect data from different countries to develop a more generalisable model.

Finally, Vandana Singh (University of Tennessee-Knoxville, USA) talked about AI in the technology industry. She started with a Deloitte survey about companies engaging with AI: almost 100% were doing something, and 33% of tech, media and communications companies had "active AI solutions". (I think she was referring to this report.) Challenges include employees' understanding of AI, and AI ethics. Singh then talked about what ethics means in the AI industry (at a basic level, not doing harm to people), with challenges such as the opacity of AI systems, bias, manipulation of behaviour, and privacy.
These challenges are not easy to fix; for example, there are differing definitions of fairness and bias. Singh talked about developments such as the group DAIR and the issues it is concerned with. She noted that there are numerous companies engaging with these issues, giving some examples, and that these are evolving very rapidly, so it is important to engage with them in discussion. She also mentioned specific initiatives such as StereoSet and this article. Singh also talked about transparency of AI, and identified a role for iSchool educators in teaching about transparent AI.
Following this there were interesting discussions in groups about various aspects of AI and information science/libraries.
Image by Sheila Webber using Midjourney AI. It took me a while to stop it showing me very spooky wired female heads in response to the prompt Artificial Intelligence, Information Science. In the end I specified "in the style of Gwen John" so it lost a bit of the spookiness.
