Tuesday, October 10, 2023

Using Early Responses to Wikipedia and Google to Consider ChatGPT #ECIL2023

At the ECIL conference (Sheila here) I just attended a talk on Using Early Responses to Wikipedia and Google to Consider ChatGPT by David A. Hurley (University of New Mexico, USA). He identified a number of questions that are being asked about generative AI, such as: where is the authority? what should be the heuristics for assessment? should we teach prompt engineering? should we prohibit students from using AI? Hurley noted the similarity between these fears and questions and those raised when Google/the web and Wikipedia were introduced.
Hurley had done a search for commentary on Google and Wikipedia to identify how they were being talked about when they were introduced. For example, in 1999 Google was still being talked about as something you might not have heard of. Hurley noted the number of workshops, the advice about how to search it well, and the structural issues being raised (such as the way in which popular pages get brought to the top, and the implications of that). As I was giving frequent workshops about how to search search engines at that time, I got quite excited at this point.
Hurley noted the catastrophising that went on, and the debates about the issues around having one interface to multiple databases, and about the impact of users not having to make so much effort to search. Hurley identified librarians seeing critical evaluation of information as becoming less central to information literacy. Hurley referred to A Librarian's 2.0 Manifesto and had found references to Wikipedia being framed as evil, not just unreliable. Hurley identified three pedagogical approaches:
(1) Rejection (e.g. notices forbidding Wikipedia use; exercises designed to show Wikipedia/Google were not as good as books and journals);
(2) Reinforcing: incorporating Google/Wikipedia into existing ways of teaching (e.g. using poor websites to teach information evaluation);
(3) Revolutionising: e.g. using Wikipedia's talk pages to show a process of consensus; encouraging students to edit Wikipedia entries. There were also more questionable activities involving spreading misinformation.
Hurley noted that some of the early criticisms no longer apply: for example, Google now indexes PDFs etc. that were originally not accessible, and the criticisms that "Google doesn't give you an answer" and "Google doesn't know you like a librarian does" are no longer true (indeed the idea that Google doesn't know you raised laughter in the room). Differences from that earlier time include:
Google/Wikipedia were more democratising - anyone can edit and create - whereas ChatGPT is centralising.
On Wikipedia/Google you can usually see who has written something and when; in ChatGPT the origins and authors are submerged.
There is a different user context: whereas in the early days of Google etc. librarians were the search experts, now that has changed - librarians may not be the best prompt engineers.
There is a different environmental context, with more awareness of the impact of these tools.
Conclusions included: not confusing curiosity with a mandate to teach, and scope for collaborative exploration of these tools with students.

This was part of a session on algorithmic literacy: the abstracts for the session are here. Unfortunately I missed most of the first two talks, but you can see the informative abstracts from that link above: Algorithms, Digital Literacies and Democratic Practices: Perceptions of Academic Librarians by Maureen Constance Henninger and Hilary Yerbury, and Algorithmic Literacy of Polish Students in Social Sciences and Humanities by Łukasz Iwasiński and Magdalena Krawczyk. I will mention that in the latter talk the researchers were using Dogruel et al.'s (2022) scale, and they identified that some items on the algorithm literacy scale were unclear (e.g. "Humans are never involved when algorithms are used"), so they recommend reviewing these questions and adding some examples. The researchers also felt that an algorithm literacy scale should not be restricted to considering algorithm literacy on the internet. They thought that it would be better to take the Framework catalogue of digital competencies and add items to do with algorithm literacy: you identify an area of life, the benefit, and the competency needed to achieve that benefit.
Reference: Dogruel, L., Masur, P., & Joeckel, S. (2022). Development and validation of an algorithm literacy scale for internet users. Communication Methods and Measures, 16(2).
Photo by Sheila Webber: window in Krakow
