I will be liveblogging from the LILAC information literacy conference, which started today at my university, Sheffield University. Dr Pam McKinney may also do some liveblogging, but not as much as usual, as she was a key member of the organising committee and is super busy with that!
My first session is Links as Evidence, Ads as Clues: Undergraduate Source Evaluation Strategies, presented by Alyssa Russo and Lori Townsend (University of New Mexico, USA), also authored by Amy Jankowski and Stephanie Benee. The abstract is here. The slides are at https://bit.ly/ccg_lilac. As I'm liveblogging, these are my immediate impressions.
They explained that they were grappling with the problem that students wanted to plunge into content without paying attention to the "packaging" - what kind of information it is, e.g. article, report (the "container conundrum"). It was easier to identify information genres in pre-digital times, and gen-AI has intensified the container conundrum, with faked information getting more difficult to detect.
They wanted to find out what the students were doing, so undertook some research. Research questions included: how do students perceive information online; how do they decide what to trust; what do they consider when making evaluative decisions. The participants were 15 undergraduates aged 18-23. They were 60% female and the demographic mix was a reasonable match to the UNM population. They carried out semi-structured interviews: participants were prompted with possible search topics, then did Google searches and followed up websites from those, speaking aloud and responding to questions as they went along. The researchers did thematic analysis with 2 rounds of coding - some data was collected several years ago, before gen-AI, as life, COVID etc. intervened, and the analysis is not yet complete.
The presenters showed emerging findings. This included presenting some video clips, e.g. Evergreen saying why they thought an item on the National Geographic website was scholarly (that it was on a site with a reputation, that it had quotes, that it gave data points, that it gave direct links to .gov sites). Townsend noted that the presence of links was often treated like references, and participants saw .gov (or international equivalents) as being more trustworthy. The researchers wanted to know whether students could recognise genres - one student identified something as like a "yelp page for a dog park", which was pretty accurate.
Another example was a participant talking about how they judged sites by the type of adverts, e.g. if the adverts were off-topic or loud, that put them off, but if the ads were related to the content on the page, then that was seen as more trustworthy. Townsend said that participants reacted adversely if adverts were prominent, loud, pop-up, bait & switch etc. The participants were thinking about WHY the adverts were on that website. The participant Fir gave reasons for why they would trust the website's creator, based on the detailed information (e.g. personal details, a calendar) and the design quality: "I think she's actually trying to help people". Hemlock similarly said "Their main goal is making a difference rather than like drawing the eye".
Participants saw lived experience, with personal stories, as being more trustworthy. Evidence of purpose was therefore important. Talking about an equality website, looking at the list of board directors, participant Hemlock felt that a foundation that drew on personal experiences was more reliable. Hemlock had less trust in a site that emphasised data (bar charts etc.) rather than personal experience. Elm trusted a website that "provides an area for people who are interested to be able to share their stories" - this was seen as a community, which was less biased. This issue of bias detection came up, and "opinions" were seen as bad (e.g. one participant gave an example of not trusting posts from Twitter or Facebook; another thought that big national news sites were biased, trying to attract viewers). Another participant talked about why they trusted an article from the Guardian, but said they thought that complete impartiality wasn't possible.
Students reported that they did sometimes check up on things, and whether they did depended on the situation. For example, Cottonwood said they would check up on things if it was for work, because it might affect people, whereas if it was a university project they might not. However, Birch said that they would check up if it WAS for a university project. Russo noted how the students also brought their own experience (e.g. Birch recognised the Mayo Clinic as trustworthy, as her grandmother had been a patient at a Mayo clinic).
In terms of teaching implications, the presenters advocated starting with "what kind of thing is this?" and "what is it trying to do?" type questions. Knowing that "scholarly is good" does not help students tell whether something is scholarly or not. Acronyms like SIFT or CRAAP are rather too simplistic, and you have to acknowledge that source evaluation is complex. However, it can be fun working with students - as they found in this project.
Photo by Sheila Webber: registration desk before registration started