Wednesday, April 01, 2026

Matteo Bergamini keynote #LILAC26

The keynote for day 3 of the LILAC information literacy conference is given by Matteo Bergamini, CEO of Shout Out UK (SOUK). This is Sheila liveblogging it. I'll give the usual caveat that these are my immediate impressions whilst liveblogging.
He started by describing the work of SOUK, which "provides impartial political and media literacy training and campaigns focused on democratic engagement and combatting disinformation online". He then talked about the labels used: misinformation (false information shared by mistake, which can arise from e.g. not knowing that "verified" accounts may have just bought that verification), disinformation (false information shared deliberately, using the example of information shared during the Southport riots), malinformation (distorting actual information for harmful purposes, e.g. taking it out of context) and false information (e.g. AI-generated text, images and videos).
Bergamini showed us some images of people and asked us to guess which were real and which AI, the results of which showed that often it was really difficult to tell. He went on to talk about how algorithms are designed to keep us engaged, feeding us what we might like in order to keep hold of us, often showing increasingly emotive and extreme material, leading to desensitisation and to seeing such material as normal. Social media can thus become a shield against ideas and feelings which are different from our own. Bergamini talked about how this process is exploited by religious extremism and incel/manosphere ideologies, feeding on people's insecurities and reinforcing negativity. This includes fascination with extreme violence, which formerly wasn't seen as a form of ideology. He also highlighted that the proportion of terrorism-related arrests in the UK involving children was increasing (from 4% in 2019 to 20% in 2024). Bergamini said that there is not mass radicalisation, but radicalisation can happen very quickly: it used to happen over months, but now it might happen within 24 hours, so the time period for potentially intervening is much shorter.
Bergamini presented the solution as being media literacy - the ability to use, understand and create. [Obviously I would say you could also mention information literacy at this point!] He identified the cross-curricular initiative in Finland for media literacy, which does seem to have an impact. In France he highlighted a programme with 30 coordination centres for media literacy, and he also mentioned the Welsh digital competency framework. He noted the curriculum review in England, which does include requirements for media literacy education (though, I would add, sadly not as a subject in itself; Bergamini also mentioned the lack of specific teacher-training resources so far).
Then he highlighted the Dismiss initiative and the other work of SOUK itself. He explained the ideas of prebunking (aiming to prevent the spread of harmful information before the event) and debunking (work after the event, e.g. fact checking). SOUK focuses on prebunking, in particular technique-based prebunking (looking at the different techniques that are used to spread disinformation etc.). An example of teaching with prebunking is showing a video of someone giving health misinformation and then discussing the techniques being used in the video. SOUK has resources to support educators on its website https://www.shoutoutuk.org/ including a podcast and lesson plans.
One question from the audience was about whether SOUK had material for working with adults as well as young people, and the answer was yes. Another issue that came up in the questions was developing scepticism without people becoming cynical and distrusting everything. Things that got discussed included examining the different actors producing the information, and working on this continuously - it isn't a one-off thing. One useful tip about spotting AI fakes was looking at the context of AI-generated material rather than looking for "tells" that it was fake (e.g. not how realistic is this picture of a person, but rather how likely is it that they would be doing this in this setting). Another question was about having positive examples as well as negative ones [I was thinking here that highlighting information creators who were open about how they checked and reflected on their practice would be useful].
Photo by Sheila Webber: cherry blossom behind Sheffield train station tram stop, March 2026
