A final liveblog from Pam McKinney at the LILAC Conference. This keynote was delivered by Elinor Carmi, a senior lecturer in Data Politics and Social Justice at City St George's, University of London (@elinorcarmi.bsky.social). Elinor began by reflecting on why we need to build data literacy in this turbulent world. She has looked at the underlying technologies behind modern digital services and identified a need to challenge the big technology companies. It's impossible to talk about these issues without talking about AI, and it is now becoming apparent that tech companies have been selling user data to train AI. In the UK, there has been a concerted effort from content providers, e.g. newspapers, authors and musicians, to challenge the use of their content to train AIs, and celebrities seem to have more influence in changing how the government deals with this issue. Elinor reflected on the privacy paradox: people say they care about their privacy but don't actually take action to protect their data. People often don't understand how cookies work, and so can't make informed decisions about protecting their data.
Elinor shared findings from a research project she undertook, which found that digital harms and abuses are seen as distant, complex and abstract, even though there have been some very high-profile cases of technology harming people, for example the Post Office scandal in the UK. When people experience privacy breaches, bullying and harassment as part of their online lives, this mobilises them to improve their data literacy. People worry about relatively unimportant things, such as the emergency alert test in the UK, while critical risks are ignored. We need to look at who is responsible for creating the problems, and who is responsible for solving them. Big tech companies have a great deal of power in the US, and platforms are reducing the protections available to users. Ultimately, the business model of online platforms is to sell you as the product and to sell advertising; they are motivated only by profit.
People should be able to negotiate with these platforms and should be better informed. There are few avenues to challenge big technology companies: a handful of organisations do this, but it takes a huge amount of time to take technology companies to court. Citizens can use mechanisms within platforms, such as reporting and blocking, to protect their rights. Elinor worked with Simeon Yates to develop a data citizenship model with three elements: "data doing", practical data skills; "data thinking", critical skills to analyse privacy and solve problems; and "data participation", how we can use data positively to improve our communities. People are generally unaware of just how widely their data is traded, so they can't make informed decisions about how to protect themselves.
A data and AI citizenship model focuses first on "learn": learning about what is happening with data at the moment and being aware of issues in your own country and in other countries. Secondly, "network": libraries are important community spaces where people can meet each other and develop networks of digital literacy. Thirdly, "act": for example, nurses have developed a patient and nurse bill of rights on how AI is used in healthcare, archivists are trying to rescue digital information that is being removed by the Trump administration, and actors have created guidelines on how AI should be used in their industry. So where do we go from here? Yes, the news is depressing, but we need to be hopeful and create our own new reality.
Governments must legislate to protect us, but they also need to enforce those laws. They need to encourage business models that are not based on surveillance capitalism, make sure that any new technology undergoes mandatory testing and community consultation, and provide non-digital options for citizens. The majority of people are not aware of the Information Commissioner's Office, the body that regulates data protection in the UK.
Big tech companies must provide transparent, user-friendly policies. The media needs to inform citizens and ask people in power hard questions. Fictional shows such as Black Mirror can help people recognise online harms and take action. NGOs need to raise awareness of harms and risks; for example, the Good Law Project has challenged advertising on Meta platforms. Society needs to think about new forms of data governance and actively participate in challenging big tech.