Spotlight on Science: Session 1

NERC Digital Gathering 23

Main Auditorium, Monday 10th July 2023, 12:00

Chair: Dr. Alex Bush

12:00

Usage of artificial intelligence tools to prebunk and debunk misinformation in social media

Nadeja Komendantova
IIASA
CDE International Expert Network member
Komendantova, N., Erokhin, D., Vicari, R.

Misinformation is misleading or inaccurate information that is disseminated with or without intent to deceive. It can spread through well-known companies, fake news websites, modified content, and parody content, and can include fake news, rumours, and inaccurate facts. Misinformation has been part of human reasoning for a very long time. It is influenced by social and cultural variables as well as individual factors, such as different cognitive and behavioural biases, including risk perceptions.

Tools utilising artificial intelligence (AI) have become an essential instrument in the fight against the spread of false information on social media platforms. The rapid spread of information online has made it increasingly difficult to distinguish accurate content from false information, but AI has considerable potential to identify, examine, and address this problem.

Natural Language Processing (NLP): NLP methods are used to analyse the textual content posted on social media networks. AI models can recognise patterns, context, and linguistic nuances that signal misleading or fraudulent information, and NLP algorithms can categorise articles or comments according to their reliability, helping to identify potential sources of false information.

Machine Learning (ML): ML systems can be trained to classify content and to distinguish between different kinds of misinformation. By analysing large datasets of valid and inaccurate information, ML models learn to spot trends and other indicators of misleading content. These models can then be used to flag dubious posts instantly or to forecast the likelihood of misinformation.

Sentiment analysis: Sentiment analysis evaluates the overall emotional tone and sentiment of social media messages. By examining the sentiment a piece of content conveys, AI systems can indicate whether it is deliberately false or intended to elicit strong emotions, making sentiment analysis a useful technique for spotting possible disinformation operations and deceitful strategies.

Analysis of images and videos: With the prevalence of fraudulent photos and videos, AI-based systems can examine visual content to spot alterations or manipulations. Advanced computer vision techniques can recognise deepfakes and other visual deceptions by detecting anomalies such as artificial lighting or inconsistent pixel patterns.
Network analysis: AI algorithms can examine how social media accounts are connected and how they interact with one another to find patterns that may point to coordinated disinformation efforts. By analysing network topologies, user behaviour, and information propagation patterns, AI can help identify and track the sources of misleading information and their dissemination techniques.

Fact-checking bots: Bots that automatically analyse and verify statements made in social media posts can assess the accuracy of content by comparing it against reliable sources and databases. Real-time feedback from fact-checking bots can help users spot false information before it spreads further.

User behaviour analysis: AI systems can identify possible spreaders of misinformation by analysing behaviour patterns such as engagement metrics and sharing preferences. By tracking account activity, they can help social media companies take appropriate action, such as lowering the visibility of accounts known for spreading misleading information or flagging dubious content.
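As a minimal sketch of the ML classification approach described above, the following toy example trains a bag-of-words Naive Bayes classifier on a handful of invented posts. All training texts, labels, and function names here are hypothetical illustrations, not part of the presented research, which covers these techniques at a much larger scale.

```python
# Toy bag-of-words Naive Bayes classifier for labelling posts as
# "misinfo" or "reliable". Training examples are invented for illustration.
import math
from collections import Counter, defaultdict

TRAIN = [
    ("miracle cure doctors hate this secret", "misinfo"),
    ("shocking truth they refuse to tell you", "misinfo"),
    ("click here the media is hiding everything", "misinfo"),
    ("study published in peer reviewed journal", "reliable"),
    ("official statistics released by the agency", "reliable"),
    ("researchers report findings with full data", "reliable"),
]

def train(examples):
    word_counts = defaultdict(Counter)   # label -> word frequencies
    label_counts = Counter()             # label -> number of examples
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.split())
    vocab = {w for counts in word_counts.values() for w in counts}
    return word_counts, label_counts, vocab

def classify(text, word_counts, label_counts, vocab):
    total = sum(label_counts.values())
    best_label, best_logprob = None, float("-inf")
    for label, n in label_counts.items():
        logprob = math.log(n / total)    # class prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.split():
            # Laplace smoothing so unseen words do not zero out the score
            logprob += math.log((word_counts[label][word] + 1) / denom)
        if logprob > best_logprob:
            best_label, best_logprob = label, logprob
    return best_label

wc, lc, vocab = train(TRAIN)
print(classify("shocking secret cure they hide", wc, lc, vocab))  # misinfo
print(classify("peer reviewed study with data", wc, lc, vocab))   # reliable
```

Real systems replace the toy corpus with large labelled datasets and far richer features, but the core idea is the same: learn word-level indicators of misleading content and score new posts against them.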

All these tools help in prebunking and debunking misinformation in social media. This presentation brings evidence on how such AI tools affect awareness, confidence, and levels of trust. The results from the ongoing research also show why it is important both to prebunk and to debunk misinformation. Prebunking and debunking are two distinct strategies for addressing and stopping the spread of false information: prebunking seeks to proactively immunise people against incorrect information before they encounter it, whereas debunking entails disproving erroneous claims after they have spread. Both approaches are essential for reducing the consequences of false information and fostering critical thinking.


12:15

Homes under the microscope: counting indoor airborne microfibres using a citizen science approach

Dr Kirsty Pringle
University of Edinburgh
CDE Expert Network member
1. Pringle, Kirsty; Software Sustainability Institute, Edinburgh Parallel Computing Centre, University of Edinburgh
2. Williams, Ben; University of the West of England
3. Lamb-Riddell, Kathryn; University of the West of England
4. Sardo, Margarida; University of the West of England
5. Hansen, Mark; University of the West of England

Airborne microfibres are a potentially important type of indoor air pollution, yet relatively little is known about their origin, composition or concentration.

They are of particular importance in the home environment, where we typically spend significant amounts of time, but it is difficult for researchers to access a broad range of home environments to take measurements.

In this poster we present results of a pilot participatory citizen science project in which 40 participants took airborne microfibre measurements using a simple, cheap passive sampling approach. The project was fairly challenging for participants, so we aimed to recruit people likely to be “highly engaged” with the topic. Of the 40 people recruited, 29 completed the project, a high rate of return for a challenging project.

The average number of microfibres collected was 1943 fibres m⁻² day⁻¹. Participants selected four rooms within their homes to measure; bedrooms had the highest average concentration (2861 fibres m⁻² day⁻¹), closely followed by bathrooms (2482 fibres m⁻² day⁻¹); other rooms were lower (kitchens 1225 fibres m⁻² day⁻¹ and living rooms 923 fibres m⁻² day⁻¹).
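The reported unit (fibres per m² per day) normalises a raw fibre count by sampler area and exposure time. A minimal sketch of that conversion follows; the sampler area, exposure period, and fibre count used here are hypothetical examples, not values from the study.

```python
# Sketch of the unit conversion behind a passive-sampling deposition rate:
# raw fibre count normalised to fibres per square metre per day.

def deposition_rate(fibre_count, area_m2, days):
    """Return the deposition rate in fibres per m^2 per day."""
    return fibre_count / (area_m2 * days)

# Hypothetical example: 40 fibres counted on a 49 cm^2 (0.0049 m^2)
# sample surface exposed for 7 days.
rate = deposition_rate(40, 0.0049, 7)
print(round(rate))  # 1166
```

Normalising this way lets samples with different surface areas and exposure times, as collected across many homes, be compared on a common scale.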


Graphical Abstract
