CURRENT PROJECTS
TOXICITY AND DIGITAL LITERACY

Operationalization of Toxicity and Media Literacy:
Patterns of Toxic Symbology in Memes
This research direction seeks to understand the complex and dynamic phenomenon of online toxicity. It explores, both technically and theoretically, how concepts such as toxicity, hatefulness, harmfulness, ethics, moderation, and extremism are operationalized. By combining data-driven approaches with insights from media studies, I aim to develop a deeper understanding of how AI models learn to recognize and mitigate toxic content.
One key area of focus is the development of software that can detect and explain toxic symbology on online platforms. This means not only identifying toxic content, but also providing insights into why it is toxic. By examining cases of toxic symbology in memes, such as the extremist, racist, and otherwise hateful memes that spread on platforms like 4chan and Reddit, we can better understand how toxicity is perpetuated and how it can be mitigated. This research also involves experimental computer science work, including the development of Retrieval-Augmented Generation (RAG) systems for toxicity detection and explanation.
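As a rough illustration of the RAG approach mentioned above, the sketch below retrieves descriptions of known symbols from a small knowledge base and assembles them into a prompt for an explanation model. Everything here is an illustrative stand-in rather than the actual research system: the knowledge-base entries are toy examples, bag-of-words cosine similarity stands in for a learned embedding model, and the final prompt would be passed to an LLM.

```python
from collections import Counter
import math

# Toy knowledge base of symbol descriptions. A real system would index a
# curated database of documented hate symbols with dense embeddings.
KNOWLEDGE_BASE = [
    "Pepe the Frog: a cartoon frog appropriated by some extremist communities.",
    "Triple parentheses: an antisemitic marker used to single out names.",
    "1488: a numeric code combining two white-supremacist references.",
]

def bow(text):
    """Bag-of-words vector (a crude stand-in for a learned embedding)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    """Return the k knowledge-base entries most similar to the query."""
    q = bow(query)
    ranked = sorted(KNOWLEDGE_BASE, key=lambda d: cosine(q, bow(d)), reverse=True)
    return ranked[:k]

def build_prompt(meme_text):
    """Assemble retrieved context into a generation prompt.

    In a full RAG pipeline, this prompt would be sent to an LLM, which
    grounds its explanation in the retrieved symbol descriptions.
    """
    context = "\n".join(retrieve(meme_text, k=2))
    return (f"Context on known symbols:\n{context}\n\n"
            f"Meme text: {meme_text}\n"
            f"Explain whether and why this content is toxic.")
```

The design point is that retrieval supplies the "why": rather than a bare toxicity score, the model can cite the documented meaning of a symbol when explaining its judgment.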
This research direction also focuses on developing digital literacy skills, particularly in relation to online platforms and social media. By studying patterns of toxic symbology in memes, I aim to identify the common themes and motifs that help toxic content go viral. This information can inform educational programs and tools that help users recognize and resist toxic content, promoting a more positive and respectful online culture.
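A minimal sketch of what "identifying common themes and motifs" can look like computationally: given memes annotated with the symbols they contain, simple frequency and co-occurrence counts surface recurring motifs and motif pairings. The annotations below are invented placeholders; real data would come from human or model annotation of scraped memes.

```python
from collections import Counter
from itertools import combinations

# Hypothetical annotations: each meme tagged with the motifs it contains.
meme_annotations = [
    {"frog_character", "wojak", "irony"},
    {"frog_character", "numeric_code"},
    {"wojak", "irony"},
    {"frog_character", "irony"},
]

# Motif frequency: which symbols recur across the corpus.
motif_counts = Counter(m for meme in meme_annotations for m in meme)

# Co-occurrence: which motifs appear together, a first cue to shared themes
# (pairs are sorted so that {a, b} and {b, a} count as the same pair).
pair_counts = Counter(
    pair
    for meme in meme_annotations
    for pair in combinations(sorted(meme), 2)
)
```

Frequent co-occurring pairs are candidate "themes" worth qualitative follow-up, which is where the media-studies side of the project takes over from the counting.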