AI Bias Bounty in Partnership with Humane Intelligence

We are excited to be partnering with Humane Intelligence for their second Bias Bounty: A Counterterrorism Challenge in Computer Vision Applications.

New York, NY — Humane Intelligence, a nonprofit organization dedicated to evaluating the societal impact of AI, has teamed up with Nordic counterterrorism group Revontulet to launch Bias Bounty 2, the second of 10 algorithmic bias bounty challenges unfolding over the next year. This latest challenge focuses on counterterrorism in computer vision (CV) applications and will run from Thursday, September 26, 2024, to Thursday, November 7, 2024. The winners will share $10,000 in prizes and may have their work considered for Revontulet’s solutions suite.

Contestants in Bias Bounty 2 are asked to develop computer vision models capable of detecting, extracting, and interpreting hateful image-based propaganda, which is often manipulated to evade detection on social media platforms. With the rise of AI-generated content, Bias Bounty 2 will test participants' ability to build robust solutions that address the manipulation of visual media in counterterrorism contexts.

"Hidden online white nationalist imagery is a global concern, yet many of our detection models are fine-tuned to US content. Our partnership with Revontulet, a Nordic counterterrorism group, addresses this problem for an underserved part of the world that faces rising radicalization," said Dr. Rumman Chowdhury, Founder and CEO of Humane Intelligence. 

As extremist groups increasingly adopt sophisticated techniques to avoid detection online, Bias Bounty 2 seeks to crowdsource innovative solutions to this evolving challenge. The event will provide critical insights into the role AI can play in curbing the spread of extremist ideologies through visual content.

“With the evolution of generative AI, we’ve seen a marked increase in generated content from extremist and terrorist networks around the globe. This content serves to recruit, spread propaganda, and inspire violence. Extremists gamify generative AI to circumvent established content detection and moderation methods and localize content to global languages and cultural contexts often underserved by existing technologies and expertise. This, in combination with the sheer volume of content made possible through generative AI, poses novel challenges for platforms, moderators, and experts. We at Revontulet are excited to partner with Humane Intelligence to address some of these challenges through the Bias Bounty and to work to develop more equitable and globalized models for detecting extremist imagery,” said Bjørn Ihler, Founder and CEO of Revontulet.

Learn more and sign up here.

About Humane Intelligence

Humane Intelligence is a tech nonprofit that builds a community of practice around algorithmic evaluations. Its mission is to develop measurable methods for real-time assessments of the societal impacts of AI models.

About Revontulet

Revontulet provides intelligence and analysis to help clients around the globe mitigate risk and prevent harm caused by terrorism and violent extremism. By combining its comprehensive database on networked extremist and terrorist behavior, computer-assisted analysis, human expertise, and in-depth knowledge, Revontulet keeps clients, users, and communities safe.

The company has assisted municipalities in keeping citizens safe, tech platforms in dismantling vast networks of extremist users violating terms of service, policy, and law, and has worked with law enforcement to prevent planned acts of terror. In an increasingly complex global threat landscape, Revontulet works with clients in sectors ranging from infrastructure and shipping to tourism and hospitality to keep their teams, clients, and interests safe from the threat of terror.

About Bias Bounty 1

Bias Bounty 1, the inaugural challenge, focused on fine-tuning an automated red-teaming model based on topics from the Generative AI Red Teaming Challenge Transparency Report. Participants developed methods of diversifying datasets across the Factuality, Bias, and Misdirection categories, and Advanced participants created prediction models to help proactively identify malicious prompts.


The winners of Bias Bounty 1 were:

Advanced: Yannick Daniel Gibson (Factuality), Elijah Appelson (Misdirection), Gabriela Barrera (Bias)

Intermediate: AmigoYM (Factuality), Mayowa Osibodu (Factuality), Simone Van Taylor (Bias)

Beginner: Blake Chambers (Bias), Eva (Factuality), Lucia Kobzova (Misdirection)
