Revontulet at the Paris AI Action Summit

AI generated picture of an arctic fox in Paris.

This weekend, Revontulet will attend the Paris AI Action Summit. The summit will gather Heads of State and Government, leaders of international organizations, and CEOs of small and large companies to discuss the future of AI. Representatives of academia, non-governmental organizations, artists, and members of civil society will also attend.

On Sunday, February 9th, we are excited to contribute to critical conversations about AI ethics and responsible technology with our partner, Humane Intelligence, at the AI & Society House. Revontulet will participate in panel discussions and have an expo booth. We will present on our previous work with Humane Intelligence's Bias Bounty to uncover visual extremist propaganda. We will also discuss our overarching work, thinking, and objectives concerning the evolving threats and opportunities AI offers.

Revontulet provides risk analysis, open-source intelligence (OSINT), and data services on terrorist and violent extremist networks and their behaviors to clients in the tech sector and beyond. While we offer conventional intelligence, that is, the gathering and analysis of information to inform actions that prevent harm, we also consider the term and its meaning more broadly in this era of Artificial Intelligence.

In the world of counter-terrorism and security, we often view AI from an adversarial perspective. We have observed how threat actors have adopted AI to produce and disseminate propaganda and content promoting ideologies, hatred, and actions. We have seen the volume of terrorist and violent extremist content (TVEC) online grow and the abuse of AI to obfuscate perpetrator-produced content to prevent detection, increase engagement, draw the attention of new audiences, and circumvent moderation. We have seen adversarial nations use AI to influence public opinion and elections, and the geopolitical impact of the global competition to develop LLMs and generative AI as stepping stones towards Artificial General Intelligence (AGI).

As a startup operating at the intersection of technology and counter-terrorism, however, we do not consider Artificial Intelligence only from the adversarial perspective, but also in terms of the opportunities it offers our work. This dual view adds nuance to how we assess AI's current and future impact on what we do.

While AI offers great opportunities to assist our work if used conscientiously, it also presents risks and challenges. These include drawing erroneous conclusions and wrongful findings, producing false positives and incorrect or inconsistent responses that result in noisy, unactionable data, and further entrenching bias in ways that can lead to devastating harm.

To avoid these risks, we at Revontulet pride ourselves on always keeping a human in the loop: we manually analyze and assess raw data for our clients in both our data-as-a-service offerings and our in-depth investigations and analyses. This conscientious approach also aligns with our core tenet of "offering needles, not haystacks": we always strive to provide our clients with the most actionable data and analysis, not noisy data where the signal is difficult to discern from the noise.
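As a rough illustration of what keeping a human in the loop can look like, the sketch below is hypothetical and not a description of our production pipeline; `score_fn` stands in for whatever model does the pre-scoring. The point is that a model only queues candidates, and nothing is delivered until an analyst has explicitly confirmed it.

```python
from dataclasses import dataclass

@dataclass
class Item:
    text: str
    score: float = 0.0         # model confidence that the item is relevant
    analyst_verdict: str = ""  # filled in only by a human reviewer

def triage(items, score_fn, threshold=0.8):
    """Score every item, then queue only high-confidence candidates
    for manual review: needles, not the whole haystack."""
    for item in items:
        item.score = score_fn(item.text)
    return [i for i in items if i.score >= threshold]

def deliver(reviewed_items):
    """Only items a human analyst has explicitly confirmed leave the pipeline."""
    return [i for i in reviewed_items if i.analyst_verdict == "confirmed"]
```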

AI has allowed us to recognize patterns in our data in novel ways and has accelerated our ability to analyze the content and context of text, audio, and images at significant scale. These advances have markedly improved our internal workflows and procedures, allowing our global impact to grow with a small team. We also think about the capabilities of AI in broader terms than generative AI and AGI: by developing dedicated, internal models broken down to tackle specific challenges, we can circumvent the risks of "black box" analysis, offering greater transparency while ensuring higher accuracy.

Dedicated and internal AI allows us to run models locally and protect highly sensitive data.
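To make that concrete, here is a minimal sketch of local-only inference. The model path, task, and label are illustrative assumptions, not a description of our actual stack; the point is simply that the model is loaded from disk rather than called through an external API, so sensitive text never leaves the machine.

```python
# Minimal sketch: local inference with a fine-tuned classifier saved to disk.
# The path "./models/tvec-classifier" is hypothetical.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="./models/tvec-classifier",  # loaded locally, no hosted API involved
)

result = classifier("example text to assess")
print(result)  # e.g. [{"label": "...", "score": 0.97}]
```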

Our database, developed in-house to facilitate the analysis of the networked behaviors of terrorists and violent extremists both online and offline, offers unique opportunities as we work to build a safer world for all. At the same time, we consciously ensure that the use of our data aligns with our ethics and values. By developing in-house models and storing and analyzing data securely and locally, we retain greater control and ownership, preventing external access and limiting the risk of abuse or harmful actions taken with our data.

The Paris AI Action Summit offers a unique opportunity to gather stakeholders from across sectors to further explore the risks and opportunities Artificial Intelligence presents. We look forward to learning, sharing our experiences, and contributing to these discussions as we tackle challenges and work to ensure considerate and conscientious applications of AI for the good of society.
