Greater frequency. More complexity. Higher costs. These trends sum up the current state of high-impact physical threats. In the U.S., there were 28 separate billion-dollar natural disasters in 2023 – a record number. Globally, the 2023 count reached 66 – another record.
Meanwhile, 99 percent of executives and 100 percent of U.S. government leaders say their organization or agency experienced a physical threat in the last 24 months, according to the 2024 OnSolve Global Risk Impact Report.
For agencies and organizations, protecting people and operations from the impact of physical threats requires around-the-clock monitoring and the ability to quickly make sense of millions of data points. Yet the sheer scale of this task means adding people won’t be enough to keep up. That’s why many risk professionals are turning to technology – specifically artificial intelligence (AI) – as an initial filter for threat detection within their critical event management (CEM) strategy.
What Is AI? It Depends on the Context
AI is now part of our everyday vocabulary. But what does it really mean? It’s a broad term: in short, AI is any technology that mimics human intelligence, and its definition shifts somewhat based on the context in which it’s used.
One way to think about this is to consider how we encounter AI in our daily routines. We interact with AI when we ask a chatbot to resolve an issue with an order. We recognize it when we hear the new warehouse nearby uses robots to sort items for delivery.
Yet, these applications are unique and designed for a specific purpose. The same AI that powers a warehouse robot can’t answer customer questions about an order. A chatbot trained on retail customer support can’t identify an incoming hurricane – or whether it will impact the warehouse or the customer.
The bottom line: Not all AI is the same or appropriate for critical event management. Right now, we’re seeing rapid growth in the capabilities of AI, and the field is expanding with new subsets and types. Some of these, when used responsibly, are especially valuable for critical event management.
AI for CEM
AI can help improve the speed of threat detection, the relevance of the information received and risk professionals’ ability to act quickly. The field of AI for critical event management continues to evolve, but three AI technologies stand out for risk leaders aiming to detect and respond to critical events effectively. You’ve probably heard these terms before.
- Natural language processing (NLP) recognizes the context in which terms or phrases are used in natural language, in any form. For critical event management, it can be used to identify threat terms – such as tornado or shooter – and tag the report as a type of event.
- Machine learning (ML) is a technique, often applied to natural language processing, in which models are trained to recognize patterns without being explicitly programmed. Let’s say a report references high winds. The model can be trained to recognize this phrase, or other language associated with it, and identify the report as a weather event (see the sketch after this list).
- Generative AI is machine learning on steroids. It’s trained on data sets so large that it can recognize and replicate almost any pattern in human language. However, these models are pretrained on data the user didn’t provide, which can lead to mistakes.
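To make the first two bullets concrete, here’s a minimal Python sketch of both approaches, using the open-source scikit-learn library. The threat terms, event labels and training examples are invented for illustration; a production CEM system would use far larger, curated data sets.

```python
# Minimal sketch: two ways a CEM pipeline might tag incoming reports.
# All terms, labels and training examples here are illustrative, not
# drawn from any real CEM product.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# 1) Keyword-style NLP: map known threat terms to an event type.
THREAT_TERMS = {
    "tornado": "severe_weather",
    "hurricane": "severe_weather",
    "shooter": "security_incident",
}

def tag_by_keyword(report: str) -> set[str]:
    """Return the event types whose threat terms appear in the report."""
    text = report.lower()
    return {event for term, event in THREAT_TERMS.items() if term in text}

# 2) ML-style classification: learn patterns (e.g., "high winds")
# from labeled examples instead of hand-coding every phrase.
train_texts = [
    "high winds and downed power lines reported downtown",
    "flash flooding closes several roads",
    "funnel cloud spotted near the highway",
    "police responding to reports of gunfire at the mall",
    "suspect with a weapon seen near the school",
    "armed robbery in progress on 5th street",
]
train_labels = [
    "severe_weather", "severe_weather", "severe_weather",
    "security_incident", "security_incident", "security_incident",
]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(train_texts, train_labels)

report = "Gusts topping 70 mph have knocked out power to the warehouse."
print(tag_by_keyword(report))           # empty set -- no exact keyword match
print(classifier.predict([report])[0])  # likely "severe_weather"
```

Note the trade-off the sketch illustrates: the keyword lookup is transparent but brittle (it misses the report because “gusts” isn’t in its term list), while the trained model can generalize from related language it has seen before.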
What Responsible AI Looks Like
Before implementing AI for critical event management, risk professionals need to consider whether they’re using the right application and model for the task and whether the models are trained on the right data. They also need quality assurance processes in place to evaluate performance and detect bias.
For instance, if risk managers want to know whether it’s safe to travel to Ukraine, asking a generative AI model trained on data only through January 2022 isn’t likely to yield the right answer, because Russia invaded Ukraine in February 2022. It’s neither the right application nor the best model for the task.
A better option for critical event management is extractive AI, which reduces risk because it’s limited to specific questions about the report being evaluated. It avoids broader questions – like “Is it safe?” – that would force the AI to make a judgment call.
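As a sketch of what that pattern looks like in code, the snippet below uses the open-source Hugging Face transformers library for extractive question answering. The report text and questions are invented, and the default model is only an example; this illustrates the technique, not any particular vendor’s implementation.

```python
# Minimal sketch of extractive question answering over a single report,
# using the Hugging Face transformers library (requires transformers and
# a backend such as PyTorch installed). The report text is invented.

from transformers import pipeline

qa = pipeline("question-answering")  # downloads a default extractive QA model

report = (
    "Local police confirmed an active shooter at the Riverside Mall "
    "around 2:15 p.m. Two injuries have been reported, and nearby "
    "businesses are under a shelter-in-place order."
)

# Extractive AI answers narrow, factual questions by pulling spans
# directly from the report -- it cannot be asked "Is it safe?"
for question in ("Where is the incident?", "How many injuries were reported?"):
    result = qa(question=question, context=report)
    print(f"{question} -> {result['answer']!r} (score: {result['score']:.2f})")
```

Because every answer is a span copied out of the report itself, the model has nowhere to invent facts – the key property that makes this approach lower-risk for CEM.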
In the case of an active shooter, for example, responsible AI for CEM can process a large volume of primary and secondary sources – such as police reports, local news coverage and social media – at very high speed. It can surface important details quickly and connect them across many different reports. This is responsible AI in action: you have less content to sift through, yet you can trust you’re being alerted to the events that may impact your business or community.
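Here’s a toy sketch of that cross-report connection in plain Python; the sources, event labels and details below are invented for illustration:

```python
# Toy sketch of "connecting details across reports": group incoming
# reports that share an event type and location so a single alert can
# carry the combined details. All report contents here are invented.

from collections import defaultdict

reports = [
    {"source": "police scanner", "event": "active_shooter",
     "location": "Riverside Mall", "detail": "shots fired near food court"},
    {"source": "local news", "event": "active_shooter",
     "location": "Riverside Mall", "detail": "two injuries confirmed"},
    {"source": "social media", "event": "active_shooter",
     "location": "Riverside Mall", "detail": "shelter-in-place order issued"},
]

# Group reports by (event type, location) to link them into one incident.
incidents = defaultdict(list)
for r in reports:
    incidents[(r["event"], r["location"])].append(r)

for (event, location), linked in incidents.items():
    details = "; ".join(r["detail"] for r in linked)
    print(f"ALERT [{event} @ {location}] from {len(linked)} sources: {details}")
```

A real system would match reports with fuzzier signals (geocoding, timestamps, entity extraction), but the principle is the same: many raw reports collapse into one coherent, actionable alert.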
In today’s risk landscape, nearly everyone is likely to experience a physical threat. But as we can see from the example above, with responsible AI for critical event management, it’s possible to keep up with a changing world. With the right AI, you’ll be able to adapt along with it.
For a deeper dive into responsible AI, download our ebook: The Risk and Resilience Professional's Guide to Responsible AI.