It’s safe to say 2023 marked the beginning of the era of AI. Although different forms of AI have been in use since the 1950s, the accessibility and innumerable applications of large language models (LLMs) like ChatGPT have spurred the technology’s rapid rise.
That AI has changed the world is not hyperbole: Leaders in all industries spent 2023 racing to figure out how to integrate AI into their workflows. In 2024, we’ll see the mass proliferation of generative AI as users begin to fully take advantage of its capabilities – for better or worse.
In the coming year, security and resilience professionals need to understand how AI fits into their critical event management (CEM) strategies. In the wrong hands, AI can pose a major threat to operational resilience, but it also has the potential to improve risk response by providing targeted, actionable intelligence to risk, security and business continuity leaders.
The Dangers of Generative AI
The past few years have been marked by increasing disinformation, reaching an inflection point in 2023. The growing availability of the latest generative AI models allows for the creation of compelling, convincing content in mere seconds, making it easy for bad actors to pollute the already congested information environment.
One example is cybercrime, which has skyrocketed since the introduction of generative AI. Malicious phishing emails have increased by an astounding 1,265 percent since the launch of ChatGPT, with an average of 31,000 phishing attacks sent daily. Criminals are leveraging tools like ChatGPT to create sophisticated, plausible scams at scale, even generating convincing images, websites and virtual storefronts to steal sensitive information.
In a time when many people get their news from social media and other unverified sources, disinformation can proliferate. For example, a coordinated campaign spread a conspiracy theory about the August 2023 wildfires in Maui, taking advantage of a natural disaster to create more chaos.
But bad actors aren’t the only reason generative AI can spread dangerous misinformation. An LLM is only as good as the data it’s trained on, and it’s prone to hallucinations, a phenomenon in which the model “perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate.”
Agency and organizational leaders will be challenged to define a “source of truth” as misinformation and scams become more pervasive and harder to detect than ever. When it comes to information about potential threats to their people and operations, risk and security leaders need real-time data they can trust. AI can help filter and manage raw data, but it’s not yet a substitute for the expertise and discernment of human analysts.
Generative AI’s Positive Side
Of course, generative AI is not all bad. We’ve seen many ways AI can improve everyday life, from lifesaving applications like medical diagnostics to more lighthearted uses like generating bedtime stories.
For example, generative AI has helped doctors take telemedicine to the next level. Chatbots like ChatGPT can communicate with patients instantly, even bridging language barriers with real-time translation. Doctor-AI collaboration is also promising in remote diagnostics, with AI providing the correct diagnosis 88 percent of the time.
Generative AI can write code, generate personalized content and even compose music. IT and business leaders are incorporating custom LLMs into their workflows to maximize worker productivity and streamline routine procedures.
Governments are developing regulations for all forms of AI, with the goal of making the technology safer for humans. Organizations are also writing internal guidelines to clarify and standardize the deployment of AI in their own operations. Ideally, these efforts will help mitigate some of the dangers outlined above.
Improving Critical Event Management with AI-Powered Risk Intelligence
For the reasons already discussed, generative AI has severe limitations when it comes to risk management. The current iteration of the technology still has a long way to go before it can be relied on by itself in critical situations.
The artificial intelligence used in risk management is different from generative AI. It’s purpose-built, grounded in research into models and technology tailored to specific industry needs. Using specialized algorithms, AI-powered risk intelligence ingests thousands of verified data sources to rapidly analyze events and deliver targeted, actionable information. This trustworthy, AI- and expert-backed intelligence empowers resilience and continuity professionals to respond quickly and confidently to the threats posing the biggest risk.
The best critical event management technology empowers risk and resilience professionals to make more informed, strategic decisions with key features like:
- AI developed specifically for risk management. Generic generative AI isn’t reliable – especially when lives are on the line. AI for risk management is tailored to industry-specific needs, providing accurate, relevant data that enables expedited decisions and swift action. It accounts for historical threats and delivers updates in real time as events unfold. It can also parse linguistic nuances specific to risk management (for example, distinguishing a bath bomb from a pipe bomb).
- Trusted data sources verified by analysts. AI-powered risk intelligence must be continuously fed quality data vetted by expert data scientists who specialize in machine learning and risk intelligence. The technology should be monitored and trained by humans to ensure it provides only the most accurate and trustworthy information.
- A tested process. High-quality risk intelligence doesn’t happen by accident. All data should go through a rigorous process of authentication, correlation and analysis to produce targeted, real-time information about threats while filtering out the “noise” (the sketch after this list illustrates the idea).
- A single source of truth. Information overload is real, especially when news can come from so many different sources. The right risk intelligence software delivers actionable information on an intuitive dashboard so users can easily visualize and filter critical event intelligence.
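To make that authentication-correlation-analysis process concrete, here is a minimal sketch of what such a pipeline could look like. It is purely illustrative: every source name, location and event in it is hypothetical, and it stands in for the far richer models, vetted source lists and human analyst review a real risk intelligence platform relies on.

```python
# Illustrative sketch only: hypothetical names and data, not any
# vendor's actual implementation.
from dataclasses import dataclass

@dataclass
class Event:
    source: str      # where the report came from
    category: str    # e.g., "weather", "security"
    location: str    # coarse place name
    headline: str

# Step 1: authentication -- accept events only from vetted sources.
VERIFIED_SOURCES = {"national_weather_service", "local_police_feed"}

def authenticate(events):
    return [e for e in events if e.source in VERIFIED_SOURCES]

# Step 2: correlation -- collapse duplicate reports of the same incident
# so one real-world event doesn't trigger a flood of alerts.
def correlate(events):
    seen, unique = set(), []
    for e in events:
        key = (e.category, e.location)
        if key not in seen:
            seen.add(key)
            unique.append(e)
    return unique

# Step 3: analysis/filtering -- keep only events relevant to the
# organization's footprint, discarding the "noise."
MONITORED_LOCATIONS = {"Maui", "Denver"}

def filter_relevant(events):
    return [e for e in events if e.location in MONITORED_LOCATIONS]

if __name__ == "__main__":
    raw = [
        Event("national_weather_service", "weather", "Maui", "Wildfire warning"),
        Event("anonymous_social_post", "weather", "Maui", "Unverified rumor"),
        Event("local_police_feed", "security", "Maui", "Road closures"),
        Event("national_weather_service", "weather", "Boston", "Light rain"),
    ]
    for event in filter_relevant(correlate(authenticate(raw))):
        print(f"ALERT [{event.category}] {event.location}: {event.headline}")
```

In this toy run, the unverified social post is dropped at authentication and the out-of-footprint report is dropped at filtering; in production, each stage would be continuously tuned and supervised by human analysts, as described above.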
OnSolve Risk Intelligence is the AI ally you need to enhance your critical event management, with features that enable your organization to:
- Focus only on relevant threats to your operations.
- Identify threats in real time.
- Respond faster to emergencies.
- Mitigate the impact of critical events.
- Improve business continuity.
- Protect your people and assets, no matter where they are.
- Strengthen operational resilience.
AI has changed the risk management landscape and will continue to evolve as technology improves. Security and resilience professionals must be aware of the ways AI – generative and other models – will impact their practice.
To learn more about how AI-powered risk intelligence combined with human expertise can help you make faster, more informed decisions, download our ebook.