AI in Risk Management

Opinion from Our Co-Founder & CEO


Highlights

Artificial Intelligence (AI) is reshaping risk management by accelerating task completion, enhancing reporting, and enabling faster decision-making.

  • AI tools like LLMs assist in drafting reports, building frameworks, and connecting risk insights across data sets.

  • Key limitations include lack of context, hallucinations, and the need for strong human oversight.

  • Embedding AI successfully requires governance, team upskilling, and a risk-aware organisational culture.


Published: 12 May 2025

Let’s face it: most of us have by now accepted that AI is here to stay and will steadily encroach on the lives and work of all of us. Its spread through ‘white collar’ jobs, from initial implementation and everyday use to augmentation and eventual takeover, is glaringly obvious. It’s not just imminent; it is happening right now.

The pace of advancement of AI since the public launch of OpenAI’s ChatGPT (then powered by GPT-3.5) has been astounding, to say the least, and it has changed how we work, interact, and communicate with others. From redrafting an email you want to write in a ‘more polite manner’ to crafting a comprehensive Enterprise Risk Management (ERM) Framework using COSO ERM principles, AI has enabled individuals to complete tasks faster and, in general, with greater accuracy and capability.

As we are all generally aware by now, AI in its present state isn’t perfect. The dreaded hallucinations that creep into responses and land employees in hot water are well documented, leading to those face-palm and ‘lol’ moments when reading about them in the news. From legal ‘experts’ quoting fictitious case citations to employees uploading confidential, non-public company information that ends up in AI training data, these schadenfreude moments live rent-free in our minds for some time, but they serve as a stark warning to those of us who use AI daily to augment our working lives.

In this opinion piece, I discuss my views on AI in risk management: an overview of its current use cases and applications, what’s working, what isn’t (and could be better), and where we go from here.

An Anecdotal Overview of AI in Risk Management

When OpenAI released ChatGPT in November 2022, every white-collar job, whether we knew it yet or not, was suddenly at risk of obsolescence. At the time, I was working as the Head of Risk Management & Controls for a corporate services firm.

A (relatively junior) direct report of mine had been producing training material for the organisation on a number of risk management topics, such as Risk Management 101, ERM, and Risk Culture, on top of our regular grind of managing the group risk committee and the organisation’s top risks. Because he was new to enterprise risk management, I had given him a reasonable amount of time to complete the materials, so that he could research each topic thoroughly and deepen his own understanding of the discipline.

When I stumbled upon ChatGPT, I began to explore its capabilities. Seeing its potential right away, I asked the LLM to produce training material on ERM. I was immediately amazed that, within a few seconds, it had produced (relatively) comprehensive, albeit not perfect, training content.

Although some edits were necessary, as certain aspects were either not entirely accurate or repetitively worded, it saved hours, if not days, of research, structuring, and writing.

I sent a video recording of ChatGPT creating the training material to the direct, saying that he needed to a) learn how to use this tool (safely), and b) upskill himself as fast as possible (e.g., go get that master’s degree, which in fairness he was already planning to do).

I saw the writing on the wall, not only for my direct, but also for myself. Even in its infancy, AI could undertake basic tasks that a junior enterprise risk manager would be asked to complete.

At the time of writing this opinion piece, LLM capabilities have continued to advance at a remarkable pace. Augmenting their knowledge base with that of experienced risk managers can supercharge their ability to complete tasks with greater accuracy and effectiveness.

What’s Working Well With AI in Risk Management

AI has become a formidable workhorse for handling repetitive or labour-intensive tasks. Whether it's sorting through risk registers, populating control libraries, or mapping incidents to key risk indicators, AI dramatically reduces the time and effort it takes to get from “blank page” to “workable output.” It's especially useful for those administrative but necessary processes that would otherwise eat up a risk professional's time, freeing us up to focus on insight and decision-making.

One of AI’s most immediate wins has been in generating first drafts—whether for board reports, risk committee papers, or control assessments. The quality is good enough to serve as a starting point, saving hours of writing and editing. It helps get us out of the “what do I write?” stage and into the higher-value “how do I refine and present this?” stage. Even formatting and tone consistency, often a pain point, can be reasonably well managed through AI prompts, particularly when paired with a structured prompt library tailored to your organisation’s needs.
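To make the idea of a structured prompt library concrete, here is a minimal sketch in Python. The template names, placeholders, and wording are all hypothetical illustrations, not a prescribed standard; the point is simply that a shared, parameterised set of prompts keeps tone and format consistent across a risk team.

```python
# Illustrative sketch of a structured prompt library for risk reporting tasks.
# All template names, placeholders, and wording below are hypothetical.

PROMPT_LIBRARY = {
    "board_report_summary": (
        "Summarise the following risk register extract for a board audience. "
        "Use a formal tone, no more than {max_words} words, and highlight the "
        "top {top_n} risks by residual rating:\n\n{register_extract}"
    ),
    "control_assessment_draft": (
        "Draft a first-pass assessment of this control against the stated "
        "risk, flagging any gaps in design or operation:\n\n"
        "Risk: {risk}\nControl: {control}"
    ),
}

def build_prompt(template_name: str, **values) -> str:
    """Fill a named template with task-specific values."""
    return PROMPT_LIBRARY[template_name].format(**values)
```

The resulting string would then be sent to whatever LLM interface the organisation has approved; the library itself is just a governance artefact that can be reviewed, versioned, and tailored to the organisation’s reporting standards.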

Pattern recognition is perhaps one of AI’s most underappreciated superpowers in the risk space. AI models can rapidly scan multiple sources (policies, incidents, audit findings, control failures) and flag emerging themes or linkages that might otherwise be missed. While still evolving, this ability to synthesise large amounts of information and suggest correlations is a serious accelerant for root cause analysis and thematic risk reviews. It doesn’t replace human analysis, but it makes that analysis a lot faster and more structured.
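As a toy illustration of the cross-source theme-flagging described above, the sketch below surfaces any tag that appears across more than one type of risk record. A real LLM works on unstructured text rather than clean tags, so treat this as a crude, hypothetical stand-in for the idea; the record data and tag names are invented.

```python
# Toy illustration of thematic flagging across risk records: surface any
# theme that appears in more than one source type (incident, audit finding,
# control failure). All data and tag names are invented for illustration.

records = [
    {"source": "incident",        "tags": {"access_management", "third_party"}},
    {"source": "audit_finding",   "tags": {"access_management"}},
    {"source": "control_failure", "tags": {"third_party", "data_quality"}},
    {"source": "incident",        "tags": {"data_quality"}},
]

def emerging_themes(records, min_sources=2):
    """Return tags seen across at least `min_sources` distinct source types."""
    sources_per_tag = {}
    for rec in records:
        for tag in rec["tags"]:
            sources_per_tag.setdefault(tag, set()).add(rec["source"])
    return sorted(tag for tag, srcs in sources_per_tag.items()
                  if len(srcs) >= min_sources)
```

Here every tag happens to recur across two source types, so all three would be flagged for human review; the threshold and taxonomy are exactly the kind of judgement calls that stay with the risk professional.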

What Isn’t Working With AI in Risk Management

For all its power, AI lacks intuition. It can’t walk into a room, read body language, or sense that something’s “off” when someone gives a vague answer about a control breakdown. It doesn’t understand organisational politics, or when a risk owner is downplaying an issue for fear of reputational blowback. These are subtle signals that seasoned risk professionals are trained to pick up on, and that simply can’t be replaced by code. Context matters, and AI isn’t always great at grasping the nuance that sits behind the risk data.

‘Hallucination’ may sound like an amusing term, but in practice hallucinations are a serious concern. An LLM might generate a plausible-sounding answer that is factually incorrect or, worse, confidently misleading. This is especially dangerous in regulated environments where accuracy and traceability are non-negotiable. Risk managers can’t afford to blindly trust the output, no matter how polished it looks. The risk of taking AI-generated analysis at face value, especially under time pressure, is very real and calls for strong review processes and clear accountability.

Where We Go From Here

The future of risk management is becoming clearer the further we advance in the AI technology space. If the last 18 months have shown us anything, it's that AI will not just support how we manage risk. It will reshape it entirely.

Risk professionals will need to evolve from administrators and framework builders to strategic advisors and quality controllers. AI will automate the busywork: drafting reports, summarising risk assessments, flagging potential regulatory mismatches. But it’s up to us to ensure those outputs make sense in our specific business context.

The role of the risk manager will shift from “doing the work” to “validating the output.” It's about leveraging AI to do 70 to 80% of the legwork, then applying experience, judgement, and nuance to bring it home. We become curators of insight rather than just collectors of data.

And that means building new competencies: understanding how prompts work, reviewing AI-generated results critically, and recognising when something feels off or misses a cultural or reputational nuance. These are now must-have skills, not nice-to-haves.

This also means AI literacy will need to be baked into the DNA of tomorrow’s risk teams. Just like we once trained up on Excel or GRC platforms, now we need to get fluent in leveraging LLMs effectively, ethically, and safely.

But this won’t be about throwing out the old entirely. The frameworks, the controls, the governance structures, they still matter. In fact, with the rapid acceleration of AI adoption, they matter even more. AI must be wielded within a robust risk and governance structure to avoid a false sense of confidence in its abilities.

A Call to Action for Risk Leaders

If you're a Chief Risk Officer, Risk Manager, or even an Operational Leader, now is the time to experiment, fail fast, learn faster, and set the tone for how AI should (and should not) be embedded in your organisation’s risk culture.

Ask yourself and your teams:

  • Where can AI give us leverage today?

  • What manual or low-value tasks are ripe for automation?

  • Where does the risk of hallucination or misinterpretation outweigh the benefit?

  • How do we train our people to think like AI copilots, not AI dependents?

This is not about replacing humans. It’s about augmenting our decision-making capability and building faster, smarter, more agile risk functions that can genuinely support strategy, not just compliance.

At Risk Llama, we’ve begun embedding these principles into our own AI risk tools, such as Lluma, our AI-powered risk manager. The intent is not to create a silver bullet, but a smart AI Risk Manager that works with you, not instead of you. If you’re curious to see what this looks like in practice, or want to bounce around ideas for your own risk roadmap, we’re always happy to chat. Get in touch with us at info@riskllama.com.

Daniel Wolfsheimer - Co-Founder and CEO

Daniel Wolfsheimer brings 20+ years of risk management expertise across diverse industries to Risk Llama. He excels in designing and implementing risk frameworks and strategies that drive organisational success, developing risk appetite statements and integrated approaches to achieve objectives while mitigating threats. With experience in financial services, corporate services, and consulting, he leverages technology to improve data systems and efficiency, helping organisations navigate complex landscapes, enhance risk controls and governance, and transform risk management into a strategic advantage.