Artificial Intelligence (AI) is the current hot topic. Since the emergence of ChatGPT in recent months, AI-based applications have appeared to leap forward in both their capabilities and their potential.
The potential for using AI in general, and in risk management specifically, is vast. Numerous risk management applications have claimed to make use of AI for some time now, with at least theoretical justification. Where the technology is used to draw on relevant programme information from historical projects, that data can genuinely help contextualise and inform new projects that are largely repeats of earlier ones.
However, what are the real, practical advantages of using AI in the risk management of large-scale programmes and projects that are setting new boundaries, or for which there are no previous examples? Is there a danger in these circumstances that AI’s functionality is overstated, and therefore potentially misleading and even “dangerous”?
Firstly, we must be very clear that the danger lies specifically in using AI for programme and project risk management. Most other “types” of risk management rely on historical data to predict and prevent failures. Operational risk management, for example, will look at measures such as mean time between failures. Health and safety will look at accident rates and downtimes. Where large data sets of historical results exist, AI is perfectly suited to analysing and predicting risk and risk mitigation. So, in the areas of financial risk, operational risk, H&S and so on, it can be argued that AI has already come of age.
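To make the contrast concrete, here is a minimal, hypothetical sketch of the kind of data-driven analysis that historical records make possible: it estimates mean time between failures from a small set of invented failure intervals and, assuming exponentially distributed intervals, gauges the probability of a failure in a coming operating window. The figures and the window length are illustrative assumptions, not taken from any real system.

```python
# Minimal, illustrative sketch: estimating operational risk from historical data.
# The failure intervals below are invented purely for illustration.

import math

# Historical intervals between failures, in operating hours (hypothetical data).
failure_intervals_hours = [410, 520, 380, 610, 455, 490]

# Mean time between failures (MTBF), estimated directly from the history.
mtbf = sum(failure_intervals_hours) / len(failure_intervals_hours)

# Assuming exponentially distributed intervals, the probability of at least
# one failure within the next `window` operating hours is 1 - exp(-window / MTBF).
window = 200
p_failure = 1 - math.exp(-window / mtbf)

print(f"Estimated MTBF: {mtbf:.0f} hours")
print(f"Probability of a failure in the next {window} hours: {p_failure:.1%}")
```

It is precisely this kind of historical data set that a genuinely novel programme lacks.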
However, where the project to be completed is unique in some way, ie it aims to achieve something that the organisation has not attempted before, the risk that the project will fail to meet expectations, eg by being delivered late and/or going over budget, increases; and the more novel the technology and processes involved, the higher that risk becomes. As a consequence of this novelty, appropriate data sets, both quantitative and qualitative, are rare or non-existent. The only useful knowledge about the challenges that will emerge is likely to be in the heads of the stakeholders involved in the project.
Therefore, there is a danger that AI functionality becomes misleading, because it will be working from inaccurate or unrepresentative data sets.
For example, De-RISK was involved in a rail infrastructure programme that was aiming to electrify a large section of a mainline. This would look like a good candidate for employing AI to predict the risks, but it wasn’t, for one very good reason: the electrification process was to be driven by robot technology for probably the first time ever. The significant risks generated by this novel approach could only be identified by capturing the knowledge at the interface between the traditional infrastructure aspects of the programme and the novel aspects of the robotics applications. This means that AI has a role but, for now, the efficient capture of risks and the planning of risk mitigation still require a human steering the risk process.
At De-RISK, this is exactly what we do. For many years we have partnered with Softools to develop the De-RISK Assure “zero-code” application. De-RISK’s Assure software fully supports the SDA process to ensure accuracy and efficiency while providing complete visibility and control for senior management. In fact, AI techniques can be embraced even further to make the toolset more effective:
- AI data validation models: These help to ensure that the assumptions and risk data in the platform are in a “best practice” format (ie clear and concise) = quality of content
- AI GPT search and suggestion engine: The platform uses an AI search engine to review databases of known assumptions, risks and action plans. This enables users to quickly access relevant information and solutions and avoid “reinventing the wheel” (the sketch after this list illustrates the underlying retrieval idea) = increased efficiency
- AI-developed coaching avatars: The platform also features AI-developed coaching avatars that deliver user support and learning at the point of use. These avatars are designed to provide personalised guidance and support, helping users to navigate the platform more effectively while achieving learning objectives = roll-out of the process and tool in an organisation of any size can be achieved without the overhead of mass training programmes
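As a rough illustration of the retrieval idea behind a search and suggestion engine of this kind (and not a description of the Assure platform’s actual implementation), the sketch below matches a newly worded risk against a small, hypothetical library of previously captured risks using simple word-overlap scoring, and returns the closest matches with their mitigation actions. The library entries and the similarity measure are invented for illustration.

```python
# Illustrative sketch of a risk "search and suggestion" idea: given a new risk
# statement, surface the most similar entries from a library of previously
# captured risks. The library and the word-overlap similarity measure are
# hypothetical simplifications, not De-RISK Assure's actual engine.

def tokenise(text: str) -> set[str]:
    """Break a statement into a set of lower-case words, ignoring punctuation."""
    return {word.strip(".,").lower() for word in text.split()}

def similarity(a: str, b: str) -> float:
    """Jaccard similarity between the word sets of two statements."""
    ta, tb = tokenise(a), tokenise(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

# Hypothetical library of previously captured risks and their mitigation actions.
risk_library = [
    ("Late delivery of signalling equipment delays commissioning",
     "Agree staged deliveries and penalty clauses with the supplier"),
    ("Untested interface between new software and legacy control systems",
     "Plan early integration testing with the legacy system owners"),
    ("Key engineering staff unavailable during peak construction phase",
     "Cross-train staff and secure framework contractors in advance"),
]

def suggest(new_risk: str, top_n: int = 2):
    """Return the top_n most similar past risks and their mitigation actions."""
    scored = [(similarity(new_risk, risk), risk, action) for risk, action in risk_library]
    return sorted(scored, reverse=True)[:top_n]

for score, risk, action in suggest("Interface risk between robotic plant and legacy infrastructure"):
    print(f"{score:.2f}  {risk}  ->  {action}")
```

In practice a GPT-based engine would use far richer semantic matching, but the principle, surfacing what the organisation already knows so that users do not reinvent the wheel, is the same.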
This strikes the optimum balance between risk management expertise and machine learning enhancement, and unites the process and toolset in a fully integrated learning environment.
There is no doubt that AI has the potential to completely revolutionise risk management in the future. But for now, we must be careful to understand the strengths and weaknesses in current AI capabilities to ensure that we get the right answer quicker rather than the wrong answer immediately.
Click here for more on De-RISK’s Assure toolset and how it supports our SDA methodology.