Ontario's securities regulator says it is evaluating the potential role it can play in overseeing and guiding responsible adoption of artificial intelligence to protect investors and the integrity of capital markets.
In a report released Tuesday, the Ontario Securities Commission said use of AI in capital markets is focused on three key areas — improving the efficiency and accuracy of operations, trade surveillance and detection of market manipulation, and supporting advisory and customer service.
It said AI can improve the ability to gather information and detect patterns, or anomalies, from large volumes of data by automating processes normally handled manually — leading to better market forecasting and hedging.
"Advancements in AI techniques can significantly enhance market liquidity prediction, showcasing superior performance relative to other methods, especially in extreme market conditions," the report said.
"Financial services firms in North America have capitalized on AI's improved predictive capabilities for stock liquidity forecasting."
Large firms are also currently using AI to provide automated customer support and help for client-facing advisers, but the report said its use for trading, asset allocation and risk management is so far limited.
It cited a survey report published by the Economist Intelligence Unit that found financial services providers consider customer and stakeholder engagement as their most effective use of AI.
The regulator warned that data constraints and competition for tech talent are major challenges for AI adoption in the sector.
It said implementing AI in capital markets requires more research and investment, but that such costs act as a barrier to AI adoption, especially for smaller players.
The report also cited obstacles related to corporate culture, saying market participants may have trouble adapting their operating models and culture to benefit from AI, and that issues related to privacy, bias and fairness can play a role.
"Any major technological disruption challenges current operating models and organizational culture. The introduction of AI is a case in point," stated the report.
"Beyond the issues of organizational change, there exist strategic challenges of replacing core functions that are battle-tested and serve their purposes with new processes and tools that can potentially put those core functions at risk. The resulting risk aversion can inhibit the adoption of new processes and functions."
It highlighted that the complexity of AI can also be a hindrance to establishing trust, and is therefore preventing broader adoption of the technology.
This report by The Canadian Press was first published Oct. 10, 2023.