At a Senate Banking Committee oversight hearing, Chopra called for further scrutiny of generative AI and advanced computational models as potential financial stability threats, saying he sees two main "vectors" through which the technology could shake confidence in financial markets and institutions.
"One is where there is extremely opaque AI that magnifies disruptions in a market that turn tremors into earthquakes," Chopra said, adding that a similar dynamic has already been seen with automated high-frequency trading, which has occasionally been blamed for setting off sudden "flash crash" market plunges.
But Chopra cautioned that this phenomenon could be "dramatically magnified" with AI — particularly if numerous firms are relying on the same underlying model to power their trading, a scenario he described as likely.
Generative AI could also create market instability if used for "mimicry of human communication, or other ways of fakery and fraud, to create a financial panic at a particular financial institution, or at a financial market utility [or] an exchange," Chopra told senators, "even a credit reporting agency."
"There are many ways this could happen," he said. "I think we have to look very hard about the financial stability effects of this because this may not be an accident — this may actually be a purposeful way to disrupt the U.S. financial system."
Chopra went on to say that FSOC might need to get involved. The council — which comprises top agency officials from across the financial regulatory sphere, including the CFPB — was created after the 2008 financial crisis to help spot and address major systemic threats to the financial system.
Among the council's most powerful tools are its designation authorities, which can be used to require heightened regulatory standards for specific firms or certain activities that the council deems systemically important. The council can also make policy recommendations to its member agencies and provide support for interagency coordination.
At Thursday's hearing, Chopra said generative AI and similar technologies would be "very worthy" of closer consideration by FSOC to see "whether it needs to use any of its tools."
Although he did not call for designation specifically, Chopra said he is wary of FSOC functioning as a mere "book report club" instead of "using the tools that were passed into law to protect the system."
The comments from the CFPB director, who was responding to questions from Sen. Mark Warner, D-Va., echoed concerns that U.S. Securities and Exchange Commission Chair Gary Gensler has repeatedly raised in recent months about AI potentially triggering a future financial crisis. Gensler also serves on FSOC.
Artificial intelligence has emerged as an increasing focus for Chopra and the CFPB as the technology has grown more capable and found an ever-widening array of applications in financial services, where firms are looking to AI to improve fraud detection, underwriting, risk management, customer service and more.
Chopra has often cast himself as an AI skeptic and has previously warned of its potential to worsen inequality, perpetuate societal biases, facilitate cybercrime and devolve into a marketing gimmick or cost-cutting measure that leads to compliance violations, among other things.
AI-assisted customer service chatbots could, for example, help banks save money on call centers, but Chopra has said that, if not carefully managed, they can also frustrate customers and mislead them with erroneous information.
But for more widespread harms like an AI-induced market crash, Warner questioned Thursday whether the difficulty of proving intent, an element of many financial misconduct claims, might limit financial regulators' ability to hold companies liable for wayward AI deployments.
"You could have some of these AI tools that someone could hide and say, 'I had no intent, I simply told the tool to make money,'" Warner said.
Chopra told the senator that he agrees "intent-based standards will be almost useless when it comes to certain uses of generative AI."
"It's one of the reasons why the U.S. has had for over a century prohibitions on things like deception and unfairness that have multiple prongs but don't necessarily require intent, because you can create a huge amount of harm," he said.
Under federal law, the CFPB has broad enforcement authority against unfair, deceptive and abusive acts or practices, or UDAAPs. Earlier this year, the agency flagged the possibility of UDAAP violations arising from customer service chatbots that provide inaccurate answers or lack adequate privacy safeguards.
--Editing by Alanna Weissman.