AI can outperform some financial analysts, draft study finds


Artificial intelligence could soon take on an ongoing role in the financial services sector.

Forthcoming research from the University of Chicago Booth School of Business suggests that Large Language Models (LLMs), a type of AI that’s trained to understand and generate content, are able to outperform some financial analysts in “predicting the direction of future earnings.” The researchers shared their results early as an unreviewed draft.

Using chain-of-thought prompting, which helps language models carry out complex reasoning tasks by breaking them down into smaller steps, these models — of which generative pre-trained transformers (GPTs) are one kind — achieved an accuracy of 60.4%. That’s 7 percentage points higher than the average analyst prediction, per the study.
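To illustrate what chain-of-thought prompting looks like in practice, here is a minimal Python sketch of a prompt that walks a model through analyst-style steps before asking for a prediction. The step wording, figures, and function name are illustrative placeholders, not the researchers' actual prompt or data.

```python
def build_cot_prompt(balance_sheet: str, income_statement: str) -> str:
    """Assemble a chain-of-thought prompt that asks the model to reason
    step by step, mirroring the stages of a human analyst's review.
    (Hypothetical example; not the study's actual prompt.)"""
    steps = [
        "1. Identify notable trends in the financial statements.",
        "2. Compute key ratios (e.g. margins, leverage, turnover).",
        "3. Interpret what those ratios imply about performance.",
        "4. Predict whether earnings will increase or decrease next period.",
    ]
    return (
        "You are a financial analyst. Analyze the statements below, "
        "reasoning step by step:\n"
        + "\n".join(steps)
        + f"\n\nBalance sheet:\n{balance_sheet}\n"
        + f"\nIncome statement:\n{income_statement}\n"
        + "\nAnswer with 'increase' or 'decrease' and explain your reasoning."
    )

# Placeholder statement data for demonstration only.
prompt = build_cot_prompt(
    "Total assets: 500\nTotal debt: 200",
    "Revenue: 300\nNet income: 25",
)
print(prompt)
```

The key design point, per the study, is that the numbered steps push the model through the same sequence a human analyst would follow, rather than asking for a one-shot prediction.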

This is noteworthy because the researchers didn’t provide the language model with any narratives or context beyond the balance sheet and income statement.

With a simple prompt instruction, the model’s ability to analyze financial statements and predict the direction of future earnings was on par with the first-month consensus forecasts made by analysts, the study found.

“Taken together, our results suggest that GPT can outperform human analysts by performing financial statement analysis even without any specific narrative contexts,” the researchers wrote.

They added that the results highlight the importance of “human-like step-by-step analysis” that helps the model follow steps that analysts typically perform.

The language model’s forecasts added the most value in cases where human biases or inefficiencies, such as analyst disagreement, were present, the report found.

Like human analysts’ forecasts, GPT’s predictions were not perfect. They were more likely to be inaccurate if a firm was smaller, had a higher leverage ratio, recorded a loss, or had volatile earnings, because context tends to matter more when making predictions for smaller or more variable firms.

While both GPT and analysts have more trouble making predictions when firms are smaller or report a loss, analysts tend to be better at dealing with complex financial circumstances, likely because they factor in soft information and context found outside of financial statements.

“Our findings indicate the potential for LLMs to democratize financial information processing and should be of interest to investors and regulators,” the authors of the report concluded, noting that language models can be more than just a tool for investors, playing a more active role in decision-making.

The report cautioned, however, that AI performance may look different in the wild. “Whether AI can substantially improve human decision-making in financial markets in practice is still to be seen,” the authors wrote, adding that “GPT and human analysts are complementary, rather than substitutes.”
