AI can’t do it all: Where ChatGPT falls short in financial planning

Published 27 Dec 2024

Why trust Greenbot

We maintain a strict editorial policy dedicated to factual accuracy, relevance, and impartiality. Our content is written and edited by top industry professionals with first-hand experience. The content undergoes thorough review by experienced editors to guarantee adherence to the highest standards of reporting and publishing.

Disclosure


Artificial intelligence (AI) might not be a good replacement for human financial advisors, according to a study by Washington State University (WSU) and Clemson University. The study, published in the Financial Analysts Journal, tested AI models, including ChatGPT 3.5 and 4, Bard, and LLaMA, on more than 10,000 financial exam questions.

While the results were impressive in some areas, the models struggled with personalized financial planning.

“It’s far too early to be worried about ChatGPT taking finance jobs completely,” said DJ Fairhurst, an associate professor at WSU’s Carson College of Business. “For broad concepts, ChatGPT can do a very good job synthesizing information. But for specific, idiosyncratic issues, it’s really going to struggle.”

What the research revealed

The study focused on licensing exams like Series 6, 7, 65, and 66—tests that financial professionals must pass to work in the securities industry. ChatGPT 4 outperformed the other models, answering 84.5% of the questions correctly. This marked a big improvement compared to earlier versions, such as ChatGPT 3.5. However, when the free version of ChatGPT 3.5 was fine-tuned, it came close to matching ChatGPT 4’s accuracy and even did better at explaining its answers.

Despite these commendable results, the study highlighted areas where AI models still struggle, particularly complex tasks beyond summarizing financial topics and monitoring markets. ChatGPT 4's accuracy dropped significantly, to 56.6%, when it was asked to evaluate client-specific financial details like tax considerations and insurance coverage.

“Passing certification exams is not enough. We really need to dig deeper to get to what these models can really do,” Fairhurst emphasized.

Growing interest but lingering doubts

Public interest in AI-driven financial tools is on the rise. According to Finder.com, around 18.6 million people in the UK are now open to using ChatGPT for financial advice, up from 14 million last year. Still, skepticism persists: 43% of surveyed Brits aren't comfortable trusting AI for important financial decisions.

“More than a third of Brits are now happy to use ChatGPT for financial advice, but the question remains, should they?” wrote George Sweeney, a financial advisor, at Finder.com. While there’s curiosity about what AI can offer, many people are hesitant to rely on it completely.

Experts believe AI is best used as a tool to support financial advisors rather than replace them. Its strength lies in repetitive tasks like tracking markets or analyzing basic data. This can free up advisors to focus on complex issues that require human judgment. However, human oversight remains essential, especially in an industry where mistakes can have serious consequences.

“The practice of bringing a bunch of people on as junior analysts, letting them compete and keeping the winners—that becomes a lot more costly,” said Fairhurst. “So it may mean a downturn in those types of jobs, but it’s not because ChatGPT is better than the analysts, it’s because we’ve been asking junior analysts to do tasks that are more menial.”

While AI is evolving rapidly, it still has a long way to go before it can handle the responsibilities of human financial professionals.