There is a ton of buzz about ChatGPT and the technology’s potential for applications in customer service, writing and research — especially given the recent launch of GPT-4. It reminds me of the earlier days of artificial intelligence (AI) and the excitement around its potential. In a lot of ways, the excitement was well-earned, and the predictions about the ways companies could apply AI were spot-on. Machine learning (ML) and AI are helping to deliver more tailored recommendations for e-commerce, supplementing customer service teams with chatbots on platforms like LinkedIn, and helping us all stay a little safer on the road with lane guidance and emergency braking.
But as with a lot of innovations, some of the hype went far beyond reality. Robots are not taking over in the classroom, and to my dismay are still not able to take on all of our more manual, tedious tasks at home or in the office. AI has not ruined the classroom or replaced the need for people to build product strategies, design tools and provide a human layer on top of those chatbots when more complex issues arise.
So when I started reading the hype around ChatGPT, and now GPT-4, I was intrigued but skeptical.
After getting a chance to play with it, I’ll admit it’s impressive. Just last week, our CFO was playing around with it to help provide some context about our financial projections, and it was pretty spot-on. There is a ton of potential when it comes to applications of this technology that I am excited to see materialize.
That said, we are still a long way from handing things over to AI while we all go sit on a beach somewhere.
When it comes to financial services, there are still a lot of things neither ChatGPT nor GPT-4 can solve, at least not yet. This is because financial products come with a great deal of risk. Financial institutions (FIs) are responsible not just for ensuring the safety of their customers’ assets, but also for meeting legal obligations around know-your-customer (KYC) and anti-money laundering (AML) requirements. FIs also have a vested interest in minimizing risk and, consequently, fraud because any lost funds will be subtracted from their bottom line. ChatGPT/GPT-4 aren’t yet prepared to meet these critical risk priorities. Here’s why.
1. Compliance checks
Compliance is a critical part of every financial services business — as it should be, given that companies are handling money for consumers and businesses. AI can help when it comes to monitoring suspicious activity. However, to ensure compliance with confidence, companies also need experts to evaluate evolving rules, determine strategies and oversee the compliance program to ensure the company is meeting those requirements.
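To make the division of labor concrete: the automated side of monitoring is often a rules pass over transaction batches, while humans decide what the rules should be. Here is a minimal sketch of that first automated pass; the thresholds, field names and alert wording are illustrative assumptions, not any real AML program's logic.

```python
from collections import defaultdict

def flag_suspicious(transactions, amount_threshold=10_000, velocity_limit=5):
    """Flag transactions at or above a reporting threshold and accounts
    with an unusually high transaction count in a batch.

    Thresholds are illustrative; in practice compliance experts set and
    tune these rules against evolving regulatory requirements.
    """
    alerts = []
    counts = defaultdict(int)
    for tx in transactions:
        # Rule 1: large single transactions
        if tx["amount"] >= amount_threshold:
            alerts.append((tx["id"], "amount at or above threshold"))
        # Rule 2: many transactions on one account in this batch
        counts[tx["account"]] += 1
        if counts[tx["account"]] == velocity_limit:
            alerts.append((tx["account"], "high transaction count"))
    return alerts

txs = [
    {"id": "t1", "account": "a1", "amount": 12_000},
    {"id": "t2", "account": "a1", "amount": 500},
]
print(flag_suspicious(txs))  # [('t1', 'amount at or above threshold')]
```

The code only surfaces candidates; deciding whether an alert is a reportable event — and whether the rules themselves still match current regulation — remains a human judgment.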
2. Making credit underwriting decisions
Data analysis has long been a part of the credit underwriting process, but determining the right policies to use to inform what data goes into those decisions requires human insight. FIs need to evaluate their risk priorities to determine what credit thresholds are suitable for their business. Then, they can use credit bureau data to evaluate if a customer meets their credit policies.
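The split described above — humans set the credit policy, software applies it to bureau data — can be sketched as a simple rules check. The thresholds and applicant fields below are hypothetical stand-ins, not a real bureau schema or any institution's actual policy.

```python
from dataclasses import dataclass

@dataclass
class CreditPolicy:
    """Policy thresholds chosen by the FI's risk team (illustrative values)."""
    min_score: int = 680              # minimum bureau credit score
    max_dti: float = 0.40             # maximum debt-to-income ratio
    max_recent_delinquencies: int = 0

def evaluate_applicant(policy: CreditPolicy, applicant: dict):
    """Return (approved, reasons) for an applicant against the policy."""
    reasons = []
    if applicant["credit_score"] < policy.min_score:
        reasons.append("credit score below threshold")
    if applicant["dti"] > policy.max_dti:
        reasons.append("debt-to-income ratio too high")
    if applicant["recent_delinquencies"] > policy.max_recent_delinquencies:
        reasons.append("recent delinquencies on file")
    return (not reasons, reasons)

policy = CreditPolicy()
print(evaluate_applicant(policy, {"credit_score": 710, "dti": 0.32,
                                  "recent_delinquencies": 0}))   # (True, [])
```

Note that the interesting decisions — which signals matter, where the thresholds sit, how much risk the business will accept — live in the `CreditPolicy` values, which is exactly the part that requires human insight rather than the model.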
3. Providing a seamless user experience
When opening an account, customers expect a seamless experience that can be completed in 10 minutes or less. To facilitate a frictionless process without increasing their risk, FIs have relied on things like phone-based identity verification and document verification, which can automatically verify a customer’s identity based on information they’ve entered during the onboarding process.
However, when addressing issues after the account is open, customers expect a more personal, higher-touch experience. Though many FIs use chatbots to help customers address basic inquiries, if a customer suspects they may have been the victim of a social engineering scam, they expect to interact with a bank representative directly to report the problem.
4. Designing new financial products
Developing new financial products requires a deep understanding of market trends, customer needs and the regulatory environment. It also involves making strategic decisions that go beyond what data alone can tell us. While ChatGPT/GPT-4 can provide insights and suggestions based on data analysis, it cannot replace the creativity and intuition of a human designer.
5. Handling a crisis like a fraud attack
ChatGPT/GPT-4 can help with routine customer interactions: answering quick questions and pointing customers to support materials and documentation. But when a company is experiencing something like a high-velocity fraud attack, customers want direct human expertise to guide them through the process.
The same goes for preventing fraud attacks. Fraud models are helpful tools, but to really move at the pace of fraud, companies need AI/ML teams to help ensure their policies are up-to-date, they have the right datasets in place, and they are able to test and make updates to their workflows to handle attacks when they arise.
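One of the workhorse detection rules those teams maintain is a velocity check: flag a device, IP or account that generates too many events in a short window. A minimal sketch follows; the window size, event limit and key names are illustrative assumptions, not a production fraud model.

```python
from collections import defaultdict, deque

class VelocityMonitor:
    """Flag a key (e.g., an IP address or device ID) that exceeds
    max_events within a sliding window of window_seconds.

    Parameters are illustrative; fraud teams tune them continuously
    as attack patterns change.
    """
    def __init__(self, window_seconds: int = 60, max_events: int = 10):
        self.window = window_seconds
        self.max_events = max_events
        self.events = defaultdict(deque)  # key -> recent event timestamps

    def record(self, key: str, ts: float) -> bool:
        """Record an event; return True if this key trips the rule."""
        q = self.events[key]
        q.append(ts)
        # Drop timestamps that have aged out of the window
        while q and ts - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_events

mon = VelocityMonitor(window_seconds=60, max_events=3)
print([mon.record("ip-1", t) for t in [0, 5, 10, 15]])
# [False, False, False, True]
```

The rule itself is trivial; the hard, ongoing work is the part the passage describes — keeping the policies current, feeding in the right datasets, and testing workflow changes fast enough to keep pace with an active attack.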
The future of ChatGPT and GPT-4
ChatGPT, GPT-4 and any future updates will be powerful tools that can help financial services companies in many ways. However, these products aren’t able to replace some of the higher-touch, more nuanced parts of running a financial services business.
That said, companies that are able to strike the right balance between automation and human touch will be best positioned to achieve long-term success by quickly and consistently delivering value to their customers.