The Gen AI Hype Train Rolls On... Sigh.
I'll say this right up front: I love generative AI for a lot of things.
- I can make fun pictures with it.
- I can quickly generate content outlines for plans and frameworks I need to create.
- I can use it to create certain types of content.
- It speeds up the programming I do significantly.
- I can train it on my own information and build powerful agents that answer my questions, and more.

Yet there are still so many things I would not trust it to do for me, and those are exactly the tasks that technology pundits and columnists hype up 24 hours a day, 7 days a week, 365 days a year. It has gotten to the point that whenever I hear a technology vendor start talking about how they use AI, I immediately become skeptical of whatever they are trying to sell me. It's exhausting.
First, let's ground this conversation (you like that bit of AI terminology?) by stating that when I talk about AI in this post, I mean generative AI technology like ChatGPT, Claude, Mistral, and so on. I'm not talking about AI applications purpose-built and modeled to solve specific, narrowly scoped problems. Now that this is out of the way, on to the commentary.
Outside of the use cases I mentioned in my opening paragraph, I have serious reservations about AI applied to situations where accuracy is paramount, where failing to provide the right information or guidance could result in user frustration, confusion, financial harm, or, even worse, physical harm. If you are still with me so far, let's look at where AI is being hyped the hardest in the financial services industry:
- Robo investing
- Customer service
- Qualifying loan applicants using alternatives to credit score models (yes, I know these are not the best either)
Each of these AI functions is an area where incorrect information, wrong advice, or the wrong decision can have significant material impacts on the person on the receiving end.
What happens when your robo investor isn't behaving rationally? What happens when the AI chatbot gives you incorrect information about how to dispute a charge on your credit card? What happens when the AI evaluating lending decisions starts to decline loans that are concentrated within a certain demographic group?
Not only will the consumer, customer, or member be adversely impacted in these scenarios, but so will the company that deployed the technology. At best, those companies will lose customers who decide they don't like the service they received. At worst, they will face lawsuits and regulatory action because they have violated the law.
The crux of the issue in all of these use cases is that AI is fallible. It was trained on content and decisions made by people who themselves make mistakes and have their own biases and agendas. You can't expect a tool trained on data that is not completely accurate and neutral to provide responses that are completely accurate and neutral; logically, this is not possible. We need to stop pretending that tools built and trained on imperfect data can provide consistently correct information in highly regulated industries.
Therefore, we as technology professionals, and as an industry, need to be honest with ourselves and acknowledge that:
- AI has a role in domains where 100% correct information and highly vetted decisions are not critical requirements.
- In those areas where information must be correct and the decisions must be reviewed due to their sensitivity and impact, AI should not replace the requirement for human involvement and review. At least not in its current form.
To claim that humans in the second scenario can be removed from the equation and the processes completely automated is false. This is the proverbial snake oil salesman of times gone by, selling a new AI tool instead of random tonics and tinctures. Distrust of AI will continue to grow if the promises being made in these areas go unrealized, or, worse yet, if the AI features prove to be harmful. Most technologists I know who understand at least some of what is going on behind the scenes are already skeptical of these claims. If the non-technical management within organizations begins to lose trust as well, the companies selling these products will face a very rude awakening.
Until then, I will continue to use AI for what it is good at while advising people to stay far away from the tools that overpromise and ultimately under-deliver.