In his 1966 book The Psychology of Science, psychologist Abraham Maslow wrote: "I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail." This rings true today more than ever as "AI", "LLM", and "GPT" get thrown around corporate conference rooms like there's no tomorrow – with CEOs trying to plug the power of the silicon mind into every blender, hoover, toaster (see: "AI Controlled Toasting Appliance", patented in 2019 by one Philip Michael Davies of Southsea, patent number GB2587788), or whatever other arbitrary product they want to pump some more "cutting-edge" into.
But if we look into the past, we find that artificial intelligence is not the first fad to be grossly misapplied en masse by overzealous entrepreneurs. Cast your mind back to the start of the decade as Web3 (as well as a certain, less figuratively viral thing) took the world by storm. Blockchain technology seemed to be the answer to everything and NFTs were going to make everyone rich. Or what about the Metaverse, where everybody could come together on the internet and leave behind the outside world, living their lives in a vast virtual ecosystem – almost like some moderately successful 2018 Spielberg-directed motion picture?
And look at the turn of the century: the peak of the infamous dot-com bubble. Hundreds of companies spawned almost overnight to capitalise on the rapid emergence of the then brand-new World Wide Web. From AltaVista to Amazon, the dot-com bubble was huge... until it burst. On the 10th of March 2000, the bubble reached its peak, and the market then spent the next few years nosediving. Many companies went bankrupt and many more lost massive chunks of their value – some up to 75%. And the market is starting to catch up to AI too.
Profit maximisation is as natural as the trickle of a stream or the blooming of a flower. Typically, firms are motivated by a desire to grow and make money, and if that means hopping on the next buzzword bandwagon, then so be it. But this does not come without risk. In recent months, insurance companies have taken the limelight, and whilst the broader discussion of insurers' obligations would be better suited to the Humanities and Social Sciences section of this publication, the increased attention has raised awareness of some less-than-stellar practices in the application of generative artificial intelligence to the processing of insurance claims.
An article from 12 months ago by the Boston Consulting Group claims "[generative AI] has the potential to revolutionize insurance claims" and encourages firms to adopt these experimental technologies with the promise that AI could reduce their claims payouts by up to 4%. For scale: Bank of England reports show that, in 2023, insurers paid out over £45 billion on claims for accidents occurring in the previous decade – of which AI could theoretically shave off just shy of £2 billion.
It's easy to treat every new technology as a plug-and-play solution, but when experimental technologies are being placed in charge of billions of pounds' worth of healthcare decisions – such as a decision to condemn someone to a lifetime of suffering by refusing anything from vital medication to life-saving surgery – it's time for us to step back and ask ourselves the cliché question that spy films have been asking for decades:
"How do we stop this technology from falling into the wrong hands?"