Artificial Intelligence: Friend, Foe or Fad for the FD?

15th June 2023

Image created using DALL·E 2

Is this blog being written by a machine? How could one tell? Should you be worried if it is? Would you find it disturbing to discover that Equity FD had dispensed with the human being who once turned out these pieces and replaced him with ChatGPT as a money-saving exercise? Should the typical FD welcome the arrival of this latest manifestation of artificial intelligence as the chance to transform their business, or be fearful that the machine might be poised to take on the FD responsibility itself?

These were not, let us face it, questions that any of us were much concerned with even six months ago. Until very recently, the whole AI space seemed like the sphere of science fiction, a theoretical notion with little practical application or implication. Not so any longer. It has moved to dominate the front pages (when neither Phillip Schofield nor Prince Harry are around to do so) with predictions that humanity has allowed something to be invented on its watch which will consume us. Even the individuals most responsible for this technology are now queueing up to insist that it be regulated. What on Earth is going on? How should business respond? Is this an opportunity or Armageddon?

The tricky starting point is that the majority of people have no clue what any of this means. Even the name is something of a mystery. What is the GPT in ChatGPT in the first place? Is it a sort of cross between GDP and LGBT? Apparently not. It stands for Generative Pre-Trained Transformer.

This, in turn, is a type of LLM. What is an LLM? Is that a cross between REM and an LLP? No, an LLM is a Large Language Model. How many people could offer even a basic, crude explanation of what a Generative Pre-Trained Transformer is, or indeed how it may differ from a Generative Trained Transformer or a Generative Post-Trained Transformer for that matter? That could leave many of us in the awkward situation where the only way to find out what ChatGPT really is would be to ask ChatGPT itself to tell us; but if it truly has sinister intentions towards us, how can we trust the answer it provides? The 2020s have been bad enough, what with the pandemic, Russia invading Ukraine and a cost-of-living crisis; now we have to worry about AI being RIP for humans.

According to OpenAI, the comparative start-up backed heavily by Microsoft, there is not much to be concerned about. In essence, what ChatGPT can do is search gigantic amounts of data at incredible speed and intelligently synthesise it to order, at whatever length is requested. This is impressive but not threatening, and in many senses could be liberating. What might take many hundreds or even thousands of person-hours, even with a search engine, can be achieved vastly faster and with confidence in its accuracy. This is not fantastic news if you are Google (which made a somewhat botched attempt to respond to the arrival of ChatGPT with its own rival, Bard, earlier this year), but the rest of us can relax. It is more War of the Words than War of the Worlds territory.

Yet if that were true, why are the founding fathers of AI turning up all over the place insisting that we impose some rules on their own creation? Their preferred blueprint seems to be an entity akin to the International Atomic Energy Agency (although its record in holding nuclear proliferation in check is far from perfect, as the likes of Iran, Pakistan and North Korea aptly illustrate). If all that ChatGPT involves is the capacity to produce compelling answers to enquiries such as “How do I make Apple Pie?” then we would not need anything at all like an IAEA to make sure that it did not later run riot.
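For anyone curious what that apple pie enquiry looks like when made programmatically rather than through the chat window, here is a minimal sketch in Python. It assumes the OpenAI client library as it stood in mid-2023 and an API key already in the environment; the model name and prompt wording are illustrative, not a recommendation.

    import os
    import openai

    # Assumes an OpenAI API key has been exported as OPENAI_API_KEY.
    openai.api_key = os.environ["OPENAI_API_KEY"]

    # One enquiry in, one synthesised answer out, at whatever length is requested.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative choice of model
        messages=[{"role": "user", "content": "How do I make Apple Pie? Answer in 150 words."}],
    )

    print(response.choices[0].message.content)

The point of the sketch is how little ceremony is involved: a handful of lines stands between any business and the technology this piece is worrying about.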

There is a strong sense that (a) we do not quite know what we have unleashed, (b) we do not know if there is a tipping point at which ChatGPT starts to ask itself questions rather than answer ours, and (c) even if it turns out to be essentially benign, we do not know how large an impact it will have on our society.

As an example, take recruitment. When hundreds of CVs come in for a single vacancy, it appears that AI could be used to avoid the tedium of having to read them all; told the most important criteria to look for, it might be able to come up with a shortlist at light speed. That sounds rather useful. But as the technology develops, might not ChatGPT come to its own view as to the salient qualities of the ideal candidate? Could it not determine that a human being was actually unnecessary in that position, and that a fellow machine would be far more effective and considerably more cost-efficient? On what basis would it be rational to reject that recommendation? Yet is AI capable of evaluating the subtler factors that come into employment interaction, such as a capacity for collegiality? Are we destined to delegate these decisions to a complex algorithm?
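To make that shortlisting scenario concrete, here is a hypothetical sketch along the same lines as the earlier example. The criteria, prompt wording and crude one-to-ten scoring scheme are all invented for illustration; notably, the subtler qualities just mentioned, such as collegiality, are precisely what a prompt like this cannot see.

    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    # Invented criteria for illustration; a real FD would supply their own.
    CRITERIA = "qualified accountant; five years' plc experience; has led a finance team"

    def score_cv(cv_text: str) -> int:
        """Ask the model to rate one CV against the criteria, from 1 to 10."""
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{
                "role": "user",
                "content": f"Rate this CV from 1 to 10 against these criteria: "
                           f"{CRITERIA}. Reply with the number only.\n\n{cv_text}",
            }],
        )
        # Naive parse: trusts the model to reply with a bare number.
        return int(response.choices[0].message.content.strip())

    def shortlist(cvs: list[str], top_n: int = 10) -> list[str]:
        """Sort hundreds of CVs by the model's score and keep the top few."""
        return sorted(cvs, key=score_cv, reverse=True)[:top_n]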

The argument that regulation is some form of Get Out Of Jail Free card certainly looks a suspect one. Exactly how is any IAIIA (International Artificial Intelligence Inspection Agency) to be established? Who is going to be clever enough to understand what the machines might or might not be capable of? Who will be able to stop the likes of Microsoft, Google and Meta (a.k.a. Facebook) aspiring to obtain a competitive advantage by experimenting with ever more cutting-edge versions of AI? What is to bar certain countries from engaging in an AI arms race to boost their geopolitical standing? Who can insist on “this far and no further”? Suppose the machines themselves ignore instructions?

The AI genie is, alas, too far out of the bottle to push back in now. In boardrooms across the UK over the next few months, one can confidently predict (even without the aid of ChatGPT) that there will be Chairs intoning to CEOs “What is our AI strategy then?”, with said CEOs turning to the FD and repeating “What is our AI strategy then?”. The FD will pivot to the CTO (if one exists) and try to pass the buck that way, but may find themselves landed with producing a paper on an AI strategy for the next meeting. No one will have a prayer of knowing how to compose such a thing, so the FD will buy time by asking a teenage child to come up with one, which they will probably do by hitting ChatGPT for ideas. This may (temporarily) make an FD look good, but if there is an AI problem then it will compound it.

The brutal truth is that we are not well placed to determine whether AI is a friend, a foe or a fad, but the corporate world will have to get up to speed on this one. There is certain to be a booming new industry of AI consultancies and AI conferences ready to extract hard cash from the likes of Walsall Widgets by offering (probably false) reassurance that the astute adoption of AI will save a ton of money (once their expensive fees have been dealt with). Talk about beware of Geeks bearing gifts. We will need some ChatFD concerning ChatGPT PDQ to ensure AI is a P&L asset and not a liability.
