
AI for Trading is Here: But ChatGPT is the Wrong Approach

Luke

Long before Asimov, I, Robot, ChatGPT, and Tesla, machinists of the European Renaissance built intricate “automatons”: life-size, doll-like machines that could write, play music, and mimic human behaviour.

Today, a fundamental change is happening in how people interact with financial information, and in how AI and LLMs are used. Increasingly, individuals are turning to ChatGPT not just for explanations, but for direction. Questions that would previously have gone to an advisor, a broker, or hours of research are now condensed into a single prompt: “What should I invest in?”

On the surface, this feels efficient. The answers are immediate, articulate, and often sound well reasoned. Indeed, data from organisations such as Pew Research Center and Statista suggests that a growing share of users are already using AI tools in this way, including for financial decision-making. You may have even tried it yourself (we know we have!).

But there is a fundamental issue here, and a dangerous one.

This type of interface creates a sense of capability that the underlying system simply does not have. ChatGPT is extremely capable at many things (and we like using it too!), but not as a financial advisor. Fundamentally, an LLM works in natural language rather than on quantitative data the way a purpose-built machine-learning model does, and more importantly, its output can be biased. Like an automaton, it shouldn't be confused with the real thing.

What ChatGPT actually is, and why that matters

At its core, ChatGPT is a language model. It has been trained on vast amounts of text and is exceptionally good at producing responses that are coherent, structured, and contextually relevant. That is its strength. However, that strength is also its limitation. It operates in the domain of language, not in the domain of financial modelling. It does not ingest structured price data and optimise decisions based on statistical relationships. It does not construct portfolios, evaluate risk in a quantitative sense, or test how a strategy would have performed over time.
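To make that contrast concrete, here is a minimal sketch of the kind of work a quantitative model does and a language model does not. It is illustrative only: the tickers and price series below are hypothetical and randomly generated, and inverse-volatility weighting is just one simple example of portfolio construction.

```python
import numpy as np
import pandas as pd

# Hypothetical daily closing prices for three assets; in practice these
# would come from a market-data feed rather than a random generator.
rng = np.random.default_rng(42)
dates = pd.bdate_range("2020-01-01", periods=500)
prices = pd.DataFrame(
    100 * np.exp(np.cumsum(rng.normal(0.0003, 0.01, (500, 3)), axis=0)),
    index=dates,
    columns=["AAA", "BBB", "CCC"],
)

# Ingest structured price data and derive returns.
returns = prices.pct_change().dropna()

# Estimate statistical relationships: annualised volatility and covariance.
vol = returns.std() * np.sqrt(252)
cov = returns.cov() * 252

# Construct a portfolio; weight each asset inversely to its volatility.
weights = (1 / vol) / (1 / vol).sum()

# Evaluate risk in a quantitative sense.
portfolio_vol = float(np.sqrt(weights @ cov @ weights))
print(weights.round(3))
print(f"Estimated portfolio volatility: {portfolio_vol:.2%}")
```

Every number in that pipeline is traceable back to data. A ChatGPT answer, by contrast, performs none of these steps.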

When you ask it for an investment view, what you are receiving is not a model output. It is a synthesis of what is commonly written and said about a particular topic. In effect, you are getting a refined version of the consensus. There is a subtle but important consequence to this. In markets, value is rarely found in consensus. If anything, consensus is where opportunities have already been priced in.

The problem with “summaries of summaries”

Most of the financial content that exists publicly is already one step removed from raw data. Analyst reports are interpreted by journalists, those interpretations are further simplified into articles and blog posts, and those in turn become part of the training data for language models. By the time that information is surfaced through ChatGPT, it has been abstracted multiple times. What you receive is not primary insight, but a compressed narrative of existing narratives.

This creates a kind of informational echo. It sounds informed, but it is ultimately derivative. From an investment perspective, that is not where edge comes from.

Bias is not removed, it is aggregated

Another issue that is often overlooked is bias.

Financial information in the public domain is not neutral. A significant portion originates from sell-side research, which operates within its own set of incentives.

Institutions have relationships to maintain, deals to win, and positions to manage. As a result, there is a well-documented skew towards positive recommendations. Bodies such as the Financial Conduct Authority have highlighted this imbalance over time.

When ChatGPT aggregates this information, it does not correct for that bias. It compounds it.

The result is a tendency to surface well-covered, widely discussed assets and reinforce prevailing narratives. Less visible opportunities, or views that sit outside the mainstream, are naturally underrepresented. Those with the deepest pockets, the biggest PR budgets, and the most analyst coverage drive the prevailing narratives.

The absence of evidence

Perhaps the most important limitation is the lack of empirical validation in a natural-language-based “summary of summaries”.

In any serious investment process, ideas are tested. You look at how a strategy would have performed historically. You examine volatility, drawdowns, and consistency across different market conditions. You stress-test assumptions.
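As a sketch of what those checks involve, assuming you already have a daily return series for a candidate strategy, the core measurements are only a few lines of pandas. The return series here is hypothetical and randomly generated; a real test would use the strategy's actual historical returns.

```python
import numpy as np
import pandas as pd

def evaluate_strategy(returns: pd.Series, risk_free: float = 0.0) -> dict:
    """Basic empirical checks for a daily strategy return series."""
    ann_return = (1 + returns).prod() ** (252 / len(returns)) - 1
    ann_vol = returns.std() * np.sqrt(252)
    sharpe = (ann_return - risk_free) / ann_vol  # risk-adjusted return

    # Maximum drawdown: worst peak-to-trough fall of the equity curve.
    equity = (1 + returns).cumprod()
    max_drawdown = (equity / equity.cummax() - 1).min()

    # Consistency: performance broken out by calendar year.
    yearly = (1 + returns).groupby(returns.index.year).prod() - 1

    return {
        "annual_return": ann_return,
        "annual_volatility": ann_vol,
        "sharpe": sharpe,
        "max_drawdown": max_drawdown,
        "yearly_returns": yearly.round(3).to_dict(),
    }

# Hypothetical example: five years of noisy daily returns with a small drift.
rng = np.random.default_rng(0)
index = pd.bdate_range("2015-01-01", periods=252 * 5)
daily = pd.Series(rng.normal(0.0004, 0.01, len(index)), index=index)
print(evaluate_strategy(daily))
```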

None of that exists in a ChatGPT response.

An answer may sound compelling, but it is not accompanied by a track record. There is no indication of how that idea would have performed over the last five or ten years, no measure of risk-adjusted return, and no evidence of robustness.

What you are left with is a narrative. And narratives, however well written, are not a substitute for data.

Why this leads to poor decision-making

The risk here is not just technical; it is behavioural and psychological.

The fluency of the responses creates an impression of authority. It is easy to mistake clarity for correctness. At the same time, because the outputs reflect broadly accepted views, they often align with what users already believe.

This combination is powerful and dangerous. It reinforces confirmation bias while giving the impression of independent validation.

In practice, this leads to capital being allocated based on what sounds reasonable, rather than what has been rigorously tested. Over time, that is not a sustainable way to operate in markets that are highly competitive and increasingly efficient.

Conclusions

None of this is to say that ChatGPT is not useful (we like it and use it for certain tasks!). It is extremely effective as a tool for summarisation and education.

If you want to break down a complex financial concept, get a high-level overview of a strategy, or quickly navigate a new area, it is genuinely valuable. It can accelerate learning and help structure thinking in a way that would otherwise take significantly longer.

The issue arises when it is used as a substitute for a proper investment process, rather than as an input into one.

While ChatGPT can be a great tool for summarisation and concepts, for building and testing quantitatively sound, backtested, transparent strategies, stick with the process in Q314. And to help your AI-powered quant strategy journey, your first 3 months are on us: https://q314.ai?ref=Q314READER