Artificial intelligence (AI) has now closely matched, and even surpassed, humans in areas that were once considered unattainable. These include chess, arcade games, Go, self-driving cars, protein folding and much more. This rapid technological progress has also had a huge effect on the financial services industry. More and more CEOs in the sector declare (explicitly or implicitly) that they run “technology firms with a banking licence”.



There has also been a rapid emergence and growth of the financial technology industry (fintech), where technology startups increasingly challenge established financial institutions in areas such as retail banking, pensions or personal investments. As such, AI often appears in behind-the-scenes processes such as cybersecurity, anti-money laundering, know-your-client checks or chatbots.

Among so many success stories, one seems conspicuously absent: AI making money in financial markets. While simple algorithms are commonly used by traders, machine learning or AI algorithms are far less common in investment decision-making. But as machine learning is based on analysing huge data sets and finding patterns in them, and financial markets generate enormous amounts of data, it would seem an obvious match. In a new study, published in the International Journal of Data Science and Analytics, we have shed some light on whether AI is any better than humans at making money.

Some specialist investment firms called quant (which stands for “quantitative”) hedge funds claim that they employ AI in their investment decision-making process. However, they don’t release official performance information. Also, despite some of them managing billions of dollars, they remain niche and small relative to the size of the wider investment industry.

On the other hand, academic research has repeatedly reported highly accurate financial forecasts based on machine-learning algorithms. These could in theory translate into highly successful mainstream investment strategies for the financial industry. And yet, that doesn’t appear to be happening.

What is the reason for this discrepancy? Is it entrenched manager culture, or is it something to do with the practicalities of real-world investing?

AI’s financial forecasts

We analysed 27 peer-reviewed studies by academic researchers published between 2000 and 2018. These describe different kinds of stock market forecasting experiments using machine-learning algorithms. We wanted to determine whether these forecasting techniques could be replicated in the real world.

Our first observation was that most of the experiments ran multiple versions (in extreme cases, up to hundreds) of their investment model in parallel. In almost all cases, the authors presented their highest-performing model as the primary product of their experiment – meaning the best result was cherry-picked and all the sub-optimal results were ignored. This approach would not work in real-world investment management, where any given strategy can be executed only once, and its result is an unambiguous profit or loss – there is no undoing of results.

Running multiple variants, and then presenting the most successful one as representative, would be misleading in the finance sector and possibly regarded as illegal. For example, if we run three variants of the same strategy, with one losing 40%, another losing 20%, and the third gaining 20%, and then only showcase the 20% gain, this single result clearly misrepresents the performance of the fund. Just one version of an algorithm should be tested, which would be representative of a real-world investment setup and therefore more realistic.
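This selection effect is easy to demonstrate. The sketch below (our own hypothetical illustration, not taken from any of the reviewed papers) simulates 100 variants of a strategy with no skill at all – each variant’s annual return is pure noise – and compares the single best variant with the average across all of them:

```python
import random

random.seed(42)

def run_strategy_variant():
    # Hypothetical: each variant's annual return is pure noise,
    # drawn from a distribution centred on zero (no real skill).
    return random.gauss(mu=0.0, sigma=0.20)

# Run 100 variants of the "same" strategy in parallel.
results = [run_strategy_variant() for _ in range(100)]

best = max(results)                    # the cherry-picked headline number
average = sum(results) / len(results)  # what the strategy really delivers

print(f"Best variant:    {best:+.1%}")
print(f"Average variant: {average:+.1%}")
```

Even with zero genuine skill, the cherry-picked “best” of 100 variants is almost always strongly positive, while the average – the honest measure – hovers around zero.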

Models in the papers we reviewed achieved a very high level of accuracy, about 95% – a mark of tremendous success in many areas of life. But in market forecasting, if an algorithm is wrong 5% of the time, it can still be a real problem. It may be catastrophically wrong rather than marginally wrong – wiping out not only the profit, but the entire underlying capital.
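To see why, consider a toy calculation with assumed numbers of our own (not taken from the reviewed models): a strategy that is right 95% of the time, earning 1% on each correct trade, but losing 40% on each of the rare wrong ones:

```python
# Toy illustration: 100 trades, 95% forecast accuracy.
# Assumed payoffs: +1% per correct trade, -40% per wrong trade.
capital = 1.0
for trade in range(100):
    if trade % 20 == 19:         # 5 of the 100 trades are wrong
        capital *= 1 - 0.40      # a catastrophic miss
    else:
        capital *= 1 + 0.01      # a small correct gain

print(f"Final capital: {capital:.2f}")
```

With these assumed payoffs, the 95%-accurate strategy ends up losing roughly 80% of its starting capital: the asymmetry between many small wins and a few catastrophic losses dominates the headline accuracy figure.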

[Image: Traders don’t use AI much. Rawpixel.com/Shutterstock]

We also noted that most AI algorithms appeared to be “black boxes”, with no transparency on how they worked. In the real world, this is unlikely to inspire investors’ confidence. It is also likely to be an issue from a regulatory perspective. What’s more, most experiments did not account for trading costs. Though these have been decreasing for years, they are not zero, and could make the difference between profit and loss.
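A small sketch (again with assumed, illustrative figures) shows how a marginally profitable signal can flip to a loss once even a modest per-trade cost is subtracted:

```python
# Hypothetical per-trade returns from a marginally profitable signal,
# repeated over 250 trades (roughly one trading year, daily trades).
gross_returns = [0.002, -0.001, 0.0015, -0.0005, 0.001] * 50
cost_per_trade = 0.001  # assumed 0.1% round-trip trading cost

gross = 1.0  # growth of 1 unit of capital, ignoring costs
net = 1.0    # growth of 1 unit of capital, after costs
for r in gross_returns:
    gross *= 1 + r
    net *= 1 + r - cost_per_trade

print(f"Gross growth: {gross:.3f}")
print(f"Net growth:   {net:.3f}")
```

Before costs the signal compounds to a healthy gain; after a 0.1% cost per trade, the same signal loses money – the edge was smaller than the friction.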

None of the experiments we looked at gave any consideration to current financial regulations, such as the EU directive MiFID II, or to business ethics. The experiments themselves did not engage in any unethical activities – they did not seek to manipulate the market – but they lacked a design feature explicitly ensuring that they were ethical. In our view, machine learning and AI algorithms in investment decision-making should observe two sets of ethical standards: making the AI itself ethical, and making the investment decision-making ethical, factoring in environmental, social and governance considerations. This would stop the AI from investing in companies that may harm society, for instance.

All this means that the AIs described in the academic experiments were unfeasible in the real world of the financial industry.

Are humans better?

We also wanted to compare the AI’s achievements with those of human investment professionals. If AI could invest as well as or better than humans, that could herald a huge reduction in jobs.

We found that the handful of AI-powered funds whose performance data were disclosed on publicly available market data sources generally underperformed the market. As such, we concluded that there is currently a very strong case in favour of human analysts and managers. Despite all their imperfections, the empirical evidence strongly suggests humans are currently ahead of AI. This may be partly due to the efficient mental shortcuts humans take when we have to make rapid decisions under uncertainty.

In the future this may change, but we still need evidence before switching to AI. And in the immediate future, we believe that, instead of pitting humans against AI, we should combine the two. This would mean embedding AI in decision-support and analytical tools, but leaving the ultimate investment decision to a human team.

This article was originally published at theconversation.com