The human factor in an algorithmic world

While some believe artificial intelligence (AI) and machine learning (ML) can replace human intellect, creativity, and perspective, the real objective of AI/ML is typically to identify patterns in a high-dimensional space. ML algorithms search through enormous data sets for patterns and correlations; a scoring function then assesses how well the model's interpretation fits the data. Just as humans pick up new skills through repeated experimentation, responding to their touch, smell, taste, and other senses, AI models run a similar exercise, but with sophisticated hardware and numbers in place of sensory feedback. Throughout this learning process, it is critical to remember that a high correlation does not always imply a causal relationship; yet any strong correlation, causal or not, will produce high model weights, and that can be a significant source of error.
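To make that last point concrete, here is a minimal sketch using synthetic data (not any dataset discussed in this book): a target series and an unrelated "spurious" feature that merely share a time trend. A least-squares fit, the kind of scoring a learning algorithm optimizes, still hands the spurious feature a large weight and reports an excellent in-sample score.

```python
import numpy as np

# Hedged sketch with purely synthetic data: a least-squares "learner" rewards
# a feature that is correlated with the target but not causally related to it.
rng = np.random.default_rng(0)
t = np.arange(200)

# Target series: driven purely by a time trend plus noise.
target = 0.05 * t + rng.normal(0.0, 0.5, t.size)

# Spurious feature: an unrelated series that happens to share the trend.
spurious = 0.03 * t + rng.normal(0.0, 0.5, t.size)

# Ordinary least squares with an intercept.
X = np.column_stack([np.ones_like(t, dtype=float), spurious])
w = np.linalg.lstsq(X, target, rcond=None)[0]

# In-sample R^2 -- the "scoring function" looks excellent even though the
# feature has no causal connection to the target at all.
resid = target - X @ w
r2 = 1.0 - resid.var() / target.var()

print(f"weight on spurious feature: {w[1]:.2f}, in-sample R^2: {r2:.2f}")
```

Nothing in the fit itself flags the problem; only a human asking "why would this input drive that output?" can.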

Human intervention is advised whenever a trading model produces a profitable algorithm through backtesting on historical data. Generally speaking, a real person with knowledge of market dynamics should be able to understand why a trading model is behaving as it is. Before real money is committed, it is essential to manually inspect the correlation weights and work out what makes sense or, more importantly, what doesn't, about what the model “thinks”.

Choosing a model's data inputs from the perspective of causation is typically a human responsibility. Done correctly, it can yield some highly fascinating and lucrative outcomes. When a high degree of correlation is discovered, a heavily data-driven model is effectively trained to scrutinize those data points more thoroughly. This can surface underlying patterns that lead to profitability, but it can also be a significant source of error, because correlation does not imply causation. A few amusing examples are illustrated below. Although none of these are applied to any datasets in Staga, they illustrate how a live model might develop a strong bias in favor of such correlations, influencing its behavior and most likely producing errors over the long term.
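Spurious correlations of this kind are easy to manufacture. The sketch below uses two entirely made-up series (hypothetical monthly railroad incident counts and an energy futures price; neither is real data) that share nothing but a gradual drift over time, yet the Pearson correlation between them comes out very strong.

```python
import numpy as np

# Hypothetical, purely illustrative series -- not real data: monthly railroad
# incident counts and an energy futures price that both happen to drift down.
rng = np.random.default_rng(42)
months = np.arange(120)

railroad_incidents = 80.0 - 0.4 * months + rng.normal(0.0, 3.0, months.size)
futures_price = 50.0 - 0.2 * months + rng.normal(0.0, 2.0, months.size)

# The Pearson correlation is driven almost entirely by the shared drift,
# not by any causal link between the two quantities.
r = np.corrcoef(railroad_incidents, futures_price)[0, 1]
print(f"correlation: {r:.2f}")
```

A model fed both series would happily treat one as a predictor of the other, which is exactly the trap described above.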


Even though the correlations illustrated above may reflect some underlying relational factor, they are an important reminder to isolate trading models from such indirect relationships and to ensure that shared trends and other market complexities are removed from the picture.
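One common safeguard, sketched here on synthetic data rather than any method from this book, is to first-difference the series before measuring correlation. Differencing strips out a shared time trend, and a "relationship" that existed only because of that trend largely vanishes.

```python
import numpy as np

# Sketch on synthetic data: two unrelated series that share only a drift.
rng = np.random.default_rng(7)
t = np.arange(250)

series_a = 0.05 * t + rng.normal(0.0, 0.5, t.size)
series_b = 0.03 * t + rng.normal(0.0, 0.5, t.size)

# Raw correlation is inflated by the common trend; after first-differencing
# (period-over-period changes), the apparent relationship mostly disappears.
raw_corr = np.corrcoef(series_a, series_b)[0, 1]
diff_corr = np.corrcoef(np.diff(series_a), np.diff(series_b))[0, 1]

print(f"raw: {raw_corr:.2f}, after differencing: {diff_corr:.2f}")
```

If a correlation survives detrending, it is at least worth a closer human look; if it doesn't, the model was about to learn a trend, not a relationship.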

Can you picture trading energy futures in real time while keeping an eye on railroad mishaps? If the decision were left entirely to a computer, such a correlation could heavily influence its choices. When building complicated trading models, it is of the utmost importance for humans to step back and say no to indirect causal relationships. This critical lens is an essential component of running a profitable trading desk, where human oversight is still required, and arguably vital.

