Blue Horseshoe Picks

Issue #3 • November 16, 2020

A curated collection of my most recent reads and stumblings.
Covering finance, algorithmic trading, ML, crypto, Go, and software engineering.

Hi, everyone – it's that time of the month (fine, year) again when I email blast you another newsletter in the hopes some investment banker somewhere spots it (and gives me a job) and Hasbro does not. And no better time, I figure, in light of the resurgent pandemic, stalling stimulus talks, and mercurial stock market. But worry not! Together we will navigate these troubled waters, or die trying.

Also, apologies in advance for the mixture of old and new articles. I had originally started writing this issue in the Before Time, prior to the pandemic, and discovered some really great content that I think you'll still find relevant today.

As always, replies are open to both feedback and article submissions, so please give me a shout!  – Lane

Lessons learned building an ML trading system that turned $5k into $200k
Tradientblog.com

An older article, but one of my favorites, and one that truly embodies all of the topics Blue Horseshoe Picks is in search of. If you're interested in algorithmic trading, as I am, but only have time to read a single article, let it be this one.

Building a more accurate time service at Facebook scale
Oleg Obleukhov • Facebook Engineering

Any software engineer will tell you accurate timekeeping is important. A high-frequency trader will tell you it's mission-critical.1

From Facebook's own engineering blog, this article is a fascinating look into how a global-scale social media company keeps the clocks of its server fleet accurate to within hundreds of microseconds (or less in some cases). And I learned something new for my next project: ditch ntpd for chrony! I also recommend checking out their open-source NTP tools (written in Go! 😍) if you're serious about timekeeping in your own projects.
1 It's worth noting that when it comes to high-frequency trading, the goal is usually less about having accurate time than about having time consistent with your counterparties (e.g. the broker). Generally this means synchronizing with the same NTP servers your counterparties are using.
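
If you want to try the ntpd→chrony swap yourself, a minimal /etc/chrony.conf might look something like the sketch below. The server choice and step thresholds here are my own illustrative picks, not anything from Facebook's production config:

```
# Sync against Facebook's public NTP service (swap in your broker's
# servers if counterparty consistency is what you're after).
server time.facebook.com iburst

# Step the clock instead of slewing if we're off by more than 1 second,
# but only during the first 3 updates after startup.
makestep 1.0 3

# Keep the hardware RTC in sync with the system clock.
rtcsync
```

Run `chronyc tracking` afterward to see your current offset and drift.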

Building AI Trading Systems
Denny Britz • dennybritz.com

Building systematic trading systems is my favorite thing to talk about, and adding AI into the mix is just icing on the cake. This article isn't a how-to list of instructions for getting rich quick, however. No such thing exists. But it is a collection of lessons and wisdom from someone who – ostensibly – made money doing that very thing, and it's remarkably consistent with my own experience. In particular, I think Britz's most important piece of advice is this: "specialization is often where your advantage in trading comes from." ML models are commoditized; data and infrastructure are not.

P.S. Take a peek at the comments section. I found the discussion almost as valuable as the article itself.

SPY Short Call 45 DTE Cash-Secured Options Backtest
Spintwig.com

An older article, but new to me, and brought to us by the fine folks over at Spintwig. Don't let the title deter you: the basic premise of the research is the question 'if I consistently sell covered call options, will I make money over time?'. I think this article is an important read, because THe anSWer may SURpRISe YoU. OK, but seriously, the answer is no: the research reveals that over time this strategy actually loses money – an important revelation, in my opinion.
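
For anyone new to the strategy, here's a quick sketch of the covered-call payoff at expiration: you keep the premium no matter what, but your upside is capped at the strike. The prices below are invented for illustration, not taken from the Spintwig backtest.

```go
package main

import "fmt"

// coveredCallPnL returns the per-share profit/loss at expiration for a
// covered call: long stock bought at entry, short a call at strike, with
// premium collected up front.
func coveredCallPnL(entry, strike, premium, expiry float64) float64 {
	stockPnL := expiry - entry
	if expiry > strike {
		// Shares get called away at the strike; upside beyond it is forgone.
		stockPnL = strike - entry
	}
	return stockPnL + premium
}

func main() {
	// Stock bought at $300, $310 call sold for $4.
	fmt.Println(coveredCallPnL(300, 310, 4, 305)) // expires below strike: keep stock gain + premium
	fmt.Println(coveredCallPnL(300, 310, 4, 350)) // big rally: gain capped at strike + premium
	fmt.Println(coveredCallPnL(300, 310, 4, 280)) // drawdown: premium only softens the loss
}
```

The last case is the crux of the backtest's finding: the premium collected is small relative to the downside you're still fully exposed to.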

Humans Who Are Not Concentrating Are Not General Intelligences
Sarah Constantin • srconstantin.wordpress.com

With the recent release of GPT-3, I want to take a look back at this article from the GPT-2 era when the world first realized Artificial General Intelligence (AGI) was not as far off as we had thought. Although the premise is simple, this article has a lot to offer both in terms of understanding the limitations of even our most sophisticated, state-of-the-art language models as well as how much we've yet to decipher about the depths of human intelligence. When it comes to models like GPT-3, I believe there is a necessary realization that their entire model of the world is language, and language alone is not understanding. Similarly, the author points out that without concentration, humans' reading comprehension is almost indistinguishable from that of these models. It leaves me wondering – how can we bridge the gap to achieve AI that can reason about the world in the way that we do?

Renaissance hedge fund loses 20% this year
Laurence Fletcher & Ortenca Aliaj • Financial Times

Much to Dave Portnoy's delight, the famous hedge fund founded by Jim Simons – which manages over $110bn and has enjoyed an annualized return of 66% (before fees) since 1988 – had reportedly lost nearly 21% in 2020 through June. That said, I think this all deserves a large grain of salt, considering Renaissance's other flagship fund, the famed Medallion fund, was up 39% YTD as of April.

Quants Discover A Guaranteed Source Of Alpha: Just Trade Based On The Growth Of The Fed's Balance Sheet
Tyler Durden • ZeroHedge

To the shocked silence of no one, quant researchers from Société Générale have found that systematically tracking the Fed's balance sheet may be a consistent generator of alpha. I mean, it's not like Jerome Powell literally prints money or anything. The researchers estimate that their strategy would add about 2.5% in incremental returns over their portfolio benchmark, but only modestly bump its Sharpe ratio.

Whitepapers

Language Models are Few-Shot Learners
Brown, et al.

Of course I had to include the GPT-3 whitepaper. What's most impressive about OpenAI's latest language model release is that it excels in tasks for which it was never trained, having been given instructions only through text interaction with the model. These "few-shot" tasks evaluated by the researchers included translation, question-answering, unscrambling words, and 3-digit arithmetic, among others.
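
To make "few-shot" concrete: the prompts in the paper are just plain text containing a handful of worked examples followed by an unfinished one for the model to complete, along these lines (the exact pairs below are my illustrative rendering of the paper's translation task):

```
Translate English to French:

sea otter => loutre de mer
peppermint => menthe poivrée
cheese =>
```

No gradient updates, no fine-tuning – the model infers the task purely from the pattern in the prompt.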

Cool Github Discoveries

facebookincubator/ntp
A collection of Facebook's NTP libraries.

hashicorp/memberlist
'Member? I 'member. Memberlist is a minimal, reusable Go library for managing cluster membership and failure detection, built on the SWIM gossip protocol. I've yet to try it out myself, but this could be massively useful for building out your own distributed systems.

deepmind/reverb
Reverb is an experience replay system built and open-sourced by DeepMind. DeepMind claims the design is not only performant and easy to use, but scales to meet the demands of a large distributed reinforcement learning system. I have noticed some downsides, however. The first is that Reverb scales horizontally through sharding, but seems to have no support for data replication across those shards, meaning the operator assumes some risk of data loss. The other is that while Reverb supports checkpointing (a good thing), it's a stop-the-world operation that blocks all incoming insert, sample, update, and delete queries while it runs.

Blue Horseshoe Picks by Lane Shetron
Flint, MI USA
Delivered by
TinyLetter