To Whom Does Capital Efficiency Belong?
What Monte Carlo simulations and capital efficiency in DeFi have in common
I wrote in my previous piece about how DeFi’s most straightforward value-add to the financial sector is interoperability and efficiency. But, as I noted, this promise won’t be fully realized until essential features of the financial sector are replicated in DeFi.
There is an old Jewish legend about a Roman who approached the great sage Hillel and said that he would like to convert to Judaism on the condition that he be taught the entire Torah while standing on one foot. Hillel responded, “What is hateful to you, do not do to your neighbor: that is the whole Torah while the rest is commentary; go and learn it.”
I propose a new version (a derivative, if you will) of this legend. A tradfi hedge fund manager approaches Edgar and says he’ll start trading on DEXs on the condition that he be taught DeFi’s value in one sentence–no highfalutin rhetoric or meaningless buzzwords. Edgar responds, “Atomic interoperability and capital efficiency: that is all of DeFi while the rest is commentary; go and MM it.”
In this essay I talk about portfolio margining and why it’s essential for the capital efficiency of institutions making heavy use of derivatives–MMs, HFTs, delta-neutral funds, etc. I also discuss the strange case of portfolio margining as a point of comparison between tradfi and DeFi: it’s difficult for tradfi because of poorly built abstractions that don’t track risk well, and it’s difficult for DeFi because of its immature ecosystem. While difficult to pull off, a robust portfolio margining risk engine could be DeFi’s killer app.
Portfolio margining is great and you should try it!
If you trade on FTX or Mango Markets, you’ve had experience trading in a cross-margined account. Anders wrote about this in depth in two excellent articles last year (here and here). Essentially, the net asset value (NAV) of your account (adjusted for each asset’s risk) is used as collateral for all of your margin positions combined. Note that this is actually equivalent to isolated margin if you make sure to instantly move collateral between margin positions whenever needed; cross-margining just automates that process for you. It’s generally quite convenient but doesn’t technically increase capital efficiency.
Portfolio margining increases capital efficiency by using more sophisticated methods of accounting for risk. Under the vanilla cross-margining system on FTX, for example, going long 5x on spot SOL and short 5x on SOL-PERP in the same account would lead the risk engine to discount your total NAV to 90% of its value due to the risk of holding SOL (a “haircut”)–despite the position being fully hedged. That’s a pretty substantial hit to the capital efficiency of, for example, a market-neutral fund running a leveraged basis trade strat across multiple exchanges.
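To make the arithmetic concrete, here is a minimal Python sketch of a haircut-based cross-margin calculation. The 10% SOL haircut, the prices, and the account sizes are invented for illustration and are not FTX’s actual parameters:

```python
# Toy sketch of a haircut-based cross-margin account (all numbers invented).
HAIRCUTS = {"USD": 0.00, "SOL": 0.10}  # assumed collateral weights, not a real exchange's

def collateral_value(balances, prices):
    """USD value of the account's holdings, each discounted by its haircut."""
    return sum(qty * prices[a] * (1 - HAIRCUTS[a]) for a, qty in balances.items())

prices = {"USD": 1.0, "SOL": 100.0}
# $100k of USD plus $500k of spot SOL, hedged with a $500k short SOL-PERP position.
balances = {"USD": 100_000, "SOL": 5_000}

print(collateral_value(balances, prices))
# -> 550000.0: the SOL leg still loses 10% of its value as collateral,
#    even though the perp short leaves the account with roughly zero SOL price exposure.
```

The hedge does nothing for you here: the haircut is applied per asset, with no awareness that the perp short cancels the spot long’s risk.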
Perps are of course the simplest derivative to account for when building a portfolio-margined risk engine. Dated options and futures contribute to an account’s overall risk profile in ways that are much harder to quantify cleanly.
Portfolio margining in tradfi
In the modern financial system (post-1988 for futures with the CME, post-2006 for options with the OCC), brokers expect leverage to be governed by portfolio margin rather than limited by naive, NAV-based margin.
I’ll go on a quick tangent to review how settlement, trade execution, and margining work in the financial sector, though my previous essay discusses this in more detail. All trades ultimately become the responsibility of a broker-dealer or futures commission merchant (FCM), regulated entities that are members of a clearinghouse. A clearinghouse, strictly speaking, is just a central book that matches everyone’s trades, but most clearinghouses also operate as central counterparties (CCPs). CCPs take on all counterparty risk and guarantee the integrity of every position and trade submitted to them; they’re thus responsible for calculating margin requirements, and they would blow up if those requirements didn’t suffice to liquidate a sufficiently large account in an orderly manner when necessary. Prominent CCPs in the US include the CME for futures, the OCC for options, and the NSCC for equities.
The CME developed a risk engine named SPAN for portfolio margining in 1988 that considers an account’s entire portfolio when calculating margin requirements for futures positions. The OCC followed suit with a similar risk engine named STANS in 2006 for options positions. These systems work by simulating the portfolio’s performance under a wide range of market conditions (STANS via full Monte Carlo simulation, SPAN via a standardized grid of risk scenarios), allowing anti-correlated assets in a portfolio to reduce its overall risk, or even allowing, say, a long volatility position in an asset’s options to reduce the volatility risk the model attributes to that asset. Note that this goes far beyond the simple example given above of longing SOL via spot and shorting it via perps: these risk engines take into account the correlation between, say, COMP and UNI, allowing a UNI short to reduce the risk attributed to a COMP position to some extent. This has clear and substantial benefits for strategies like pair trading.
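To give a flavor of how correlation-aware margining works, here is a toy Monte Carlo sketch of the general idea (not SPAN or STANS themselves): margin is set at a high quantile of simulated one-day losses, so an offsetting UNI short genuinely reduces the margin charged against a COMP long. The volatilities, the COMP/UNI correlation, the prices, and the 99% quantile are all assumed numbers:

```python
import numpy as np

def mc_margin(positions, prices, vols, corr, quantile=0.99, n_sims=100_000, seed=0):
    """Margin = high quantile of simulated one-day portfolio losses.

    positions: units held per asset (negative = short)
    vols:      assumed one-day return volatilities
    corr:      assumed return correlation matrix
    """
    rng = np.random.default_rng(seed)
    cov = np.outer(vols, vols) * corr
    returns = rng.multivariate_normal(np.zeros(len(vols)), cov, size=n_sims)
    pnl = returns @ (positions * prices)          # linear P&L approximation
    return np.quantile(-pnl, quantile)            # loss at the chosen quantile

prices    = np.array([60.0, 6.0])                 # [COMP, UNI], made-up prices
vols      = np.array([0.08, 0.08])                # assumed daily vols
corr      = np.array([[1.0, 0.8], [0.8, 1.0]])    # assumed COMP/UNI correlation

long_only = np.array([1_000.0, 0.0])              # $60k COMP long, unhedged
paired    = np.array([1_000.0, -10_000.0])        # same long, hedged with a $60k UNI short

print(mc_margin(long_only, prices, vols, corr))   # larger margin requirement
print(mc_margin(paired, prices, vols, corr))      # smaller: the short offsets much of the risk
```

Run it and the paired portfolio posts meaningfully less margin than the outright long, which is the whole point.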
To whom does capital efficiency belong?
Let’s now directly compare the capital efficiency of two risk engines: FTX’s and the CME’s SPAN.
SPAN follows a strictly more granular approach to accounting for risk than FTX. FTX only considers positions on its own exchange and doesn’t take the correlation of assets into account. It also slaps assets with simplistic, hard-and-fast haircuts: the risk engine discounts a user’s COMP holdings, for example, by 10%, and a user’s MOB holdings by 40%. This approach serves as a way to account for liquidity issues that may arise in tumultuous market conditions.
SPAN considers the client’s entire portfolio rather than just financial products held on its exchange, and allows additional positions to reduce the overall risk of an account due to their anti-correlation. These factors certainly lead to dramatically higher capital efficiency for many clients, ceteris paribus.
But all is of course not equal. These sophisticated portfolio margining risk engines are hamstrung by the embarrassingly primitive basic infrastructure of tradfi: long settlement times and no real-time liquidation. However granular its risk engine, a CCP would take days–extremely optimistically–to liquidate a counterparty. As a result, CCPs calculate collateral requirements with a stress test called 99% Expected Shortfall: the NAV of an account must remain positive even after each asset held in its portfolio falls in price by 99% (done via simulation, of course, so they wouldn’t consider the case of AAPL and a short position on AAPL both declining in value at once). The process of forcibly liquidating a counterparty would be so intractable and protracted that CCPs make sure it’ll never be urgent.
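To spell out just how conservative that is, here is a toy version of the check as described above: one consistent scenario in which every price falls 99% (so a short gains exactly what the corresponding long loses), with the account passing only if its NAV stays positive. The drawdown size and the example accounts are made up:

```python
# Toy version of the stress test described above: one consistent -99% price scenario.
# Positions are in units (negative = short); cash is unshocked. All numbers invented.

def passes_stress_test(cash, positions, prices, drawdown=0.99):
    """Does the account's NAV stay positive if every price falls by `drawdown`?"""
    shocked = {a: p * (1 - drawdown) for a, p in prices.items()}
    # P&L per asset: longs lose as prices fall, shorts gain by the same amount.
    pnl = sum(q * (shocked[a] - prices[a]) for a, q in positions.items())
    nav_before = cash + sum(q * prices[a] for a, q in positions.items())
    return nav_before + pnl > 0

prices = {"AAPL": 150.0}
print(passes_stress_test(cash=0, positions={"AAPL": 100}, prices=prices))        # True: unleveraged long keeps 1% of its value
print(passes_stress_test(cash=-7_500, positions={"AAPL": 100}, prices=prices))   # False: a 2x-leveraged long is wiped out
print(passes_stress_test(cash=20_000, positions={"AAPL": -100}, prices=prices))  # True: the short gains as the price falls
```

Under a test like this, almost any meaningfully leveraged long fails, no matter how cleverly the rest of the portfolio is constructed.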
I have a hard time not indulging myself by pointing out the synecdoche here. It’s a pattern that describes so many coordination failures in advanced economies. A loosely organized assortment of financiers, economists, businessmen, regulators, and academics over centuries constructed a soaring edifice of unparalleled exactitude and efficiency–the modern financial system–to allow people to trade even the most nuanced sources of risk and cashflow. This edifice reaches ever higher, compelling many of each generation’s greatest minds to lend a helping hand to the glorious project. Indeed, its siren call penetrates deep into the halls of the math and physics departments at Harvard and MIT; it may be prudent to rename Math 55 to Quant Finance 101.
Yet all that brilliance falls helpless and dumbstruck at the feet of T+2 settlement times and counterparty risk, never daring to challenge those constraints and devoting itself instead to ever more sophisticated models for statistical arbitrage. As a result, FTX’s almost insultingly simple risk engine likely outshines SPAN in the majority of cases–it doesn’t need to fear a 99% drawdown in every asset because it can liquidate well before then.
But, of course, wouldn’t it be nice to have the best of both worlds?
Protocol design: towards functional abstractions underneath CCPs
A brief aside to make sense of the higher-level concepts involved here.
Abstractions are very useful when building complex systems, as programmers know well. A complex program deals with a lot of the same difficulties as the financial system; each is a grand exercise in building a highly intricate system by composing individually simple components. Indeed, programming theorists have long understood the field’s relevance to far more than just computing. The authors of “Structure and Interpretation of Computer Programs” (SICP), an iconic computer science textbook focusing on functional programming and first published in 1985, wrote in its preface:
Underlying our approach to this subject is our conviction that ‘computer science’ is not a science and that its significance has little to do with computers. The computer revolution is a revolution in the way we think and in the way we express what we think. [...] Mathematics provides a framework for dealing precisely with notions of ‘what is.’ Computation provides a framework for dealing precisely with notions of ‘how to.’
To serve as a useful abstraction, each function should accomplish one task well. It should make the invariants of its input and output clearly legible. Program designers should avoid global mutable state whenever possible, or at the very least design a logically independent source of truth that functions can query and write to directly. The alternative, functions having to play a game of telephone with one another to access the current global state, leads to a ton of code complexity and bugs.
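A contrived Python sketch of the contrast: in the first style, each function hands its own copy of the state to the next one (the game of telephone); in the second, one ledger object is the single source of truth that everything queries and writes to. The names and the ledger design are invented for illustration:

```python
# Telephone style: every function receives and returns its own copy of the state,
# so any caller that forgets to thread the latest copy through works from stale data.
def settle_trade_telephone(balances: dict, buyer: str, seller: str, amount: float) -> dict:
    updated = dict(balances)
    updated[buyer] -= amount
    updated[seller] += amount
    return updated  # the caller must remember to pass this copy along, everywhere, forever

# Single-source-of-truth style: one ledger object owns the state; every component
# queries and mutates it directly, so there is nothing to drift out of sync.
class Ledger:
    def __init__(self, balances: dict):
        self._balances = dict(balances)

    def balance(self, account: str) -> float:
        return self._balances[account]

    def settle_trade(self, buyer: str, seller: str, amount: float) -> None:
        if self._balances[buyer] < amount:
            raise ValueError("insufficient balance")
        self._balances[buyer] -= amount
        self._balances[seller] += amount

ledger = Ledger({"alice": 100.0, "bob": 0.0})
ledger.settle_trade("alice", "bob", 40.0)
print(ledger.balance("alice"), ledger.balance("bob"))  # 60.0 40.0
```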
Let’s call this phenomenon state-wariness, describing the torturous difficulty of reliably tracking state. As SICP notes, “well-designed computational systems, like well-designed automobiles or nuclear reactors, are designed in a modular manner, so that the parts can be constructed, replaced, and debugged separately.”
Arthur Breitman, a functional programming enthusiast, recently made an unrelated argument on Twitter that I feel illustrates this way of thinking about abstractions quite well. He declared that, on the topic of transaction fees for NFTs, “the fact that it's not enforceable hints at something important, it's not capturing something meaningful, it doesn't build the right economical ontology.” This example isn’t about tracking state, but rather about enforceability. If you can’t easily tell your system what to do and have to wrestle with it instead, the internals of that system are probably up to all kinds of funny business. You probably shouldn’t build a complex system that relies on such a function as an important component.
Okay, now let’s get more concrete. CCPs are actually a really good idea and DeFi should implement something that looks an awful lot like them. In some ways, doing so is akin to defeating the “final boss” for DeFi: CCPs operate at perhaps the highest level of abstraction in finance. The NSCC’s parent company holds nearly all publicly issued US stock as custodian.
But CCPs are unbelievably crippled by the horribly constructed abstractions/functions that they try to abstract over and manage. They need to distrustfully peer into the internals of each function below them to make sure it’s not misstating its risk and is a rock-solid counterparty. They’re incredibly state-wary. They also can’t enforce their prescriptions well and, as a result, have to be unbelievably conservative.
For example, as I noted in my previous article, the CFTC requires that FCMs intermediate all futures transactions before they even get to the CCP. And of course, trades go through a broker before reaching the FCM. This could work quite well if everyone referred to a logically centralized source of truth, but since they pass variables to each other willy-nilly instead, it’s a ton of work to make sure that risk is contained. Indeed, I noted in the previous article that one response to FTX’s proposal to cut out FCMs from the process and send user trades directly to its own clearinghouse was that it “does not work when you cannot figure out what the actual risk is.” This is a refrain you hear from state-wary financial regulators all the time.
The consequences of CCPs’ inability to enforce their prescriptions easily are more obvious. The 99% Expected Shortfall stress test tells you all you need to know about how confident they are about their ability to liquidate a counterparty promptly, and the effects of that on capital efficiency.
Finance needs an efficient single source of truth and mechanism for swift enforcement, at every level of abstraction. When I say the “best of both worlds” in the concluding sentence of the previous section, I mean using DeFi protocols to accomplish these goals–it’s what they’re best at. Most financial activity will remain off-chain but everything should refer back to a single on-chain source of truth and arbitration, where risk is seamlessly tracked through countless intermediaries and liquidations can be triggered swiftly and reliably.
Right now, DeFi is inward-looking and introverted; it needs to look outward and demonstrate its value-add to the mainstream financial sector. Many DeFi pioneers have created new, programmable versions of financial primitives that serve as far better building blocks than offerings in tradfi. They’re better functions. The ultimate test of this is to build a programmable, transparent clearinghouse (along with a robust portfolio margining risk engine) for DeFi and see what sort of zero-to-one improvements result from it.
Portfolio margining on-chain
The astute reader will have already wondered how a protocol could possibly execute portfolio margining on-chain. An analog of SPAN in DeFi would face the following challenges:
Calculation of risk via Monte Carlo simulation of a portfolio’s performance during market shocks, based on historical data, cannot be done via smart contracts.
Not everything can be done atomically on-chain: liquidation auctions and RFQ systems need to exist, and it’s tricky to build them in a permissionless way.
Cross-margining across protocols exposes the risk engine to a smart contract exploit in any of the protocols included. A good abstraction would define and securitize this risk.
Granular risk evaluation relies on deep, liquid markets (e.g. options markets from which to infer an asset’s implied volatility), and markets of this depth are still uncommon in DeFi.
The first problem requires one of two types of solutions: simply centralize that calculation by using an oracle (as most of DeFi has done with Chainlink and Pyth), or design another mechanism that can work almost as well while remaining fully on-chain (like what Uniswap v3 did for order books on Ethereum). The second and third problems necessitate careful design but are certainly tractable: on-chain protocols should interface with off-chain financial institutions and activities.
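As a rough sketch of that first, oracle-style option (a design sketch under my own assumptions, not any existing protocol’s API): the heavy simulation stays off-chain, and the on-chain side only stores the latest attested margin requirement per account plus the trivial comparison that triggers a liquidation. The staleness bound, names, and data shapes are all assumptions:

```python
import time
from dataclasses import dataclass

MAX_STALENESS_SECONDS = 60  # assumed: posted risk numbers expire quickly

@dataclass
class MarginReport:
    account: str
    required_margin: float   # computed off-chain, e.g. by a Monte Carlo engine like the one above
    posted_at: float         # unix timestamp when the oracle posted the value

class OnChainRiskModule:
    """Stand-in for the on-chain side: store posted requirements, decide liquidations."""

    def __init__(self):
        self._reports: dict[str, MarginReport] = {}

    def post_report(self, report: MarginReport) -> None:
        # A real protocol would verify the oracle's signature / authority here.
        self._reports[report.account] = report

    def should_liquidate(self, account: str, collateral_value: float) -> bool:
        report = self._reports.get(account)
        if report is None or time.time() - report.posted_at > MAX_STALENESS_SECONDS:
            # No fresh risk number: fall back to conservative behavior (here: flag the account).
            return True
        return collateral_value < report.required_margin

module = OnChainRiskModule()
module.post_report(MarginReport("trader-1", required_margin=7_000.0, posted_at=time.time()))
print(module.should_liquidate("trader-1", collateral_value=10_000.0))  # False
print(module.should_liquidate("trader-1", collateral_value=5_000.0))   # True
```

The design choice worth noticing is that the chain never re-runs the simulation; it only checks freshness and a single inequality, which is exactly the kind of enforcement that on-chain logic does swiftly and reliably.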
I view the fourth problem as by far the thorniest. It’s a special case of the more general chicken-and-egg problem of bootstrapping liquidity for sophisticated instruments in DeFi.
I will argue in my next article, though, that improved on-chain risk engines are a uniquely effective tactic in the war to bootstrap DeFi and will likely see an unexpectedly steep adoption curve.