<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Stoic’s Substack: Lectures]]></title><description><![CDATA[Our lecture series dives deep into the design, testing, and refinement of systematic strategies, grounded in the same principles that guide our research. We embrace uncertainty, building models that are resilient to change rather than reliant on prediction. We follow disciplined, falsifiable processes that value method over outcome, ensuring every idea is tested and trusted. And we treat iteration as proof, continuously challenging our assumptions to sharpen strategies over time.

These lectures aren’t about chasing trends — they’re about cultivating a robust, scientific framework for thinking, testing, and improving in the markets.]]></description><link>https://stoicresearch.substack.com/s/lectures</link><image><url>https://substackcdn.com/image/fetch/$s_!VWni!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92039e07-93fd-40f0-9ce9-0b3b3bf41f1e_1024x1024.png</url><title>Stoic’s Substack: Lectures</title><link>https://stoicresearch.substack.com/s/lectures</link></image><generator>Substack</generator><lastBuildDate>Tue, 14 Apr 2026 19:51:12 GMT</lastBuildDate><atom:link href="https://stoicresearch.substack.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Stoic Research & Technology]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[stoicresearch@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[stoicresearch@substack.com]]></itunes:email><itunes:name><![CDATA[Stoic Research & Technology]]></itunes:name></itunes:owner><itunes:author><![CDATA[Stoic Research & Technology]]></itunes:author><googleplay:owner><![CDATA[stoicresearch@substack.com]]></googleplay:owner><googleplay:email><![CDATA[stoicresearch@substack.com]]></googleplay:email><googleplay:author><![CDATA[Stoic Research & Technology]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[The Unified Framework for Sequential Decisions - Stochastic Control Lecture 2]]></title><description><![CDATA[Model First, Then Solve It]]></description><link>https://stoicresearch.substack.com/p/the-unified-framework-for-sequential</link><guid isPermaLink="false">https://stoicresearch.substack.com/p/the-unified-framework-for-sequential</guid><dc:creator><![CDATA[Stoic Research & Technology]]></dc:creator><pubDate>Thu, 11 Sep 2025 06:39:52 GMT</pubDate><enclosure 
url="https://substack-post-media.s3.amazonaws.com/public/images/1a8ac85d-2a35-41eb-bc4d-d079fec67434_2880x1639.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Every day, organizations and individuals face decisions that unfold over time: managing inventory, pricing goods in a volatile marketplace, scheduling flights, or even training an AI agent. These aren&#8217;t just isolated &#8220;yes/no&#8221; choices. They are <strong>sequential decisions</strong>, where each action reshapes the future landscape of possibilities.</p><p>What makes these problems challenging is that they combine several difficult ingredients: <strong>uncertainty</strong>, <strong>dynamics</strong>, <strong>interdependence</strong>, and <strong>optimization over time</strong>. The good news is that there exists a <strong>unified framework</strong> that allows us to model&#8212;and eventually solve&#8212;these problems, regardless of the specific application.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://stoicresearch.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Stoic&#8217;s Substack is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>The core principle is simple but non-negotiable:</p><p>&#128073; <strong>Model first, then solve it.</strong></p><p>Skipping modeling is like trying to build a skyscraper without a blueprint. 
Algorithms&#8212;no matter how powerful&#8212;are useless unless they are grounded in a clear description of states, decisions, uncertainty, and objectives.</p><div><hr></div><h2>The Anatomy of a Sequential Decision Problem</h2><p>At the heart of this framework are six key building blocks.</p><h3>1. State </h3><div class="latex-rendered" data-attrs="{&quot;persistentExpression&quot;:&quot;(S_t \\in \\mathcal{S})&quot;,&quot;id&quot;:&quot;JTYFWCXMMB&quot;}" data-component-name="LatexBlockToDOM"></div><p>The <strong>state</strong> captures everything we need to know at time <em>t</em> to make a good decision going forward. It is a compressed representation of history: instead of remembering the entire sequence of past events, we summarize it in a state variable.</p><ul><li><p><strong>Physical state:</strong> Tangible variables that describe the environment. Example: current inventory level.</p></li><li><p><strong>Information state:</strong> Observable signals such as prices, costs, or competitor actions.</p></li><li><p><strong>Belief state:</strong> Probabilistic estimates of hidden or uncertain factors. For example, if we do not directly observe demand, our belief about demand follows a distribution updated over time.</p></li></ul><p>&#128161; <strong>Key Point:</strong> A well-defined state includes <em>all relevant information</em> but excludes <em>irrelevant history</em>. Latent variables&#8212;hidden drivers of uncertainty&#8212;do influence outcomes but are <em>not</em> part of the state if they cannot be observed or inferred.</p><div><hr></div><h3>2. 
Decisions and Policies</h3><p>A <strong>decision</strong> <em>x_t</em> is the action taken at time <em>t</em>.</p><p>A <strong>policy</strong> &#960; is a rule that maps states to actions:</p><div class="latex-rendered" data-attrs="{&quot;persistentExpression&quot;:&quot;\\pi: S_t \\mapsto x_t&quot;,&quot;id&quot;:&quot;SEAXMWUCUW&quot;}" data-component-name="LatexBlockToDOM"></div><ul><li><p>In inventory, the decision is &#8220;how much to order.&#8221;</p></li><li><p>In pricing, it is &#8220;what price to set.&#8221;</p></li><li><p>In ride-sharing, it might be &#8220;where to send a driver.&#8221;</p></li></ul><p>Policies can be deterministic (&#8220;always reorder up to level <em>S</em>&#8221;) or stochastic (&#8220;randomize prices according to a distribution&#8221;).</p><div><hr></div><h3>3. Exogenous Information </h3><div class="latex-rendered" data-attrs="{&quot;persistentExpression&quot;:&quot;W_{t+1}&quot;,&quot;id&quot;:&quot;SINTFOFXTA&quot;}" data-component-name="LatexBlockToDOM"></div><p>Uncertainty enters after a decision is made. This is the <strong>exogenous information</strong>:</p><ul><li><p>Customer demand revealed tomorrow.</p></li><li><p>Competitor&#8217;s pricing next week.</p></li><li><p>A machine breaking down unexpectedly.</p></li></ul><p>The exogenous process evolves according to some probability law, which may be partially known (model-based) or entirely unknown (model-free).</p><div><hr></div><h3>4. 
Transition Function</h3><p>The transition function glues states, decisions, and uncertainty together:</p><div class="latex-rendered" data-attrs="{&quot;persistentExpression&quot;:&quot;S_{t+1} = S^M(S_t, x_t, W_{t+1})&quot;,&quot;id&quot;:&quot;BBHGUZSIXI&quot;}" data-component-name="LatexBlockToDOM"></div><p></p><p>It describes how today&#8217;s choice and tomorrow&#8217;s randomness determine the next state.</p><ul><li><p>In inventory:</p><div class="latex-rendered" data-attrs="{&quot;persistentExpression&quot;:&quot;\\text{Inventory}_{t+1} = \\max\\{0, \\text{Inventory}_t + \\text{Order}_t - \\text{Demand}_{t+1}\\}&quot;,&quot;id&quot;:&quot;XKSPHWZAON&quot;}" data-component-name="LatexBlockToDOM"></div><p></p></li><li><p>In pricing: tomorrow&#8217;s market share depends on today&#8217;s price, competitor&#8217;s price, and consumer preferences.</p></li></ul><p>This function is the &#8220;physics&#8221; of the system: it dictates how reality unfolds.</p><div><hr></div><h3>5. Contribution (Reward)</h3><p>Each time step brings an <strong>immediate contribution</strong>:</p><div class="latex-rendered" data-attrs="{&quot;persistentExpression&quot;:&quot;C_t(S_t, x_t, W_{t+1})&quot;,&quot;id&quot;:&quot;FOFNEOFSJV&quot;}" data-component-name="LatexBlockToDOM"></div><p></p><p>This is often a profit (revenues minus costs), but it could also be utility, service quality, or negative costs.</p><p>Example (inventory):</p><div class="latex-rendered" data-attrs="{&quot;persistentExpression&quot;:&quot;C_t = \\text{Revenue from sales} - \\text{Ordering cost} - \\text{Storage cost}&quot;,&quot;id&quot;:&quot;DDWXQMDIZL&quot;}" data-component-name="LatexBlockToDOM"></div><p></p><div><hr></div><h3>6. 
Objective Function</h3><p>Finally, everything ties together in the <strong>objective</strong>: maximize cumulative contributions.</p><div class="latex-rendered" data-attrs="{&quot;persistentExpression&quot;:&quot;\\max_{\\pi} \\ \\mathbb{E}\\left[\\sum_{t=0}^T C_t(S_t, x_t, W_{t+1}) \\right]&quot;,&quot;id&quot;:&quot;LXPJTXNZNG&quot;}" data-component-name="LatexBlockToDOM"></div><p></p><p>This expectation accounts for uncertainty, since outcomes depend on random exogenous information.</p><div><hr></div><h2>Example 1: Inventory Management</h2><p>Imagine managing a single product:</p><ul><li><p><strong>State:</strong> inventory level, costs, and prices.</p></li><li><p><strong>Decision:</strong> how much to order today.</p></li><li><p><strong>Exogenous information:</strong> demand tomorrow.</p></li><li><p><strong>Transition:</strong> tomorrow&#8217;s stock = today&#8217;s stock + order &#8211; demand.</p></li><li><p><strong>Contribution:</strong> profit (sales revenue minus costs).</p></li><li><p><strong>Objective:</strong> maximize profit over the season.</p></li></ul><p>This simple setup is a <em>canonical example</em> used in both operations research and reinforcement learning.</p><div><hr></div><h2>Modeling the Solution Process</h2><p>Before solving, we must <strong>model carefully</strong>. 
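</p><p>To make the inventory example concrete, here is a minimal simulation of one season under a simple order-up-to policy. It is only a sketch: the price, costs, order-up-to level, and Poisson demand rate below are assumed for illustration, not taken from the lecture.</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (assumed) parameters
price, order_cost, hold_cost = 5.0, 3.0, 0.1
base_stock = 12       # order-up-to level S
mean_demand = 10.0    # Demand_{t+1} ~ Poisson(10)

inventory, total_profit = 0, 0.0
for t in range(365):
    order = max(0, base_stock - inventory)   # decision x_t: replenish up to S
    demand = rng.poisson(mean_demand)        # exogenous information W_{t+1}
    sales = min(inventory + order, demand)
    # contribution C_t = revenue - ordering cost - storage cost
    total_profit += price * sales - order_cost * order - hold_cost * inventory
    # transition: Inventory_{t+1} = max(0, Inventory_t + Order_t - Demand_{t+1})
    inventory = max(0, inventory + order - demand)

print(f"Cumulative profit over one season: {total_profit:.1f}")
```

<p>Evaluating a candidate policy by simulating it this way is exactly the last step of the process described next.</p><p>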
The structured process is:</p><ol><li><p><strong>Narrative description:</strong> Write the story in plain words.</p></li><li><p><strong>Identify core elements:</strong> List states, decisions, exogenous inputs, transitions, and contributions.</p></li><li><p><strong>Model in the universal framework:</strong> Formalize in words + equations.</p></li><li><p><strong>Uncertainty model:</strong> Specify distributions (e.g., demand ~ Poisson, prices ~ Gaussian).</p></li><li><p><strong>Design the policy:</strong> Choose policy family&#8212;lookup, heuristic, optimization, or RL.</p></li><li><p><strong>Evaluate the policy:</strong> Test with simulation or historical data.</p></li></ol><div><hr></div><h2>Example 2: Marketplace Pricing</h2><ul><li><p><strong>Narrative:</strong> A seller sets item prices over time while competing with others.</p></li><li><p><strong>Core elements:</strong></p><ul><li><p>State: current prices, inventory, competitor&#8217;s last price.</p></li><li><p>Decision: today&#8217;s price.</p></li><li><p>Exogenous: tomorrow&#8217;s demand, competitor&#8217;s response.</p></li><li><p>Contribution: daily profit.</p></li><li><p>Objective: maximize cumulative profit.</p></li></ul></li><li><p><strong>Uncertainty model:</strong> Demand could follow a Poisson distribution; competitor&#8217;s moves could be modeled as a stochastic process.</p></li></ul><p>The challenge here is learning a pricing policy that balances exploration (trying new prices) and exploitation (maximizing profit).</p><div><hr></div><h2>Model-Based vs Model-Free Approaches</h2><h3>Model-Based</h3><p>If we know the transition law or transition probabilities, we can write the problem as a <strong>Markov Decision Process (MDP)</strong>.</p><ul><li><p>Methods:</p><ul><li><p><strong>Backward recursion</strong> (finite horizon).</p></li><li><p><strong>Value iteration</strong> (computing optimal value functions).</p></li><li><p><strong>Policy iteration</strong> (alternating between policy evaluation 
and improvement).</p></li><li><p><strong>Linear programming formulations.</strong></p></li></ul></li></ul><p>Here, the transition probabilities combine both the deterministic dynamics and exogenous randomness.</p><div><hr></div><h3>Model-Free</h3><p>When we don&#8217;t know the transition probabilities, we can learn by experience: <strong>reinforcement learning (RL)</strong>.</p><ul><li><p><strong>Tabular RL:</strong> For small state spaces, we maintain explicit tables of state&#8211;action values (Q-learning, SARSA).</p></li><li><p><strong>Function approximation:</strong> For large state spaces, we approximate value functions or policies with neural networks (Deep RL).</p></li></ul><p>&#128161; <strong>Subtle point:</strong> Even if we know the distribution of the exogenous variables but not the transition probabilities themselves, we often still classify the approach as &#8220;model-free,&#8221; since computing those probabilities directly is infeasible.</p><div><hr></div><h2>The Curses of Sequential Decisions</h2><p>Sequential decision problems face four major &#8220;curses&#8221;:</p><ol><li><p><strong>State space curse:</strong> Too many possible states (e.g., multi-product inventory).</p></li><li><p><strong>Decision space curse:</strong> Too many possible actions.</p></li><li><p><strong>Outcome space curse:</strong> Too many random outcomes of uncertainty.</p></li><li><p><strong>Modeling curse:</strong> Transition probabilities are extremely difficult to calculate.</p></li></ol><p>&#128073; Key insight: It&#8217;s usually easier to model the <em>distribution of exogenous information</em> than the full transition probabilities.</p><div><hr></div><h2>Designing Policies</h2><p>Policies are the heart of solving sequential decision problems.</p><h3>Policy Function Approximations (PFAs)</h3><ul><li><p>Direct mappings from states to actions.</p></li><li><p>Implemented via lookup tables, parametric functions, or nonparametric models.</p></li><li><p>Work best 
with small action spaces or when good heuristics exist.</p></li><li><p>Example: the classic <strong>order-up-to policy</strong> in inventory (always replenish up to level <em>S</em>).</p></li></ul><p>Limitations: PFAs struggle with continuous/high-dimensional decisions and long horizons.</p><div><hr></div><h3>Cost Function Approximations (CFAs)</h3><ul><li><p>Use parameterized optimization models.</p></li><li><p>Modify the contribution function and constraints to make the problem tractable.</p></li><li><p>Ideal for large-scale problems such as <strong>airline scheduling</strong>, where exact optimization is impossible.</p></li></ul><div><hr></div><h3>Lookahead Approximations</h3><p>Policies can also be designed by looking ahead into the future and approximating the consequences of today&#8217;s decisions.</p><ol><li><p><strong>Value Function Approximations (VFAs):</strong><br>Approximate the long-term value of being in a state. Based on Bellman&#8217;s equation:</p><div class="latex-rendered" data-attrs="{&quot;persistentExpression&quot;:&quot;V_t(S_t) = \\max_{x_t} \\mathbb{E}[ C_t(S_t, x_t, W_{t+1}) + V_{t+1}(S_{t+1}) ]&quot;,&quot;id&quot;:&quot;FOQIFRPNWJ&quot;}" data-component-name="LatexBlockToDOM"></div><p></p><p>VFAs are at the core of dynamic programming and reinforcement learning.</p></li><li><p><strong>Direct Lookahead Approximations (DLAs):</strong><br>Directly approximate the downstream value:</p><ul><li><p>Limit the horizon.</p></li><li><p>Use deterministic approximations.</p></li><li><p>Sample random outcomes (Monte Carlo).</p></li><li><p>Use simple rollout policies.</p></li></ul><p>&#9878;&#65039; <strong>Rule of thumb:</strong></p><ul><li><p>DLAs work well when uncertainty is limited.</p></li><li><p>VFAs are more effective under high uncertainty, as they better capture future risk.</p></li></ul></li></ol><div><hr></div><h2>Closing Thoughts</h2><p>Sequential decision problems&#8212;whether in operations, economics, or AI&#8212;are all variations of the same 
blueprint. The framework unifies them:</p><ul><li><p><strong>State:</strong> compressed history.</p></li><li><p><strong>Decision:</strong> action chosen.</p></li><li><p><strong>Exogenous info:</strong> randomness revealed.</p></li><li><p><strong>Transition:</strong> system evolution.</p></li><li><p><strong>Contribution:</strong> immediate payoff.</p></li><li><p><strong>Objective:</strong> long-term optimization.</p></li></ul><p>Designing policies is the art; solving them is the science. But neither matters unless we <strong>model first</strong>. Once the problem is structured, the right algorithm naturally follows.</p><p>&#128073; In short: <strong>Model first, then solve it. Always.</strong></p><p></p><h2></h2><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://stoicresearch.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Stoic&#8217;s Substack is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[The HJB Equation: A Practical Guide for Quants - Stochastic Control Lecture 1]]></title><description><![CDATA[How an old PDE quietly powers modern trading, portfolio choice, and algorithmic execution &#8212; with code you can run today.]]></description><link>https://stoicresearch.substack.com/p/the-hjb-equation-a-practical-guide</link><guid isPermaLink="false">https://stoicresearch.substack.com/p/the-hjb-equation-a-practical-guide</guid><dc:creator><![CDATA[Stoic Research & Technology]]></dc:creator><pubDate>Sun, 17 Aug 2025 14:23:34 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/1512f8ed-30c9-4eb7-a52b-be1e6bec8853_900x637.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>TL;DR</h3><ul><li><p>The HJB equation is the backbone of <strong>optimal decision-making under uncertainty</strong>. </p></li><li><p>In finance, it unifies problems like <strong>portfolio choice</strong>, <strong>execution/hedging</strong>, and <strong>inventory/risk control</strong>.</p></li><li><p>Two canonical problems we&#8217;ll walk through:</p><ol><li><p>Merton portfolio (closed form).</p></li><li><p>Optimal execution (Almgren&#8211;Chriss), both closed form and numerical.</p></li></ol></li><li><p>Complete, runnable Python included.</p></li></ul><p></p><h3>Welcome to the Series</h3><p>Welcome to our new series on <strong>stochastic control</strong> and everything connected to <em>stochastic operations in finance</em>.  
</p><p>Our goal is simple: to build, step by step, a practical toolkit that spans from the Hamilton&#8211;Jacobi&#8211;Bellman equation all the way to modern applications in trading, risk management, and market making.  </p><p>We&#8217;ll mix theory, worked examples, and fully runnable Python. Some topics will feel advanced, others more hands-on &#8212; let us know if the pace is too fast or too slow, and we&#8217;ll tune the level.  </p><p>We decided to start here, with the HJB equation, because it quietly underpins so many decisions a quant faces every day. From portfolio choice to execution algorithms, it&#8217;s the hidden blueprint.  </p><h3>Why HJB belongs on the trading desk</h3><p>At first glance, the Hamilton&#8211;Jacobi&#8211;Bellman (HJB) equation sounds like it belongs in a physics textbook, not a trading playbook. But in reality, it is one of the most powerful tools for making decisions in uncertain environments.</p><p>The core question is very intuitive:</p><blockquote><p><strong>Given what I know right now, and given that the future is uncertain, how should I act to maximize my objective?</strong></p></blockquote><p>In finance, that objective could mean:</p><ul><li><p>Maximizing long-run portfolio growth.</p></li><li><p>Minimizing costs while liquidating a large position.</p></li><li><p>Hedging in a way that balances cost against risk.</p></li></ul><p>The HJB translates these goals into mathematics. You describe how the market evolves, define your objective, and the HJB tells you how the <em>value</em> of being in a given state evolves. 
Solve it, and you get the optimal policy.</p><p></p><h3>The HJB in plain English</h3><p>Think of your wealth, portfolio, or inventory as a dynamic system that moves randomly over time. The HJB says:</p><ul><li><p>Write down the expected payoff from following the best strategy.</p></li><li><p>Ask how this payoff changes as time moves forward.</p></li><li><p>Balance immediate rewards against the future benefit of being in a better (or worse) position.</p></li></ul><p>The result is a partial differential equation (PDE). 
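Concretely, for a one-dimensional state x following dx = f(x, u) dt + &#963;(x, u) dW, a running reward r(x, u), and value function V(x, t), the standard controlled-diffusion form reads (the notation here is generic, not tied to any one example):</p>

```latex
\frac{\partial V}{\partial t}
  + \max_{u}\left\{ f(x,u)\,\frac{\partial V}{\partial x}
  + \tfrac{1}{2}\,\sigma^{2}(x,u)\,\frac{\partial^{2} V}{\partial x^{2}}
  + r(x,u) \right\} = 0,
\qquad V(x,T) = \text{terminal payoff}.
```

<p>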
Solving it is like having a GPS for decision-making under uncertainty.</p><p></p><h3>Example I &#8212; The Merton Portfolio Problem</h3><p><strong>Setup:</strong></p><ul><li><p>One safe asset grows at rate <em>r</em>.</p></li><li><p>One risky asset (a stock) grows with drift <em>&#956;</em> and volatility <em>&#963;</em>.</p></li><li><p>At each moment, you choose the fraction <em>&#960;</em> of wealth in the risky asset.</p></li></ul><p>The optimal fraction is:</p><div class="latex-rendered" data-attrs="{&quot;persistentExpression&quot;:&quot;&#960;^* = (&#956; &#8211; r) / (&#947; &#963;&#178;)\n&quot;,&quot;id&quot;:&quot;CJJLGJPTET&quot;}" data-component-name="LatexBlockToDOM"></div><p></p><p>where <em>&#947;</em> is risk aversion.</p><p><strong>Interpretation:</strong></p><ul><li><p>Higher expected excess return &#8594; invest more.</p></li><li><p>Higher volatility &#8594; invest less.</p></li><li><p>More risk-averse &#8594; invest less.</p></li></ul><p>This is the classic <strong>Merton rule</strong>: a clean balance of reward versus risk. 
</p><p></p><h3>Example II &#8212; Optimal Execution (Almgren&#8211;Chriss model)</h3><p>Suppose you need to sell a large block of shares over a trading horizon [0, T].</p><p>Trade-off:</p><ul><li><p>Sell too fast &#8594; high market impact costs.</p></li><li><p>Sell too slow &#8594; risk that prices move against you.</p></li></ul><p>HJB setup:</p><ul><li><p><strong>State:</strong> remaining inventory <em>q</em>.</p></li><li><p><strong>Control:</strong> sell rate <em>u(t)</em>.</p></li><li><p><strong>Costs:</strong></p><ul><li><p>Temporary impact &#8733; <em>u&#178;</em>.</p></li><li><p>Risk penalty &#8733; <em>q&#178;</em>.</p></li></ul></li></ul><p>The optimal strategy is:</p><div class="latex-rendered" data-attrs="{&quot;persistentExpression&quot;:&quot;u^*(q,t) = &#954; &#183; tanh(&#954; (T &#8211; t)) &#183; q\n&quot;,&quot;id&quot;:&quot;VDXYMTPJCC&quot;}" data-component-name="LatexBlockToDOM"></div><p>where &#954; = &#8730;(&#947;&#963;&#178;/(2&#951;)) in the normalization used by the solver below: it grows with volatility and risk aversion and shrinks with the impact coefficient &#951;.</p><p><strong>Interpretation:</strong></p><ul><li><p>Selling is front-loaded: the rate is highest at the start, when inventory and remaining risk are largest. (With a hard constraint to finish liquidating by <em>T</em>, tanh becomes coth and trading instead accelerates into the deadline.)</p></li><li><p>Higher volatility or higher risk aversion &#8594; trade more aggressively.</p></li></ul><p></p><h3>Why numerical solvers matter</h3><p>In toy problems we got neat closed forms. In reality:</p><ul><li><p>Volatility depends on prices.</p></li><li><p>Impact depends on order book depth.</p></li><li><p>Portfolios include multiple correlated assets.</p></li></ul><p>No closed form! But HJB still applies &#8212; we just solve it numerically.</p><h4><em>Code &#8212; Merton Portfolio</em></h4><pre><code>import numpy as np

r = 0.02       # risk-free rate
mu = 0.08      # expected (drift) return of the risky asset
sigma = 0.20   # volatility of the risky asset
gamma = 5.0    # relative risk aversion

pi_star = (mu - r) / (gamma * sigma**2)   # Merton rule
print(f"Merton optimal risky weight: {pi_star:.3f}")</code></pre><h4><em>Code &#8212; Tiny Numerical HJB Solver</em></h4><pre><code>import numpy as np

# Backward solver for the execution HJB:
#   V_t = V_q**2/(4*eta) - 0.5*gamma*sigma**2 * q**2
eta = 1.0      # temporary impact coefficient (impact cost ~ eta * u**2)
sigma = 0.3    # volatility
gamma = 5.0    # risk aversion
T = 1.0        # trading horizon

q_max = 1.0    # inventory grid covers [-q_max, q_max]
M = 200        # 2*M + 1 spatial grid points
N = 1000       # number of time steps

q = np.linspace(-q_max, q_max, 2*M+1)
V = np.zeros_like(q)       # terminal condition V(q, T) = 0
V_new = np.zeros_like(q)
dq = q[1] - q[0]
dt = T / N

def grad(V):
    # Central differences in the interior; boundary entries stay zero
    g = np.zeros_like(V)
    g[1:-1] = (V[2:] - V[:-2]) / (2*dq)
    return g

# Explicit backward Euler sweep from t = T down to t = 0:
# V(t - dt) = V(t) - dt * V_t
for _ in range(N):
    g = grad(V)
    rhs = (g**2)/(4*eta) - 0.5*gamma*sigma**2 * q**2   # V_t from the HJB
    V_new[:] = V - dt*rhs
    V, V_new = V_new, V

u = grad(V) / (2*eta)   # optimal sell rate u* = V_q / (2*eta), here at t = 0
print("Sample control at q=0.5:", np.interp(0.5, q, u))
</code></pre><h3>Key Takeaways</h3><ul><li><p>HJB is the blueprint for stochastic optimization.</p></li><li><p>In portfolio choice: yields the Merton allocation.</p></li><li><p>In execution: gives an optimal liquidation curve.</p></li><li><p>In reality: approximate numerically or with RL, but the principle is the same.</p></li></ul><h3>Further Reading</h3><ul><li><p>Merton (1969): <em>Lifetime Portfolio Selection under Uncertainty</em>.</p></li><li><p>Almgren &amp; Chriss (2000): <em>Optimal Execution of Portfolio Transactions</em>.</p></li><li><p>Texts on stochastic control, dynamic programming, or reinforcement learning.</p></li></ul><p></p><h2>Wrapping Up &#8212; and What&#8217;s Next</h2><p>Today we saw how HJB connects classic problems like portfolio choice and optimal execution, bridging closed-form solutions with tiny numerical solvers. But this is just the start of the journey.  </p><p>In the coming posts of this series, we&#8217;ll dive into:</p><ul><li><p><strong>Optimal execution with alpha signals </strong>&#8212; how to blend predictive drift into liquidation schedules.  </p></li><li><p><strong>Mean&#8211;variance hedging in incomplete markets </strong>&#8212; handling risk when perfect replication is impossible.  </p></li><li><p><strong>Risk-sensitive control with exponential utility </strong>&#8212; what changes when robustness is built into preferences.  </p></li><li><p><strong>HJB and Reinforcement Learning</strong> &#8212; the deep connection between PDEs and modern RL algorithms.  </p></li><li><p>Continuous-time market making (<strong>Avellaneda&#8211;Stoikov</strong>) &#8212; HJB at the heart of quoting and inventory management.  </p></li></ul><p>Stay tuned: we&#8217;ll keep weaving theory, intuition, and hands-on examples so each idea connects directly back to real trading and quantitative finance practice.  
</p><p></p>]]></content:encoded></item><item><title><![CDATA[Forget Forecasting: Problem Formulation Beats Prediction]]></title><description><![CDATA[In financial or trading modeling, predictive performance is not a function of complexity but of clarity.]]></description><link>https://stoicresearch.substack.com/p/forget-forecasting-problem-formulation</link><guid isPermaLink="false">https://stoicresearch.substack.com/p/forget-forecasting-problem-formulation</guid><dc:creator><![CDATA[Stoic Research & Technology]]></dc:creator><pubDate>Thu, 31 Jul 2025 08:46:00 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/829f29e1-d9f8-465b-938d-2c7b95311732_1279x833.avif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p></p><p>Modern data science is entranced by prediction. From deep learning to ensemble models, from transformers to fancy sequential models, enormous resources are devoted to anticipating the next value in a time series, whether it's price, return, volatility, or any other financial indicator. Forecasting has become the dominant framework through which we interpret the future. 
</p><p>But this fixation is misguided. Prediction is not the core problem. <strong>The way we formulate the problem</strong> (how we define uncertainty, structure the data, and align our models with market dynamics) determines more than any architecture or hyperparameter ever will.</p><p>We argue that <strong>in complex, adaptive systems such as financial markets, especially those that are highly liquid and efficient, simpler models that are theoretically coherent often outperform their complex counterparts</strong>. The differentiator is not predictive firepower. It's <strong>epistemic</strong> discipline.</p><p>What follows is a framework rooted in three foundational principles, together with an empirical case study demonstrating their validity.</p><p><strong>In a world addicted to prediction, true advantage lies in adaptation, not foresight.</strong></p><p>We live in a culture that worships control. Strategic roadmaps, quarterly forecasts, probabilistic dashboards: all constructed to manufacture an illusion of certainty. The prevailing belief is that if we can see what&#8217;s coming, we can prepare for it.</p><p>But what if this paradigm is not only flawed, but fundamentally misaligned with how complex systems, markets, organizations, even ecosystems, actually function? What if survival, not prediction, is the real metric of success?</p><p>Our work is grounded in a radically different premise: that resilience outperforms foresight. 
We don&#8217;t build models to forecast the future; we build them to withstand it.</p><p>This begins with three principles most systems, and most strategists, fail to internalize.</p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!1s4B!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb5e5cec8-dadb-4a4d-9697-f8c1695afd2e_1296x864.jpeg"><img src="https://substackcdn.com/image/fetch/$s_!1s4B!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb5e5cec8-dadb-4a4d-9697-f8c1695afd2e_1296x864.jpeg" width="582" height="388" alt="Uncertainty - Definition, Example, and Role in Investing" title="Uncertainty - Definition, Example, and Role in Investing" loading="lazy"></a></figure></div><h2><strong>1. Embrace Uncertainty: It&#8217;s Not a Bug, It&#8217;s the Terrain</strong></h2><p>Forecasting is often a sophisticated form of self-deception. Anyone who has attempted to predict returns or volatility in a complex financial system knows the limits of such endeavors. Even when predictive signals appear statistically valid, their operational utility tends to be negligible or transient. More often than not, those signals emerge from flawed backtests, erroneous assumptions, or overfit code.</p><p>Hundreds of fintech startups have attempted to build machine learning pipelines that feed hundreds of features into models predicting volatility, Sharpe ratios, or other performance metrics. Yet very few, if any, have delivered tangible improvements in live trading strategies. 
Most are marketing operations wrapped in technical jargon.</p><p>The obsession with prediction obscures a fundamental truth: <strong>uncertainty is not a nuisance variable. It is information.</strong></p><p>There are two kinds of uncertainty we care about:</p><ul><li><p><strong>Epistemic uncertainty</strong> arises from incomplete knowledge. It is reducible with more data, better models, or sharper measurement. Techniques such as Bayesian inference or model ensembling can help quantify it.</p></li><li><p><strong>Aleatoric uncertainty</strong> stems from the inherent randomness of the world. It is irreducible. No amount of data or computation can eliminate it. Acknowledging its presence is not weakness; it is rigor.</p></li></ul><p>Designing for uncertainty requires letting go of <em>deterministic</em> thinking. It requires models that absorb volatility rather than collapse under it. In environments dominated by complexity and feedback loops, <strong>adaptability beats accuracy&#8212;every single time.</strong></p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!L_xJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb88a9e08-45ad-4954-bff2-53ba651406a2_960x720.jpeg"><img src="https://substackcdn.com/image/fetch/$s_!L_xJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb88a9e08-45ad-4954-bff2-53ba651406a2_960x720.jpeg" width="516" height="387" alt="New Way to Teach Math Raises Test Scores, Closes the Achievement Gap for English Learners | CSUN Today" loading="lazy"></a></figure></div><h2><strong>2. Method Over Outcome: Truth Is a Process, Not a Performance</strong></h2><p>The scientific mindset does not chase outcomes; it tests mechanisms. A model that performs well but cannot be explained, falsified, or replicated is not a model; it&#8217;s a liability.</p><p>Overfitting is not just a technical problem; it's an epistemological failure. It arises when we prioritize short-term performance over long-term integrity, when we value correlation over causality, and when we mistake noise for signal.</p><p>We do not trust intuition. We do not rely on heuristics. If a process cannot be formalized, measured, and subjected to stress, it cannot be trusted. Robust strategies are born from falsifiability, not faith.</p><p><strong>Rigor is not optional. It is the foundation.</strong> And in a world where everyone is optimizing for metrics, the differentiator is not the model&#8212;but the method behind it.</p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://en.wikipedia.org/wiki/Jamshid_al-Kashi"><img src="https://substackcdn.com/image/fetch/$s_!hSD1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff72f7551-e948-430c-9e02-026727707eb5_2560x1985.jpeg" width="548" height="425" alt="" loading="lazy"></a></figure></div><h2><strong>3. Iteration Is the Only Proof: Learn from the Friction</strong></h2><p>Every model is a hypothesis. Every strategy is a bet. And every deployment is a confrontation with reality.</p><p>In complex systems, failure is not an anomaly; it is feedback. We do not iterate to improve performance metrics; we iterate to break assumptions. 
Assumptions are the most dangerous form of technical debt, and they compound invisibly until they collapse under real-world stress.</p><p><strong>We do not seek confirmation. We seek contradiction.</strong> That&#8217;s how we grow.</p><p>Iteration, properly understood, is not tweaking parameters. It&#8217;s epistemic combat. It's how we refine our hypotheses in the face of adversarial conditions.</p><div><hr></div><h2><em><strong>Burn the Map, Build the Compass</strong></em></h2><p>If you're building anything that must outlast the next investment cycle, you need more than predictions. You need <strong>principles</strong>! You need tools that help you navigate, not forecast.</p><p>Throw out the crystal ball. Burn the map.</p><p>And begin walking into uncertainty with a compass grounded in epistemic humility, statistical rigor, and methodological resilience.</p><p>You're not here to control the storm.</p><p>You're here to become stormproof.</p><div><hr></div><h2><em><strong>Final Note: Simplicity Is an Underrated Superpower</strong></em></h2><p>Let me close with a caution: there is a pervasive myth in the machine learning community that <strong>more complexity equals better results</strong>. That access to state-of-the-art frameworks such as Keras, PyTorch, or TensorFlow translates into strategic edge.</p><p>It does not.</p><p>Complex models often offer the illusion of competence. But they are only as good as the problem formulation behind them. Without a clear objective function, clean data structure, and sound causal assumptions, no architecture, no matter how deep or wide, can deliver sustained results.</p><p>Simple models, designed with precision, often outperform deep learning systems that are poorly specified. 
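</p><p><em>As a toy illustration (not code from the study), a bootstrap ensemble of deliberately simple linear fits is enough to separate the two kinds of uncertainty discussed earlier; the data, ensemble size, and noise level below are all invented for the sketch.</em></p><pre><code class="language-python">import numpy as np

rng = np.random.default_rng(0)

# Toy data: a noisy linear signal standing in for a return series.
x = np.linspace(0.0, 1.0, 200)
y = 0.5 * x + rng.normal(0.0, 0.1, size=x.shape)

# Bootstrap ensemble of simple linear fits.
preds, resid_vars = [], []
for _ in range(50):
    idx = rng.integers(0, len(x), size=len(x))
    coef = np.polyfit(x[idx], y[idx], deg=1)
    preds.append(np.polyval(coef, x))
    resid_vars.append(float(np.var(y[idx] - np.polyval(coef, x[idx]))))

preds = np.array(preds)

# Epistemic: disagreement between ensemble members (shrinks with more data).
epistemic = float(preds.var(axis=0).mean())
# Aleatoric: residual noise no member can explain (irreducible).
aleatoric = float(np.mean(resid_vars))
</code></pre><p><em>With enough data the epistemic term collapses while the aleatoric term stays near the true noise variance: the part of the problem that must be designed around, not predicted away.</em>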
</p><p><strong>The power lies not in the tool, but in the clarity of the problem.</strong></p><div><hr></div><h2><strong>Empirical Illustration: Clarity Over Complexity</strong></h2><p>These principles are not theoretical abstractions; they manifest clearly in empirical practice. A recent thesis on the predictive modeling of <strong>Nasdaq-100 futures (NQ)</strong> provides a compelling demonstration. The <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4773344">study</a> systematically benchmarked a wide range of models, from classical econometric systems to advanced probabilistic sequential deep learning architectures, on their ability to forecast both returns and volatility across distinct intraday market sessions [ICT&#8217;s Killzone].</p><h3><strong>Findings That Challenge Prevailing Assumptions</strong></h3><ul><li><p>The <strong>Kalman Filter</strong>, a state-space model rooted in probabilistic inference, achieved the best overall performance in <strong>return prediction</strong>.</p></li><li><p>The <strong>Score-Driven model</strong>, designed for time-varying volatility and capable of handling excess kurtosis and non-Gaussianity, outperformed all others in <strong>volatility forecasting</strong>.</p></li><li><p>Notably, both models operated on <strong>univariate time series inputs</strong>, yet <strong>surpassed deep learning architectures</strong> such as Transformers trained on expansive multivariate datasets.</p></li></ul><p>The reason? These models are constructed with epistemic discipline. They do not ignore uncertainty&#8212;they explicitly <strong>model it</strong>. Both the Kalman and Score-Driven frameworks <strong>embed epistemic and aleatoric uncertainty</strong> within their structure, treating uncertainty not as a nuisance, but as core information. 
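</p><p><em>For intuition, here is a minimal local-level Kalman filter, a far simpler cousin of the N4SID-Kalman system benchmarked in the thesis; the process and observation noise variances q and r are illustrative placeholders, not estimated values.</em></p><pre><code class="language-python">import numpy as np

def kalman_local_level(y, q=1e-4, r=1e-2):
    """Local-level model: state x_t = x_{t-1} + w_t, observation y_t = x_t + v_t."""
    x_hat = np.zeros(len(y))  # filtered state estimates
    p_hat = np.zeros(len(y))  # filtered state variances
    x, P = y[0], 1.0          # diffuse initial belief
    for t in range(len(y)):
        P = P + q                # predict: process noise inflates uncertainty
        k = P / (P + r)          # gain: how much to trust the new observation
        x = x + k * (y[t] - x)   # update toward the innovation
        P = (1.0 - k) * P        # posterior variance after the update
        x_hat[t], p_hat[t] = x, P
    return x_hat, p_hat
</code></pre><p><em>The point of the sketch: every estimate ships with its own variance P, so uncertainty is part of the model&#8217;s output rather than an afterthought.</em> 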
This is the defining advantage: a <strong>principled formulation that respects the data-generating process</strong>, rather than a brute-force search for correlation.</p><h3><strong>Contextual Performance and Market Structure</strong></h3><p>The models' success is further amplified by the structural properties of the market itself. Nasdaq-100 futures, as highly liquid and information-efficient instruments, embody the conditions under which the <strong>Efficient Market Hypothesis (EMH)</strong> is most valid. In such contexts, the <strong>marginal utility of complex feature engineering and high-capacity models quickly decays</strong>. Simpler models, aligned with theoretical and structural priors, prove more robust and adaptive.</p><p>Moreover, the adaptive characteristics of models like <strong>N4SID-Kalman</strong> and <strong>Score-Driven</strong> resonate strongly with the <strong>Adaptive Market Hypothesis (AMH)</strong>. These models update parameters dynamically in response to shifting regimes, volatility clustering, or abrupt structural changes&#8212;without requiring complete retraining or loss of interpretability. They are not just predictive&#8212;they are structurally resilient.</p><h3><strong>Deep Learning: Not to Be Discarded, but to Be Refined</strong></h3><p>It would be a mistake to interpret this as a dismissal of deep learning. In fact, the thesis demonstrates a thoughtful and commendable implementation: <strong>Monte Carlo Dropout</strong> was employed to <strong>quantify the uncertainty</strong> surrounding deep learning predictions. This allowed not only for <strong>point estimates</strong>, but for <strong>distributional inference</strong>, moving neural networks beyond deterministic forecasts toward a more Bayesian, uncertainty-aware framework.</p><p>This is crucial. The problem is not deep learning per se, but the <strong>lack of principled formulation</strong>. 
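</p><p><em>A hedged sketch of the Monte Carlo Dropout idea, using a tiny random-feature stand-in for a trained network (the weights, width, and dropout rate are arbitrary, not the thesis&#8217;s architecture): keeping dropout active at inference turns one point forecast into a predictive distribution.</em></p><pre><code class="language-python">import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a trained one-hidden-layer network (illustrative weights only).
W1 = rng.normal(size=(1, 64))
w2 = rng.normal(size=64) / 8.0

def forward(x, dropout_p=0.5, mc_dropout=True):
    h = np.tanh(x @ W1)
    if mc_dropout:  # MC Dropout: the dropout mask stays on at inference time
        mask = rng.random(h.shape) > dropout_p
        h = h * mask / (1.0 - dropout_p)
    return (h @ w2)[0]

x = np.array([[0.3]])
# Repeated stochastic forward passes yield a distribution, not a point estimate.
samples = np.array([forward(x) for _ in range(200)])
mean, spread = samples.mean(), samples.std()
</code></pre><p><em>The spread of the samples is the uncertainty estimate; a deterministic pass (mc_dropout=False) recovers the usual single forecast.</em> 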
Too often, neural networks are deployed as &#8220;feature soups,&#8221; where hundreds of variables are thrown into an architecture without a clear hypothesis, structure, or understanding of the target dynamics. The result is opacity and overfitting.</p><p>In this study, by contrast, deep models were designed with <strong>epistemic intent</strong>, and the Transformer architecture, while not outperforming simpler models, <strong>excelled during high-volatility, high-kurtosis sessions</strong>, demonstrating its ability to capture long-range dependencies in structurally turbulent regimes.</p><div><hr></div><p>In summary, this case study offers empirical affirmation of this article&#8217;s central claim: <strong>it is not predictive power that matters, but the methodological intelligence with which a model is constructed</strong>. When uncertainty is embraced as signal&#8212;not noise&#8212;and when models are formulated to reflect market structure, <strong>simplicity becomes a strategic advantage</strong>.</p><p>&#8212;GMAE452</p>]]></content:encoded></item></channel></rss>