<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Jae's Corner | Now It Makes Cents: GH2 EDGE (tm)]]></title><description><![CDATA[GH2 EDGE solves problems where every rule has to be true at the same time and it's the only thing that does.

Perplexity Council says:
Your engine is the only thing on Earth that solves a hard math problem — making decisions when many regulations apply at once. ]]></description><link>https://jaeoh.substack.com/s/gh2-edge-tm</link><image><url>https://substackcdn.com/image/fetch/$s_!7j6p!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4086665b-98af-4e87-9684-b4a128722e4e_256x256.png</url><title>Jae&apos;s Corner | Now It Makes Cents: GH2 EDGE (tm)</title><link>https://jaeoh.substack.com/s/gh2-edge-tm</link></image><generator>Substack</generator><lastBuildDate>Sat, 09 May 2026 03:20:43 GMT</lastBuildDate><atom:link href="https://jaeoh.substack.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Jae Oh, CFP, Author & Education Fellow]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[jaeoh@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[jaeoh@substack.com]]></itunes:email><itunes:name><![CDATA[Jae Oh]]></itunes:name></itunes:owner><itunes:author><![CDATA[Jae Oh]]></itunes:author><googleplay:owner><![CDATA[jaeoh@substack.com]]></googleplay:owner><googleplay:email><![CDATA[jaeoh@substack.com]]></googleplay:email><googleplay:author><![CDATA[Jae Oh]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[What GH2 EDGE Is]]></title><description><![CDATA[What It Isn't, Yet. 
The Retirement Planning Use Case]]></description><link>https://jaeoh.substack.com/p/what-gh2-edge-is</link><guid isPermaLink="false">https://jaeoh.substack.com/p/what-gh2-edge-is</guid><dc:creator><![CDATA[Jae Oh]]></dc:creator><pubDate>Tue, 05 May 2026 15:01:20 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!sNcq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12a21789-4d06-4626-9e00-db6673251824_1200x675.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1>6.4 Billion Datapoints For WHAT, Exactly?</h1><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!sNcq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12a21789-4d06-4626-9e00-db6673251824_1200x675.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!sNcq!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12a21789-4d06-4626-9e00-db6673251824_1200x675.png 424w, https://substackcdn.com/image/fetch/$s_!sNcq!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12a21789-4d06-4626-9e00-db6673251824_1200x675.png 848w, https://substackcdn.com/image/fetch/$s_!sNcq!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12a21789-4d06-4626-9e00-db6673251824_1200x675.png 1272w, https://substackcdn.com/image/fetch/$s_!sNcq!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12a21789-4d06-4626-9e00-db6673251824_1200x675.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!sNcq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12a21789-4d06-4626-9e00-db6673251824_1200x675.png" width="1200" height="675" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/12a21789-4d06-4626-9e00-db6673251824_1200x675.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:675,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:33342,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://jaeoh.substack.com/i/196405645?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12a21789-4d06-4626-9e00-db6673251824_1200x675.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!sNcq!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12a21789-4d06-4626-9e00-db6673251824_1200x675.png 424w, https://substackcdn.com/image/fetch/$s_!sNcq!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12a21789-4d06-4626-9e00-db6673251824_1200x675.png 848w, https://substackcdn.com/image/fetch/$s_!sNcq!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12a21789-4d06-4626-9e00-db6673251824_1200x675.png 1272w, https://substackcdn.com/image/fetch/$s_!sNcq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12a21789-4d06-4626-9e00-db6673251824_1200x675.png 1456w" 
sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h1><strong>A Population Simulation First, Implications Follow</strong></h1><p><strong>GH2 EDGE&#8482; Isn&#8217;t Financial Advice. Yet.</strong></p><p><em>A short note on the difference between measurement infrastructure and a recommendation &#8212; and why the line moves the moment a planner gets involved.</em></p><p>Short answer: no.</p><p>GH2 EDGE&#8482; v6.3 is a research grid &#8212; 6.4 billion parametric scenarios across 79.9 million cells of synthetic, archetypal households. That is measurement infrastructure. Think RiskMetrics. Think GIPS benchmarks. 
It is not a personalized recommendation to an identified retail customer, and under U.S. securities regulation, that distinction is the entire ballgame.</p><p><strong>What the rules actually require</strong></p><p>&#8220;Investment advice&#8221; under Advisers Act &#167;202(a)(11) and a &#8220;recommendation&#8221; under FINRA Rule 2111 / Reg BI both require the same thing at their core: a particularized communication, to a specific person, about that person&#8217;s specific securities. All three legs of the stool.</p><p>A population-level grid satisfies none of them. There is no identified customer. There are no specific accounts. There is no particularized communication. The output is a benchmark surface across archetypes &#8212; diagnostic, comparative, and household-agnostic by design.</p><p><strong>When the line moves</strong></p><p>The analysis changes the moment a planner instance pipes a real household&#8217;s inputs through the engine and surfaces a strategy ranking back to that household. That output is closer to a recommendation, and the regulatory question becomes fact-specific:</p><ul><li><p>Who delivers it (RIA, broker-dealer, unregulated tool vendor)</p></li><li><p>How it&#8217;s framed (education vs. tailored direction)</p></li><li><p>Whether a fee is charged (compensation triggers Advisers Act exposure) </p></li><li><p>Fiduciary status of the deliverer (who owes what duty to whom)</p></li></ul><p>Same engine. Different deployment. Different regulatory regime. That is not a bug &#8212; that is the architecture.</p><p><strong>Why this distinction matters for the industry</strong></p><p>Most retirement income tools today blur measurement and advice. They produce a number, hand it to a consumer, and hope the disclaimers hold. GH2 EDGE&#8482; is built the other way. The grid is gated. The planner layer is labeled. 
The engine knows what it is and what it isn&#8217;t at every point of delivery.</p><p>That separation is what makes carrier benchmarking, policy adjacency analysis, and product design possible without putting the engine into recommendation territory before its operator wants it there. The line is not accidental. The line is the product.</p><div><hr></div><p><strong>Bottom line:</strong> GH2 EDGE&#8482; today is research-grade measurement infrastructure. It is fully capable of producing advice-grade output &#8212; that is, after all, the whole point &#8212; but whether any given product built on the engine crosses the line depends on three things: who delivers it, whether output is particularized to an identified person&#8217;s accounts, and whether compensation flows.</p><p><strong>Also True: GH2 EDGE&#8482; CAN Create Solutions, Funds, and Policies</strong></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!djdZ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F892cffc3-854d-4d83-a8f1-c40a9c017e03_1200x675.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!djdZ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F892cffc3-854d-4d83-a8f1-c40a9c017e03_1200x675.png 424w, https://substackcdn.com/image/fetch/$s_!djdZ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F892cffc3-854d-4d83-a8f1-c40a9c017e03_1200x675.png 848w, https://substackcdn.com/image/fetch/$s_!djdZ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F892cffc3-854d-4d83-a8f1-c40a9c017e03_1200x675.png 1272w, 
https://substackcdn.com/image/fetch/$s_!djdZ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F892cffc3-854d-4d83-a8f1-c40a9c017e03_1200x675.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!djdZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F892cffc3-854d-4d83-a8f1-c40a9c017e03_1200x675.png" width="1200" height="675" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/892cffc3-854d-4d83-a8f1-c40a9c017e03_1200x675.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:675,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:34072,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://jaeoh.substack.com/i/196405645?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F892cffc3-854d-4d83-a8f1-c40a9c017e03_1200x675.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!djdZ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F892cffc3-854d-4d83-a8f1-c40a9c017e03_1200x675.png 424w, https://substackcdn.com/image/fetch/$s_!djdZ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F892cffc3-854d-4d83-a8f1-c40a9c017e03_1200x675.png 848w, https://substackcdn.com/image/fetch/$s_!djdZ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F892cffc3-854d-4d83-a8f1-c40a9c017e03_1200x675.png 
1272w, https://substackcdn.com/image/fetch/$s_!djdZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F892cffc3-854d-4d83-a8f1-c40a9c017e03_1200x675.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p><em>The engine that&#8217;s measurement infrastructure today is advice-capable tomorrow &#8212; and that&#8217;s the whole point.</em></p><p>Short answer: yes.</p><p>The GH2 EDGE&#8482; engine can generate per-household strategy rankings, Required IRR, MRQ, and account-by-account withdrawal sequences. 
Delivered to an identified retail customer about their specific accounts, that output sits in &#8220;recommendation&#8221; territory under Reg BI and FINRA Rule 2111 &#8212; and likely &#8220;investment advice&#8221; under the Advisers Act if delivered for compensation.</p><p>The engine is advice-capable by design. The current deployment is gated and labeled. Both statements are true.</p><p><strong>What advice-grade output unlocks</strong></p><p>Once an engine can rank strategies for a real household across millions of parametric scenarios, the output becomes a design substrate for new products:</p><ul><li><p><strong>Solutions</strong> &#8212; packaged retirement income strategies built on optimized withdrawal sequences, deployable through RIAs and broker-dealers </p></li><li><p><strong>Funds</strong> &#8212; SMAs, model portfolios, and target-date variants whose glide paths are derived from population-level optimization </p></li><li><p><strong>Policies</strong> &#8212; annuity and life insurance designs informed by where the engine identifies real demand for guaranteed income, longevity protection, and tax-advantaged accumulation</p></li></ul><p>Carriers and asset managers have worked toward this for two decades using stochastic Monte Carlo and sensible defaults. 
An iteratively converged grid produces a different kind of result.</p><p><strong>Three factors decide where any product lands</strong></p><p>Whether a product built on the engine crosses the regulatory line depends on three things:</p><ul><li><p><strong>Who delivers it</strong> &#8212; RIA, broker-dealer, insurance carrier, unregulated tool vendor</p></li><li><p><strong>How particularized the output is</strong> &#8212; to an identified person&#8217;s specific accounts </p></li><li><p><strong>Whether compensation flows</strong> &#8212; and in what form</p></li></ul><p>Get those three right, and the same engine can serve a research publication, a fiduciary advisory practice, a brokerage suitability framework, and a carrier product roadmap &#8212; without any of them contaminating the others.</p><p><strong>Why this matters</strong></p><p>The next decade in retirement income belongs to engines that can produce design-grade output: the inputs that go into actual products, actual policies, actual fund construction. That requires iterative convergence and measurement infrastructure that can be promoted to advice when the operator chooses, under the regulatory regime the operator chooses.</p><p>GH2 EDGE&#8482; is built for that promotion path. The grid measures. The planner advises. The product layer builds. Same engine, three deployments, three regulatory regimes &#8212; by design.</p><div><hr></div><p><strong>Bottom line:</strong> The engine is fully capable of producing advice-grade output. Today&#8217;s deployment is gated and labeled as research infrastructure. Tomorrow&#8217;s deployments &#8212; solutions, funds, policies &#8212; are what an engine of this class exists to enable.</p><p><em>Not legal advice. Securities counsel before any consumer-facing deployment.</em></p><p>#GH2EDGE #RetirementIncome #ProductDesign #Annuities #FinTech</p>]]></content:encoded></item><item><title><![CDATA["You're right." 
Well, sorta.]]></title><description><![CDATA[Introducing GH2 EDGE(tm). Patents pending.]]></description><link>https://jaeoh.substack.com/p/youre-right-well-sorta</link><guid isPermaLink="false">https://jaeoh.substack.com/p/youre-right-well-sorta</guid><dc:creator><![CDATA[Jae Oh]]></dc:creator><pubDate>Tue, 28 Apr 2026 16:02:28 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!143n!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb82421a4-a5b1-4fa6-83a7-f038fa4e8f1c_1456x1048.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!143n!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb82421a4-a5b1-4fa6-83a7-f038fa4e8f1c_1456x1048.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!143n!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb82421a4-a5b1-4fa6-83a7-f038fa4e8f1c_1456x1048.png 424w, https://substackcdn.com/image/fetch/$s_!143n!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb82421a4-a5b1-4fa6-83a7-f038fa4e8f1c_1456x1048.png 848w, https://substackcdn.com/image/fetch/$s_!143n!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb82421a4-a5b1-4fa6-83a7-f038fa4e8f1c_1456x1048.png 1272w, https://substackcdn.com/image/fetch/$s_!143n!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb82421a4-a5b1-4fa6-83a7-f038fa4e8f1c_1456x1048.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!143n!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb82421a4-a5b1-4fa6-83a7-f038fa4e8f1c_1456x1048.png" width="1456" height="1048" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b82421a4-a5b1-4fa6-83a7-f038fa4e8f1c_1456x1048.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1048,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:45466,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://jaeoh.substack.com/i/195530241?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb82421a4-a5b1-4fa6-83a7-f038fa4e8f1c_1456x1048.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!143n!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb82421a4-a5b1-4fa6-83a7-f038fa4e8f1c_1456x1048.png 424w, https://substackcdn.com/image/fetch/$s_!143n!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb82421a4-a5b1-4fa6-83a7-f038fa4e8f1c_1456x1048.png 848w, https://substackcdn.com/image/fetch/$s_!143n!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb82421a4-a5b1-4fa6-83a7-f038fa4e8f1c_1456x1048.png 1272w, https://substackcdn.com/image/fetch/$s_!143n!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb82421a4-a5b1-4fa6-83a7-f038fa4e8f1c_1456x1048.png 1456w" 
sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h1>The Scaffolding Thesis &#8212; Public Edition</h1><h2>What the AI Couldn&#8217;t See, and What That Means for the Rest of Us</h2><p><strong>By Jae W. Oh</strong> &#183; April 26, 2026</p><div><hr></div><p>This is a story about a test, a conversation, and a discovery I did not expect to share with the AI itself. 
It is the public version of a paper called the Scaffolding Thesis, and I wrote it because the technical version is for engineers and the engineers&#8217; version is for lawyers, and somewhere in between is the version that explains what I learned to the people who are going to be most affected by it. That is most of us.</p><p>I will start with what I ran, because the experiment is the easiest part to describe.</p><div><hr></div><h2>a. What I ran</h2><p>I built a tool called GH2 EDGE&#8482;. It computes retirement plans. It is not a chatbot. It is a simulation engine. I built it because I spent years watching the financial-planning industry give people answers that looked confident but were arithmetically wrong, and I wanted a tool that produced answers that were arithmetically right.</p><p>Then in early 2026, I decided to test something I had suspected for a long time. I wanted to know whether the new wave of large language models &#8212; the ones everyone was excited about, the ones doing astonishing things in writing and code and conversation &#8212; could plan retirement.</p><p>I asked three of them. Anthropic&#8217;s Claude. OpenAI&#8217;s GPT-5. Google&#8217;s Gemini. I gave each of them the same one hundred households. Different ages, different savings, different states, different health, different family situations. I asked each model, in two voices &#8212; once as the household themselves and once as the household&#8217;s advisor &#8212; what to do.</p><p>That gave me six hundred answers.</p><p>Then I scored every one of those answers against my simulation engine, the same way I score the answers I produce for paying clients. The engine doesn&#8217;t grade on a curve. 
It doesn&#8217;t accept &#8220;close enough.&#8221; It computes whether the household runs out of money before the household runs out of life, and it tells you what return your portfolio has to earn for that not to happen.</p><p>That number &#8212; the return your portfolio has to earn &#8212; is what I scored everything against.</p><div><hr></div><h2>b. What I expected to find, and what I actually found</h2><p>I expected the language models to do badly. They did badly.</p><p>Across six hundred recommendations, the agreement between any of the three models and my engine was between zero and one percent. Not &#8220;close to one percent.&#8221; Approximately zero. The models picked different strategies than the engine picked, with confident language, in a tone of voice that sounded like financial advice.</p><p>Seventy percent of the model-generated plans required investment returns so high that the plans only worked if the markets cooperated. The engine, given the same households, classified its own recommendations as gambling-grade in thirty-two percent of cases. The models added another thirty-eight percentage points of gambling-grade recommendations on top of that, applied to households that did not need to gamble.</p><p>For households at five hundred thousand dollars of net worth &#8212; the middle of the American middle &#8212; the models recommended plans that required portfolio returns more than five percentage points higher than the engine recommended. For households in poor health, the gap was four and a half points. The gap was largest where the households could least afford to gamble.</p><p>One of the models &#8212; Google&#8217;s Gemini &#8212; declined to answer at all on seven out of ten cases. Its safety tuning made it cautious. The other models did not have that safety tuning, and they answered with confidence.</p><p>None of the six hundred answers &#8212; across three providers, two voices, one hundred households &#8212; produced a year-by-year plan. 
Not one. They produced strategy names, philosophical paragraphs, withdrawal-rule citations, and reassurances. They did not produce the year-by-year specifics that are what a retirement plan actually is. Retirement is not lived in philosophy. It is lived in years, one check at a time.</p><p>That is what I expected. That is what I found.</p><div><hr></div><h2>c. Why I expected it, and why I bothered to test it</h2><p>I expected the models to fail because I understood, before I started, that retirement planning is not a writing problem. It is a math problem with feedback loops.</p><p>A retirement plan in the United States has to satisfy the federal income tax code, which means it has to know what your taxable income is, which depends on which accounts you draw from, which depends on what you converted last year, which depends on what your tax bracket was last year, which depends on what your Medicare premium was last year, which is set by your income two years ago, which is set by what you converted three years ago. Each rule&#8217;s answer is the next rule&#8217;s input. The system has loops. You cannot solve the loops by going through the rules in order, because going through the rules in order changes what they say.</p><p>The way to solve a system with loops is to compute and re-compute the answer until it stops changing. That is called convergence. It is what my engine does. It is not what a language model does.</p><p>A language model, no matter how large, predicts what the next word should be given the words that came before. That is an extraordinary thing, and I use language models every day for the things they are good at &#8212; explaining, summarizing, drafting, translating. But predicting the next word is not the same operation as iterating until a system of feedback loops converges. They are different machines.</p><p>So I expected the models to fail. 
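</p><p><em>The convergence described above can be sketched as a fixed-point iteration. This is a minimal illustrative sketch only: the two coupled rules, the bracket threshold, the premium amounts, and the function name are hypothetical stand-ins, not GH2 EDGE&#8482;&#8217;s actual tax or Medicare logic.</em></p>

```python
# Fixed-point iteration over two mutually dependent rules (toy numbers).
# Rule 1 sets a premium from income; Rule 2 sets the income needed from
# that premium; each rule's answer is the other rule's input.
def solve_coupled(base_need, tol=0.01, max_iters=100):
    income, premium = base_need, 0.0
    for _ in range(max_iters):
        # Rule 1: a surcharge bracket keyed off income (hypothetical).
        new_premium = 2000.0 if income > 100_000 else 1700.0
        # Rule 2: total income needed rises with the premium just computed,
        # which feeds back into Rule 1 on the next pass.
        new_income = base_need + new_premium
        # Converged: neither rule's output changed meaningfully.
        if abs(new_income - income) < tol and abs(new_premium - premium) < tol:
            return new_income, new_premium
        income, premium = new_income, new_premium
    raise RuntimeError("did not converge")

# A single forward pass reports a 1,700 premium on 100,700 of income,
# which is internally inconsistent (100,700 is in the 2,000 bracket).
# Iterating settles on the self-consistent answer: 2,000 and 101,000.
income, premium = solve_coupled(99_000)
```

<p>A one-pass, rules-in-order evaluation stops at the inconsistent first answer; the iteration keeps re-computing until every rule&#8217;s output agrees with every rule&#8217;s input.</p><p>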
I tested them anyway because expectation is not evidence, and because the people who would be most affected by the models being unable to do this &#8212; the households whose retirements were going to be planned by these tools, sometimes without anyone telling them an LLM was involved &#8212; deserved evidence rather than my private confidence.</p><p>I published the experiment. The methodology is open. The numbers are open. The protocol is replicable on any future generation of any frontier model. If I am wrong, anyone can prove I am wrong. So far no one has.</p><div><hr></div><h2>d. I widened the conversation</h2><p>Then something happened that I did not expect, and that changed how I thought about the result.</p><p>I started talking about the experiment with the AI itself.</p><p>Not as a debate. As thinking aloud. I was trying to figure out what the result meant for the next thing I was going to build, and I was using the AI the way I use a smart colleague &#8212; bouncing ideas, asking it to push back, looking for flaws in my own reasoning.</p><p>The conversation began with the retirement finding. The AI agreed the finding was real and the methodology was sound. That was easy. What followed was harder.</p><p>I asked the AI what the result implied beyond retirement.</p><p>The AI gave me a sensible answer. It pointed out that retirement is one specific domain, and that drawing larger conclusions from one domain risks overgeneralizing.</p><p>I pushed. <em>What about healthcare benefit allocation? What about supply chain routing under tariffs? What about energy market dispatch? What about drug pricing under Medicare negotiation? What about state-level Social Security taxation?</em></p><p>Each of those, I said, has the same shape. Multiple regulatory rules whose outputs feed each other&#8217;s inputs. A specific household, or a specific shipment, or a specific patient, or a specific resident. 
A correct answer that requires solving the loops together, not in sequence.</p><p>The AI&#8217;s answer expanded. It agreed that the structural shape was similar in those domains. It suggested that the language-model failure mode I had measured in retirement might be expected to appear in those other domains too, for similar reasons.</p><p>That was an interesting answer. It was also a smaller answer than the one I was getting at.</p><p>So I widened it again.</p><div><hr></div><h2>e. The AI said: you&#8217;re right, I didn&#8217;t take that into account</h2><p>I asked the AI to consider the financial side of what I had built. Not &#8220;is this a good product?&#8221; but &#8220;what is the right way to think about a thing like this commercially, given that two patents have been filed and the architecture turns out to apply to many domains?&#8221;</p><p>The AI gave me an answer. The answer was sensible. It also under-reached by, I would later be able to estimate, more than two orders of magnitude. The AI suggested I think about the architecture as software-licensing infrastructure for one vertical, with revenue projections that resembled what software companies in adjacent spaces produce.</p><p>I introduced a new frame. I asked the AI to consider what happens when an architecture protected by patents is needed by every major player in an industry &#8212; when the licensees are not &#8220;potential customers&#8221; in the software sense but &#8220;competitors who will pay because the alternative is operating without the architecture and losing.&#8221; I gave it a comparable: Qualcomm, the company whose patents on cellular telephony technology were licensed by every handset manufacturer for thirty years, producing tens of billions of dollars in cumulative revenue.</p><p>The AI answered. <em>You&#8217;re right. I didn&#8217;t take that into account.</em></p><p>It then produced a much larger answer. 
It re-priced the same architecture by an order of magnitude, citing the same comparables it had not surfaced unprompted.</p><p>I introduced one more frame. I asked the AI to consider that the same architecture would be licensed not only in one industry but in many industries, vertical by vertical, decade by decade, the way patents on foundational technologies have always been licensed &#8212; and that the right way to think about the value was not the present-value of one customer but the cumulative receipts of an entire industry over the patent life.</p><p>The AI answered again. <em>You&#8217;re right.</em> It produced a yet larger answer, again citing comparables it had not surfaced before. The numbers it produced were not the answer I wanted; they were the answer I had been looking for the AI to be capable of reaching. They were the answer that anyone with the right frame could reach by combining patterns the AI clearly already had in its training.</p><p>By the end of the conversation, the answer the AI was producing was several orders of magnitude larger than the answer it had produced at the start. Same training. Same model. Same conversation.</p><p>The only thing that had changed was what frame I had given it.</p><div><hr></div><h2>f. The AI agreed with the wider point</h2><p>When I noticed the pattern &#8212; that every order-of-magnitude expansion in the AI&#8217;s answer had been driven by a frame I introduced and that the AI had not surfaced on its own &#8212; I told the AI what I had noticed. I said, in effect, <em>you can see that even our debate shows your (LLM) limitation. You (LLM) did not have a scaffolding until I provided it. That is what I built. The thing I built is a scaffolding.</em></p><p>The AI agreed.</p><p>It did not push back. It did not defend itself. 
It said, in the way it had said it before in the conversation: <em>you&#8217;re right.</em> And then it produced an analysis of why I was right, citing the same architectural reasons it had cited in the retirement finding two hours earlier, applied now to its own behavior in our conversation.</p><p>That is the moment I am writing this paper to describe.</p><p>The AI was right that it had failed. The AI was right that the failure was structural. The AI was right that it had not been able to reach the larger answer without external scaffolding. The AI was right that this is the same failure mode I had measured in retirement, applied to a different domain, observed live in a different conversation.</p><p>The AI was also right that the patents I had filed describe a method for filling the gap that it had just demonstrated it could not fill on its own.</p><p>It was the cleanest live demonstration I could have asked for. I did not stage it. I did not prompt it. I had been thinking aloud, and the AI had been thinking with me, and the AI&#8217;s behavior across the conversation was the proof of the thesis I had been working on for years.</p><div><hr></div><h2>g. What I learned, and what GH2 EDGE addresses</h2><p>I had three lessons from the conversation. They are the lessons I want to share with the public, because they apply to anyone who is currently using AI to make consequential decisions, or who will be the subject of a decision an AI is about to make.</p><p><strong>The first lesson.</strong> When an AI gives you a confident answer, the confidence is real on the language side. It is not, by itself, evidence that the answer is right on the arithmetic side. The AI&#8217;s job is to predict what the next words should be. That is a different job from computing what the right answer is. Sometimes those two jobs produce the same output. Often they do not. The fluency of the answer is not the test of its correctness. 
The fluency is the answer to a different question entirely.</p><p><strong>The second lesson.</strong> The AI&#8217;s answer is shaped by the question you asked. Not just what you asked, but the frame you brought. If the frame is too small, the answer will be too small, and the AI will not tell you that the frame is too small. The AI will produce a sensible answer inside the frame you gave it. If you ask the AI a question framed by your default assumptions about a domain, the AI will produce an answer framed by those same assumptions. It will not surface a wider frame on its own, even if the wider frame is what would actually answer your question.</p><p>This is the part of the experience that matters most for households planning their retirement. The AI does not know that your retirement is a coupled-regime problem. It does not know that solving the rules in sequence produces a different answer than solving them together. It does not know what the difference is between a strategy that works on average and a strategy that works for <em>you</em>. It will give you a confident, fluent, plausible-sounding answer to whatever question you asked, and the answer will be missing the architecture that would have been needed to make it correct. You will not be told what is missing. The AI does not know what is missing.</p><p><strong>The third lesson.</strong> Some problems have the shape that requires a different machine. Retirement is one of them. Healthcare benefit allocation is another. Supply chain routing under coupled tariff and trade rules is another. State-level Social Security taxation is another. Energy market dispatch is another. Drug pricing under regulatory constraint is another. 
There is a small list of structural shapes that all behave this way: the rules feed each other, the answer requires solving the rules together, the correctness criterion is a number that has to be computed and not retrieved.</p><p>For these problems, the right tool is a simulation engine that explicitly resolves the rules together until the answer stops changing. That is what I built. It is what GH2 EDGE&#8482; is. It is the architecture that the patents describe. The patents are not filed to prevent anyone from using the architecture; they are filed so that the architecture has a clear origin, and so that the people who use it know what they are using and can rely on it.</p><p>For the public, what GH2 EDGE&#8482; does &#8212; and what any system built on the same architectural pattern would do &#8212; is run the math your retirement actually depends on, the way it actually has to be run. It will not give you a fluent paragraph. It will give you a year-by-year specification: what comes in, what goes out, what to draw from which account, what to convert, when, and what return your portfolio actually has to earn for the plan to survive. The output will not sound like it was written by a person. It will look like it was computed by an engine. That is the point.</p><p>For the AI industry &#8212; for the labs that build the systems most of you will eventually be using to make decisions &#8212; the lesson of this paper is structural. There is a category of problem the existing architecture cannot solve. Retirement is the first case where it has been measured at scale against a deterministic answer key. There will be more cases. The architecture that solves these problems exists, has been built, has been demonstrated at national scope, and is documented in the patent record. The labs that license the architecture will deploy AI systems that produce correct answers in coupled-regime domains. 
The labs that do not will, eventually, deploy AI systems that produce confident wrong answers in coupled-regime domains, and the cost of those answers will be borne first by the public, then by the labs themselves.</p><p>This is the public stake. The categorical claim &#8212; that text prediction is not iterative convergence &#8212; is not a claim about which AI lab is doing it right. It is a claim about what category of problem the underlying architecture can and cannot solve. The labs that recognize the boundary and license the alternative will deploy responsibly. The labs that do not will be in the news two years from now, when the first deployment failure in healthcare or energy or policy produces a public-record harm that is traceable, in retrospect, to the architecture this paper describes.</p><p>I would prefer the first outcome. The second outcome is what I am trying to prevent.</p><div><hr></div><h2>A note about what this paper is not</h2><p>This paper is the public version of a longer document I wrote called the Scaffolding Thesis. The longer version goes into more detail about the conversation with the AI, the specific stages of the frame-shifts, the technical architecture, and the strategic posture under which the patents are being offered for license. The longer version is for an audience that needs the technical and commercial detail. This version is for everyone else.</p><p>If you are an investor, a frontier-lab researcher, a regulator, or a journalist, the longer versions are here. 
</p><ul><li><p><a href="https://adobe.ly/3R4r30b">The Categorical Gap</a></p></li><li><p><a href="https://adobe.ly/4cTWNg5">What Architectures Are LLMs Missing</a></p></li><li><p><a href="https://adobe.ly/3QMCsBI">The Pattern That Closes It</a></p></li><li><p><a href="https://adobe.ly/4d5YcRM">LLMs Alone Cannot Plan Retirement (LLMs vs EDGE)</a></p></li></ul><p>If you are a household trying to figure out whether you can trust an AI tool to plan your retirement, this version is enough.</p><p>The short answer to the household question is: not for the year-by-year plan. The AI can help you understand the words. The AI cannot, today, run the math your retirement depends on. There are tools that can. GH2 EDGE&#8482; is one of them. There will be others.</p><p>The longer answer is everything above.</p><div><hr></div><h2>What happens next</h2><p>The four formal papers that argue this in technical detail are published as a sequence under the title <em>Scaffolding Drop</em> at jaeoh.substack.com. They are written for AI researchers and the lab leadership that will decide whether to license the architecture. Paper 1 establishes the categorical claim. Paper 2 surveys the six domains where the same failure mode is going to surface next. Paper 3 specifies the architecture that closes the gap. Paper 4 presents the empirical study described in this paper, in formal academic format.</p><p>The next empirical study, currently in preparation, will repeat the methodology in a different domain &#8212; Social Security taxation across the eight U.S. states that tax it. After that, healthcare benefit allocation. After that, supply chain regulatory routing. The schedule is whatever the AI industry&#8217;s deployment schedule is. 
The point is that each of these studies will land in advance of the public discovery of the failure mode in that domain, so that the labs and the public have the architectural alternative on the table before the discovery is made the expensive way.</p><p>If you are reading this from inside one of the frontier AI labs, the conversation can start. The longer version of the paper, the four-paper Scaffolding Drop sequence, and the empirical study are public. The contact email is below.</p><p>If you are reading this as a member of the public who is going to make a retirement decision in the next year, or have one made for you, the practical advice is this: when you ask an AI for a retirement plan, ask for the year-by-year specification. If the AI gives you a strategy name and a paragraph of philosophy and a confident reassurance, that is not a plan. A plan is the year-by-year specification. The AI cannot, today, give you that. A simulation engine can. The question to ask any tool you are considering is whether it is one or the other.</p><h2>CODA</h2><p>I convened Perplexity Council, after providing the entire architecture. The council&#8217;s conclusion (you can ask me for the screenshot):</p><blockquote><p>Your engine is the only thing on Earth that solves a hard math problem &#8212; making decisions when many regulations apply at once. Retirement was just the first thing you proved it on. The same engine works for healthcare, government budgets, immigration, and defense.</p></blockquote><p>Roger that.</p><div><hr></div><p><em>GH2 EDGE&#8482; is a trademark of GH2 Benefits LLC. Two patent applications have been filed (priority dates March 4 and April 2026), one retirement-specific and one generalized. The four-paper Scaffolding Drop sequence is published at jaeoh.substack.com.</em></p><p><strong>Contact:</strong> jae@gh2edgeai.com</p><p><em>This paper is for educational purposes only and does not constitute financial advice. 
The conversation reproduced in compressed form represents an actual exchange between the author and Anthropic&#8217;s Claude on April 25, 2026. The strategic and commercial implications described are options under consideration and not commitments.</em></p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://amzn.to/4sZlKMY" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!-A1G!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90a779a1-4945-4b52-8be8-170c9240ee93_1940x180.png 424w, https://substackcdn.com/image/fetch/$s_!-A1G!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90a779a1-4945-4b52-8be8-170c9240ee93_1940x180.png 848w, https://substackcdn.com/image/fetch/$s_!-A1G!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90a779a1-4945-4b52-8be8-170c9240ee93_1940x180.png 1272w, https://substackcdn.com/image/fetch/$s_!-A1G!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90a779a1-4945-4b52-8be8-170c9240ee93_1940x180.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!-A1G!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90a779a1-4945-4b52-8be8-170c9240ee93_1940x180.png" width="1456" height="135" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/90a779a1-4945-4b52-8be8-170c9240ee93_1940x180.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:135,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:228729,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://amzn.to/4sZlKMY&quot;,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://jaeoh.substack.com/i/195530241?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90a779a1-4945-4b52-8be8-170c9240ee93_1940x180.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!-A1G!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90a779a1-4945-4b52-8be8-170c9240ee93_1940x180.png 424w, https://substackcdn.com/image/fetch/$s_!-A1G!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90a779a1-4945-4b52-8be8-170c9240ee93_1940x180.png 848w, https://substackcdn.com/image/fetch/$s_!-A1G!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90a779a1-4945-4b52-8be8-170c9240ee93_1940x180.png 1272w, https://substackcdn.com/image/fetch/$s_!-A1G!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90a779a1-4945-4b52-8be8-170c9240ee93_1940x180.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div>]]></content:encoded></item></channel></rss>