<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://mandar.dev/feed.xml" rel="self" type="application/atom+xml" /><link href="https://mandar.dev/" rel="alternate" type="text/html" /><updated>2026-03-22T23:29:21+00:00</updated><id>https://mandar.dev/feed.xml</id><title type="html">Mandar’s Blog</title><subtitle>Thoughts on technology, software, and artificial intelligence.</subtitle><entry><title type="html">Taste is the New Bottleneck</title><link href="https://mandar.dev/2026/03/22/taste-is-the-new-bottleneck/" rel="alternate" type="text/html" title="Taste is the New Bottleneck" /><published>2026-03-22T00:00:00+00:00</published><updated>2026-03-22T00:00:00+00:00</updated><id>https://mandar.dev/2026/03/22/taste-is-the-new-bottleneck</id><content type="html" xml:base="https://mandar.dev/2026/03/22/taste-is-the-new-bottleneck/"><![CDATA[<p><img src="/assets/images/2026-03-22-taste-is-the-new-bottleneck-hero.png" alt="An hourglass with code filtering down into a diamond — raw execution distilled into refined taste" /></p>

<p>For the last ten years, the most reliable piece of advice I gave my mentees was a cliché: ideas are cheap, execution is everything.</p>

<p>It was true. In software engineering, the gap between a good idea and a shipped product was enormous. You had to write boilerplate, debug obscure errors, argue over architecture, and grind through edge cases for months. The people who won were the ones who could put their heads down and ship. The idea was just the starting line. The execution was the marathon.</p>

<p>But in the last two years, something fundamental changed. AI made execution cheaper than ideas.</p>

<p>So what do I tell them now?</p>

<p>To answer that, we have to look at what we actually meant when we said “execution.” We meant two things bundled together: throughput and judgment. Throughput is the raw volume of typing, coding, and building. Judgment is knowing what to type, code, and build.</p>

<p>We never distinguished between the two because they always came together. The person who knew what to build was usually the same person who had to build it. But AI has unbundled them. It has driven the cost of throughput toward zero while leaving the cost of judgment untouched.</p>

<p>Anyone can now spin up a studio of agents to write code, draft copy, or run A/B tests. A junior engineer can generate in an afternoon what used to take a senior team a month. When throughput is infinite, throughput is no longer a moat. If your value proposition is “I can build things fast,” you are competing against a rising tide that will only get higher. You will drown.</p>

<p>When everyone can execute, the bottleneck shifts from <em>how</em> to build it to <em>what</em> to build.</p>

<p>The word for knowing what to build is taste.</p>

<p>People usually think of taste as an aesthetic thing — picking the right font, the right color palette, the right animation curve. But in software, taste is structural. It’s the instinct that tells you not just what <em>can</em> be done, but what <em>should</em> be done. It’s the ability to look at a complex problem and see the elegant, simple solution hiding inside it. It’s knowing which features to leave out.</p>

<p>Think of a fashion house. Karl Lagerfeld wasn’t sewing every stitch at Chanel. The execution was handled by a team of highly skilled artisans. What people paid for was the creative direction. The taste. The name on the label meant that someone with exceptional judgment had decided this specific arrangement of fabric was worth existing.</p>

<p>We are entering the fashion house era of software. AI gives everyone a studio full of people to execute on their ideas. The cost of execution has collapsed, but the returns to taste and point of view have gone way up.</p>

<p>Why is taste so hard for AI? Because AI is incredible at synthesizing patterns across millions of data points. It can run a playbook perfectly. But it can’t write the playbook.</p>

<p>It can’t sit in a room with a frustrated user and notice the thing they aren’t saying. It can’t feel the specific anxiety of a decision-maker who knows the status quo is failing but is terrified of being blamed for what comes next. It can’t look at a product and feel, in its gut, that something is off — even when every metric says things are fine. Taste comes from lived experience, from friction, from being embedded in the messy reality of human problems. That asymmetry — between what AI can process and what only human presence can accumulate — is where differentiation now lives.</p>

<p>If taste is the new bottleneck, the obvious question is: can you develop it?</p>

<p>I think you can. It’s not magic. It’s a practice.</p>

<p>I’ve noticed that the engineers with the best taste tend to read a lot of things that have nothing to do with engineering. They read history, design, psychology, architecture. The best product instincts come from cross-pollination — from importing mental models that nobody else in the room has. If you only read Hacker News, you will only build things that look like they belong on Hacker News.</p>

<p>They spend time with users, not just dashboards. AI can analyze usage data better than any human. But only you can sit across from someone and watch their face when they hit a confusing screen. Taste is built from accumulated human friction, not from metrics.</p>

<p>They study the work they admire and ask <em>why</em> it’s good. They don’t just use great products — they reverse-engineer the decisions behind them. Why did this team cut that feature? Why does this interface feel effortless while that one feels exhausting? Taste is pattern recognition, and pattern recognition requires a large training set.</p>

<p>And most importantly, they have strong opinions and let them be wrong. Taste develops through judgment, and judgment develops through making calls and learning from the misses. People who hedge everything develop no taste at all.</p>

<p>We are about to drown in a sea of perfectly executed, perfectly average software. Every app will work. Every interface will be polished. Every codebase will be clean. And none of it will matter, because none of it will be <em>different</em>.</p>

<p>So what do I tell my mentees now? I tell them to stop optimizing for throughput. Stop trying to be the fastest typist. Start reading more, observing more, and developing a point of view about what good looks like.</p>

<p>The engineers who win the next decade won’t be the ones who can execute the fastest. They’ll be the ones with the taste to know what is worth executing in the first place. Ideas aren’t cheap anymore. Good ones are the only thing that matters.</p>

<hr />

<p><em>P.S. I mentor a small number of engineers each year. If this essay resonated and you’re navigating a career transition in the age of AI, I have one open slot. Reach out on <a href="https://www.linkedin.com/in/mandarlimaye">LinkedIn</a>.</em></p>]]></content><author><name></name></author><category term="ai" /><category term="software-engineering" /><category term="career" /><category term="taste" /><category term="essay" /><summary type="html"><![CDATA[For a decade I told my mentees 'ideas are cheap, execution is everything.' AI broke that aphorism. When execution is free, the only thing that matters is taste.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://mandar.dev/assets/images/2026-03-22-taste-is-the-new-bottleneck-hero.png" /><media:content medium="image" url="https://mandar.dev/assets/images/2026-03-22-taste-is-the-new-bottleneck-hero.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">I Built a Personal AI Agent That Runs for Free, Forever. You Can Too.</title><link href="https://mandar.dev/2026/03/15/pennyclaw-free-ai-agent/" rel="alternate" type="text/html" title="I Built a Personal AI Agent That Runs for Free, Forever. You Can Too." /><published>2026-03-15T00:00:00+00:00</published><updated>2026-03-15T00:00:00+00:00</updated><id>https://mandar.dev/2026/03/15/pennyclaw-free-ai-agent</id><content type="html" xml:base="https://mandar.dev/2026/03/15/pennyclaw-free-ai-agent/"><![CDATA[<p><img src="/assets/images/2026-03-15-pennyclaw-hero.png" alt="A flat illustration of a laptop displaying green checkmarks and $0.00/mo, with a copper penny bearing claw marks beside it" /></p>

<p>I was tired of paying for servers I barely use. Like many developers, I love experimenting with AI, but the cost of a dedicated VPS to run a personal AI agent 24/7 always felt like a needless expense. Google Cloud’s Always Free tier includes one <code class="language-plaintext highlighter-rouge">e2-micro</code> VM per account (in select US regions), so I asked myself: could I build an agent that fits inside it?</p>

<p>Today, I’m excited to open-source the answer: <strong><a href="https://github.com/mandarl/pennyclaw">PennyClaw</a></strong>.</p>

<p>PennyClaw is a lightweight AI agent I built from scratch in Go. It’s designed specifically to run comfortably within the tight constraints of Google Cloud’s “Always Free” tier, which means you can have a personal AI assistant running around the clock for <strong>$0 per month</strong>.</p>

<h2 id="built-for-the-free-tier">Built for the Free Tier</h2>

<p>The biggest challenge was resource consumption. Most self-hosted AI agents can be memory-hungry, quickly overwhelming a small VM. I engineered PennyClaw to be incredibly frugal, using less than 50MB of RAM when idle. This efficiency is the key to unlocking the power of the free tier.</p>

<p>But I also wanted it to be easy. That’s why I created a one-click deployment script. It runs 24 pre-flight checks to make sure your setup is eligible for the free tier, automatically configures the environment, and gets your agent running in under five minutes. No surprises, no hidden costs.</p>

<h2 id="more-than-just-a-chatbot">More Than Just a Chatbot</h2>

<p>Despite its small footprint, PennyClaw is a capable tool.</p>

<ul>
  <li><strong>Connect to any LLM:</strong> It supports OpenAI, Anthropic, Google Gemini, OpenRouter, and any other OpenAI-compatible API.</li>
  <li><strong>Runs tools securely:</strong> It can execute shell commands, manage files, search the web, and make API requests from a sandboxed environment.</li>
  <li><strong>Persistent memory:</strong> It remembers your conversations across restarts, thanks to a local SQLite database.</li>
  <li><strong>Simple Web UI:</strong> It includes a clean, self-contained web interface for easy interaction.</li>
</ul>
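<p>“OpenAI-compatible” is doing a lot of work in that first bullet, so here is a minimal sketch of what it means in practice: every supported backend accepts the same chat-completions request shape, so switching providers is just a different base URL, API key, and model name. (Illustrative Go, not PennyClaw’s actual types; the URL and model below are placeholders.)</p>

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// Message and ChatRequest mirror the OpenAI chat-completions wire
// format, which the other providers also accept. These are sketch
// types for illustration, not PennyClaw's internals.
type Message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

type ChatRequest struct {
	Model    string    `json:"model"`
	Messages []Message `json:"messages"`
}

// newChatRequest builds a POST to {baseURL}/chat/completions; swapping
// providers only changes baseURL, apiKey, and model.
func newChatRequest(baseURL, apiKey, model, prompt string) (*http.Request, error) {
	body, err := json.Marshal(ChatRequest{
		Model:    model,
		Messages: []Message{{Role: "user", Content: prompt}},
	})
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest("POST", baseURL+"/chat/completions", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "Bearer "+apiKey)
	return req, nil
}

func main() {
	// Placeholder endpoint and key — any OpenAI-compatible API slots in here.
	req, err := newChatRequest("https://api.example.com/v1", "sk-...", "some-model", "hello")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL.Path) // POST /v1/chat/completions
}
```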

<h2 id="try-it-yourself">Try It Yourself</h2>

<p>This project was a personal challenge that grew into something I believe many developers will find useful. If you’ve ever wanted a personal AI agent without the monthly bill, this is for you.</p>

<p>Head over to the <a href="https://github.com/mandarl/pennyclaw">PennyClaw GitHub repository</a> and hit the “Deploy to GCP” button. Your own free, forever-running AI agent is just a few clicks away.</p>

<p>This is just the beginning, and I’m excited to see what the community builds with it. What’s the first thing you would automate with your own personal AI agent?</p>]]></content><author><name></name></author><category term="open-source" /><category term="ai" /><category term="golang" /><category term="gcp" /><category term="side-projects" /><category term="pennyclaw" /><summary type="html"><![CDATA[Announcing PennyClaw, an open-source AI agent built in Go that runs 24/7 on GCP's free tier e2-micro VM. One-click deploy, zero cost, forever.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://mandar.dev/assets/images/2026-03-15-pennyclaw-hero.png" /><media:content medium="image" url="https://mandar.dev/assets/images/2026-03-15-pennyclaw-hero.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">The Productivity Paradox: My GitHub Exploded, But My Impact Didn’t</title><link href="https://mandar.dev/2026/03/07/the-productivity-paradox/" rel="alternate" type="text/html" title="The Productivity Paradox: My GitHub Exploded, But My Impact Didn’t" /><published>2026-03-07T00:00:00+00:00</published><updated>2026-03-07T00:00:00+00:00</updated><id>https://mandar.dev/2026/03/07/the-productivity-paradox</id><content type="html" xml:base="https://mandar.dev/2026/03/07/the-productivity-paradox/"><![CDATA[<p><img src="/assets/images/2026-03-07-productivity-paradox-hero.png" alt="My GitHub contribution heatmap from March 2025 to March 2026, showing a dramatic spike in activity starting in October 2025 with the caption &quot;Productivity != Impact&quot;" /></p>

<p>This is my GitHub contribution history. See that wall of green that erupts around October 2025? It looks like the chart of a 10x developer, right? It’s the story of my coding output going supernova. But it’s also a lie.</p>

<p>That chart doesn’t show a 10x increase in my impact. It hides a story of frustration, stagnation, and a passion project that went absolutely nowhere. It’s a perfect picture of the modern developer’s dilemma: we’re mistaking activity for achievement.</p>

<h2 id="the-ai-gold-rush">The AI Gold Rush</h2>

<p>Before late 2025, my workflow was standard-issue software development: a mix of deep work, creative bursts, and the usual slog of writing boilerplate, setting up tests, and hunting down bugs. Progress was steady, but it was a grind.</p>

<p>Then, in late 2025, AI coding assistants—especially Claude Code—got good. <em>Really</em> good. They went from being a fancy autocomplete to something more like a tireless, slightly weird junior programmer. The nature of my work changed overnight. The AI handled the grunt work. It wrote the boilerplate, generated test suites, and refactored messy codebases in seconds. Hours of my day were suddenly handed back to me.</p>

<p>My personal productivity, at least by the crude metric of green squares on a GitHub heatmap, went vertical. I was a commit machine. I was, by all appearances, the most productive I’d ever been.</p>

<h2 id="a-case-study-in-going-nowhere-fast">A Case Study in Going Nowhere, Fast</h2>

<p>So what did I do with all that reclaimed time? I poured it into a side project: a commercial Android app I called “SSH Browser.” (Quick disclaimer: this is a personal project, completely separate from my day job.) The idea was simple: a mobile browser that tunnels all your web traffic through an SSH connection—private, encrypted browsing from your phone. With my AI sidekick, I wasn’t just sprinting; I was flying. I built the entire app in a few weeks—a project that, only a short while earlier, would have taken me months, if not a year.</p>

<p>And then I hit a wall. Not a technical wall. A bureaucratic one.</p>

<p>When I submitted SSH Browser to the Google Play Store, everything ground to a halt. The project wasn’t blocked by bugs, performance issues, or a lack of features. It was blocked by things that have nothing to do with code: an opaque review process, permission policies that flag the very capabilities an SSH client needs, and a slow-motion back-and-forth with a review team that seemed to operate on a different timeline from the rest of the universe.</p>

<p>Despite the flurry of commits and the clean, AI-assisted code, the app had zero users. Zero downloads. Zero impact. My GitHub heatmap was a lush, vibrant forest of green, but the project was going nowhere.</p>

<h2 id="were-measuring-the-wrong-things">We’re Measuring the Wrong Things</h2>

<p>The story of SSH Browser is a painful but perfect example of the productivity paradox. And while this was a side project, the lesson isn’t limited to side projects. I see the same dynamic at work in my day job, too—teams shipping features at record speed while the real blockers are cross-team alignment, policy reviews, or waiting on decisions that no amount of code can accelerate. We have tools that can 10x the <em>act of coding</em>, but the real bottlenecks to making an <em>impact</em> often live entirely outside the IDE.</p>

<p>This whole experience made me realize how much we rely on vanity metrics. We love to track GitHub contributions, lines of code, and story points. They feel tangible. They give us a sense of forward motion. But they’re just proxies. They measure activity, not achievement.</p>

<p>So, what should we measure instead? I’m not sure I have the perfect answer, but I’m starting to think about a different set of questions:</p>

<ul>
  <li><strong>How fast can we get a real product into a user’s hands?</strong> (Time to first user)</li>
  <li><strong>How quickly can we learn from that user?</strong> (Speed of learning)</li>
  <li><strong>Are we shipping <em>value</em>, or just features?</strong> (Rate of impact)</li>
</ul>

<p>The most important work, I’ve realized, is often not the coding. It’s navigating the organizational, legal, market, and bureaucratic challenges—the stuff our AI assistants can’t do for us. Not yet, anyway.</p>

<h2 id="the-real-work">The Real Work</h2>

<p>My GitHub heatmap tells a story of incredible productivity. But the real story, the one that matters, is the one it doesn’t tell. It’s the story of a stalled project and the hard-won lesson that activity and impact are not the same thing.</p>

<p>As AI continues to get woven into the fabric of our work, I’m not going to urge you to do anything. But I will say this: I’m starting to look past the satisfying green squares of my contribution graph. I’m asking myself what I’m really building, and whether it’s actually going anywhere.</p>

<p>Because it turns out, distinguishing between motion and progress is the hardest part of the job.</p>]]></content><author><name></name></author><category term="ai" /><category term="productivity" /><category term="software-engineering" /><category term="github" /><category term="impact" /><category term="side-projects" /><category term="claude" /><summary type="html"><![CDATA[My GitHub contribution heatmap looks like a 10x developer's dream. But behind the wall of green is a stalled side project and a hard lesson: activity and impact are not the same thing.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://mandar.dev/assets/images/2026-03-07-productivity-paradox-hero.png" /><media:content medium="image" url="https://mandar.dev/assets/images/2026-03-07-productivity-paradox-hero.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">The Ghost in the Machine: Why AI Still Needs an Engineer’s Judgment</title><link href="https://mandar.dev/2026/02/27/ghost-in-the-machine/" rel="alternate" type="text/html" title="The Ghost in the Machine: Why AI Still Needs an Engineer’s Judgment" /><published>2026-02-27T00:00:00+00:00</published><updated>2026-02-27T00:00:00+00:00</updated><id>https://mandar.dev/2026/02/27/ghost-in-the-machine</id><content type="html" xml:base="https://mandar.dev/2026/02/27/ghost-in-the-machine/"><![CDATA[<p><img src="/assets/images/2026-02-27-ghost-in-the-machine-hero.png" alt="A translucent AI ghost figure made of code and circuit patterns hovering over a laptop displaying a stock market heat map, with a human hand reaching toward the screen — representing the tension between AI speed and human engineering judgment" /></p>

<p>There is a certain magic to modern AI-assisted development, a practice that has been affectionately dubbed “vibe coding.” It’s the exhilarating experience of describing an application in plain English and watching it materialize on your screen, seemingly by magic. It promises a future where the friction between idea and execution disappears. But as I recently discovered, while the magic is real, it has its limits. The ghost in the machine can write the code, but it doesn’t yet possess a soul seasoned by experience.</p>

<p>My journey into this new paradigm began with a simple idea from a friend, <a href="https://www.linkedin.com/in/krishsund/">Krish Sundaram</a>: a market heat map. Thanks, Krish, for the suggestion! The concept was to create a web application that visualizes stock performance across pre-market, regular, and post-market hours. Fueled by the promise of rapid, conversational development, I spun up Claude. In less than fifteen minutes, I had a working prototype. It was visually impressive, functionally plausible, and a testament to the incredible power of vibe coding. The dream was real.</p>

<h3 id="the-unraveling">The Unraveling</h3>

<p>The initial euphoria, however, began to fade when I moved from admiring the prototype to verifying its integrity. When I dug into the data, the numbers didn’t align with the underlying sources: Claude, for all its speed, had made fundamental errors in data processing. What followed was a frustrating back-and-forth, a conversational loop of corrections and re-generations to fix mistakes that a human engineer would likely have avoided. The experience was less like collaborating with a senior partner and more like mentoring a brilliant but naive intern.</p>

<p>This first crack in the facade revealed a crucial gap: Claude could assemble the parts, but it lacked a deep, contextual understanding of the <em>why</em> behind them. It was executing instructions without grasping the intent. Having finally wrestled the data into a state of accuracy, I decided to push further and begin the process of productionizing the application. This is where the second, more profound, trap was sprung.</p>

<h3 id="the-production-trap-a-cautionary-tale-of-premature-optimization">The Production Trap: A Cautionary Tale of Premature Optimization</h3>

<p>To prepare the application for potentially high traffic, I asked Claude to implement a caching strategy to reduce load and avoid hitting rate limits on the backend APIs. Claude complied with astonishing enthusiasm. It implemented caches at every conceivable layer of the application, from the browser’s local storage to the external API calls, creating a labyrinthine system of nested caches.</p>

<p>To a seasoned engineer, the following architecture diagram is a train wreck in slow motion:</p>

<p><img src="/assets/images/2026-02-27-ghost-in-the-machine-caching-diagram.png" alt="Over-Engineered Caching Architecture" /></p>

<p>This is a textbook case of what computer scientist Donald Knuth famously warned against when he said, “premature optimization is the root of all evil.” Claude, in its eagerness to fulfill my request, had missed the crucial lesson that most senior engineers learn the hard way: more is not always better. It created a system so complex that it would be a nightmare to debug and maintain, a brittle house of cards where a single stale cache could lead to cascading failures. It had the knowledge to implement a cache, but not the wisdom to know when—and when not—to do so.</p>

<h3 id="the-enduring-value-of-engineering-judgment">The Enduring Value of Engineering Judgment</h3>

<p>My experience with the market heat map app crystallizes the current state of AI in software development. Vibe coding is an undeniably powerful tool for prototyping, for rapidly exploring ideas and bringing concepts to life. But the journey from a working prototype to a robust, production-ready system is a road that AI cannot yet walk alone.</p>

<p>The gap between a demo and a production system is vast, and it is filled with the nuanced, hard-won wisdom of human engineering. It is the foresight to design for scalability without over-engineering, the ability to make trade-offs between performance and maintainability, and the architectural intuition to build systems that are resilient and adaptable. These are not skills that can be easily codified or learned from a training set; they are the product of experience, of seeing systems fail, and of understanding the subtle interplay of countless variables.</p>

<p>The current discourse in the software world is shifting from pure vibe coding towards more structured approaches like spec-driven development, where the engineer’s role is to provide the detailed blueprint that the AI then executes. This is not a retreat from AI, but a refinement of our relationship with it. It acknowledges that AI is a powerful force multiplier, a co-pilot that can handle the tedious and the repetitive, but it is the human engineer who must remain the captain.</p>

<p>We are moving into an era of collaboration, where the engineer’s role will evolve from a builder of code to an architect of systems and a guide for intelligent agents. The magic of AI is not that it will replace us, but that it will free us to focus on the things that matter most: judgment, wisdom, and the creative spark of human ingenuity. The ghost in the machine is a powerful servant, but it still needs a master.</p>

<hr />

<p><em>You can check out the finished market heatmap project <a href="https://projects.mandar.dev/market-heatmap/">here</a>.</em></p>]]></content><author><name></name></author><category term="ai" /><category term="vibe-coding" /><category term="software-engineering" /><category term="claude" /><category term="production" /><category term="caching" /><category term="premature-optimization" /><summary type="html"><![CDATA[I vibe-coded a stock market heat map in 15 minutes. Then I spent hours fixing data bugs and untangling an over-engineered caching nightmare. Here's what I learned about the gap between AI prototyping and production.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://mandar.dev/assets/images/2026-02-27-ghost-in-the-machine-hero.png" /><media:content medium="image" url="https://mandar.dev/assets/images/2026-02-27-ghost-in-the-machine-hero.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">The Machines Are Talking to Themselves</title><link href="https://mandar.dev/2026/02/20/i-thought-moores-law-was-fast/" rel="alternate" type="text/html" title="The Machines Are Talking to Themselves" /><published>2026-02-20T00:00:00+00:00</published><updated>2026-02-20T00:00:00+00:00</updated><id>https://mandar.dev/2026/02/20/i-thought-moores-law-was-fast</id><content type="html" xml:base="https://mandar.dev/2026/02/20/i-thought-moores-law-was-fast/"><![CDATA[<div class="infographic-container">
<style>
/* ── Infographic Post: Hide redundant Jekyll header ── */
  .infographic-post .post-header {
    display: none !important;
  }

  /* Float nav bar transparently over the hero */
  .infographic-post .site-header {
    position: absolute;
    top: 0;
    left: 0;
    right: 0;
    z-index: 1000;
    background: transparent;
    border-bottom: none;
    margin-bottom: 0;
  }

  /* Ensure nav links are readable over the light hero gradient */
  .infographic-post .site-title {
    color: #1a1a2e;
  }

  .infographic-post .nav-link {
    color: #4b5563;
  }

  .infographic-post .nav-link:hover {
    color: #1a1a2e;
  }

  /* Mobile nav dropdown needs a solid background when open */
  @media (max-width: 768px) {
    .infographic-post .site-nav.is-open {
      background: rgba(255, 255, 255, 0.95);
      backdrop-filter: blur(10px);
      -webkit-backdrop-filter: blur(10px);
    }
  }

  /* Remove top padding from the post content container */
  .infographic-post .post.content-container {
    padding-top: 0;
  }

  /* Ensure site-main has no top margin so hero starts at viewport top */
  .infographic-post .site-main {
    padding-top: 0;
  }

  /* Break out of blog content-container width constraint */
  .infographic-container {
    width: 100vw;
    position: relative;
    left: 50%;
    right: 50%;
    margin-left: -50vw;
    margin-right: -50vw;
  }


  /* ── Reset & Base ── */
  .infographic-container *, .infographic-container *::before, .infographic-container *::after { box-sizing: border-box; margin: 0; padding: 0; }

  .infographic-container {
    --bg: #ffffff;
    --bg-card: #f8f9fa;
    --text: #1a1a2e;
    --text-muted: #6b7280;
    --accent: #4f46e5;
    --accent-glow: #6366f1;
    --orange: #ea580c;
    --orange-text: #c2410c; /* Darker orange for text — passes WCAG AA on white */
    --green: #16a34a;
    --red: #dc2626;
    --cyan: #0891b2;
    --font: 'Segoe UI', system-ui, -apple-system, sans-serif;
    --mono: 'SF Mono', 'Fira Code', 'Consolas', monospace;
  }

  .infographic-container {
    font-family: var(--font);
    background: var(--bg);
    color: var(--text);
    line-height: 1.7;
    overflow-x: hidden;
    scroll-behavior: smooth;
  }

  /* ── Typography ── */
  .infographic-container h1, .infographic-container h2, .infographic-container h3 { font-weight: 700; line-height: 1.2; text-wrap: balance; }

  /* ── Hero Section ── */
  .infographic-container .hero {
    min-height: 100vh;
    min-height: 100dvh; /* Dynamic viewport height — accounts for mobile address bar */
    display: flex;
    flex-direction: column;
    justify-content: center;
    align-items: center;
    text-align: center;
    padding: 2rem;
    position: relative;
    overflow: hidden;
  }

  .infographic-container .hero::before {
    content: '';
    position: absolute;
    inset: 0;
    background:
      radial-gradient(ellipse 120% 60% at 50% 110%, rgba(234,88,12,0.18) 0%, rgba(234,88,12,0.08) 30%, transparent 70%),
      radial-gradient(ellipse 80% 40% at 50% 105%, rgba(220,38,38,0.06) 0%, transparent 60%),
      radial-gradient(ellipse at 50% 0%, rgba(79,70,229,0.04) 0%, transparent 40%);
    pointer-events: none;
  }

  .infographic-container .hero h1 {
    font-size: clamp(2rem, 5vw, 3.5rem);
    max-width: 800px;
    margin-bottom: 1.5rem;
    background: linear-gradient(135deg, #1a1a2e 0%, var(--accent) 100%);
    -webkit-background-clip: text;
    -webkit-text-fill-color: transparent;
    background-clip: text;
  }

  .infographic-container .hero .subtitle {
    font-size: 1.15rem;
    color: #4b5563;
    max-width: 600px;
  }

  .infographic-container .scroll-hint {
    position: absolute;
    bottom: 2rem;
    bottom: calc(2rem + env(safe-area-inset-bottom, 0px));
    animation: bounce 2s infinite;
    color: #4b5563;
    font-size: 0.85rem;
    font-weight: 600;
    letter-spacing: 0.1em;
    text-transform: uppercase;
  }

  /* ── Hero Metadata Overlay ── */
  .infographic-container .hero-meta {
    position: absolute;
    top: 5rem; /* Clear the floating nav bar (~64px + breathing room) */
    left: 50%;
    transform: translateX(-50%);
    display: flex;
    flex-wrap: wrap;
    align-items: center;
    justify-content: center;
    gap: 0.5rem;
    font-family: var(--font);
    font-size: 0.85rem;
    color: #6b7280;
    z-index: 10;
    opacity: 0.85;
  }

  .infographic-container .hero-date,
  .infographic-container .hero-reading-time {
    letter-spacing: 0.02em;
  }

  .infographic-container .hero-separator {
    color: #d1d5db;
    font-weight: 300;
  }

  .infographic-container .hero-tags {
    display: flex;
    flex-wrap: wrap;
    gap: 0.35rem;
    margin-left: 0.5rem;
  }

  .infographic-container .hero-tag {
    display: inline-block;
    font-size: 0.7rem;
    font-weight: 500;
    color: #6b7280;
    background: rgba(0, 0, 0, 0.04);
    padding: 0.15rem 0.5rem;
    border-radius: 4px;
    letter-spacing: 0.02em;
  }

  @keyframes bounce {
    0%, 100% { transform: translateY(0); }
    50% { transform: translateY(8px); }
  }

  /* ── Prose Sections ── */
  .infographic-container .prose-section {
    max-width: 720px;
    margin: 0 auto;
    padding: 4rem 1.5rem;
  }

  .infographic-container .prose-section h2 {
    font-size: 1.8rem;
    margin-bottom: 1.5rem;
    color: var(--accent-glow);
  }

  .infographic-container .prose-section p {
    margin-bottom: 1.25rem;
    font-size: 1.05rem;
    color: var(--text);
    text-align: left;
    max-width: 65ch;
  }

  .infographic-container .prose-section .highlight {
    color: var(--orange-text);
    font-weight: 700;
  }

  .infographic-container blockquote {
    border-left: 3px solid var(--accent);
    padding: 1rem 1.5rem;
    margin: 2rem 0;
    background: rgba(79,70,229,0.06);
    border-radius: 0 8px 8px 0;
    font-style: italic;
    color: var(--text-muted);
  }

  .infographic-container blockquote cite {
    display: block;
    margin-top: 0.75rem;
    font-style: normal;
    font-size: 0.85rem;
    color: var(--accent-glow);
  }

  /* ── Chart Containers ── */
  .infographic-container .chart-section {
    width: 100%;
    max-width: 960px;
    margin: 0 auto 4rem;
    padding: 0 1rem;
  }

  .infographic-container .chart-container {
    background: var(--bg-card);
    border: 1px solid rgba(0,0,0,0.08);
    border-radius: 16px;
    padding: 2rem 1.5rem;
    position: relative;
    overflow: hidden;
  }

  .infographic-container .chart-container::before {
    content: '';
    position: absolute;
    top: 0; left: 0; right: 0;
    height: 2px;
    background: linear-gradient(90deg, var(--accent), var(--orange));
  }

  .infographic-container .chart-title {
    font-size: 1.3rem;
    font-weight: 700;
    margin-bottom: 0.5rem;
  }

  .infographic-container .chart-subtitle {
    font-size: 0.9rem;
    color: var(--text-muted);
    margin-bottom: 1.5rem;
  }

  .infographic-container svg text {
    font-family: var(--font);
  }

  /* ── Agent Comparison ── */
  .infographic-container .agent-comparison {
    display: grid;
    grid-template-columns: 1fr 1fr;
    gap: 1.5rem;
    max-width: 960px;
    margin: 2rem auto 4rem;
    padding: 0 1rem;
  }

  @media (max-width: 700px) {
    .infographic-container .agent-comparison { grid-template-columns: 1fr; }
  }

  .infographic-container .agent-card {
    background: var(--bg-card);
    border: 1px solid rgba(0,0,0,0.08);
    border-radius: 16px;
    padding: 1.5rem;
    position: relative;
    overflow: hidden;
  }

  .infographic-container .agent-card::before {
    content: '';
    position: absolute;
    top: 0; left: 0; right: 0;
    height: 2px;
  }

  .infographic-container .agent-card.simple::before { background: var(--green); }
  .infographic-container .agent-card.agentic::before { background: var(--orange); }

  .infographic-container .agent-card h3 {
    font-size: 1rem;
    margin-bottom: 0.75rem;
    display: flex;
    align-items: center;
    gap: 0.5rem;
  }

  .infographic-container .agent-card .prompt-box {
    background: rgba(0,0,0,0.03);
    border: 1px solid rgba(0,0,0,0.1);
    border-radius: 8px;
    padding: 0.75rem 1rem;
    font-family: var(--mono);
    font-size: 0.85rem;
    margin-bottom: 1rem;
    color: var(--text-muted);
  }

  .infographic-container .token-counter {
    display: flex;
    justify-content: space-between;
    align-items: center;
    padding: 0.75rem 1rem;
    background: rgba(0,0,0,0.03);
    border-radius: 8px;
    margin-top: 1rem;
  }

  .infographic-container .token-counter .label { font-size: 0.8rem; color: var(--text-muted); }
  .infographic-container .token-counter .value { font-size: 1.6rem; font-weight: 800; font-family: var(--mono); }
  .infographic-container .token-counter .cost { font-size: 1.1rem; color: var(--text-muted); font-weight: 600; }

  .infographic-container .simple .token-counter .value { color: var(--green); }
  .infographic-container .agentic .token-counter .value { color: var(--orange-text); }

  /* ── Agent Tree ── */
  .infographic-container .agent-tree-node {
    fill: var(--bg-card);
    stroke: var(--orange);
    stroke-width: 1.5;
  }

  .infographic-container .agent-tree-link {
    fill: none;
    stroke: rgba(249,115,22,0.3);
    stroke-width: 1.5;
  }

  .infographic-container .agent-tree-label {
    fill: var(--text);
    font-size: 11px;
  }

  .infographic-container .loop-arrow {
    fill: none;
    stroke: var(--red);
    stroke-width: 1.5;
    stroke-dasharray: 4 3;
  }

  /* ── Stat Cards ── */
  .infographic-container .stat-grid {
    display: grid;
    grid-template-columns: repeat(auto-fit, minmax(200px, 1fr));
    gap: 1rem;
    max-width: 960px;
    margin: 2rem auto;
    padding: 0 1rem;
  }

  .infographic-container .stat-card {
    background: var(--bg-card);
    border: 1px solid rgba(0,0,0,0.08);
    border-radius: 16px;
    padding: 1.25rem;
    text-align: center;
  }

  .infographic-container .stat-card .stat-value {
    font-size: 2rem;
    font-weight: 800;
    font-family: var(--mono);
    background: linear-gradient(135deg, var(--accent), var(--cyan));
    -webkit-background-clip: text;
    -webkit-text-fill-color: transparent;
    background-clip: text;
  }

  .infographic-container .stat-card .stat-label {
    font-size: 0.8rem;
    color: var(--text-muted);
    margin-top: 0.25rem;
    line-height: 1.4;
  }

  /* ── Top Apps Table ── */
  .infographic-container .apps-table {
    width: 100%;
    border-collapse: collapse;
    margin: 1rem 0;
  }

  .infographic-container .apps-table th {
    text-align: left;
    font-size: 0.75rem;
    text-transform: uppercase;
    letter-spacing: 0.08em;
    color: var(--text-muted);
    padding: 0.5rem 0.75rem;
    border-bottom: 1px solid rgba(0,0,0,0.1);
  }

  .infographic-container .apps-table td {
    padding: 0.6rem 0.75rem;
    font-size: 0.9rem;
    border-bottom: 1px solid rgba(0,0,0,0.06);
  }

  .infographic-container .apps-table .bar-cell { width: 40%; }

  .infographic-container .bar-bg {
    height: 20px;
    background: rgba(0,0,0,0.04);
    border-radius: 4px;
    overflow: hidden;
    position: relative;
  }

  .infographic-container .bar-fill {
    height: 100%;
    border-radius: 4px;
    transition: width 1.2s ease-out;
  }

  .infographic-container .bar-fill.coding { background: linear-gradient(90deg, var(--orange), #fb923c); }
  .infographic-container .bar-fill.other { background: linear-gradient(90deg, var(--accent), var(--accent-glow)); }

  .infographic-container .type-badge {
    display: inline-block;
    font-size: 0.65rem;
    padding: 0.15rem 0.5rem;
    border-radius: 99px;
    font-weight: 600;
    text-transform: uppercase;
    letter-spacing: 0.05em;
  }

  .infographic-container .type-badge.coding {
    background: rgba(249,115,22,0.15);
    color: var(--orange-text);
  }

  .infographic-container .type-badge.other {
    background: rgba(99,102,241,0.15);
    color: var(--accent-glow);
  }

  /* ── Multiplier Viz ── */
  .infographic-container .multiplier-section {
    max-width: 960px;
    margin: 0 auto;
    padding: 0 1rem;
  }

  .infographic-container .multiplier-row {
    display: flex;
    align-items: center;
    gap: 1rem;
    margin-bottom: 1.5rem;
  }

  .infographic-container .multiplier-label {
    width: 140px;
    font-size: 0.85rem;
    color: var(--text-muted);
    text-align: right;
    flex-shrink: 0;
    display: flex;
    align-items: center;
    justify-content: flex-end;
    min-height: 32px;
  }

  .infographic-container .multiplier-bar-track {
    flex: 1;
    height: 32px;
    background: rgba(0,0,0,0.03);
    border-radius: 6px;
    overflow: hidden;
    position: relative;
  }

  .infographic-container .multiplier-bar {
    height: 100%;
    border-radius: 6px;
    display: flex;
    align-items: center;
    padding-left: 0.75rem;
    font-size: 0.8rem;
    font-weight: 700;
    font-family: var(--mono);
    color: #ffffff;
    transition: width 1.5s ease-out;
    min-width: 42px;
  }

  /* ── Scroll Animations ── */
  .infographic-container .fade-in {
    opacity: 0;
    transform: translateY(30px);
    transition: opacity 0.8s ease-out, transform 0.8s ease-out;
  }

  .infographic-container .fade-in.visible {
    opacity: 1;
    transform: translateY(0);
  }

  /* ── Footer ── */
  .infographic-container .section-divider {
    max-width: 720px;
    margin: 0 auto;
    height: 1px;
    background: linear-gradient(90deg, transparent, rgba(0,0,0,0.12), transparent);
  }

  .infographic-container .footer {
    max-width: 720px;
    margin: 0 auto;
    padding: 3rem 1.5rem;
    border-top: none;
    font-size: 0.8rem;
    color: var(--text-muted);
  }

  .infographic-container .footer a {
    color: var(--accent-glow);
    text-decoration: none;
  }

  .infographic-container .footer a:hover { text-decoration: underline; }

  /* ══════════════════════════════════════════════════════ */
  /* MOBILE RESPONSIVE STYLES                              */
  /* ══════════════════════════════════════════════════════ */

  /* ── Tablet (768px and below) ── */
  @media (max-width: 768px) {
    .infographic-container .prose-section {
      padding: 3rem 1.25rem;
    }

    .infographic-container .prose-section h2 {
      font-size: 1.5rem;
    }

    .infographic-container .chart-container {
      padding: 1.5rem 1rem;
    }

    .infographic-container .chart-title {
      font-size: 1.15rem;
    }

    .infographic-container .stat-grid {
      grid-template-columns: repeat(2, 1fr);
    }

    .infographic-container .multiplier-label {
      width: 110px;
      font-size: 0.78rem;
    }

    .infographic-container .multiplier-bar-track {
      height: 28px;
    }
  }

  /* ── Mobile (480px and below) ── */
  @media (max-width: 480px) {
    .infographic-container .hero {
      padding: 1.5rem;
    }

    .infographic-container .hero-meta {
      top: 4.5rem; /* Clear floating nav bar on mobile */
      font-size: 0.75rem;
      gap: 0.35rem;
      padding: 0 1rem;
      width: 100%;
    }

    .infographic-container .hero-tags {
      display: none; /* Hide tags on mobile to save space */
    }

    .infographic-container .hero h1 {
      font-size: clamp(1.6rem, 7vw, 2.2rem);
      margin-bottom: 1rem;
    }

    .infographic-container .hero .subtitle {
      font-size: 1rem;
    }

    .infographic-container .prose-section {
      padding: 2.5rem 1rem;
    }

    .infographic-container .prose-section h2 {
      font-size: 1.35rem;
    }

    .infographic-container .prose-section p {
      font-size: 0.95rem;
      line-height: 1.7;
    }

    .infographic-container .stat-grid {
      grid-template-columns: 1fr;
      gap: 0.75rem;
    }

    .infographic-container .stat-card .stat-value {
      font-size: 1.6rem;
    }

    .infographic-container .chart-section {
      padding: 0 0.5rem;
      margin: 1.5rem auto 3rem;
    }

    .infographic-container .chart-container {
      padding: 1.25rem 0.75rem;
      border-radius: 12px;
    }

    .infographic-container .chart-title {
      font-size: 1.05rem;
    }

    .infographic-container .chart-subtitle {
      font-size: 0.8rem;
    }

    /* ── Apps Table: Mobile Layout ── */
    .infographic-container .apps-table {
      font-size: 0.8rem;
    }

    .infographic-container .apps-table th {
      font-size: 0.65rem;
      padding: 0.4rem 0.4rem;
    }

    .infographic-container .apps-table td {
      padding: 0.5rem 0.4rem;
      font-size: 0.8rem;
    }

    /* Hide the bar chart column on mobile to prevent overflow */
    .infographic-container .apps-table .bar-cell,
    .infographic-container .apps-table th:nth-child(4),
    .infographic-container .apps-table td:nth-child(4) {
      display: none;
    }

    .infographic-container .type-badge {
      font-size: 0.55rem;
      padding: 0.1rem 0.35rem;
    }

    /* ── Agent Comparison: Stack vertically ── */
    .infographic-container .agent-comparison {
      grid-template-columns: 1fr;
      gap: 1rem;
      padding: 0 0.5rem;
      margin: 1.5rem auto 3rem;
    }

    .infographic-container .agent-card {
      padding: 1.25rem;
      border-radius: 12px;
    }

    .infographic-container .agent-card .prompt-box {
      font-size: 0.78rem;
      padding: 0.6rem 0.75rem;
    }

    .infographic-container .token-counter .value {
      font-size: 1.15rem;
    }

    .infographic-container #agent-tree-viz {
      height: 260px !important;
      overflow: hidden;
    }

    /* ── Multiplier Bars: Compact ── */
    .infographic-container .multiplier-row {
      gap: 0.5rem;
      margin-bottom: 1rem;
    }

    .infographic-container .multiplier-label {
      width: 90px;
      font-size: 0.72rem;
    }

    .infographic-container .multiplier-bar-track {
      height: 26px;
    }

    .infographic-container .multiplier-bar {
      font-size: 0.7rem;
    }

    /* ── Jevons Chart ── */
    .infographic-container #jevons-chart {
      min-height: 180px;
    }

    /* ── Blockquote ── */
    .infographic-container blockquote {
      padding: 0.75rem 1rem;
      margin: 1.5rem 0;
      font-size: 0.9rem;
    }

    /* ── Footer ── */
    .infographic-container .footer {
      padding: 2rem 1rem;
      font-size: 0.75rem;
    }
  }

</style>
<script src="https://d3js.org/d3.v7.min.js"></script>


<!-- ════════════════════════════════════════════════════════════ -->
<!-- HERO -->
<!-- ════════════════════════════════════════════════════════════ -->
<section class="hero">
  <div class="hero-meta">
    <span class="hero-date">February 20, 2026</span>
    <span class="hero-separator">·</span>
    <span class="hero-reading-time">47 min read</span>
    <div class="hero-tags">
      <span class="hero-tag">ai</span>
      <span class="hero-tag">agents</span>
      <span class="hero-tag">manus</span>
      <span class="hero-tag">productivity</span>
      <span class="hero-tag">agentic-ai</span>
      <span class="hero-tag">token-law</span>
      <span class="hero-tag">moores-law</span>
    </div>
  </div>
  <h1>The Machines Are Talking to Themselves</h1>
  <p class="subtitle">OpenRouter data shows that AI agents now consume more tokens than humans. Here's what that means for the future of computing.</p>
  <div class="scroll-hint">↓ Scroll to explore</div>
</section>

<!-- ════════════════════════════════════════════════════════════ -->
<!-- PART 1: THE NEW EXPONENTIAL -->
<!-- ════════════════════════════════════════════════════════════ -->
<section class="prose-section fade-in">
  <h2>The New Exponential</h2>
  <p>Moore's Law was the steady heartbeat of tech for half a century. The relentless doubling of transistors on a chip gave us everything from the PC to the smartphone. As a software engineer, I grew up taking that rhythm for granted—it was just the way progress worked. But I've come to realize that era is over.</p>
  <p>I'm calling the new trend the <span class="highlight">Token Law</span>. Where Moore's Law was a <em>supply-side</em> observation about the physics of silicon, this new Token Law is a <em>demand-side</em> phenomenon—reflecting the explosive growth in the complexity of tasks we are now entrusting to AI. It's not about how many transistors we can cram onto a chip, but how many "thoughts" an AI can process.</p>
</section>

<!-- Stat Cards -->
<div class="stat-grid fade-in">
  <div class="stat-card">
    <div class="stat-value">12×</div>
    <div class="stat-label">Growth in 12 months<br />(OpenRouter tokens/week)</div>
  </div>
  <div class="stat-card">
    <div class="stat-value">1.3Q</div>
    <div class="stat-label">Google tokens/month<br />(Oct 2025 — 1.3 quadrillion)</div>
  </div>
  <div class="stat-card">
    <div class="stat-value">8.6T</div>
    <div class="stat-label">OpenAI tokens/day<br />(Oct 2025)</div>
  </div>
  <div class="stat-card">
    <div class="stat-value">~3.5mo</div>
    <div class="stat-label">Doubling time<br />(OpenRouter observed rate)</div>
  </div>
</div>
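<p>The doubling time in the last card follows directly from the growth rate in the first. A one-line sanity check:</p>

```javascript
// If tokens grow 12x in 12 months, the doubling time T satisfies
// 2^(12 / T) = 12, so T = 12 / log2(12) months.
const doublingMonths = 12 / Math.log2(12);
console.log(doublingMonths.toFixed(2) + " months"); // ≈ 3.35 months
```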

<!-- Chart 1: The Two Curves -->
<div class="chart-section fade-in">
  <div class="chart-container">
    <div class="chart-title">The Two Curves: Moore's Law vs. The Token Law</div>
    <div class="chart-subtitle">Transistors per microprocessor (1971–2024) vs. AI tokens processed per week on OpenRouter (2025–2026). Log scale. Note: x-axis is compressed — 53 years on the left, 2 years on the right.</div>
    <div id="two-curves-chart"></div>
  </div>
</div>

<section class="prose-section fade-in">
  <p>The chart above tells the story. Moore's Law, the gentle upward slope on the left, delivered a 2× improvement every two years. The Token Law, the steep eruption on the right, is delivering <span class="highlight">12× in a single year</span>. And this isn't just one platform—Google went from 980 trillion to 1.3 quadrillion tokens per month in just two months. Alibaba reports its token use is doubling every few months.</p>
</section>

<!-- ════════════════════════════════════════════════════════════ -->
<!-- PART 2: THE ROBOTS ARE DOING THE TALKING -->
<!-- ════════════════════════════════════════════════════════════ -->
<section class="prose-section fade-in">
  <h2>The Robots Are Doing the Talking</h2>
  <p>My first thought was that it's just more people like you and me chatting with AI. But that's not the whole story. The primary driver is a fundamental shift in <em>how</em> AI operates. We're moving from simple, single-shot queries to complex, multi-step workflows executed by what we in the field call autonomous "agentic" AI systems.</p>
  <p>For me, the most compelling evidence was seeing which applications were consuming the most tokens. When I looked at the OpenRouter leaderboard, it wasn't dominated by chatbots. The real power users were <span class="highlight">coding agents</span>—specialized AI systems designed to write, debug, and manage software autonomously.</p>
</section>

<!-- Top Apps Table -->
<div class="chart-section fade-in">
  <div class="chart-container">
    <div class="chart-title">Top Apps on OpenRouter by Daily Token Consumption</div>
    <div class="chart-subtitle">Coding agents (orange) dominate the leaderboard. Data from February 2026.</div>
    <table class="apps-table">
      <thead>
        <tr>
          <th>Rank</th>
          <th>App</th>
          <th>Type</th>
          <th class="bar-cell">Tokens / Day</th>
          <th style="text-align:right">Volume</th>
        </tr>
      </thead>
      <tbody id="apps-table-body"></tbody>
    </table>
  </div>
</div>

<section class="prose-section fade-in">
  <p>I think of it as the difference between asking a person for directions and hiring a consultant who then makes dozens of phone calls, reads manuals, and runs tests on your behalf. A simple query might consume a few hundred tokens. But when you ask an AI agent to fix a bug, it kicks off a complex internal monologue. It has to analyze the code, replicate the error, search for solutions, write a new patch, and then test its own work. If the test fails, it starts the whole loop over again, learning as it goes. As an engineer, I find this process of "self-reflection" fascinating. It can multiply the token cost by 10, 50, or even 100 times compared to a simple query.</p>
</section>
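<p>The loop described above can be sketched in a few lines. The per-step token costs are illustrative round numbers in the same ballpark as the agent-task visualization below, not measured data:</p>

```javascript
// Sketch of how an agent's test-and-retry loop multiplies token cost.
// All step costs are hypothetical illustrative figures.
function agentLoopTokens(attemptsUntilPass) {
  const ANALYZE = 12000; // read the code and replicate the error (once)
  const SEARCH = 45000;  // search for candidate fixes (once)
  const PATCH = 35000;   // write a patch (paid on every attempt)
  const TEST = 20000;    // run the tests (paid on every attempt)
  let total = ANALYZE + SEARCH;
  for (let attempt = 0; attempt < attemptsUntilPass; attempt++) {
    total += PATCH + TEST; // each failed test restarts the patch/test cycle
  }
  return total;
}

console.log(agentLoopTokens(1)); // passes on the first try: 112000
console.log(agentLoopTokens(3)); // two failures first:     222000
```

Each extra failed attempt adds a fixed patch-plus-test cost, which is why a flaky test suite is so expensive when the thing running it is an agent.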

<!-- Agent Comparison Cards -->
<div class="agent-comparison fade-in">
  <!-- Simple Query Card -->
  <div class="agent-card simple">
    <h3>💬 Simple Query</h3>
    <div class="prompt-box">"What is the capital of France?"</div>
    <div id="simple-dot-viz" style="height:120px; display:flex; align-items:center; justify-content:center;"></div>
    <div class="token-counter">
      <div>
        <div class="label">Tokens Used</div>
        <div class="value" id="simple-token-count">0</div>
      </div>
      <div>
        <div class="label">Cost</div>
        <div class="cost" id="simple-cost">$0.0000</div>
      </div>
    </div>
  </div>

  <!-- Agentic Task Card -->
  <div class="agent-card agentic">
    <h3>🤖 Agentic Task</h3>
    <div class="prompt-box">"Fix the auth bug in login.py"</div>
    <div id="agent-tree-viz" style="height: 280px;"></div>
    <div class="token-counter">
      <div>
        <div class="label">Tokens Used</div>
        <div class="value" id="agent-token-count">0</div>
      </div>
      <div>
        <div class="label">Cost</div>
        <div class="cost" id="agent-cost">$0.00</div>
      </div>
    </div>
  </div>
</div>

<!-- Token Multiplier Bars -->
<div class="chart-section fade-in">
  <div class="chart-container">
    <div class="chart-title">The Token Multiplier Effect</div>
    <div class="chart-subtitle">How different AI interaction patterns multiply token consumption relative to a simple chat query.</div>
    <div class="multiplier-section" id="multiplier-bars"></div>
  </div>
</div>

<!-- ════════════════════════════════════════════════════════════ -->
<!-- PART 2.5: THE UNSEEN BRAKES -->
<!-- ════════════════════════════════════════════════════════════ -->
<section class="prose-section fade-in">
  <h2>The Unseen Brakes on the Exponential Engine</h2>
  <p>While the demand for tokens is exploding, a parallel and equally intense engineering effort is underway to tame this exponential growth. The story of the Token Law isn't just about unchecked expansion; it's also about the sophisticated optimizations being built to manage the cost and complexity of these powerful new systems.</p>
  <p>The most significant of these is <span class="highlight">KV Caching</span>. In the iterative "self-reflection" loops common to AI agents, much of the initial context remains the same from one step to the next. Instead of re-processing this entire context each time, caching techniques allow the model to reuse the intermediate calculations, dramatically reducing the effective number of tokens processed and making complex, multi-step reasoning economically feasible.</p>
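<p>To make that concrete, here is a minimal cost sketch, assuming a flat per-token input price and a 90% discount on cache-hit tokens. Both numbers are hypothetical placeholders, not any provider's actual rates:</p>

```javascript
// Hedged sketch: cost of an N-step agent loop with and without prompt caching.
// pricePerTokenIn and cacheDiscount are made-up round numbers.
function loopCost(steps, contextTokens, newTokensPerStep, opts) {
  const { pricePerTokenIn, cacheDiscount, cachingEnabled } = opts;
  let cost = 0;
  for (let step = 0; step < steps; step++) {
    if (step === 0 || !cachingEnabled) {
      // cold cache (or caching off): the whole context is reprocessed at full price
      cost += (contextTokens + step * newTokensPerStep) * pricePerTokenIn;
    } else {
      // cache hit: the shared prefix from the previous step is re-read at a
      // steep discount; only the newly appended tokens pay full price
      const prefix = contextTokens + (step - 1) * newTokensPerStep;
      cost += prefix * pricePerTokenIn * (1 - cacheDiscount)
            + newTokensPerStep * pricePerTokenIn;
    }
  }
  return cost;
}

const base = { pricePerTokenIn: 3e-6, cacheDiscount: 0.9 };
const uncached = loopCost(10, 50000, 2000, { ...base, cachingEnabled: false });
const cached = loopCost(10, 50000, 2000, { ...base, cachingEnabled: true });
console.log((uncached / cached).toFixed(1) + "x cheaper with caching");
```

In this sketch a ten-step loop over a 50K-token context comes out roughly 5× cheaper with caching, which is exactly why providers expose prompt caching as a first-class feature.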
  <p>Furthermore, the AI ecosystem is not monolithic. Sophisticated agents rarely rely on a single, massive model. Instead, they orchestrate a <span class="highlight">cascade of models</span>, using smaller, faster, and cheaper specialized models for routine tasks like intent recognition or data extraction, only calling upon the powerful—and expensive—frontier models for the most complex steps. This, combined with the fact that input tokens are often 3-5× cheaper than output tokens, forms a powerful set of brakes on the runaway train of token consumption. The true challenge for engineers is not just building token-hungry agents, but architecting systems that balance their immense power with these crucial economic and computational realities.</p>
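<p>A cascade can be sketched as a simple routing function. The model names, step kinds, and per-million-token prices here are hypothetical placeholders chosen only to show the shape of the idea:</p>

```javascript
// Hedged sketch of a model cascade: cheap model for routine steps,
// frontier model only for steps that need deep reasoning.
// Prices are hypothetical, in dollars per million tokens.
const MODELS = {
  small: { pricePerMTok: 0.2 },     // fast and cheap: extraction, classification
  frontier: { pricePerMTok: 15.0 }  // slow and expensive: planning, debugging
};

// Crude heuristic router: escalate only the reasoning-heavy step kinds.
function routeStep(step) {
  const needsFrontier = ["plan", "debug", "architecture"];
  return needsFrontier.includes(step.kind) ? "frontier" : "small";
}

function cascadeCost(steps) {
  return steps.reduce(
    (sum, step) => sum + (step.tokens / 1e6) * MODELS[routeStep(step)].pricePerMTok,
    0);
}

const workflow = [
  { kind: "extract", tokens: 40000 },
  { kind: "classify", tokens: 10000 },
  { kind: "plan", tokens: 60000 },
  { kind: "extract", tokens: 40000 },
  { kind: "debug", tokens: 120000 }
];

const cascaded = cascadeCost(workflow);
const frontierOnly = workflow.reduce(
  (sum, step) => sum + (step.tokens / 1e6) * MODELS.frontier.pricePerMTok, 0);
console.log(cascaded < frontierOnly); // the cascade is strictly cheaper here
```

Real routers are learned or confidence-based rather than keyed on a hand-written list, but the economics are the same: most steps never need the expensive model.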
</section>

<!-- ════════════════════════════════════════════════════════════ -->
<!-- PART 3: THE PHYSICAL COST (LIGHTER) -->
<!-- ════════════════════════════════════════════════════════════ -->
<section class="prose-section fade-in">
  <h2>The Physical Cost of Thought</h2>
  <p>This exponential growth in abstract "tokens" has a very real, physical cost. As someone who works on messaging infrastructure at scale, I live by the trade-offs between latency, bandwidth, and computational resources. We fight for every kilobyte saved in our data serialization and every millisecond shaved off our processing time. From that perspective, the sheer scale of token consumption by agentic AI is staggering. The Jevons Paradox is in full effect: as models become more efficient, we don't just do the same tasks for less energy; we invent entirely new, token-hungry workflows that were previously unimaginable.</p>
  <blockquote>
    "The unit that once measured text now measures energy. Moore's Law no longer governs progress; token growth does."
    <cite>— Jonathan Lishawa, illuminem</cite>
  </blockquote>
  <p>Global data center electricity use is projected to rise from roughly 400 terawatt-hours in 2024 to nearly 1,000 terawatt-hours by 2030, with AI workloads responsible for about a third of that total. The future of AI is now inextricably linked to the future of energy.</p>
</section>

<!-- Efficiency vs Growth Chart -->
<div class="chart-section fade-in">
  <div class="chart-container">
    <div class="chart-title">The Jevons Paradox for AI</div>
    <div class="chart-subtitle">Efficiency gains (12×) vs. token growth (50×) per generation. Net result: energy use still increases 4× per query.</div>
    <div id="jevons-chart"></div>
  </div>
</div>

<!-- ════════════════════════════════════════════════════════════ -->
<!-- CONCLUSION -->
<!-- ════════════════════════════════════════════════════════════ -->
<section class="prose-section fade-in">
  <h2>The Dawn of a New Machine Age</h2>
  <p>The fifty-year reign of Moore's Law gave us the tools to connect the world. The new exponential, the Token Law, is about what happens now that the world is connected. It's a paradigm shift driven not by human-to-machine chatter, but by a vast and growing chorus of machines talking to themselves—agentic systems that write code, run experiments, and manage complex workflows with multiplying levels of autonomy.</p>
  <p>As we've seen, this new age comes with a new set of rules. The abstract "thought" of a token carries a real-world cost in energy, and the economics of AI are being rewritten around tasks completed, not tokens spent. We're also starting to account for what I'd call the <span class="highlight">"Unreliability Tax"</span>—the hidden but significant engineering cost of building production-grade systems on top of non-deterministic models. This tax is paid in the engineering hours spent on robust retry logic with exponential backoff, the computational overhead of input/output validation parsers that can handle hallucinated JSON, and the architectural complexity of stateful error recovery to roll back a workflow that fails midway.</p>
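  <p>The plumbing behind that tax looks something like this. A minimal sketch, assuming a hypothetical <code>callModel</code> client; the backoff constants and the JSON-extraction heuristic are illustrative, not any particular library's API:</p>

```javascript
// Sketch of the "Unreliability Tax" plumbing: exponential backoff plus
// output validation for a model call that may return malformed JSON.
// callModel is a hypothetical stand-in for a real API client.
function backoffDelayMs(attempt, baseMs = 500, capMs = 30000) {
  // exponential backoff: base * 2^attempt, capped
  return Math.min(capMs, baseMs * 2 ** attempt);
}

function parseModelJson(raw) {
  // models sometimes wrap JSON in markdown fences or prose;
  // extract the outermost object before parsing
  const match = raw.match(/\{[\s\S]*\}/);
  if (!match) throw new Error("no JSON object found in model output");
  return JSON.parse(match[0]); // still throws if the JSON itself is malformed
}

async function callWithRetry(callModel, maxAttempts = 4) {
  let lastErr;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return parseModelJson(await callModel());
    } catch (err) {
      lastErr = err; // transient failures and hallucinated JSON both land here
      if (attempt + 1 < maxAttempts) {
        await new Promise(r => setTimeout(r, backoffDelayMs(attempt)));
      }
    }
  }
  throw lastErr; // give up; stateful rollback is the caller's problem
}
```

  <p>Production versions add jitter to the delay and schema validation on the parsed object, but even this skeleton shows where the engineering hours go.</p>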
  <p>The central challenge for engineers and innovators in the next decade will not be merely building bigger models, but mastering the art of orchestrating these powerful, token-hungry agents. The future will belong to those of us who can manage this flow of digital thought as meticulously as a conductor leads an orchestra.</p>
  <p>However, there is a fascinating counter-argument to consider: the <span class="highlight">Intelligence Paradox</span>. Does a truly advanced agent use <em>more</em> tokens, or <em>fewer</em>? A novice programmer might write 1,000 lines of brute-force code to solve a problem a senior engineer solves in 100 elegant lines. It's possible that the current explosion in token use is a symptom of agent immaturity, and that as these systems become more intelligent, they will become more efficient, learning to solve complex problems with a fraction of the "thought" they require today.</p>
  <p>The age of the token has just begun, and it promises to be a far stranger, faster, and more transformative era than the one we're leaving behind.</p>
</section>

<!-- ════════════════════════════════════════════════════════════ -->
<!-- FOOTER -->
<!-- ════════════════════════════════════════════════════════════ -->
<div class="footer">
  <p><strong>Sources:</strong>
    <a href="https://openrouter.ai/rankings" target="_blank">OpenRouter Rankings</a> ·
    <a href="https://a16z.com/state-of-ai/" target="_blank">a16z State of AI</a> ·
    <a href="https://tomtunguz.com/is-token-consumption-slowing-down/" target="_blank">Tomasz Tunguz</a> ·
    <a href="https://www.economist.com/business/2025/11/23/ai-tokens-are-surging-but-are-profits" target="_blank">The Economist</a> ·
    <a href="https://illuminem.com/illuminemvoices/the-cost-of-context-the-exponential-growth-in-tokens" target="_blank">illuminem</a> ·
    <a href="https://online.stevens.edu/blog/hidden-economics-ai-agents-token-costs-latency/" target="_blank">Stevens Institute</a>
  </p>
</div>

<!-- ════════════════════════════════════════════════════════════ -->
<!-- JAVASCRIPT -->
<!-- ════════════════════════════════════════════════════════════ -->
<script>
// ── DATA ──────────────────────────────────────────────────────

const mooresLawData = [
  { year: 1971, transistors: 2300, label: "Intel 4004" },
  { year: 1974, transistors: 4500 },
  { year: 1978, transistors: 29000, label: "Intel 8086" },
  { year: 1982, transistors: 134000 },
  { year: 1985, transistors: 275000 },
  { year: 1989, transistors: 1200000 },
  { year: 1993, transistors: 3100000, label: "Pentium" },
  { year: 1997, transistors: 7500000 },
  { year: 2000, transistors: 42000000 },
  { year: 2004, transistors: 125000000 },
  { year: 2007, transistors: 291000000, label: "iPhone era" },
  { year: 2010, transistors: 1170000000 },
  { year: 2012, transistors: 1400000000 },
  { year: 2015, transistors: 3100000000 },
  { year: 2017, transistors: 19200000000 },
  { year: 2020, transistors: 16000000000, label: "Apple M1" },
  { year: 2022, transistors: 57000000000 },
  { year: 2024, transistors: 80000000000, label: "Plateau" }
];

const tokenLawData = [
  { year: 2025.1, tokens: 1e12, label: "1T/wk", labelPos: "right" },
  { year: 2025.3, tokens: 2e12 },
  { year: 2025.45, tokens: 3.5e12 },
  { year: 2025.58, tokens: 4.5e12, label: "4.5T", labelPos: "left" },
  { year: 2025.7, tokens: 5.5e12 },
  { year: 2025.8, tokens: 7e12 },
  { year: 2025.9, tokens: 8.5e12 },
  { year: 2026.1, tokens: 10e12, label: "10T/wk", labelPos: "left" }
];

const topApps = [
  { rank: 1, name: "OpenClaw", type: "coding", tokens: 133, unit: "B" },
  { rank: 2, name: "Kilo Code", type: "coding", tokens: 93.5, unit: "B" },
  { rank: 3, name: "BLACKBOXAI", type: "coding", tokens: 49.9, unit: "B" },
  { rank: 4, name: "liteLLM", type: "other", tokens: 44.5, unit: "B" },
  { rank: 5, name: "Janitor AI", type: "other", tokens: 31, unit: "B" },
  { rank: 6, name: "Claude Code", type: "coding", tokens: 18.9, unit: "B" },
  { rank: 7, name: "Cline", type: "coding", tokens: 16.7, unit: "B" },
  { rank: 8, name: "Roo Code", type: "coding", tokens: 14.6, unit: "B" }
];

const multiplierData = [
  { label: "Simple chat", value: 1, color: "#6ee7b7" },
  { label: "Reasoning (o1)", value: 15, color: "#f59e0b" },
  { label: "Coding agent", value: 50, color: "#ea580c" },
  { label: "Deep research", value: 100, color: "#dc2626" }
];

const agentTreeData = {
  name: "Fix auth bug",
  children: [
    { name: "Read code", tokens: 8000 },
    { name: "Analyze error", tokens: 12000 },
    {
      name: "Search fixes",
      tokens: 45000,
      children: [
        { name: "Query 1", tokens: 15000 },
        { name: "Query 2", tokens: 15000 },
        { name: "Query 3", tokens: 15000 }
      ]
    },
    { name: "Write patch", tokens: 35000 },
    {
      name: "Test patch",
      tokens: 20000,
      loop: true,
      children: [
        { name: "❌ Fail → retry", tokens: 80000 },
        { name: "Write patch v2", tokens: 40000 },
        { name: "Test v2 ✓", tokens: 20000 }
      ]
    }
  ]
};

// ── INTERSECTION OBSERVER FOR FADE-INS ──────────────────────

const observer = new IntersectionObserver((entries) => {
  entries.forEach(entry => {
    if (entry.isIntersecting) {
      entry.target.classList.add('visible');
    }
  });
}, { threshold: 0.15 });

document.querySelectorAll('.fade-in').forEach(el => observer.observe(el));

// ── CHART 1: THE TWO CURVES ────────────────────────────────

function drawTwoCurves() {
  const container = document.getElementById('two-curves-chart');
  const width = container.clientWidth;
  const isMobile = width < 500;
  const height = isMobile ? Math.min(320, width * 0.7) : Math.min(420, width * 0.55);
  const margin = isMobile
    ? { top: 20, right: 15, bottom: 40, left: 45 }
    : { top: 30, right: 30, bottom: 50, left: 65 };
  const labelSize = isMobile ? '10px' : '11px';
  const titleLabelSize = isMobile ? '11px' : '13px';
  const dotLabelSize = '9px';
  const milestoneSize = '10px';
  const w = width - margin.left - margin.right;
  const h = height - margin.top - margin.bottom;

  const svg = d3.select('#two-curves-chart')
    .append('svg')
    .attr('width', width)
    .attr('height', height);

  const g = svg.append('g')
    .attr('transform', `translate(${margin.left},${margin.top})`);

  // X scale: use a piecewise scale to give more room to 2025-2026
  // Left portion (1971-2024) gets 60% of width, right portion (2024-2026.5) gets 40%
  const breakpoint = 2024;
  const breakW = w * 0.6;
  const xLeft = d3.scaleLinear().domain([1971, breakpoint]).range([0, breakW]);
  const xRight = d3.scaleLinear().domain([breakpoint, 2026.5]).range([breakW, w]);
  const x = (year) => year <= breakpoint ? xLeft(year) : xRight(year);
  x.domain = () => [1971, 2026.5];
  x.range = () => [0, w];

  // Y scale: log, from 1e3 to 1e14
  const y = d3.scaleLog().domain([1e3, 1e14]).range([h, 0]);

  // Grid lines
  const yTicks = [1e3, 1e4, 1e5, 1e6, 1e7, 1e8, 1e9, 1e10, 1e11, 1e12, 1e13];
  g.selectAll('.grid-line')
    .data(yTicks)
    .enter().append('line')
    .attr('x1', 0).attr('x2', w)
    .attr('y1', d => y(d)).attr('y2', d => y(d))
    .attr('stroke', 'rgba(0,0,0,0.06)');

  // X axis - manual ticks for the piecewise scale
  const xTicks = isMobile ? [1975, 1995, 2015, 2025] : [1975, 1985, 1995, 2005, 2015, 2025, 2026];
  xTicks.forEach(tick => {
    const tx = x(tick);
    g.append('line')
      .attr('x1', tx).attr('x2', tx)
      .attr('y1', h).attr('y2', h + 6)
      .attr('stroke', 'rgba(0,0,0,0.12)');
    g.append('text')
      .attr('x', tx).attr('y', h + 16)
      .attr('text-anchor', 'middle')
      .attr('fill', '#6b7280').attr('font-size', labelSize)
      .text(tick);
  });
  g.append('line')
    .attr('x1', 0).attr('x2', w)
    .attr('y1', h).attr('y2', h)
    .attr('stroke', 'rgba(0,0,0,0.12)');

  // Y axis labels
  const yLabels = [
    { v: 1e3, t: "1K" }, { v: 1e6, t: "1M" }, { v: 1e9, t: "1B" }, { v: 1e12, t: "1T" }
  ];
  yLabels.forEach(d => {
    g.append('text')
      .attr('x', -8).attr('y', y(d.v))
      .attr('text-anchor', 'end').attr('dominant-baseline', 'middle')
      .attr('fill', '#4b5563').attr('font-size', isMobile ? '10px' : '12px')
      .attr('font-weight', '500')
      .text(d.t);
  });

  // Moore's Law line
  const mooresLine = d3.line()
    .x(d => x(d.year))
    .y(d => y(d.transistors))
    .curve(d3.curveMonotoneX);

  // Draw hidden, then reveal via dash-offset (same pattern as the token line below)
  const mooresPath = g.append('path')
    .datum(mooresLawData)
    .attr('d', mooresLine)
    .attr('fill', 'none')
    .attr('stroke', '#4f46e5')
    .attr('stroke-width', 2.5)
    .attr('stroke-dasharray', function() { return this.getTotalLength(); })
    .attr('stroke-dashoffset', function() { return this.getTotalLength(); });

  // Moore's Law label — positioned below the curve to avoid overlap
  g.append('text')
    .attr('x', x(isMobile ? 1990 : 2000)).attr('y', y(isMobile ? 2e4 : 5e4))
    .attr('fill', '#4f46e5').attr('font-size', isMobile ? '9px' : '12px').attr('font-weight', '600')
    .text("Moore's Law");

  if (!isMobile) {
    g.append('text')
      .attr('x', x(2000)).attr('y', y(5e4) + 16)
      .attr('fill', '#6b7280').attr('font-size', '10px')
      .text("(transistors per chip)");
  }

  // Moore's Law milestone dots
  const mooresMilestones = isMobile
    ? mooresLawData.filter(d => d.label && ['Intel 4004', 'Pentium', 'Plateau'].includes(d.label))
    : mooresLawData.filter(d => d.label);
  mooresMilestones.forEach(d => {
    g.append('circle')
      .attr('cx', x(d.year)).attr('cy', y(d.transistors))
      .attr('r', isMobile ? 3 : 4).attr('fill', '#4f46e5').attr('opacity', 0.7);
    g.append('text')
      .attr('x', x(d.year)).attr('y', y(d.transistors) - 8)
      .attr('text-anchor', 'middle')
      .attr('fill', '#6b7280').attr('font-size', dotLabelSize)
      .text(d.label);
  });

  // Token Law line
  const tokenLine = d3.line()
    .x(d => x(d.year))
    .y(d => y(d.tokens))
    .curve(d3.curveMonotoneX);

  const tokenPath = g.append('path')
    .datum(tokenLawData)
    .attr('d', tokenLine)
    .attr('fill', 'none')
    .attr('stroke', '#ea580c')
    .attr('stroke-width', 3.5)
    .attr('stroke-dasharray', function() { return this.getTotalLength(); })
    .attr('stroke-dashoffset', function() { return this.getTotalLength(); });

  // Token Law glow
  g.append('path')
    .datum(tokenLawData)
    .attr('d', tokenLine)
    .attr('fill', 'none')
    .attr('stroke', '#ea580c')
    .attr('stroke-width', 8)
    .attr('opacity', 0.15);

  // Token Law label
  g.append('text')
    .attr('x', x(isMobile ? 2025.5 : 2025.2)).attr('y', y(5e13))
    .attr('fill', '#c2410c').attr('font-size', titleLabelSize).attr('font-weight', '700')
    .text("Token Law");

  if (!isMobile) {
    g.append('text')
      .attr('x', x(2025.2)).attr('y', y(5e13) + 16)
      .attr('fill', '#6b7280').attr('font-size', '10px')
      .text("(tokens/week)");
  }

  // Token Law milestone dots - on mobile, only show first and last
  const tokenMilestones = isMobile
    ? tokenLawData.filter(d => d.label && (d.label === '1T/wk' || d.label === '10T/wk'))
    : tokenLawData.filter(d => d.label);
  tokenMilestones.forEach(d => {
    g.append('circle')
      .attr('cx', x(d.year)).attr('cy', y(d.tokens))
      .attr('r', isMobile ? 4 : 5).attr('fill', '#ea580c');
    const xOff = d.labelPos === 'left' ? (isMobile ? -6 : -10) : (isMobile ? 6 : 10);
    const anchor = d.labelPos === 'left' ? 'end' : 'start';
    g.append('text')
      .attr('x', x(d.year) + xOff).attr('y', y(d.tokens) + 4)
      .attr('text-anchor', anchor)
      .attr('fill', '#c2410c').attr('font-size', milestoneSize).attr('font-weight', '600')
      .text(d.label);
  });

  // Doubling time annotation for Moore's Law
  if (!isMobile) {
    const mAnnoteX = x(1990);
    const mAnnoteY = y(1e5);
    g.append('rect')
      .attr('x', mAnnoteX - 72).attr('y', mAnnoteY - 10)
      .attr('width', 144).attr('height', 22)
      .attr('rx', 4)
      .attr('fill', 'rgba(79,70,229,0.08)')
      .attr('stroke', 'rgba(79,70,229,0.2)').attr('stroke-width', 1);
    g.append('text')
      .attr('x', mAnnoteX).attr('y', mAnnoteY + 5)
      .attr('text-anchor', 'middle')
      .attr('fill', '#4f46e5').attr('font-size', '11px').attr('font-weight', '600')
      .text('Doubles every ~2 years');
  }

  // Doubling time annotation for Token Law
  const tAnnoteX = x(isMobile ? 2025.8 : 2025.6);
  const tAnnoteY = y(isMobile ? 5e10 : 2e11);
  if (!isMobile) {
    g.append('rect')
      .attr('x', tAnnoteX - 82).attr('y', tAnnoteY - 10)
      .attr('width', 164).attr('height', 22)
      .attr('rx', 4)
      .attr('fill', 'rgba(234,88,12,0.08)')
      .attr('stroke', 'rgba(234,88,12,0.2)').attr('stroke-width', 1);
  }
  g.append('text')
    .attr('x', tAnnoteX).attr('y', tAnnoteY + 5)
    .attr('text-anchor', 'middle')
    .attr('fill', '#c2410c').attr('font-size', isMobile ? '9px' : '11px').attr('font-weight', '600')
    .text(isMobile ? '~3.5mo doubling' : 'Doubles every ~3.5 months');

  // Divider line between the two eras (drawn at 2024.5)
  g.append('line')
    .attr('x1', x(2024.5)).attr('x2', x(2024.5))
    .attr('y1', 0).attr('y2', h)
    .attr('stroke', 'rgba(0,0,0,0.12)')
    .attr('stroke-dasharray', '4 4');

  g.append('text')
    .attr('x', x(2024.5)).attr('y', -10)
    .attr('text-anchor', 'middle')
    .attr('fill', '#6b7280').attr('font-size', isMobile ? '8px' : '10px')
    .text("← 53 years | 2 years →");

  // Animate on scroll
  const chartObserver = new IntersectionObserver((entries) => {
    entries.forEach(entry => {
      if (entry.isIntersecting) {
        mooresPath.transition().duration(2000).ease(d3.easeCubicOut)
          .attr('stroke-dashoffset', 0);
        tokenPath.transition().delay(1500).duration(1500).ease(d3.easeCubicOut)
          .attr('stroke-dashoffset', 0);
        chartObserver.unobserve(entry.target);
      }
    });
  }, { threshold: 0.3 });
  chartObserver.observe(container);
}

// ── TOP APPS TABLE ──────────────────────────────────────────

function drawAppsTable() {
  const maxTokens = topApps[0].tokens;
  const tbody = document.getElementById('apps-table-body');

  topApps.forEach((app, i) => {
    const pct = (app.tokens / maxTokens * 100).toFixed(1);
    const row = document.createElement('tr');
    row.innerHTML = `
      <td style="color:var(--text-muted)">${app.rank}</td>
      <td style="font-weight:600">${app.name}</td>
      <td><span class="type-badge ${app.type}">${app.type === 'coding' ? '⚡ Coding' : '💬 Other'}</span></td>
      <td class="bar-cell">
        <div class="bar-bg">
          <div class="bar-fill ${app.type}" style="width:0%" data-width="${pct}%"></div>
        </div>
      </td>
      <td style="text-align:right; font-family:var(--mono); font-size:0.85rem">${app.tokens}${app.unit}</td>
    `;
    tbody.appendChild(row);
  });

  // Animate bars
  const tableObserver = new IntersectionObserver((entries) => {
    entries.forEach(entry => {
      if (entry.isIntersecting) {
        document.querySelectorAll('.bar-fill').forEach((bar, i) => {
          setTimeout(() => {
            bar.style.width = bar.dataset.width;
          }, i * 80);
        });
        tableObserver.unobserve(entry.target);
      }
    });
  }, { threshold: 0.3 });
  tableObserver.observe(tbody.closest('.chart-container'));
}

// ── AGENT COMPARISON ANIMATION ──────────────────────────────

function drawAgentComparison() {
  // Simple query: just a dot
  const simpleSvg = d3.select('#simple-dot-viz')
    .append('svg')
    .attr('width', '100%')
    .attr('height', 120);

  simpleSvg.append('circle')
    .attr('cx', '50%').attr('cy', 60)
    .attr('r', 0)
    .attr('fill', '#16a34a')
    .attr('opacity', 0.8)
    .transition().delay(500).duration(600)
    .attr('r', 8);

  simpleSvg.append('text')
    .attr('x', '50%').attr('y', 90)
    .attr('text-anchor', 'middle')
    .attr('fill', '#16a34a').attr('font-size', '12px')
    .attr('opacity', 0)
    .text('"Paris"')
    .transition().delay(1000).duration(400)
    .attr('opacity', 1);

  // Animate simple counter
  const simpleObserver = new IntersectionObserver((entries) => {
    entries.forEach(entry => {
      if (entry.isIntersecting) {
        animateCounter('simple-token-count', 0, 47, 800, false);
        setTimeout(() => {
          document.getElementById('simple-cost').textContent = '$0.0001';
          document.getElementById('simple-token-count').textContent = '47';
        }, 850);
        simpleObserver.unobserve(entry.target);
      }
    });
  }, { threshold: 0.3 });
  simpleObserver.observe(document.querySelector('.agent-card.simple'));

  // Agent tree
  drawAgentTree();

  // Animate agent counter
  const agentObserver = new IntersectionObserver((entries) => {
    entries.forEach(entry => {
      if (entry.isIntersecting) {
        animateCounter('agent-token-count', 0, 2900000, 2500, true);
        setTimeout(() => {
          document.getElementById('agent-cost').textContent = '$5.80';
          document.getElementById('agent-token-count').textContent = '2,900,000';
        }, 2600);
        agentObserver.unobserve(entry.target);
      }
    });
  }, { threshold: 0.3 });
  agentObserver.observe(document.querySelector('.agent-card.agentic'));
}

function drawAgentTree() {
  const container = document.getElementById('agent-tree-viz');
  const width = container.clientWidth;
  const isMobileTree = width < 400;
  const height = isMobileTree ? 240 : 320;

  // On mobile, use a simplified tree (collapse leaf children)
  let treeData = agentTreeData;
  if (isMobileTree) {
    treeData = {
      name: agentTreeData.name,
      children: agentTreeData.children.map(c => ({
        name: c.name,
        tokens: c.tokens,
        loop: c.loop
        // omit grandchildren on mobile
      }))
    };
  }

  const svg = d3.select('#agent-tree-viz')
    .append('svg')
    .attr('width', width)
    .attr('height', height);

  const pad = isMobileTree ? 10 : 20;
  const g = svg.append('g').attr('transform', `translate(${pad}, ${pad})`);

  const treeLayout = d3.tree()
    .size([width - pad * 2, height - pad * 2 - 20])
    .separation((a, b) => a.parent === b.parent ? (isMobileTree ? 1 : 1.5) : 2);
  const root = d3.hierarchy(treeData);
  treeLayout(root);

  // Links
  g.selectAll('.agent-tree-link')
    .data(root.links())
    .enter().append('path')
    .attr('class', 'agent-tree-link')
    .attr('d', d3.linkVertical().x(d => d.x).y(d => d.y))
    .attr('stroke', d => d.target.data.loop ? '#ef4444' : 'rgba(249,115,22,0.3)')
    .attr('stroke-dasharray', d => d.target.data.loop ? '4 3' : 'none');

  // Nodes
  const nodes = g.selectAll('.node')
    .data(root.descendants())
    .enter().append('g')
    .attr('transform', d => `translate(${d.x},${d.y})`);

  nodes.append('circle')
    .attr('r', d => d.depth === 0 ? 6 : 5)
    .attr('fill', d => {
      if (d.data.loop) return '#ef4444';
      if (d.data.name.includes('❌')) return '#ef4444';
      if (d.data.name.includes('✓')) return '#22c55e';
      return '#c2410c';
    })
    .attr('opacity', 0.9);

  // Labels: alternate above/below for leaf nodes to avoid overlap
  let leafIndex = 0;
  nodes.append('text')
    .attr('dy', d => {
      if (d.children) return -14;
      leafIndex++;
      return (leafIndex % 2 === 0) ? -14 : 20;
    })
    .attr('text-anchor', 'middle')
    .attr('fill', '#6b7280')
    .attr('font-size', '8px')
    .text(d => {
      const name = d.data.name;
      if (name.length > 14) return name.slice(0, 12) + '…';
      return name;
    });
}

// Animate a numeric counter in the element's textContent from `start` to `end`
// over `duration` ms; when `format` is true, thousands separators are added.
function animateCounter(id, start, end, duration, format) {
  const el = document.getElementById(id);
  const startTime = performance.now();

  function update(currentTime) {
    const elapsed = currentTime - startTime;
    const progress = Math.min(elapsed / duration, 1);
    const eased = 1 - Math.pow(1 - progress, 3);
    const current = Math.round(start + (end - start) * eased);

    if (format) {
      el.textContent = current.toLocaleString();
    } else {
      el.textContent = current;
    }

    if (progress < 1) {
      requestAnimationFrame(update);
    }
  }
  requestAnimationFrame(update);
}

// ── MULTIPLIER BARS ─────────────────────────────────────────

function drawMultiplierBars() {
  const container = document.getElementById('multiplier-bars');
  const maxVal = multiplierData[multiplierData.length - 1].value;

  multiplierData.forEach(d => {
    const row = document.createElement('div');
    row.className = 'multiplier-row';
    const pct = (d.value / maxVal * 100);
    const textColor = d.value <= 15 ? '#1a1a2e' : '#ffffff';
    row.innerHTML = `
      <div class="multiplier-label">${d.label}</div>
      <div class="multiplier-bar-track">
        <div class="multiplier-bar" style="width:0%; background:${d.color}; color:${textColor}" data-width="${Math.max(pct, 3)}%">
          ${d.value}×
        </div>
      </div>
    `;
    container.appendChild(row);
  });

  const barObserver = new IntersectionObserver((entries) => {
    entries.forEach(entry => {
      if (entry.isIntersecting) {
        container.querySelectorAll('.multiplier-bar').forEach((bar, i) => {
          setTimeout(() => {
            bar.style.width = bar.dataset.width;
          }, i * 200);
        });
        barObserver.unobserve(entry.target);
      }
    });
  }, { threshold: 0.3 });
  barObserver.observe(container);
}

// ── JEVONS CHART ────────────────────────────────────────────

function drawJevonsChart() {
  const container = document.getElementById('jevons-chart');
  const width = container.clientWidth;
  const height = 220;
  const margin = { top: 20, right: 20, bottom: 40, left: 20 };
  const w = width - margin.left - margin.right;
  const h = height - margin.top - margin.bottom;

  const svg = d3.select('#jevons-chart')
    .append('svg')
    .attr('width', width)
    .attr('height', height);

  const g = svg.append('g')
    .attr('transform', `translate(${margin.left},${margin.top})`);

  // Y-axis label
  svg.append('text')
    .attr('x', 12)
    .attr('y', margin.top - 6)
    .attr('fill', '#6b7280')
    .attr('font-size', '10px')
    .text('Multiplier (×)');

  const data = [
    { label: "Hardware\nefficiency", value: 4, color: "#16a34a", symbol: "×4" },
    { label: "Software\noptimization", value: 3, color: "#0891b2", symbol: "×3" },
    { label: "Combined\nefficiency", value: 12, color: "#4f46e5", symbol: "×12" },
    { label: "Token\ngrowth", value: 50, color: "#ea580c", symbol: "×50" },
    { label: "Net energy\nper query", value: 4.2, color: "#dc2626", symbol: "×4.2↑" }
  ];

  const x = d3.scaleBand().domain(data.map(d => d.label)).range([0, w]).padding(0.3);
  const y = d3.scaleLinear().domain([0, 55]).range([h, 0]);

  // Bars
  const bars = g.selectAll('.bar')
    .data(data)
    .enter().append('rect')
    .attr('x', d => x(d.label))
    .attr('width', x.bandwidth())
    .attr('y', h)
    .attr('height', 0)
    .attr('rx', 4)
    .attr('fill', d => d.color)
    .attr('opacity', 0.85);

  // Labels above bars
  const labels = g.selectAll('.bar-label')
    .data(data)
    .enter().append('text')
    .attr('x', d => x(d.label) + x.bandwidth() / 2)
    .attr('y', h)
    .attr('text-anchor', 'middle')
    .attr('fill', d => d.color)
    .attr('font-size', '14px')
    .attr('font-weight', '700')
    .attr('font-family', 'var(--mono)')
    .attr('opacity', 0)
    .text(d => d.symbol);

  // X axis labels
  const jevonsLabelSize = width < 400 ? '9px' : '11px';
  const jevonsLabelSpacing = width < 400 ? 11 : 13;
  data.forEach((d, i) => {
    const lines = d.label.split('\n');
    lines.forEach((line, j) => {
      g.append('text')
        .attr('x', x(d.label) + x.bandwidth() / 2)
        .attr('y', h + 18 + j * jevonsLabelSpacing)
        .attr('text-anchor', 'middle')
        .attr('fill', '#4b5563').attr('font-size', jevonsLabelSize)
        .text(line);
    });
  });

  // Animate on scroll
  let jevonsAnimated = false;
  function animateJevons() {
    if (jevonsAnimated) return;
    jevonsAnimated = true;
    bars.transition().duration(1200).delay((d, i) => i * 200)
      .attr('y', d => y(d.value))
      .attr('height', d => h - y(d.value));
    labels.transition().duration(400).delay((d, i) => 1200 + i * 200)
      .attr('y', d => y(d.value) - 8)
      .attr('opacity', 1);
  }
  const jevonsObserver = new IntersectionObserver((entries) => {
    entries.forEach(entry => {
      if (entry.isIntersecting) {
        animateJevons();
        jevonsObserver.unobserve(entry.target);
      }
    });
  }, { threshold: 0.1 });
  jevonsObserver.observe(container);
  // Fallback: animate after 4 seconds regardless
  setTimeout(animateJevons, 4000);
}

// ── INIT ────────────────────────────────────────────────────

document.addEventListener('DOMContentLoaded', () => {
  drawTwoCurves();
  drawAppsTable();
  drawAgentComparison();
  drawMultiplierBars();
  drawJevonsChart();
});

// Redraw on resize (debounced)
let resizeTimer;
window.addEventListener('resize', () => {
  clearTimeout(resizeTimer);
  resizeTimer = setTimeout(() => {
    document.getElementById('two-curves-chart').innerHTML = '';
    document.getElementById('jevons-chart').innerHTML = '';
    drawTwoCurves();
    drawJevonsChart();
  }, 300);
});
</script>


<script>
// ── DATA ──────────────────────────────────────────────────────

const mooresLawData = [
  { year: 1971, transistors: 2300, label: "Intel 4004" },
  { year: 1974, transistors: 4500 },
  { year: 1978, transistors: 29000, label: "Intel 8086" },
  { year: 1982, transistors: 134000 },
  { year: 1985, transistors: 275000 },
  { year: 1989, transistors: 1200000 },
  { year: 1993, transistors: 3100000, label: "Pentium" },
  { year: 1997, transistors: 7500000 },
  { year: 2000, transistors: 42000000 },
  { year: 2004, transistors: 125000000 },
  { year: 2007, transistors: 291000000, label: "iPhone era" },
  { year: 2010, transistors: 1170000000 },
  { year: 2012, transistors: 1400000000 },
  { year: 2015, transistors: 3100000000 },
  { year: 2017, transistors: 19200000000 },
  { year: 2020, transistors: 16000000000, label: "Apple M1" },
  { year: 2022, transistors: 57000000000 },
  { year: 2024, transistors: 80000000000, label: "Plateau" }
];

const tokenLawData = [
  { year: 2025.1, tokens: 1e12, label: "1T/wk", labelPos: "right" },
  { year: 2025.3, tokens: 2e12 },
  { year: 2025.45, tokens: 3.5e12 },
  { year: 2025.58, tokens: 4.5e12, label: "4.5T", labelPos: "left" },
  { year: 2025.7, tokens: 5.5e12 },
  { year: 2025.8, tokens: 7e12 },
  { year: 2025.9, tokens: 8.5e12 },
  { year: 2026.1, tokens: 10e12, label: "10T/wk", labelPos: "left" }
];

const topApps = [
  { rank: 1, name: "OpenClaw", type: "coding", tokens: 133, unit: "B" },
  { rank: 2, name: "Kilo Code", type: "coding", tokens: 93.5, unit: "B" },
  { rank: 3, name: "BLACKBOXAI", type: "coding", tokens: 49.9, unit: "B" },
  { rank: 4, name: "liteLLM", type: "other", tokens: 44.5, unit: "B" },
  { rank: 5, name: "Janitor AI", type: "other", tokens: 31, unit: "B" },
  { rank: 6, name: "Claude Code", type: "coding", tokens: 18.9, unit: "B" },
  { rank: 7, name: "Cline", type: "coding", tokens: 16.7, unit: "B" },
  { rank: 8, name: "Roo Code", type: "coding", tokens: 14.6, unit: "B" }
];

const multiplierData = [
  { label: "Simple chat", value: 1, color: "#6ee7b7" },
  { label: "Reasoning (o1)", value: 15, color: "#f59e0b" },
  { label: "Coding agent", value: 50, color: "#ea580c" },
  { label: "Deep research", value: 100, color: "#dc2626" }
];

const agentTreeData = {
  name: "Fix auth bug",
  children: [
    { name: "Read code", tokens: 8000 },
    { name: "Analyze error", tokens: 12000 },
    {
      name: "Search fixes",
      tokens: 45000,
      children: [
        { name: "Query 1", tokens: 15000 },
        { name: "Query 2", tokens: 15000 },
        { name: "Query 3", tokens: 15000 }
      ]
    },
    { name: "Write patch", tokens: 35000 },
    {
      name: "Test patch",
      tokens: 20000,
      loop: true,
      children: [
        { name: "❌ Fail → retry", tokens: 80000 },
        { name: "Write patch v2", tokens: 40000 },
        { name: "Test v2 ✓", tokens: 20000 }
      ]
    }
  ]
};

// ── INTERSECTION OBSERVER FOR FADE-INS ──────────────────────

const observer = new IntersectionObserver((entries) => {
  entries.forEach(entry => {
    if (entry.isIntersecting) {
      entry.target.classList.add('visible');
    }
  });
}, { threshold: 0.15 });

document.querySelectorAll('.fade-in').forEach(el => observer.observe(el));

// ── CHART 1: THE TWO CURVES ────────────────────────────────

function drawTwoCurves() {
  const container = document.getElementById('two-curves-chart');
  const width = container.clientWidth;
  const isMobile = width < 500;
  const height = isMobile ? Math.min(320, width * 0.7) : Math.min(420, width * 0.55);
  const margin = isMobile
    ? { top: 20, right: 15, bottom: 40, left: 45 }
    : { top: 30, right: 30, bottom: 50, left: 65 };
  const labelSize = isMobile ? '10px' : '11px';
  const titleLabelSize = isMobile ? '11px' : '13px';
  const dotLabelSize = isMobile ? '9px' : '9px';
  const milestoneSize = isMobile ? '10px' : '10px';
  const w = width - margin.left - margin.right;
  const h = height - margin.top - margin.bottom;

  const svg = d3.select('#two-curves-chart')
    .append('svg')
    .attr('width', width)
    .attr('height', height);

  const g = svg.append('g')
    .attr('transform', `translate(${margin.left},${margin.top})`);

  // X scale: use a piecewise scale to give more room to 2025-2026
  // Left portion (1971-2024) gets 60% of width, right portion (2024-2026.5) gets 40%
  const breakpoint = 2024;
  const breakW = w * 0.6;
  const xLeft = d3.scaleLinear().domain([1971, breakpoint]).range([0, breakW]);
  const xRight = d3.scaleLinear().domain([breakpoint, 2026.5]).range([breakW, w]);
  const x = (year) => year <= breakpoint ? xLeft(year) : xRight(year);
  x.domain = () => [1971, 2026.5];
  x.range = () => [0, w];

  // Y scale: log, from 1e3 to 1e13
  const y = d3.scaleLog().domain([1e3, 1e14]).range([h, 0]);

  // Grid lines
  const yTicks = [1e3, 1e4, 1e5, 1e6, 1e7, 1e8, 1e9, 1e10, 1e11, 1e12, 1e13];
  g.selectAll('.grid-line')
    .data(yTicks)
    .enter().append('line')
    .attr('x1', 0).attr('x2', w)
    .attr('y1', d => y(d)).attr('y2', d => y(d))
    .attr('stroke', 'rgba(0,0,0,0.06)');

  // X axis
  // X axis - manual ticks for the piecewise scale
  const xTicks = isMobile ? [1975, 1995, 2015, 2025] : [1975, 1985, 1995, 2005, 2015, 2025, 2026];
  xTicks.forEach(tick => {
    const tx = x(tick);
    g.append('line')
      .attr('x1', tx).attr('x2', tx)
      .attr('y1', h).attr('y2', h + 6)
      .attr('stroke', 'rgba(0,0,0,0.12)');
    g.append('text')
      .attr('x', tx).attr('y', h + 16)
      .attr('text-anchor', 'middle')
      .attr('fill', '#6b7280').attr('font-size', labelSize)
      .text(tick);
  });
  g.append('line')
    .attr('x1', 0).attr('x2', w)
    .attr('y1', h).attr('y2', h)
    .attr('stroke', 'rgba(0,0,0,0.12)');

  // Y axis labels
  const yLabels = [
    { v: 1e3, t: "1K" }, { v: 1e6, t: "1M" }, { v: 1e9, t: "1B" }, { v: 1e12, t: "1T" }
  ];
  yLabels.forEach(d => {
    g.append('text')
      .attr('x', -8).attr('y', y(d.v))
      .attr('text-anchor', 'end').attr('dominant-baseline', 'middle')
      .attr('fill', '#4b5563').attr('font-size', isMobile ? '10px' : '12px')
      .attr('font-weight', '500')
      .text(d.t);
  });

  // Moore's Law line
  const mooresLine = d3.line()
    .x(d => x(d.year))
    .y(d => y(d.transistors))
    .curve(d3.curveMonotoneX);

  const mooresPathLen = g.append('path')
    .datum(mooresLawData)
    .attr('d', mooresLine)
    .attr('fill', 'none')
    .attr('stroke', 'none')
    .node().getTotalLength();

  const mooresPath = g.append('path')
    .datum(mooresLawData)
    .attr('d', mooresLine)
    .attr('fill', 'none')
    .attr('stroke', '#4f46e5')
    .attr('stroke-width', 2.5)
    .attr('stroke-dasharray', mooresPathLen)
    .attr('stroke-dashoffset', mooresPathLen);

  // Moore's Law label — positioned below the curve to avoid overlap
  g.append('text')
    .attr('x', x(isMobile ? 1990 : 2000)).attr('y', y(isMobile ? 2e4 : 5e4))
    .attr('fill', '#4f46e5').attr('font-size', isMobile ? '9px' : '12px').attr('font-weight', '600')
    .text("Moore's Law");

  if (!isMobile) {
    g.append('text')
      .attr('x', x(2000)).attr('y', y(5e4) + 16)
      .attr('fill', '#6b7280').attr('font-size', '10px')
      .text("(transistors per chip)");
  }

  // Moore's Law milestone dots
  const mooresMilestones = isMobile
    ? mooresLawData.filter(d => d.label && ['Intel 4004', 'Pentium', 'Plateau'].includes(d.label))
    : mooresLawData.filter(d => d.label);
  mooresMilestones.forEach(d => {
    g.append('circle')
      .attr('cx', x(d.year)).attr('cy', y(d.transistors))
      .attr('r', isMobile ? 3 : 4).attr('fill', '#4f46e5').attr('opacity', 0.7);
    g.append('text')
      .attr('x', x(d.year)).attr('y', y(d.transistors) - 8)
      .attr('text-anchor', 'middle')
      .attr('fill', '#6b7280').attr('font-size', dotLabelSize)
      .text(d.label);
  });

  // Token Law line
  const tokenLine = d3.line()
    .x(d => x(d.year))
    .y(d => y(d.tokens))
    .curve(d3.curveMonotoneX);

  const tokenPath = g.append('path')
    .datum(tokenLawData)
    .attr('d', tokenLine)
    .attr('fill', 'none')
    .attr('stroke', '#ea580c')
    .attr('stroke-width', 3.5)
    .attr('stroke-dasharray', function() { return this.getTotalLength(); })
    .attr('stroke-dashoffset', function() { return this.getTotalLength(); });

  // Token Law glow
  g.append('path')
    .datum(tokenLawData)
    .attr('d', tokenLine)
    .attr('fill', 'none')
    .attr('stroke', '#ea580c')
    .attr('stroke-width', 8)
    .attr('opacity', 0.15);

  // Token Law label
  g.append('text')
    .attr('x', x(isMobile ? 2025.5 : 2025.2)).attr('y', y(5e13))
    .attr('fill', '#c2410c').attr('font-size', titleLabelSize).attr('font-weight', '700')
    .text("Token Law");

  if (!isMobile) {
    g.append('text')
      .attr('x', x(2025.2)).attr('y', y(5e13) + 16)
      .attr('fill', '#6b7280').attr('font-size', '10px')
      .text("(tokens/week)");
  }

  // Token Law milestone dots - on mobile, only show first and last
  const tokenMilestones = isMobile
    ? tokenLawData.filter(d => d.label && (d.label === '1T/wk' || d.label === '10T/wk'))
    : tokenLawData.filter(d => d.label);
  tokenMilestones.forEach(d => {
    g.append('circle')
      .attr('cx', x(d.year)).attr('cy', y(d.tokens))
      .attr('r', isMobile ? 4 : 5).attr('fill', '#ea580c');
    const xOff = d.labelPos === 'left' ? (isMobile ? -6 : -10) : (isMobile ? 6 : 10);
    const anchor = d.labelPos === 'left' ? 'end' : 'start';
    g.append('text')
      .attr('x', x(d.year) + xOff).attr('y', y(d.tokens) + 4)
      .attr('text-anchor', anchor)
      .attr('fill', '#c2410c').attr('font-size', milestoneSize).attr('font-weight', '600')
      .text(d.label);
  });

  // Doubling time annotation for Moore's Law
  if (!isMobile) {
    const mAnnoteX = x(1990);
    const mAnnoteY = y(1e5);
    g.append('rect')
      .attr('x', mAnnoteX - 72).attr('y', mAnnoteY - 10)
      .attr('width', 144).attr('height', 22)
      .attr('rx', 4)
      .attr('fill', 'rgba(79,70,229,0.08)')
      .attr('stroke', 'rgba(79,70,229,0.2)').attr('stroke-width', 1);
    g.append('text')
      .attr('x', mAnnoteX).attr('y', mAnnoteY + 5)
      .attr('text-anchor', 'middle')
      .attr('fill', '#4f46e5').attr('font-size', '11px').attr('font-weight', '600')
      .text('Doubles every ~2 years');
  }

  // Doubling time annotation for Token Law
  const tAnnoteX = x(isMobile ? 2025.8 : 2025.6);
  const tAnnoteY = y(isMobile ? 5e10 : 2e11);
  if (!isMobile) {
    g.append('rect')
      .attr('x', tAnnoteX - 82).attr('y', tAnnoteY - 10)
      .attr('width', 164).attr('height', 22)
      .attr('rx', 4)
      .attr('fill', 'rgba(234,88,12,0.08)')
      .attr('stroke', 'rgba(234,88,12,0.2)').attr('stroke-width', 1);
  }
  g.append('text')
    .attr('x', tAnnoteX).attr('y', tAnnoteY + 5)
    .attr('text-anchor', 'middle')
    .attr('fill', '#c2410c').attr('font-size', isMobile ? '9px' : '11px').attr('font-weight', '600')
    .text(isMobile ? '~3.5mo doubling' : 'Doubles every ~3.5 months');

  // Divider line at 2024
  g.append('line')
    .attr('x1', x(2024.5)).attr('x2', x(2024.5))
    .attr('y1', 0).attr('y2', h)
    .attr('stroke', 'rgba(0,0,0,0.12)')
    .attr('stroke-dasharray', '4 4');

  g.append('text')
    .attr('x', x(2024.5)).attr('y', -10)
    .attr('text-anchor', 'middle')
    .attr('fill', '#6b7280').attr('font-size', isMobile ? '8px' : '10px')
    .text("← 53 years | 2 years →");

  // Animate on scroll
  const chartObserver = new IntersectionObserver((entries) => {
    entries.forEach(entry => {
      if (entry.isIntersecting) {
        mooresPath.transition().duration(2000).ease(d3.easeCubicOut)
          .attr('stroke-dashoffset', 0);
        tokenPath.transition().delay(1500).duration(1500).ease(d3.easeCubicOut)
          .attr('stroke-dashoffset', 0);
        chartObserver.unobserve(entry.target);
      }
    });
  }, { threshold: 0.3 });
  chartObserver.observe(container);
}

// ── TOP APPS TABLE ──────────────────────────────────────────

function drawAppsTable() {
  const maxTokens = topApps[0].tokens;
  const tbody = document.getElementById('apps-table-body');

  topApps.forEach((app, i) => {
    const pct = (app.tokens / maxTokens * 100).toFixed(1);
    const row = document.createElement('tr');
    row.innerHTML = `
      <td style="color:var(--text-muted)">${app.rank}</td>
      <td style="font-weight:600">${app.name}</td>
      <td><span class="type-badge ${app.type}">${app.type === 'coding' ? '⚡ Coding' : '💬 Other'}</span></td>
      <td class="bar-cell">
        <div class="bar-bg">
          <div class="bar-fill ${app.type}" style="width:0%" data-width="${pct}%"></div>
        </div>
      </td>
      <td style="text-align:right; font-family:var(--mono); font-size:0.85rem">${app.tokens}${app.unit}</td>
    `;
    tbody.appendChild(row);
  });

  // Animate bars
  const tableObserver = new IntersectionObserver((entries) => {
    entries.forEach(entry => {
      if (entry.isIntersecting) {
        document.querySelectorAll('.bar-fill').forEach((bar, i) => {
          setTimeout(() => {
            bar.style.width = bar.dataset.width;
          }, i * 80);
        });
        tableObserver.unobserve(entry.target);
      }
    });
  }, { threshold: 0.3 });
  tableObserver.observe(tbody.closest('.chart-container'));
}

// ── AGENT COMPARISON ANIMATION ──────────────────────────────

function drawAgentComparison() {
  // Simple query: just a dot
  const simpleSvg = d3.select('#simple-dot-viz')
    .append('svg')
    .attr('width', '100%')
    .attr('height', 120);

  simpleSvg.append('circle')
    .attr('cx', '50%').attr('cy', 60)
    .attr('r', 0)
    .attr('fill', '#16a34a')
    .attr('opacity', 0.8)
    .transition().delay(500).duration(600)
    .attr('r', 8);

  simpleSvg.append('text')
    .attr('x', '50%').attr('y', 90)
    .attr('text-anchor', 'middle')
    .attr('fill', '#16a34a').attr('font-size', '12px')
    .attr('opacity', 0)
    .text('"Paris"')
    .transition().delay(1000).duration(400)
    .attr('opacity', 1);

  // Animate simple counter
  const simpleObserver = new IntersectionObserver((entries) => {
    entries.forEach(entry => {
      if (entry.isIntersecting) {
        animateCounter('simple-token-count', 0, 47, 800, false);
        setTimeout(() => {
          document.getElementById('simple-cost').textContent = '$0.0001';
          document.getElementById('simple-token-count').textContent = '47';
        }, 850);
        simpleObserver.unobserve(entry.target);
      }
    });
  }, { threshold: 0.3 });
  simpleObserver.observe(document.querySelector('.agent-card.simple'));

  // Agent tree
  drawAgentTree();

  // Animate agent counter
  const agentObserver = new IntersectionObserver((entries) => {
    entries.forEach(entry => {
      if (entry.isIntersecting) {
        animateCounter('agent-token-count', 0, 2900000, 2500, true);
        setTimeout(() => {
          document.getElementById('agent-cost').textContent = '$5.80';
          document.getElementById('agent-token-count').textContent = '2,900,000';
        }, 2600);
        agentObserver.unobserve(entry.target);
      }
    });
  }, { threshold: 0.3 });
  agentObserver.observe(document.querySelector('.agent-card.agentic'));
}

function drawAgentTree() {
  const container = document.getElementById('agent-tree-viz');
  const width = container.clientWidth;
  const isMobileTree = width < 400;
  const height = isMobileTree ? 240 : 320;

  // On mobile, use a simplified tree (collapse leaf children)
  let treeData = agentTreeData;
  if (isMobileTree) {
    treeData = {
      name: agentTreeData.name,
      children: agentTreeData.children.map(c => ({
        name: c.name,
        tokens: c.tokens,
        loop: c.loop
        // omit grandchildren on mobile
      }))
    };
  }

  const svg = d3.select('#agent-tree-viz')
    .append('svg')
    .attr('width', width)
    .attr('height', height);

  const pad = isMobileTree ? 10 : 20;
  const g = svg.append('g').attr('transform', `translate(${pad}, ${pad})`);

  const treeLayout = d3.tree()
    .size([width - pad * 2, height - pad * 2 - 20])
    .separation((a, b) => a.parent === b.parent ? (isMobileTree ? 1 : 1.5) : 2);
  const root = d3.hierarchy(treeData);
  treeLayout(root);

  // Links
  g.selectAll('.agent-tree-link')
    .data(root.links())
    .enter().append('path')
    .attr('class', 'agent-tree-link')
    .attr('d', d3.linkVertical().x(d => d.x).y(d => d.y))
    .attr('stroke', d => d.target.data.loop ? '#ef4444' : 'rgba(249,115,22,0.3)')
    .attr('stroke-dasharray', d => d.target.data.loop ? '4 3' : 'none');

  // Nodes
  const nodes = g.selectAll('.node')
    .data(root.descendants())
    .enter().append('g')
    .attr('transform', d => `translate(${d.x},${d.y})`);

  nodes.append('circle')
    .attr('r', d => d.depth === 0 ? 6 : 5)
    .attr('fill', d => {
      if (d.data.loop) return '#ef4444';
      if (d.data.name.includes('❌')) return '#ef4444';
      if (d.data.name.includes('✓')) return '#22c55e';
      return '#c2410c';
    })
    .attr('opacity', 0.9);

  // Labels: alternate above/below for leaf nodes to avoid overlap
  let leafIndex = 0;
  nodes.append('text')
    .attr('dy', d => {
      if (d.children) return -14;
      leafIndex++;
      return (leafIndex % 2 === 0) ? -14 : 20;
    })
    .attr('text-anchor', 'middle')
    .attr('fill', '#6b7280')
    .attr('font-size', '8px')
    .text(d => {
      const name = d.data.name;
      if (name.length > 14) return name.slice(0, 12) + '…';
      return name;
    });
}

function animateCounter(id, start, end, duration, format) {
  const el = document.getElementById(id);
  const startTime = performance.now();

  function update(currentTime) {
    const elapsed = currentTime - startTime;
    const progress = Math.min(elapsed / duration, 1);
    const eased = 1 - Math.pow(1 - progress, 3);
    const current = Math.round(start + (end - start) * eased);

    if (format) {
      el.textContent = current.toLocaleString();
    } else {
      el.textContent = current;
    }

    if (progress < 1) {
      requestAnimationFrame(update);
    }
  }
  requestAnimationFrame(update);
}

// ── MULTIPLIER BARS ─────────────────────────────────────────

function drawMultiplierBars() {
  const container = document.getElementById('multiplier-bars');
  const maxVal = multiplierData[multiplierData.length - 1].value;

  multiplierData.forEach(d => {
    const row = document.createElement('div');
    row.className = 'multiplier-row';
    const pct = (d.value / maxVal * 100);
    const textColor = d.value <= 15 ? '#1a1a2e' : '#ffffff';
    row.innerHTML = `
      <div class="multiplier-label">${d.label}</div>
      <div class="multiplier-bar-track">
        <div class="multiplier-bar" style="width:0%; background:${d.color}; color:${textColor}" data-width="${Math.max(pct, 3)}%">
          ${d.value}×
        </div>
      </div>
    `;
    container.appendChild(row);
  });

  const barObserver = new IntersectionObserver((entries) => {
    entries.forEach(entry => {
      if (entry.isIntersecting) {
        container.querySelectorAll('.multiplier-bar').forEach((bar, i) => {
          setTimeout(() => {
            bar.style.width = bar.dataset.width;
          }, i * 200);
        });
        barObserver.unobserve(entry.target);
      }
    });
  }, { threshold: 0.3 });
  barObserver.observe(container);
}

// ── JEVONS CHART ────────────────────────────────────────────

function drawJevonsChart() {
  const container = document.getElementById('jevons-chart');
  const width = container.clientWidth;
  const height = 220;
  const margin = { top: 20, right: 20, bottom: 40, left: 20 };
  const w = width - margin.left - margin.right;
  const h = height - margin.top - margin.bottom;

  const svg = d3.select('#jevons-chart')
    .append('svg')
    .attr('width', width)
    .attr('height', height);

  const g = svg.append('g')
    .attr('transform', `translate(${margin.left},${margin.top})`);

  // Y-axis label
  svg.append('text')
    .attr('x', 12)
    .attr('y', margin.top - 6)
    .attr('fill', '#6b7280')
    .attr('font-size', '10px')
    .text('Multiplier (×)');

  const data = [
    { label: "Hardware\nefficiency", value: 4, color: "#16a34a", symbol: "×4" },
    { label: "Software\noptimization", value: 3, color: "#0891b2", symbol: "×3" },
    { label: "Combined\nefficiency", value: 12, color: "#4f46e5", symbol: "×12" },
    { label: "Token\ngrowth", value: 50, color: "#ea580c", symbol: "×50" },
    { label: "Net energy\nper query", value: 4.2, color: "#dc2626", symbol: "×4.2↑" }
  ];

  const x = d3.scaleBand().domain(data.map(d => d.label)).range([0, w]).padding(0.3);
  const y = d3.scaleLinear().domain([0, 55]).range([h, 0]);

  // Bars
  const bars = g.selectAll('.bar')
    .data(data)
    .enter().append('rect')
    .attr('x', d => x(d.label))
    .attr('width', x.bandwidth())
    .attr('y', h)
    .attr('height', 0)
    .attr('rx', 4)
    .attr('fill', d => d.color)
    .attr('opacity', 0.85);

  // Labels above bars
  const labels = g.selectAll('.bar-label')
    .data(data)
    .enter().append('text')
    .attr('x', d => x(d.label) + x.bandwidth() / 2)
    .attr('y', h)
    .attr('text-anchor', 'middle')
    .attr('fill', d => d.color)
    .attr('font-size', '14px')
    .attr('font-weight', '700')
    .attr('font-family', 'var(--mono)')
    .attr('opacity', 0)
    .text(d => d.symbol);

  // X axis labels
  const jevonsLabelSize = width < 400 ? '9px' : '11px';
  const jevonsLabelSpacing = width < 400 ? 11 : 13;
  data.forEach((d, i) => {
    const lines = d.label.split('\n');
    lines.forEach((line, j) => {
      g.append('text')
        .attr('x', x(d.label) + x.bandwidth() / 2)
        .attr('y', h + 18 + j * jevonsLabelSpacing)
        .attr('text-anchor', 'middle')
        .attr('fill', '#4b5563').attr('font-size', jevonsLabelSize)
        .text(line);
    });
  });

  // Animate on scroll
  let jevonsAnimated = false;
  function animateJevons() {
    if (jevonsAnimated) return;
    jevonsAnimated = true;
    bars.transition().duration(1200).delay((d, i) => i * 200)
      .attr('y', d => y(d.value))
      .attr('height', d => h - y(d.value));
    labels.transition().duration(400).delay((d, i) => 1200 + i * 200)
      .attr('y', d => y(d.value) - 8)
      .attr('opacity', 1);
  }
  const jevonsObserver = new IntersectionObserver((entries) => {
    entries.forEach(entry => {
      if (entry.isIntersecting) {
        animateJevons();
        jevonsObserver.unobserve(entry.target);
      }
    });
  }, { threshold: 0.1 });
  jevonsObserver.observe(container);
  // Fallback: animate after 4 seconds regardless
  setTimeout(animateJevons, 4000);
}

// ── INIT ────────────────────────────────────────────────────

document.addEventListener('DOMContentLoaded', () => {
  drawTwoCurves();
  drawAppsTable();
  drawAgentComparison();
  drawMultiplierBars();
  drawJevonsChart();
});

// Redraw on resize (debounced)
let resizeTimer;
window.addEventListener('resize', () => {
  clearTimeout(resizeTimer);
  resizeTimer = setTimeout(() => {
    document.getElementById('two-curves-chart').innerHTML = '';
    document.getElementById('jevons-chart').innerHTML = '';
    drawTwoCurves();
    drawJevonsChart();
  }, 300);
});
</script>

</div>]]></content><author><name></name></author><category term="ai" /><category term="agents" /><category term="manus" /><category term="productivity" /><category term="agentic-ai" /><category term="token-law" /><category term="moores-law" /><summary type="html"><![CDATA[OpenRouter data shows that AI agents now consume more tokens than humans. Here's what that means for the future of computing.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://mandar.dev/assets/images/2026-02-20-token-law-hero.png" /><media:content medium="image" url="https://mandar.dev/assets/images/2026-02-20-token-law-hero.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Inversion of Control</title><link href="https://mandar.dev/2026/02/13/inversion-of-control/" rel="alternate" type="text/html" title="Inversion of Control" /><published>2026-02-13T00:00:00+00:00</published><updated>2026-02-13T00:00:00+00:00</updated><id>https://mandar.dev/2026/02/13/inversion-of-control</id><content type="html" xml:base="https://mandar.dev/2026/02/13/inversion-of-control/"><![CDATA[<p><img src="/assets/images/2026-02-13-inversion-of-control-hero.png" alt="A workspace with a laptop and phone, connected by glowing threads to an abstract AI network above — representing the inversion of control from human to AI orchestrator" /></p>

<p>In software engineering, there’s a powerful design principle known as <strong>Inversion of Control</strong>. It’s often summarized by the “Hollywood Principle”: <em>Don’t call us, we’ll call you.</em> In traditional programming, your custom code calls a library to get work done. With IoC, a framework calls your code when it’s needed. You cede control of the program’s flow to gain the power and structure of the framework. As <a href="https://martinfowler.com/bliki/InversionOfControl.html">Martin Fowler puts it</a>, this inversion is a “defining characteristic of a framework.”</p>
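<p>The inversion is easiest to see in code. Here is a hedged, minimal sketch — a hypothetical toy "framework," not any real library — contrasting the traditional model (your code calls a library) with IoC (you register a callback and the framework decides when it runs):</p>

```javascript
// Hypothetical minimal framework for illustration only.
// Under IoC, you hand your code to the framework and cede control of
// the program's flow: "don't call us, we'll call you."
const framework = {
  handlers: {},
  on(event, fn) { this.handlers[event] = fn; },       // you register your code
  emit(event, payload) {                              // the framework calls you
    return this.handlers[event] ? this.handlers[event](payload) : undefined;
  },
};

// Your function becomes a dependency the framework invokes when it sees fit.
framework.on('task:assigned', (task) => `done: ${task}`);

console.log(framework.emit('task:assigned', 'fix the faucet')); // → done: fix the faucet
```

<p>Note that after registration, your code never decides when it runs — the framework does. That is the exact relationship my experiment recreated, with me as the registered handler.</p>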

<p>For years, this concept was confined to my code editor. Then, a few weeks ago, I applied it to my life.</p>

<h2 id="the-experiment">The Experiment</h2>

<p>I had a backlog of personal tasks that was years deep — a sprawling, amorphous blob of neglected chores, half-finished projects, and “should-dos” that had been piling up since before the pandemic. Fix the dripping faucet in the guest bathroom. Digitize the box of old photos in the closet. Cancel that subscription I haven’t used in two years. Research refinancing options. The list was long enough to be paralyzing, which is precisely why none of it was getting done.</p>

<p>So I tried an experiment. I had Manus, my AI assistant, create a comprehensive spreadsheet of every single one of these long-abandoned tasks. Then, I gave it a new role: <strong>project manager</strong>. The directive was simple: every weekend and every other weeknight, assign me a small, manageable chunk of work from this list. Prioritize by impact. Keep the assignments short enough to finish in a single sitting.</p>

<p>And then, the inversion happened. Instead of me calling on my AI when I needed help, my AI started calling on me when it was time to work. On a Tuesday evening, my phone buzzed: <em>“Tonight’s task: Cancel the unused Audible subscription and the old cloud storage plan. Estimated time: 15 minutes.”</em> I did it. It took twelve. The next Saturday morning: <em>“Weekend project: Sort the garage shelf with the old electronics. Decide keep/donate/recycle for each item.”</em> Done by lunch.</p>

<p>The backlog began to shrink. Things actually got done. But the feeling was strange and new. I was being managed by an AI. And it was working better than anything I’d tried before.</p>

<h2 id="the-inversion-from-tool-to-orchestrator">The Inversion: From Tool to Orchestrator</h2>

<p>This experience felt profoundly different from the typical way we interact with AI. The traditional model is straightforward: a human has a task, asks an AI for help, and the human decides what to do with the output. The human is always in the driver’s seat. My experiment inverted this relationship entirely. The AI held the master plan. The AI decided what to assign and when. The human — me — simply executed the task. I had become the dependency, injected into the workflow when the framework deemed it necessary.</p>

<blockquote>
  <p>“This inversion of control gives frameworks the power to serve as extensible skeletons. The methods supplied by the user tailor the generic algorithms defined in the framework for a particular application.” — Ralph Johnson and Brian Foote, <a href="https://en.wikipedia.org/wiki/Inversion_of_control"><em>Designing Reusable Classes</em></a> (1988)</p>
</blockquote>

<p>Replace “methods supplied by the user” with “hours supplied by the human,” and you have a remarkably accurate description of what happened. The AI provided the skeleton — the schedule, the priorities, the sequencing — and I supplied the labor to tailor it to my specific life.</p>

<h2 id="the-soaring-value-of-human-time">The Soaring Value of Human Time</h2>

<p>The logic behind this delegation becomes clear when you examine the economics of our most finite resource: time. We are living in an era of deepening <strong>time poverty</strong>. According to the <a href="https://www.bls.gov/news.release/atus.htm">Bureau of Labor Statistics</a>, Americans spend between 2.3 and 2.7 hours per day on household activities. Another <a href="https://nypost.com/2020/08/04/average-american-spends-this-much-time-doing-housework-a-month-and-this-task-takes-the-longest/">survey</a> found that the average person spends nearly 24 hours a month on cleaning alone. That’s an entire day, every month, consumed by maintenance.</p>

<p>Paradoxically, the more successful you become, the worse this feels. <a href="https://www.bbh.com/us/en/insights/capital-partners-insights/the-value-of-time--understanding-and-maximizing-time-affluence.html">Research from Brown Brothers Harriman</a> explains why: “As an item increases in value, it is perceived to be a scarcer resource.” The more valuable your time becomes — through career advancement, growing responsibilities, or simply a deeper appreciation for your remaining years — the less of it you feel you have. This is the time-poverty trap, and it explains why high-income professionals often feel more rushed than anyone else.</p>

<p>Human time is rapidly becoming the most precious commodity in the economy. AI orchestration is the market’s logical response.</p>

<h2 id="the-psychology-of-letting-go">The Psychology of Letting Go</h2>

<p>Of course, ceding control is not easy. The resistance is deeply psychological, and it goes beyond mere preference. Ross Blankenship, writing in <a href="https://www.psychologytoday.com/us/blog/leading-fast-and-slow/202501/delegation-and-grief-the-emotional-cost-of-learning-to-lead"><em>Psychology Today</em></a>, describes delegation as “a shift in identity and mindset, often accompanied by a raft of discomfort.” We are wired with loss aversion, causing us to fixate on what might go wrong if we hand over the reins. This anxiety is compounded by what one writer calls the <a href="https://thenegativepsychologist.com/control-paradox/">“control paradox”</a>: the obsessive pursuit of control is ultimately self-defeating. The more tightly you grip your to-do list, the more it owns you.</p>

<p>The breakthrough for me was realizing the AI didn’t need to be perfect. It just needed to be persistent. I was outsourcing not just the <em>doing</em> of the tasks, but the immense cognitive burden of <em>thinking about</em> them — the constant, low-grade anxiety of an unmanaged backlog. David Allen’s “Getting Things Done” methodology was revolutionary for its core insight: get everything out of your head. AI completes this vision. It doesn’t just capture your tasks; it triages, schedules, and assigns them back to you in digestible pieces. The paradigm has shifted from self-management to AI-management.</p>

<h2 id="the-agentic-shift">The Agentic Shift</h2>

<p>My personal experiment mirrors a much larger trend reshaping the technology industry. Andrej Karpathy recently evolved his popular concept of “vibe coding” into a new paradigm he calls <a href="https://www.businessinsider.com/agentic-engineering-andrej-karpathy-vibe-coding-2026-2"><strong>“agentic engineering”</strong></a>. In this model, the developer’s role shifts from writing code to directing autonomous AI agents that manage the workflow. As Karpathy told <em>Business Insider</em>, he favors the term because “there is an art and science and expertise to it.”</p>

<p>This isn’t just a theory about coding. It’s a new organizational principle. A recent <a href="https://hbr.org/2026/02/to-thrive-in-the-ai-era-companies-need-agent-managers">Harvard Business Review article</a> argues that companies will need a new role — the <strong>“Agent Manager”</strong> — whose primary function is to “translate functional expertise into measurable AI performance.” The parallels to my experiment are direct. I became the agent manager of my own life. I translated my personal expertise — knowing which tasks mattered, which could wait, which needed specific conditions — into a system that an AI could execute against. The AI became the project manager; I became the skilled resource it deployed.</p>

<h2 id="the-enterprise-parallel-this-is-coming-to-your-company">The Enterprise Parallel: This Is Coming to Your Company</h2>

<p>If this works for a personal chore list, imagine what happens when enterprises apply the same principle to entire departments.</p>

<p>They already are. <a href="https://www.gartner.com/en/newsroom/press-releases/2025-08-26-gartner-predicts-40-percent-of-enterprise-apps-will-feature-task-specific-ai-agents-by-2026-up-from-less-than-5-percent-in-2025">Gartner predicts</a> that by the end of 2026, <strong>40% of enterprise applications will feature task-specific AI agents</strong>, up from less than 5% in 2025. <a href="https://www.mckinsey.com/capabilities/people-and-organizational-performance/our-insights/the-agentic-organization-contours-of-the-next-paradigm-for-the-ai-era">McKinsey has dubbed this</a> the rise of the <strong>“agentic organization,”</strong> calling it “the largest organizational paradigm shift since the industrial revolution.” In their model, humans and AI agents work side by side, creating value at near-zero marginal cost. McKinsey cites a global bank that created an “agent factory” to manage its know-your-customer compliance processes, achieving a <strong>50% reduction in time and effort</strong>.</p>

<table>
  <thead>
    <tr>
      <th>Dimension</th>
      <th>Traditional Model</th>
      <th>Inverted (IoC) Model</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td><strong>Personal Life</strong></td>
      <td>Human reviews to-do list, decides what to do, does it</td>
      <td>AI holds the master list, assigns tasks to human on a schedule</td>
    </tr>
    <tr>
      <td><strong>Software Dev</strong></td>
      <td>Developer writes code, calls libraries as needed</td>
      <td>Framework orchestrates execution, calls developer’s code when needed</td>
    </tr>
    <tr>
      <td><strong>Enterprise Ops</strong></td>
      <td>Manager assigns tasks to team, tracks progress</td>
      <td>AI agent triages work, assigns to employees, manages workflow</td>
    </tr>
    <tr>
      <td><strong>Who Holds the Plan</strong></td>
      <td>The human</td>
      <td>The AI</td>
    </tr>
    <tr>
      <td><strong>Who Executes</strong></td>
      <td>The human (with AI help)</td>
      <td>The human (directed by AI)</td>
    </tr>
  </tbody>
</table>

<p><a href="https://www.moveworks.com/us/en/resources/blog/improve-workflow-efficiency-with-ai-agent-orchestration">Moveworks offers</a> a vivid analogy: “Think of your enterprise AI agents as talented musicians. Each is a virtuoso in their respective specialty. But without a conductor, you don’t get beautiful music — you get noise.” The conductor is the AI orchestration layer. The musicians are us.</p>

<h2 id="eyes-wide-open">Eyes Wide Open</h2>

<p>I want to be honest about the risks, because they are real. The <a href="https://www.adalovelaceinstitute.org/report/dilemmas-of-delegation/">Ada Lovelace Institute</a> warns that widespread AI delegation could trigger a “far broader and faster wave of cognitive deskilling” than any previous technology. A <a href="https://news.harvard.edu/gazette/story/2025/11/is-ai-dulling-our-minds/">study from the MIT Media Lab</a> warns of “cognitive atrophy” from over-reliance on AI-driven solutions. And as researchers at the <a href="https://www.hertie-school.org/en/digital-governance/research/blog/detail/content/the-threat-to-human-autonomy-in-ai-systems-is-a-design-problem">Hertie School</a> note, “algorithms themselves are not neutral” — the design of these systems can subtly manipulate behavior and erode autonomy.</p>

<p>I felt a flicker of this myself. After a few weeks of Manus managing my backlog, I noticed something: I had stopped thinking about it entirely. The low-grade anxiety was gone, replaced by a kind of pleasant blankness. When someone asked me about my weekend plans, I genuinely didn’t know until I checked what had been assigned. Is that freedom? Or is it the beginning of dependency?</p>

<p>I think the answer is: it’s both, and the balance depends entirely on design. If the AI is transparent about its reasoning, if the human retains veto power, if the system is built to augment rather than replace human judgment, then the inversion of control can be genuinely liberating.</p>

<h2 id="the-new-contract">The New Contract</h2>

<p>In software, you give up control to a framework to gain power and structure. You write less boilerplate. You focus on the logic that matters. The framework handles the rest.</p>

<p>In life, the same trade applies. You give up the cognitive burden of managing your endless backlog — the scheduling, the prioritizing, the guilt — and you gain the mental space to focus on what is truly important to you. Your relationships. Your craft. Your health. The things that no AI can do on your behalf.</p>

<p>Human time is becoming the API that AI orchestrates. The framework is calling. The question is whether we’ll answer with intention.</p>]]></content><author><name></name></author><category term="ai" /><category term="agents" /><category term="manus" /><category term="productivity" /><category term="agentic-ai" /><summary type="html"><![CDATA[I delegated the management of my personal life to an AI agent. It worked surprisingly well — and it reveals how human time is becoming the API that AI orchestrates.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://mandar.dev/assets/images/2026-02-13-inversion-of-control-hero.png" /><media:content medium="image" url="https://mandar.dev/assets/images/2026-02-13-inversion-of-control-hero.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">From Chatbot to Employee: Why 2026 is the Year of the AI Agent</title><link href="https://mandar.dev/2026/02/06/from-chatbot-to-employee-year-of-ai-agent/" rel="alternate" type="text/html" title="From Chatbot to Employee: Why 2026 is the Year of the AI Agent" /><published>2026-02-06T00:00:00+00:00</published><updated>2026-02-06T00:00:00+00:00</updated><id>https://mandar.dev/2026/02/06/from-chatbot-to-employee-year-of-ai-agent</id><content type="html" xml:base="https://mandar.dev/2026/02/06/from-chatbot-to-employee-year-of-ai-agent/"><![CDATA[<p><img src="/assets/images/2026-02-06-year-of-ai-agent-hero.png" alt="Illustration of an autonomous AI agent orchestrating multiple digital tasks including code execution, file management, and workflow automation" /></p>

<p>For the past year, the world has been captivated by the power of large language models (LLMs). We’ve watched them write poetry, generate code, and answer questions with uncanny fluency. But for all their impressive capabilities, we’ve mostly interacted with them through a simple chat interface. We prompt, they respond. It’s a powerful paradigm, but it’s also a limited one.</p>

<p>I believe 2026 marks the beginning of a fundamental shift in how we interact with AI — a move away from the passive chatbot and toward the proactive, autonomous <strong>AI agent</strong>. This isn’t just a prediction; it’s a change I’ve started to experience firsthand. And like any significant technological leap, it’s been a mix of the magical, the clunky, and the profoundly promising.</p>

<h2 id="my-clunky-magical-weekend-with-openclaw">My Clunky, Magical Weekend with OpenClaw</h2>

<p>Over the weekend, I decided to dive into the world of self-hosted AI agents by installing OpenClaw, the open-source project that has been generating a massive amount of buzz. Originally known as ClawdBot, this tool has rapidly evolved, capturing the imagination of developers with its promise of an AI that “actually does things.”</p>

<p>My initial experience was, to be blunt, clunky. The Docker setup was mostly broken, and it took some wrangling with a Linux environment to get it running. It’s a far cry from a polished, consumer-ready product. But once it was working, I saw the magic. The ability to simply ask an AI to perform multi-step tasks on my own machine — to read files, execute commands, and interact with the web without me holding its hand at every step — felt like a monumental leap.</p>

<p>What struck me most was the sheer potential. Here was a tool, built by a small community and only a few weeks old, that could orchestrate complex workflows on my behalf. It’s shocking, in a way, that we haven’t seen a similar experience from the tech giants who already have so much of our data and context. Why can’t my Google Assistant, with its deep knowledge of my calendar, emails, and habits, do what OpenClaw is attempting?</p>

<p>The answer, I think, lies in the open-source community that has sprung up around OpenClaw. In just a few weeks, a <a href="https://github.com/VoltAgent/awesome-openclaw-skills">repository of over 1,700 community-built “skills”</a> has emerged, offering automations for everything from managing a smart home with IoT devices to controlling Winamp. There are skills for fetching and summarizing tech news, triaging GitHub issues, and even programmatically creating videos with Remotion. This explosion of creativity is a testament to the power of open, extensible systems.</p>

<h2 id="from-chatbots-to-agents-a-necessary-evolution">From Chatbots to Agents: A Necessary Evolution</h2>

<p>My experience with OpenClaw crystallized a feeling that has been growing for a while: the chatbot is not the endgame. As Mitchell Hashimoto, the creator of Vagrant and Terraform, recently wrote in his excellent post, <a href="https://mitchellh.com/writing/my-ai-adoption-journey">“My AI Adoption Journey,”</a> to find real value in AI for complex work, you must move beyond the chat interface and embrace the agent.</p>

<p>Hashimoto’s journey from AI skeptic to daily agent user is a must-read for any developer. He outlines a six-step process that involves dropping the chatbot for meaningful work, reproducing your own work with agents to build expertise, and eventually outsourcing high-confidence tasks to agents running in the background. His core insight is that agents, unlike chatbots, can interact with the world. They can read files, execute programs, and make HTTP requests — the fundamental building blocks of any real-world task.</p>
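<p>Stripped to its skeleton, that difference is a loop. The sketch below is illustrative only — the tools and the scripted "model" are stand-ins I invented, not any real agent's API — but it shows the shape: pick a tool, observe the result, repeat until done:</p>

```javascript
// Minimal agent-loop sketch (hypothetical — no real product's API).
// A chatbot returns text once; an agent repeatedly acts on the world.
const tools = {
  readFile: (path) => `contents of ${path}`,   // stand-in for fs.readFileSync
  httpGet:  (url)  => `response from ${url}`,  // stand-in for fetch
};

// Scripted planner standing in for an LLM call: decide the next step
// from the observations gathered so far.
function plan(observations) {
  if (observations.length === 0) return { tool: 'readFile', arg: 'notes.txt' };
  if (observations.length === 1) return { tool: 'httpGet', arg: 'https://example.com' };
  return { done: true, answer: observations.join(' | ') };
}

function runAgent() {
  const observations = [];
  for (let step = 0; step < 10; step++) {      // cap iterations to avoid runaway loops
    const action = plan(observations);
    if (action.done) return action.answer;
    observations.push(tools[action.tool](action.arg));
  }
  return 'step limit reached';
}

console.log(runAgent()); // → contents of notes.txt | response from https://example.com
```

<p>Swap the scripted planner for an LLM and the stubs for real file, shell, and HTTP tools, and you have the core of every agent framework currently shipping.</p>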

<p>This is the key difference. A chatbot is a conversational partner; an <strong>AI agent</strong> is a digital employee. It can work autonomously, in parallel, and even while you sleep.</p>

<h2 id="the-polished-future-my-experience-with-manus">The Polished Future: My Experience with Manus</h2>

<p>While OpenClaw represents the raw, community-driven frontier of agentic AI, platforms like Manus show us what a more polished, integrated experience can look like. My “aha” moment with Manus came when I discovered its ability to schedule tasks using natural language. I could simply tell it, “run this research task every Friday at 8 AM,” and it would set up the equivalent of a cron job. This seemingly simple feature is a game-changer. It transforms the AI from a one-off tool into a persistent, reliable assistant.</p>

<p>Of course, no tool is perfect. While the scheduling is powerful, I’m eager to see more robust error handling and dependency management for these scheduled tasks in future updates. But this is exactly the point: we are now discussing feature requests for autonomous agents, not just prompt techniques for chatbots.</p>

<p>This is the future that Goldman Sachs CIO Marco Argenti <a href="https://www.goldmansachs.com/insights/articles/what-to-expect-from-ai-in-2026-personal-agents-mega-alliances">predicted</a> when he said that 2026 would be an even bigger year for change than 2025. He argues that AI models are becoming the new operating systems, and that we are moving toward an “agent-as-a-service” economy where companies deploy “human-orchestrated fleets of specialized multi-agent teams.”</p>

<h2 id="the-questions-we-should-be-asking">The Questions We Should Be Asking</h2>

<p>We are at the very beginning of this agentic shift. The tools are still early, the workflows are still being defined, and the security implications are still being understood. But the trajectory is clear. The conversation is moving from what AI can <em>say</em> to what AI can <em>do</em>.</p>

<p>Whether it’s the grassroots energy of OpenClaw or the enterprise-ready power of Manus, the message is the same: the era of the passive chatbot is ending. The year of the proactive, autonomous AI agent has begun. And for developers and tech-savvy professionals, the opportunity to build, automate, and create with these new tools is immense.</p>

<p>But as we build, we need to ask the right questions:</p>

<ul>
  <li>What’s the one tedious task you’d outsource to an AI agent tomorrow?</li>
  <li>Are we prepared for the security and privacy challenges of agents with persistent memory and full system access?</li>
  <li>Which will win out: the polished, walled-garden agents from big tech, or the chaotic, open-source bazaar of tools like OpenClaw?</li>
</ul>

<p>It’s going to be a wild ride. I, for one, can’t wait to see what we build.</p>]]></content><author><name></name></author><category term="ai" /><category term="agents" /><category term="openclaw" /><category term="manus" /><category term="agentic-ai" /><summary type="html"><![CDATA[A personal exploration of the shift from passive chatbots to autonomous AI agents, with hands-on experiences using OpenClaw and Manus, and why 2026 marks the beginning of the agentic AI era.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://mandar.dev/assets/images/2026-02-06-year-of-ai-agent-hero.png" /><media:content medium="image" url="https://mandar.dev/assets/images/2026-02-06-year-of-ai-agent-hero.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">command line graphing: gnuplot</title><link href="https://mandar.dev/2013/01/22/command-line-graphing-gnuplot/" rel="alternate" type="text/html" title="command line graphing: gnuplot" /><published>2013-01-22T07:23:37+00:00</published><updated>2013-01-22T07:23:37+00:00</updated><id>https://mandar.dev/2013/01/22/command-line-graphing-gnuplot</id><content type="html" xml:base="https://mandar.dev/2013/01/22/command-line-graphing-gnuplot/"><![CDATA[<p><img src="https://64.media.tumblr.com/cca31e1a4430db99a48f488a70077d8a/tumblr_inline_mh0jkxILV41rn6683.jpg" alt="" /></p>

<p><strong>Who needs charts on the command line?</strong></p>

<p>If you have ever been stuck at an ssh prompt staring at a bunch of data, gnuplot is a must-have tool. I was looking at response times over an ssh session and realized how much I missed MS Excel. All this data would make much more sense as a chart! Apparently others agree, and that is exactly why gnuplot exists.</p>

<p><strong>Enter gnuplot</strong></p>

<p><a href="http://en.wikipedia.org/wiki/Gnuplot">gnuplot</a> is a command line tool that generates complex plots from data. It has a lot of options and can render plots as PNG, SVG, and other formats. But most importantly, it can plot your data as an ASCII chart :).</p>
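<p>For instance, a short gnuplot script can render a data file to a PNG instead of the terminal. A minimal sketch, assuming a simple one-column file named "data" (the filenames and labels are illustrative):</p>

```gnuplot
set terminal png size 800,600          # render to a PNG instead of ASCII
set output "response-times.png"
set xlabel "sample"
set ylabel "response time (ms)"
plot "data" with lines title "response time"
```

<p>Swap <code>png</code> for <code>svg</code> (and the output filename accordingly) to get a vector version instead.</p>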

<p><strong>Bonus points</strong></p>

<p>I use gnuplot to check my nodejs logs:</p>

<blockquote>
  <p>tail --lines=1000 ~/.forever/*.log | grep 'Time: ' | sed 's/.* Time: l: //' &gt; data<br />
gnuplot&gt; set terminal dumb<br />
gnuplot&gt; plot "data"</p>
</blockquote>

<p>This is a static plot but you can even plot animated graphs like <a href="http://filipivianna.blogspot.in/2011/11/more-trickery-with-gnuplot-dumb.html">this</a>: </p>

<p><img src="http://1.bp.blogspot.com/-6m29LnS1AR0/Ts0tJUar0EI/AAAAAAAAAwo/9lGnM2wA8sk/s1600/screenshot.gif" alt="Animated memory usage" /></p>]]></content><author><name></name></author><category term="tips" /><category term="bash" /><summary type="html"><![CDATA[]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://64.media.tumblr.com/cca31e1a4430db99a48f488a70077d8a/tumblr_inline_mh0jkxILV41rn6683.jpg" /><media:content medium="image" url="https://64.media.tumblr.com/cca31e1a4430db99a48f488a70077d8a/tumblr_inline_mh0jkxILV41rn6683.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Cloud9: IDE in the cloud</title><link href="https://mandar.dev/2013/01/09/cloud9-ide-in-the-cloud/" rel="alternate" type="text/html" title="Cloud9: IDE in the cloud" /><published>2013-01-09T12:00:00+00:00</published><updated>2013-01-09T12:00:00+00:00</updated><id>https://mandar.dev/2013/01/09/cloud9-ide-in-the-cloud</id><content type="html" xml:base="https://mandar.dev/2013/01/09/cloud9-ide-in-the-cloud/"><![CDATA[<p><a href="http://c9.io/">Cloud9</a> is a cloud-based IDE that works completely within the browser. It even provides a full Linux terminal in the browser. It is ideal for trying out an open-source language or framework quickly without doing a full local setup.</p>

<p>Currently it supports nodejs, PHP, Python, and Ruby out of the box, and there are plans to support .NET in the future. It allows one-click deployment to cloud platforms like Azure, CloudFoundry, and Heroku. Below is a screenshot of my attempt at a simple nodejs program :).</p>

<p><img src="https://64.media.tumblr.com/d7228f17b3688772c57936b516d6ecaf/tumblr_inline_mgcuinlXNZ1rn6683.gif" alt="Cloud9 screenshot" /></p>]]></content><author><name></name></author><summary type="html"><![CDATA[Cloud9 is a cloud based IDE that works completely within the browser. It even provides a full linux terminal within the browser. It is ideal to try out some open source language/framework quickly without doing the full setup.]]></summary></entry><entry><title type="html">@benguild: Leaked screenshots of native Google Maps (Alpha) for iOS 6.</title><link href="https://mandar.dev/2012/10/15/benguild-leaked-screenshots-of-native-google/" rel="alternate" type="text/html" title="@benguild: Leaked screenshots of native Google Maps (Alpha) for iOS 6." /><published>2012-10-15T04:03:45+00:00</published><updated>2012-10-15T04:03:45+00:00</updated><id>https://mandar.dev/2012/10/15/benguild-leaked-screenshots-of-native-google</id><content type="html" xml:base="https://mandar.dev/2012/10/15/benguild-leaked-screenshots-of-native-google/"><![CDATA[<p><a href="http://benguild.com/post/33553036078/leaked-screenshots-of-native-google-maps-alpha-for">@benguild: Leaked screenshots of native Google Maps (Alpha) for iOS 6.</a></p>

<p><a href="http://benguild.com/post/33553036078/leaked-screenshots-of-native-google-maps-alpha-for">benguild</a>:</p>

<blockquote>
  <p>Oh boy! <strong>Here we go.</strong></p>

  <ul>
    <li>It’s vector-based.</li>
    <li>It’s got two-finger rotation to any angle.</li>
    <li>It’s super fast.</li>
    <li>4-inch height of the iPhone 5 is supported!</li>
  </ul>

  <p>Here are some blurry photos. Ever since iOS 6 came out, we lost <a href="http://maps.google.com">Google Maps</a>. However, Google has been rumored to be hard at work on their own…</p>
</blockquote>]]></content><author><name></name></author><category term="google maps" /><category term="iphone5" /><category term="ios6" /><summary type="html"><![CDATA[@benguild: Leaked screenshots of native Google Maps (Alpha) for iOS 6.]]></summary></entry></feed>