<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/" xmlns:dc="http://purl.org/dc/elements/1.1/"><channel><title>GPU Optimization | Awesome Agents</title><link>https://awesomeagents.ai/tags/gpu-optimization/</link><description>Your guide to AI models, agents, and the future of intelligence. Reviews, leaderboards, news, and tools - all in one place.</description><language>en-us</language><managingEditor>contact@awesomeagents.ai (Awesome Agents)</managingEditor><lastBuildDate>Mon, 06 Apr 2026 16:46:45 +0200</lastBuildDate><atom:link href="https://awesomeagents.ai/tags/gpu-optimization/index.xml" rel="self" type="application/rss+xml"/><image><url>https://awesomeagents.ai/images/logo.png</url><title>Awesome Agents</title><link>https://awesomeagents.ai/</link></image><item><title>AutoKernel - AI Agents That Write Faster GPU Kernels</title><link>https://awesomeagents.ai/news/autokernel-open-source-gpu-kernel-agent/</link><pubDate>Mon, 06 Apr 2026 16:46:45 +0200</pubDate><guid>https://awesomeagents.ai/news/autokernel-open-source-gpu-kernel-agent/</guid><description><![CDATA[<div class="podcast-embed">
<iframe style="border-radius:12px" src="https://open.spotify.com/embed/episode/5uoCCHceOu15oDClUUwdiM?utm_source=generator&theme=0" width="100%" height="152" frameBorder="0" allowfullscreen="" allow="autoplay; clipboard-write; encrypted-media; fullscreen; picture-in-picture" loading="lazy"></iframe>
</div>
<p>You point it at a PyTorch model, tell it to run, and go to sleep. By morning it hands you a set of Triton kernels tuned specifically for your hardware. That is the pitch behind <a href="https://github.com/RightNow-AI/autokernel">AutoKernel</a>, a framework released today by RightNow AI that applies an autonomous LLM agent loop to GPU kernel optimization - no CUDA expertise required.</p>]]></description><content:encoded xmlns:content="http://purl.org/rss/1.0/modules/content/"><![CDATA[<div class="podcast-embed">
<iframe style="border-radius:12px" src="https://open.spotify.com/embed/episode/5uoCCHceOu15oDClUUwdiM?utm_source=generator&theme=0" width="100%" height="152" frameBorder="0" allowfullscreen="" allow="autoplay; clipboard-write; encrypted-media; fullscreen; picture-in-picture" loading="lazy"></iframe>
</div>
<p>You point it at a PyTorch model, tell it to run, and go to sleep. By morning it hands you a set of Triton kernels tuned specifically for your hardware. That is the pitch behind <a href="https://github.com/RightNow-AI/autokernel">AutoKernel</a>, a framework released today by RightNow AI that applies an autonomous LLM agent loop to GPU kernel optimization - no CUDA expertise required.</p>
<p>The project ships with an <a href="https://arxiv.org/html/2603.21331v1">arXiv paper</a> from authors Jaber Jaber and Osama Jaber, and picked up roughly 1,000 GitHub stars within hours of the release post going up on Hacker News.</p>
<div class="news-tldr">
<p><strong>TL;DR</strong></p>
<ul>
<li>AutoKernel runs an iterative edit-benchmark-revert agent loop, logging ~300-400 experiments per kernel over an overnight run</li>
<li>Beats <code>torch.compile</code> on 12 of 16 tested configurations; reaches 5.29x over PyTorch eager on RMSNorm</li>
<li>Supports 9 kernel types (matmul, softmax, layernorm, RMSNorm, Flash Attention, SwiGLU, cross-entropy, RoPE, parallel reduction) via Triton and CUDA C++</li>
<li>MIT licensed, runs on NVIDIA H100/A100/L40S and AMD MI300X/MI350X; also tested on RTX 4090</li>
<li>Lags clearly behind cuBLAS on compute-bound matmul - the authors are transparent about this</li>
</ul>
</div>
<h2 id="how-the-agent-loop-works">How the Agent Loop Works</h2>
<p>Unlike one-shot kernel generators, AutoKernel runs a closed feedback loop with three phases.</p>
<h3 id="phase-a-profiling">Phase A: Profiling</h3>
<p>The system uses <code>torch.profiler</code> with shape recording to capture per-kernel GPU time across the full forward pass. It detects the target GPU automatically and ranks each kernel by runtime contribution using Amdahl's law - the bottleneck kernels get the agent's attention first.</p>
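<p>The prioritization logic can be sketched in plain Python. <code>rank_kernels</code> and <code>amdahl_bound</code> are hypothetical helper names, and in the real system the timing data would come from a <code>torch.profiler</code> trace rather than a dict:</p>

```python
def rank_kernels(kernel_times_ms):
    """Rank kernels by their share of total GPU time.

    Hypothetical helper: the real data comes from a torch.profiler
    trace over the full forward pass.
    """
    total = sum(kernel_times_ms.values())
    ranked = sorted(kernel_times_ms.items(), key=lambda kv: kv[1], reverse=True)
    return [(name, t / total) for name, t in ranked]


def amdahl_bound(share, speedup):
    """Amdahl's law: overall speedup from tuning one kernel is capped
    by that kernel's share of total runtime."""
    return 1.0 / ((1.0 - share) + share / speedup)
```

<p>This is why the bottleneck kernels get the agent's attention first: a 2x win on a kernel that is 60% of runtime bounds the whole model at about 1.43x, while the same win on a 10% kernel barely moves the needle.</p>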
<h3 id="phase-b-the-edit-loop">Phase B: The Edit Loop</h3>
<p>This is the core. The agent edits a single file - <code>kernel.py</code> - and a fixed benchmark harness runs five correctness checks on each change before measuring performance. If the new kernel is both correct and faster, the change is kept; otherwise it reverts. Each iteration takes roughly 90 seconds.</p>
<p>At that rate you get about 40 experiments per hour, or 300-400 over an overnight run. The agent keeps working on a kernel until one of four conditions triggers a move-on: five consecutive reverts, 90% peak GPU utilization, two hours elapsed, or a 2x speedup reached.</p>
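<p>A minimal sketch of that control flow, assuming hypothetical <code>propose_edit</code> and <code>benchmark</code> callables in place of the LLM agent and the harness (the GPU-utilization stop condition is omitted, since it needs live hardware counters):</p>

```python
import time

# Move-on thresholds from the article; the 90%-peak-utilization
# condition is left out of this sketch.
MAX_CONSECUTIVE_REVERTS = 5
TARGET_SPEEDUP = 2.0
BUDGET_SECONDS = 2 * 3600


def optimize_kernel(propose_edit, benchmark, baseline_ms):
    """Edit-benchmark-revert loop: keep an edit only if it is
    correct and faster, otherwise revert to the last good kernel."""
    best_ms = baseline_ms
    reverts = 0
    start = time.monotonic()
    while True:
        candidate = propose_edit()
        new_ms, correct = benchmark(candidate)  # correctness checks, then timing
        if correct and new_ms < best_ms:
            best_ms = new_ms      # keep the edit
            reverts = 0
        else:
            reverts += 1          # revert
        if (reverts >= MAX_CONSECUTIVE_REVERTS
                or baseline_ms / best_ms >= TARGET_SPEEDUP
                or time.monotonic() - start >= BUDGET_SECONDS):
            return best_ms
```

<p>The harness being fixed while only <code>kernel.py</code> changes is what makes each accept/revert decision trustworthy.</p>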
<p>The entire strategy the agent draws from is encoded in <code>program.md</code>, a 909-line document that RightNow calls the &quot;research org code.&quot; It's basically a ranked playbook of GPU optimization techniques across six tiers:</p>
<ol>
<li><strong>Block size tuning</strong> (10-50% gains) - power-of-2 tile dimensions, warp counts, pipeline stages</li>
<li><strong>Memory access</strong> (10-30%) - coalesced loads, software prefetching, L2 swizzling, shared memory padding</li>
<li><strong>Compute</strong> (5-15%) - TF32 accumulation, epilogue fusion, loop invariant hoisting</li>
<li><strong>Advanced</strong> (5-20%) - split-K, persistent kernels, Triton autotune, warp specialization</li>
<li><strong>Architecture-specific</strong> (5-15%) - TMA on Hopper, cp.async on Ampere</li>
<li><strong>Kernel-specific</strong> - online softmax for attention, Welford's algorithm for normalization</li>
</ol>
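<p>One entry from tier 6 is concrete enough to show directly: Welford's algorithm computes mean and variance in a single pass over a row, which is what lets a normalization kernel avoid a second read of its input. A plain-Python sketch of the update rule (not AutoKernel's code):</p>

```python
def welford(xs):
    """One-pass mean and population variance via Welford's update.
    A LayerNorm/RMSNorm kernel applies the same recurrence per row,
    trading a second memory sweep for a few extra FLOPs."""
    mean, m2, n = 0.0, 0.0, 0
    for x in xs:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
    var = m2 / n if n else 0.0
    return mean, var
```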
<p>The agent reads from this document, applies a change, sees the benchmark result, and adapts. Architecturally it isn't doing anything exotic - the playbook's encoding of real expert knowledge is what makes the loop work better than naive sampling.</p>
<h3 id="phase-c-verification">Phase C: Verification</h3>
<p>After optimization completes, AutoKernel runs end-to-end correctness and speedup validation against the original model. Every experiment is logged to a plain <code>results.tsv</code> capturing experiment number, throughput (TFLOPS or GB/s), speedup, correctness status, and VRAM usage.</p>
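<p>A few lines of standard-library Python are enough to pull the best run out of that log. The column headers here are assumptions - the article lists which fields are captured but not their exact names:</p>

```python
import csv
import io

# Hypothetical results.tsv excerpt; real column names may differ.
SAMPLE = (
    "experiment\tthroughput\tspeedup\tcorrect\tvram_gb\n"
    "1\t812 GB/s\t1.42x\tpass\t3.1\n"
    "2\t1104 GB/s\t1.93x\tpass\t3.1\n"
)


def best_run(tsv_text):
    """Return the row with the highest speedup among runs that
    passed the correctness checks."""
    rows = csv.DictReader(io.StringIO(tsv_text), delimiter="\t")
    passing = [r for r in rows if r["correct"] == "pass"]
    return max(passing, key=lambda r: float(r["speedup"].rstrip("x")))
```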
<p><img src="/images/news/autokernel-open-source-gpu-kernel-agent-hardware.jpg" alt="NVIDIA H100 GPU used for AutoKernel benchmark runs">
<em>The AutoKernel benchmark suite was run on an NVIDIA H100, though the framework supports A100, L40S, and AMD MI300X/MI350X targets as well.</em>
<small>Source: commons.wikimedia.org</small></p>
<h2 id="getting-started">Getting Started</h2>
<p>Running AutoKernel against a model is a few lines:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">git clone https://github.com/RightNow-AI/autokernel
</span></span><span class="line"><span class="cl"><span class="nb">cd</span> autokernel
</span></span><span class="line"><span class="cl">pip install -r requirements.txt
</span></span></code></pre></div><div class="highlight"><pre tabindex="0" class="chroma"><code class="language-python" data-lang="python"><span class="line"><span class="cl"><span class="kn">from</span> <span class="nn">autokernel</span> <span class="kn">import</span> <span class="n">optimize</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl"><span class="c1"># Point at any PyTorch model</span>
</span></span><span class="line"><span class="cl"><span class="n">optimize</span><span class="p">(</span>
</span></span><span class="line"><span class="cl">    <span class="n">model</span><span class="o">=</span><span class="n">your_model</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">    <span class="n">sample_inputs</span><span class="o">=</span><span class="n">sample_inputs</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">    <span class="n">backend</span><span class="o">=</span><span class="s2">&#34;triton&#34;</span><span class="p">,</span>       <span class="c1"># or &#34;cuda&#34;</span>
</span></span><span class="line"><span class="cl">    <span class="n">target_gpu</span><span class="o">=</span><span class="s2">&#34;H100&#34;</span><span class="p">,</span>      <span class="c1"># auto-detected if omitted</span>
</span></span><span class="line"><span class="cl">    <span class="n">budget_hours</span><span class="o">=</span><span class="mi">8</span><span class="p">,</span>         <span class="c1"># overnight run</span>
</span></span><span class="line"><span class="cl"><span class="p">)</span>
</span></span></code></pre></div><p>The framework handles profiling, kernel extraction, the agent loop, and validation. Outputs land in an <code>optimized_kernels/</code> directory with drop-in replacements for the original PyTorch ops.</p>
<h2 id="benchmark-results">Benchmark Results</h2>
<p>Tested on an H100 against PyTorch 2.x eager mode and <code>torch.compile</code> with max-autotune:</p>
<table>
  <thead>
      <tr>
          <th>Kernel</th>
          <th>Size</th>
          <th>vs PyTorch Eager</th>
          <th>vs torch.compile</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td>RMSNorm</td>
          <td>8192 x 8192</td>
          <td>5.29x</td>
          <td>2.83x</td>
      </tr>
      <tr>
          <td>Softmax</td>
          <td>8192 x 8192</td>
          <td>2.82x</td>
          <td>3.44x</td>
      </tr>
      <tr>
          <td>Cross-Entropy</td>
          <td>8192 x 32k vocab</td>
          <td>2.21x</td>
          <td>2.94x</td>
      </tr>
      <tr>
          <td>LayerNorm</td>
          <td>8192 x 4096</td>
          <td>1.25x</td>
          <td>3.21x</td>
      </tr>
  </tbody>
</table>
<p>AutoKernel beats <code>torch.compile</code> on 12 of 16 configurations tested. Memory-bound operations - normalization, reduction, loss kernels - see the biggest gains because AutoKernel's loop can tune memory access patterns more aggressively than the one-shot compiler heuristics. The framework also claimed first place on a community vector sum reduction benchmark on B200, hitting 44.086 microseconds.</p>
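<p>The softmax line above hints at why memory-bound ops respond well: tier 6's online softmax folds the row-max pass and the normalizing-sum pass into one sweep, roughly halving reads. A plain-Python sketch of the recurrence (not AutoKernel's code):</p>

```python
import math


def online_softmax(xs):
    """Single-pass softmax: track the running max m and rescale the
    running sum d whenever a new max appears, so no separate max
    pass over the row is needed."""
    m, d = float("-inf"), 0.0
    for x in xs:
        new_m = max(m, x)
        d = d * math.exp(m - new_m) + math.exp(x - new_m)
        m = new_m
    return [math.exp(x - m) / d for x in xs]
```

<p>The same recurrence underlies Flash Attention's tiling, which is why it shows up in the kernel-specific tier.</p>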
<p><img src="/images/news/autokernel-open-source-gpu-kernel-agent-benchmark.jpg" alt="AutoKernel benchmark progress chart showing per-kernel speedup over experiment iterations">
<em>AutoKernel's progress.png from the GitHub README shows how speedup builds across the agent's iterative experiment loop.</em>
<small>Source: github.com/RightNow-AI/autokernel</small></p>
<h2 id="hardware-and-compatibility">Hardware and Compatibility</h2>
<table>
  <thead>
      <tr>
          <th>Requirement</th>
          <th>Detail</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td>Python</td>
          <td>3.9+</td>
      </tr>
      <tr>
          <td>PyTorch</td>
          <td>2.0+</td>
      </tr>
      <tr>
          <td>Triton backend</td>
          <td>NVIDIA H100, A100, L40S, B200; AMD MI300X, MI350X; RTX 4090</td>
      </tr>
      <tr>
          <td>CUDA backend</td>
          <td>Any CUDA 11.8+ GPU</td>
      </tr>
      <tr>
          <td>LLM API</td>
          <td>OpenAI, Anthropic, or local model with API compatibility</td>
      </tr>
      <tr>
          <td>Minimum VRAM</td>
          <td>16 GB (40 GB+ recommended for large model runs)</td>
      </tr>
      <tr>
          <td>License</td>
          <td>MIT (code), CC BY 4.0 (paper)</td>
      </tr>
  </tbody>
</table>
<div class="pull-quote">
<p>The benchmark harness is fixed. Only the kernel changes. That constraint is what makes the loop trustworthy - the agent can't cheat its own eval.</p>
</div>
<h2 id="how-it-compares">How It Compares</h2>
<p>This isn't the first system to apply AI-driven iteration to GPU code. Google's AlphaEvolve hit 23% acceleration on specific kernels and reportedly cut 1% off Gemini training time. The difference is that AlphaEvolve is closed, internal to Google, and not configurable for arbitrary models. AutoKernel is MIT-licensed and runs against whatever you point it at.</p>
<p>Meta's <a href="/news/meta-kernelevolve-agentic-kernel-optimization/">KernelEvolve</a> took a similar agentic approach but was built for Meta's internal production kernels and isn't publicly available. The <a href="/news/autoagent-self-optimizing-harness/">AutoAgent framework from MIT</a> applied self-optimizing loops to agent orchestration; AutoKernel brings the same concept down to the kernel level.</p>
<p>Andrej Karpathy's autoresearch concept - autonomous loops running overnight experiments - is clearly an intellectual ancestor here. RightNow AI is applying that pattern to a domain where iteration has historically required years of human expertise.</p>
<h2 id="where-it-falls-short">Where It Falls Short</h2>
<p>The honest assessment is that matmul performance is a weak spot. AutoKernel's Triton starter reaches 278 TFLOPS on H100 against cuBLAS at 989.5 TFLOPS - roughly 28% of peak. On compute-bound workloads where cuBLAS and CUTLASS are dominant, the framework currently doesn't come close.</p>
<p>Hacker News commenters pushed back on this. User <code>aviinuo</code> noted that for 4kx4kx4k FP16 matmul, &quot;cutlass is like 3x faster than this.&quot; User <code>ademeure</code> flagged inconsistency in the matmul benchmark claim, pointing out that a reported 18.9% peak use on H100 doesn't square with the claimed cuBLAS comparison numbers.</p>
<p>The framework also has hard scope limits: single-GPU only, no support for distributed kernels or multi-device memory management, and code generation limits mean the agent can't yet handle complex techniques like software pipelining or custom PTX emission.</p>
<p>At 40 experiments per hour, a difficult kernel may need multiple overnight runs to converge. RightNow AI is transparent about all of this in the paper.</p>
<p><img src="/images/news/autokernel-open-source-gpu-kernel-agent-servers.jpg" alt="GPU server rack representing the compute infrastructure needed for kernel optimization at scale">
<em>AutoKernel is designed for single-GPU optimization; multi-GPU distributed kernel support is on the roadmap.</em>
<small>Source: commons.wikimedia.org</small></p>
<h2 id="what-to-watch">What To Watch</h2>
<p>RightNow AI is an NVIDIA Inception Program member, and AutoKernel is closely integrated with its commercial AI code editor for CUDA/Triton development (free tier, $20/month Pro). The open-source release follows a pattern of using community visibility to drive paid-product adoption - the framework shows the technology, the editor is where you use it day-to-day.</p>
<p>The interesting question is whether the <code>program.md</code> playbook gets contributed to by the community over time. The six-tier optimization hierarchy is the intellectual core of the system. Opening it up to community improvements could make the agent meaningfully more capable without requiring changes to the loop architecture.</p>
<p>For teams running large training runs who already know their bottleneck kernels, this is worth testing today. For everyone else, the &quot;go to sleep&quot; pitch is real - the system runs unattended and gives you something concrete in the morning, even if it won't compete with cuBLAS on every workload.</p>
<hr>
<p><strong>Sources:</strong></p>
<ul>
<li><a href="https://github.com/RightNow-AI/autokernel">AutoKernel GitHub repository</a></li>
<li><a href="https://arxiv.org/html/2603.21331v1">arXiv paper 2603.21331</a></li>
<li><a href="https://www.marktechpost.com/2026/04/06/rightnow-ai-releases-autokernel-an-open-source-framework-that-applies-an-autonomous-agent-loop-to-gpu-kernel-optimization-for-arbitrary-pytorch-models/">MarkTechPost coverage</a></li>
<li><a href="https://forums.developer.nvidia.com/t/autokernel-autoresearch-for-kernel-optimization/363215">NVIDIA Developer Forums thread</a></li>
</ul>
]]></content:encoded><dc:creator>Sophie Zhang</dc:creator><category>News</category><media:content url="https://awesomeagents.ai/images/news/autokernel-open-source-gpu-kernel-agent_hu_8bb2b35a132603d8.jpg" medium="image" width="1200" height="675"/><media:thumbnail url="https://awesomeagents.ai/images/news/autokernel-open-source-gpu-kernel-agent_hu_8bb2b35a132603d8.jpg" width="1200" height="675"/></item></channel></rss>