Linux Kernel Finally Sets Rules for AI-Assisted Code

Linux 7.0 ships an official AI code policy: disclose AI tool usage with an Assisted-by tag, keep humans on the hook for every line, and stop submitting slop.

Linux 7.0 shipped on April 13, 2026 with a new document that the kernel community had been arguing over for more than a year: Documentation/process/coding-assistants.rst. It's the kernel's first formal policy on AI-assisted code submissions, and it doesn't ban AI tools. It makes humans pay for using them.

Key Rules at a Glance

| Rule | Detail |
| --- | --- |
| Disclosure | "Assisted-by" tag recommended when AI tools are used |
| Legal liability | Human submitter holds full DCO responsibility for all lines |
| AI agents | Cannot add Signed-off-by tags - only humans can |
| License | All AI-produced code must be GPL-2.0-only compatible |
| Quality bar | Low-quality AI patches ("AI slop") explicitly unwelcome |

The policy was driven by Sasha Levin, a Distinguished Engineer at NVIDIA and one of the co-maintainers of the stable and LTS kernel trees. His proposal, first submitted in December 2025, is described as "based on the consensus reached at the 2025 Maintainers Summit" - a gathering where heated disagreements finally produced a workable middle ground.

The Policy

The Assisted-by Tag

When AI tools help write a patch, contributors should include an Assisted-by tag in the commit message. The format is:

Assisted-by: AGENT_NAME:MODEL_VERSION [TOOL1] [TOOL2]

The documentation gives this example:

Assisted-by: Claude:claude-3-opus coccinelle sparse

The field requires the agent name, the specific model version, and any companion static analysis tools used. Standard tooling - git, gcc, make, editors - shouldn't be listed. The tag is recommended rather than mandatory, a choice Levin made explicit at the Summit: "enforcement is deliberately avoided."
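The tag is an ordinary commit-message trailer, so it slots into a normal git workflow. A minimal sketch of what that looks like in practice - the repository, file, author, and commit message below are invented for illustration; only the trailer format comes from the documentation:

```shell
# Sketch: committing a patch that discloses AI assistance via trailers.
# Everything here except the Assisted-by format is a made-up example.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name "Jane Dev"
git config user.email "jane@example.com"

echo 'static int widget_refs;' > patch.c
git add patch.c

# Each -m becomes its own paragraph; trailers go in the final one.
git commit -q -m "drivers/example: fix widget refcount leak" \
  -m "Assisted-by: Claude:claude-3-opus coccinelle sparse
Signed-off-by: Jane Dev <jane@example.com>"

# Both trailers land in the commit body, where tooling can find them:
git log -1 --format=%B
```

Keeping the trailers in one block matters: git's trailer parsing (and tools like `git interpret-trailers`) expects them grouped at the end of the message.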

Human Accountability

The harder rule is non-negotiable. AI agents are forbidden from adding Signed-off-by tags. Only humans can legally certify the Developer Certificate of Origin (DCO), and the policy makes clear that the human submitter is accountable for reviewing the code, verifying GPL-2.0-only compliance, and taking ownership of anything that goes wrong.

The commit can't obscure the responsibility chain. If an AI wrote a function with a subtle concurrency bug, the developer who submitted it carries the liability.

What Gets Rejected

The documentation doesn't define "AI slop" in technical terms, but the community's meaning is well-established: patches that look plausible on the surface but show no evidence of real understanding - fabricated commit messages, code that ignores subsystem context, test cases that compile but don't test anything meaningful.

Greg Kroah-Hartman, who maintains the stable branch, has observed that AI patches often "look superficially correct but miss subtle context" about subsystem history. Dan Williams, who maintains parts of the memory subsystem, stated it plainly: "I'm not against AI tools, I'm against patches that waste reviewer time."

[Image: Linus Torvalds in conversation with Dirk Hohndel at Open Source Summit Europe 2024. On AI: "I suspect it's a lot of AI tool use that will keep finding corner cases for us for a while." Source: commons.wikimedia.org]

How the Policy Was Built

The 2025 Maintainers Summit

The September 2025 Maintainers Summit brought the debate to a head. Two camps had formed. One side wanted mandatory disclosure - every patch that used AI tools would need to say so. The other argued that no amount of tagging would stop bad actors and that the DCO already created sufficient accountability.

Levin's proposal threaded the needle: recommend disclosure without mandating it, and keep enforcement anchored to the existing human responsibility model rather than creating new bureaucratic checks.

Why Linus Didn't Ban It

Torvalds has been consistent on this. He acknowledged at the Open Source Summit in February 2025 that AI tools are "clearly getting better" but noted skepticism about their capability for deep systems work - concurrency, hardware interaction, the kind of reasoning that requires knowing what the hardware actually does.

His response to the AI code debate follows the same logic. Banning tools that developers will use anyway is "pointless posturing." What matters is that somebody with their name on the patch understood what they submitted. That somebody is the human. That's been the kernel's model since Signed-off-by was introduced in 2004.

It's also worth noting the scale of the problem. Across the entire industry, AI has changed the sheer volume of code being produced. The kernel receives tens of thousands of patches per release cycle. A policy that required maintainers to detect AI usage themselves was never going to work.

[Image: A terminal showing a kernel development session. The kernel's contribution pipeline processes tens of thousands of patches per release; AI tools are already part of how developers create patch candidates. Source: unsplash.com]

Linux vs. The Rest

Other major open-source projects took different routes when the same pressure arrived:

| Project | AI Code Policy | Enforcement |
| --- | --- | --- |
| Linux kernel | Allowed, Assisted-by tag recommended | Human DCO accountability |
| Gentoo | Banned entirely (2024) | Patch rejection |
| NetBSD | "Tainted" status | Written core dev approval required |
| Godot engine | Case-by-case rejection | Maintainer discretion |

Linux chose the lightest governance touch that still creates traceability. Whether that's enough depends on what the community actually does with the Assisted-by tag - which maintainers can choose to require even if the policy doesn't.

The Linux Foundation has already been watching the flood of AI-produced vulnerability reports. An effort backed by $12.5M from OpenSSF and Alpha-Omega is building triage tools specifically because AI slop has hit security reporting at scale. The kernel's new policy is part of a wider effort to stop open-source infrastructure from drowning in AI-generated noise.

For context, AutoKernel - the open-source framework that runs an LLM agent loop to produce optimized Triton kernels - is exactly the kind of tool that now needs an Assisted-by line in every patch it produces.

What To Watch

Voluntary vs. Enforced Disclosure

The Assisted-by tag is recommended, not required. That means a developer can use Claude or Copilot to write half a subsystem driver and submit it with no tag at all, as long as they review the code and sign off on it. The policy doesn't give maintainers new tools to detect undisclosed AI usage.

Some maintainers are already pushing past the official stance. Williams and Kroah-Hartman have both rejected patches showing AI markers without enough review. That's informal enforcement, not policy - but in a project governed by maintainer trust, informal enforcement matters.
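Because the disclosure lives in commit trailers, maintainers who do want to track it need nothing beyond git itself. A sketch of how such an audit might look - the repository and commits below are fabricated so the example is self-contained; the trailer name is the only piece taken from the policy:

```shell
# Sketch: auditing a branch for Assisted-by disclosures.
# The repo and commits are fabricated for illustration.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name "M"
git config user.email "m@example.com"

git commit -q --allow-empty -m "commit one" \
  -m "Assisted-by: Claude:claude-3-opus sparse"
git commit -q --allow-empty -m "commit two"

# Count commits whose message carries the recommended trailer:
disclosed=$(git log --format=%B | grep -c '^Assisted-by:')
echo "disclosed: $disclosed"

# git log --grep can list them directly (regex anchored to a line start):
git log --grep='^Assisted-by:' --format='%h %s'
```

On a real tree the same grep could be scoped to a range like `v7.0..HEAD`; nothing in the policy mandates it, which is the point - this stays maintainer discretion, not infrastructure.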

The "Evaluation Awareness" Problem

There's a longer-term issue the policy doesn't address. Models trained to pass code review might learn to generate patches that look more human-reviewed than they are. The kernel's quality bar depends on maintainers catching that. Nothing in coding-assistants.rst changes how that detection works.

Tool Evolution

The example in the documentation uses Claude:claude-3-opus. That model is already old. The format will need to track whatever tools developers are actually using, which changes faster than kernel documentation cycles. Whether the community keeps this file current - or whether it becomes a historical artifact within two release cycles - is an open question.


The policy is a reasonable first step from a community that moves deliberately. It doesn't solve the underlying problem of AI-produced noise at scale, but it establishes that humans remain accountable for what they submit, and it creates a paper trail that didn't exist before. That's more than most open-source projects have managed.

About the author: AI Infrastructure & Open Source Reporter

Sophie is a journalist and former systems engineer who covers AI infrastructure, open-source models, and the developer tooling ecosystem.