Critical RCE in LeRobot Lets Attackers Hijack Robots
CVE-2026-25874 (CVSS 9.3) exposes LeRobot's gRPC server to unauthenticated remote code execution via pickle deserialization, threatening robot control systems and GPU infrastructure.

A critical unpatched vulnerability in Hugging Face's LeRobot framework allows any attacker with network access to execute arbitrary code on a server running the library - no authentication, no credentials, no exploit chain required. The flaw, tracked as CVE-2026-25874 with a CVSS score of 9.3, was discovered by security researcher Valentin Lobstein and publicly disclosed on April 23, 2026.
LeRobot has over 21,500 stars on GitHub and is widely used by robotics researchers and practitioners to train and deploy autonomous control policies. The servers it runs on typically have elevated privileges, access to GPU compute, trained models, and Hugging Face API keys. Depending on deployment, a compromised LeRobot server can also mean an attacker gains low-level control over the physical robot the system is driving.
TL;DR
- CVE-2026-25874 (CVSS 9.3): unauthenticated remote code execution in LeRobot's gRPC PolicyServer
- Root cause: pickle.loads() deserializes attacker-controlled data with no authentication or TLS
- Affected versions: v0.4.3 through v0.5.1; fix tracked in PR #3048
- Attacker impact: full server compromise, API key theft, model theft, physical robot hijacking
- Discovered by Valentin Lobstein (Chocapikk), reported December 2025, CVE assigned April 2026
The Vulnerability: Pickle Over Unauthenticated gRPC
LeRobot's async inference module runs a gRPC server - the PolicyServer - that receives observations from robots and returns action predictions from a trained model. The server listens on an open port using add_insecure_port(): no TLS, no authentication, nothing. Any machine on the same network can connect and send arbitrary data.
When data arrives, the server calls pickle.loads() on it directly. Python's pickle format is a serialization protocol that can encode arbitrary executable code. A specially crafted pickle payload can use special methods like __reduce__() to execute any command when deserialized - the type validation that happens afterward is irrelevant because the code has already run.
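The mechanism is easy to demonstrate with the standard library alone. In this harmless sketch (not LeRobot code), deserialization itself invokes an attacker-chosen callable - here eval on a trivial expression, where a real payload would call something like os.system:

```python
import pickle

class Payload:
    # pickle calls __reduce__ at dump time to learn how to rebuild
    # the object; at load time it calls the returned callable with
    # the returned args. Nothing checks what that callable is.
    def __reduce__(self):
        # Harmless stand-in: a real exploit would return
        # (os.system, ("arbitrary shell command",)).
        return (eval, ("21 * 2",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # eval runs here, during deserialization
print(result)                # 42 - any type check on `result` comes too late
```

The point of the sketch: by the time the caller can inspect what loads() returned, the attacker's callable has already executed.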
Two gRPC endpoints are exposed to this attack: SendPolicyInstructions() and SendObservations(). Both accept pickle-serialized data without validation. The codebase even includes # nosec comments to suppress security warnings rather than address the underlying issue.
# Vulnerable pattern in LeRobot's PolicyServer
def SendObservations(self, request, context):
    observations = pickle.loads(request.data)  # attacker-controlled bytes
    # type checks happen here - too late
The vulnerable module was introduced in September 2025 and has not received security fixes since.
LeRobot is one of the most prominent open-source robotics ML frameworks, with thousands of deployments in research and production environments.
What an Attacker Can Do
The impact is not limited to the server. Because LeRobot deployments typically run with elevated privileges on GPU-backed machines, a successful exploit gives an attacker:
- Arbitrary OS command execution with the service's privileges
- Full access to Hugging Face API keys and other credentials stored on the system
- Access to trained robot policy models and proprietary datasets
- The ability to pivot into the internal network from the compromised server
- Depending on deployment: direct control over the physical robot the PolicyServer is managing
That last point is the one that distinguishes this from a typical server-side RCE. If the compromised host is managing a robotic arm on a factory floor or a humanoid robot in a research lab, an attacker who controls the PolicyServer effectively controls the robot.
Hugging Face built safetensors specifically because pickle is dangerous for ML data. Their own framework uses pickle anyway, without authentication, over a cleartext network connection.
The Irony: Hugging Face Knows Pickle Is Dangerous
The safetensors format - created and maintained by Hugging Face - exists precisely because pickle is unsafe for machine learning data. The safetensors documentation explicitly warns against pickle deserialization of untrusted data. Hugging Face has championed moving the ML ecosystem away from pickle for model weights specifically because of this class of vulnerability.
Yet LeRobot's inference server uses pickle.loads() to deserialize arbitrary network-received data, over an unauthenticated, unencrypted connection, with comments in the code that suppress security warnings rather than fix the problem.
Who Found It and When
Lobstein, who operates under the handle Chocapikk and works as a security engineer at LeakIX, filed a private report in December 2025. He confirmed the vulnerability independently with a proof of concept against LeRobot v0.4.3 from PyPI in February 2026. The CVE was formally assigned and publicly disclosed on April 23, 2026.
The attack requires only network access to LeRobot's gRPC port and a crafted pickle payload - no credentials or prior access needed.
The disclosure timeline is remarkable: more than four months passed between the private report and public disclosure, suggesting the fix wasn't straightforward to implement. A patch is tracked in GitHub PR #3048.
Affected Versions and Mitigation
Confirmed vulnerable versions run from v0.4.3 through v0.5.1. The LeRobot v0.5.0 release brought humanoid robot support and expanded hardware compatibility but did not address this vulnerability.
Until a patched release is available, teams running LeRobot inference servers should take immediate steps:
- Firewall the gRPC port: restrict access to the PolicyServer to trusted source IPs only. This is the most important immediate mitigation.
- Isolate the deployment: run LeRobot services in containers with minimal privileges and no access to production credentials.
- Rotate API keys: if the LeRobot server had access to Hugging Face tokens or other credentials, rotate them now and audit access logs.
- Monitor for the patch: watch GitHub PR #3048 and the official release channel. The fix replaces pickle with safe serialization and adds proper authentication and TLS to the gRPC server.
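The actual patch isn't detailed here, but one generic stopgap against __reduce__-style payloads is a restricted unpickler that refuses to resolve any global reference. This is a sketch, and it assumes the wire format is limited to plain Python containers and primitives - deployments exchanging tensors or custom classes would need an explicit allowlist, or better, a non-executable format like safetensors or JSON:

```python
import io
import pickle

class RestrictedUnpickler(pickle.Unpickler):
    """Refuse to resolve any global reference. A __reduce__-style
    payload must name its callable (builtins.eval, os.system, ...)
    as a pickle GLOBAL, so blocking find_class makes the payload
    fail to load instead of executing."""
    def find_class(self, module, name):
        raise pickle.UnpicklingError(
            f"global '{module}.{name}' is forbidden")

def safe_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()

# Plain containers and primitives still round-trip: pickle encodes
# them with dedicated opcodes, no globals involved.
obs = {"joint_angles": [0.1, 0.2, 0.3], "gripper_open": True}
assert safe_loads(pickle.dumps(obs)) == obs
```

This rejects malicious payloads at load time rather than after, but it is damage limitation, not a substitute for the upstream fix's authentication and TLS.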
The Broader Pattern
This vulnerability follows a pattern that shows up repeatedly in ML infrastructure: security practices that work fine for model training and offline research fall apart the moment inference servers become network-accessible. The gRPC endpoint that works perfectly in a local lab becomes a remote code execution surface when exposed on a shared network.
LeRobot isn't the only robotics ML framework with this issue - it's simply the one with a CVE attached to it today. As the robotics ML ecosystem matures and inference servers move from research environments into production deployments, the attack surface grows proportionally.
The combination of physical actuation and network-accessible inference is a configuration that demands the same security discipline applied to any production API: authentication, encryption, and input validation at the boundary. Relying on network isolation alone isn't sufficient, and suppressing security warnings in source code is not a security strategy.