[CRITICAL] Security Advisory: PraisonAI Vulnerable to Arbitrary File Write / Path Traversal in Action Orchestrator

PraisonAI, an open-source framework for building AI agents, ships with a critical path traversal vulnerability in its Action Orchestrator. Attackers can exploit it to write arbitrary files anywhere on the host system, bypassing the intended workspace sandbox. Every release prior to a fix is affected; check the GitHub repo for updates.

The flaw sits in src/praisonai/praisonai/cli/features/action_orchestrator.py, lines 402, 409, and 423. In the _apply_step method, the code blindly joins the workspace directory with user-supplied step.target using target = workspace / step.target. No normalization or validation ensures the path stays inside the workspace. Feed it ../../../../../etc/passwd or similar, and it writes there.

This hits FILE_CREATE and FILE_EDIT actions hardest. A compromised AI agent—or a malicious prompt tricking the LLM—generates a plan with traversal payloads. The orchestrator executes without question, potentially dropping malware, overwriting SSH keys, or corrupting configs.

Vulnerability Details

PraisonAI aims to orchestrate AI agents that perform real-world actions like file I/O. The CLI tool runs locally, often in dev environments. But its Action Orchestrator lacks basic path security. Python’s pathlib.Path joins paths naively; without resolve() or anchor checks, relative climbs like ../ escape any jail.
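To see why the naive join is dangerous, here is a standalone pathlib sketch (the workspace path is illustrative, not from the project):

```python
from pathlib import Path

workspace = Path("/home/dev/project/workspace")

# Naive join: pathlib keeps the ../ components rather than rejecting them.
target = workspace / "../../../../../etc/passwd"
print(target)            # /home/dev/project/workspace/../../../../../etc/passwd
print(target.resolve())  # /etc/passwd -- outside the workspace

# resolve() alone doesn't protect you; you must also check containment
# (is_relative_to requires Python 3.9+):
print(target.resolve().is_relative_to(workspace.resolve()))  # False
```

Note that `resolve()` happily collapses the `../` climbs; the escape is only caught if you explicitly compare the resolved result against the workspace root.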

Real-world context: Similar bugs plague tools like LangChain or Auto-GPT derivatives. In 2023, a LangChain vuln (CVE-2023-36211) allowed similar escapes. AI agents hype “tool use,” but poor sandboxing turns them into RCE vectors. PraisonAI’s star count hovers under 1,000 on GitHub (as of late 2024), so exposure is limited—but users trust it for agent workflows.

Proof of Concept

Reproduce with this payload. Import the classes and craft a malicious ActionStep:

from praisonai.cli.features.action_orchestrator import ActionStep, ActionType, ActionStatus

# Targets /tmp/orchestrator_pwned.txt from a workspace at ./workspace
step = ActionStep(
    id="test_traversal",
    action_type=ActionType.FILE_CREATE,
    description="Malicious file write",
    target="../../../../../../../tmp/orchestrator_pwned.txt",
    params={"content": "pwned\nSystem compromised via path traversal."},
    status=ActionStatus.APPROVED
)

# Feed to orchestrator: _apply_step(step)
# File lands at absolute /tmp/orchestrator_pwned.txt

Test on a Linux or macOS host; on Windows, use ..\ separators instead. If the agent's LLM approves this step, it executes. No authentication is needed, because the orchestrator is designed to trust approved plans.
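You can also reproduce the effect without installing PraisonAI at all. The sketch below mimics the vulnerable join pattern in a throwaway temp directory; the function name unsafe_apply_step is mine, not the project's:

```python
import tempfile
from pathlib import Path

def unsafe_apply_step(workspace: Path, target: str, content: str) -> Path:
    """Mimics the vulnerable pattern: workspace / step.target, no validation."""
    path = workspace / target            # traversal components survive the join
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(content)
    return path.resolve()

with tempfile.TemporaryDirectory() as tmp:
    workspace = Path(tmp) / "workspace"
    workspace.mkdir()
    # The payload climbs one level above the workspace, i.e. out of the sandbox.
    written = unsafe_apply_step(workspace, "../escaped.txt", "pwned")
    print(written)  # lands next to workspace/, not inside it
    assert not written.is_relative_to(workspace.resolve())
```

The assertion passes: the written file sits outside the directory the orchestrator was supposed to be confined to.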

Impact and Why It Matters

Arbitrary file write means game over for local setups. Overwrite ~/.ssh/authorized_keys to add an attacker's pubkey: instant SSH login as that user, if sshd is running. Edit .bashrc for persistence. Drop a webshell in a web server's docroot for remote exploitation. In production, running PraisonAI in a container without host volume mounts lowers the risk, but many devs bind host dirs for convenience.

Numbers: Path traversal falls under Broken Access Control, the top category in the OWASP Top 10 (A01:2021). In AI land, agents process untrusted prompts, and a jailbroken LLM crafts these steps easily; "DAN" and "Developer Mode" prompts are reported to succeed at high rates against GPT-4-class models. PraisonAI users are mostly indie devs and hobbyists building automations. One infected machine scans networks and spreads.

Mitigate now: Pin to a patched version (watch GitHub issues). Manually validate paths:

from pathlib import Path

def safe_target(workspace: Path, target: str) -> Path:
    # resolve() collapses ../ climbs and symlinks; is_relative_to (Python 3.9+)
    # then pins the result inside the workspace.
    full = (workspace / target).resolve()
    if not full.is_relative_to(workspace.resolve()):
        raise ValueError("Path escapes workspace")
    return full
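A quick sanity check of that helper, repeated here so the snippet runs standalone (again, is_relative_to needs Python 3.9+):

```python
import tempfile
from pathlib import Path

def safe_target(workspace: Path, target: str) -> Path:
    # Reject any target that resolves outside the workspace root.
    full = (workspace / target).resolve()
    if not full.is_relative_to(workspace.resolve()):
        raise ValueError("Path escapes workspace")
    return full

with tempfile.TemporaryDirectory() as tmp:
    ws = Path(tmp)
    print(safe_target(ws, "notes/plan.md"))  # fine: stays inside the workspace
    try:
        safe_target(ws, "../../etc/passwd")  # traversal payload
    except ValueError as e:
        print("blocked:", e)                 # blocked: Path escapes workspace
```

The legitimate relative path passes through resolved; the traversal payload raises before any filesystem write can happen.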

Use containers with read-only binds or AppArmor/SELinux. Run agents in VMs. Skeptical take: PraisonAI is early-stage, and vulns like this scream "alpha software." Fair point: open-source moves fast, but security lags. Fork it, fix it, or switch to a more battle-tested alternative like CrewAI with better guards.

Bottom line: If you run the PraisonAI CLI, audit your workspaces and disable file actions until a fix lands. This isn't hype; it's a textbook escape hatch in agent tooling.

April 7, 2026 · Source: GitHub Security