Documentation Index
Fetch the complete documentation index at: https://docs.mirage.strukto.ai/llms.txt
Use this file to discover all available pages before exploring further.
OpenAI Codex is OpenAI’s coding-agent CLI. Like Claude Code, it operates on a real filesystem and doesn’t expose a pluggable backend, so Mirage integrates by FUSE-mounting a workspace and letting you point codex at the mountpoint.
Install
pnpm add @struktoai/mirage-node @zkochan/fuse-native
See TypeScript FUSE setup for the pnpm onlyBuiltDependencies allow-list and the macOS macFUSE 4 symlink workaround. Then install OpenAI Codex separately.
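As a rough sketch of what that allow-list looks like (the authoritative version is in the TypeScript FUSE setup page; treat the exact shape here as an assumption), pnpm's onlyBuiltDependencies setting lives under the pnpm key in package.json and permits the native FUSE binding to run its build scripts:

```json
{
  "pnpm": {
    "onlyBuiltDependencies": ["@zkochan/fuse-native"]
  }
}
```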
Usage
import { MountMode, RAMResource, Workspace } from '@struktoai/mirage-node'

const ws = new Workspace(
  { '/': new RAMResource() },
  { mode: MountMode.WRITE, fuse: true },
)
await ws.execute('true') // wait for mount
console.log(`cd ${ws.fuseMountpoint} && codex`)
// ... run codex in another terminal, then:
await ws.close()
The mountpoint behaves like a regular directory. Codex’s read_file, write_file, and run_shell tools all dispatch through Mirage’s ops layer.
Why FUSE instead of an SDK integration?
The Codex CLI’s tool surface isn’t pluggable; it expects host file operations. FUSE turns those host operations into Mirage operations.
You lose per-tool customization and Mirage’s op-record telemetry; you gain zero integration effort and compatibility with every Codex feature. The trade-off is the same as with Claude Code.