```typescript
import {
  RAMResource,
  S3Resource,
  SlackResource,
  Workspace,
} from '@struktoai/mirage-node'

const ws = new Workspace({
  '/data': new RAMResource(),
  '/s3': new S3Resource({ bucket: 'my-bucket' }),
  '/slack': new SlackResource(),
})

// Same shell vocabulary, three different backends.
await ws.execute('echo "hello mirage" > /data/hello.txt')
await ws.execute('ls /s3/reports/')
const res = await ws.execute('grep -r "release" /slack/eng')
console.log(new TextDecoder().decode(res.stdout))
```
Add GitHub, Postgres, SSH, Notion, Google Drive, … and the same shell vocabulary keeps working. That’s the whole pitch.
What is Mirage?
Mirage is a Unified Virtual Filesystem for AI agents. It mounts your apps, services, and systems (S3, R2, GCS, Gmail, Google Drive, GitHub, Linear, Notion, Slack, Discord, Telegram, MongoDB, Redis, SSH, local disk, and more) behind one filesystem interface. Agents reach every backend with the same handful of Unix-like tools, pipelines compose across services as naturally as on a local disk, and the workspace embeds in any application, sandbox, or coding agent runtime.
One Filesystem
Every service speaks the same filesystem semantics, so agents reason about one abstraction instead of N SDKs and M MCPs. S3, R2, Google Drive, GitHub, Linear, Notion, Slack, Discord, MongoDB, Redis, SSH, and more mount side-by-side under a single root.
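The core trick of "one abstraction instead of N SDKs" is a mount table under a single root. The sketch below is not Mirage's internal routing code, just a minimal illustration of how an absolute path can be resolved to a backend plus a backend-relative path (all names here are hypothetical):

```typescript
// Minimal sketch (not Mirage internals): resolve an absolute path against
// a mount table by longest-prefix match, yielding the backend that owns it
// plus the path relative to its mount point.
type MountTable = Record<string, string> // mount point -> backend name

const mounts: MountTable = {
  '/s3': 's3',
  '/slack': 'slack',
  '/github': 'github',
}

function resolveMount(path: string, table: MountTable = mounts) {
  // Prefer the longest mount point that prefixes the path, so a nested
  // mount like '/s3/archive' would win over '/s3'.
  const hit = Object.keys(table)
    .filter((m) => path === m || path.startsWith(m + '/'))
    .sort((a, b) => b.length - a.length)[0]
  if (!hit) throw new Error(`no mount for ${path}`)
  return { backend: table[hit], rel: path.slice(hit.length) || '/' }
}

const where = resolveMount('/slack/eng/chat.jsonl')
// backend: 'slack', rel: '/eng/chat.jsonl'
```

Longest-prefix matching is what lets unrelated services sit side by side under one root without ambiguity.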
Familiar Bash Tools
Agents reuse the same handful of Unix-like tools (ls, find, grep, cat, …) instead of learning a new API per service. Pipelines compose across services as naturally as on a local disk, and this shell vocabulary is exactly the corpus modern LLMs are most heavily trained on.
```shell
# Find every mention of "mirage" across three services
grep -r "mirage" /slack /gmail /github
```
Portable Workspaces
Clone, snapshot, and version your environment. Move agent runs between machines without restarting or reconfiguring the system, and replay any past state on demand.
Embed in Apps and Agents
Python and TypeScript SDKs give your AI agents a virtual filesystem directly inside FastAPI, Express, browser apps, or any async runtime, no separate process required. Works with the major agent frameworks (OpenAI Agents SDK, Vercel AI SDK, LangChain, Pydantic AI, CAMEL, OpenHands) and a lightweight CLI plugs into coding agents like Claude Code and Codex.
Git-style Versioning
Snapshot and clone your workspace the way git treats source. Fork from any past state and replay an agent run.
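One way to picture git-style workspace versioning is snapshots as cheap copies of state that you can fork from later. The class below is illustrative only; the method names and in-memory model are assumptions, not Mirage's actual API:

```typescript
// Illustrative only (not Mirage's API): git-style snapshots over an
// in-memory file map. Snapshotting copies state; restoring forks from it.
type FileMap = Map<string, string>

class VersionedFs {
  private files: FileMap = new Map()
  private snapshots: Map<string, FileMap> = new Map()

  write(path: string, data: string) {
    this.files.set(path, data)
  }
  read(path: string) {
    return this.files.get(path)
  }
  snapshot(id: string) {
    this.snapshots.set(id, new Map(this.files)) // copy = point-in-time state
  }
  restore(id: string) {
    const snap = this.snapshots.get(id)
    if (!snap) throw new Error(`unknown snapshot ${id}`)
    this.files = new Map(snap) // fork: later writes don't touch the snapshot
  }
}

const vfs = new VersionedFs()
vfs.write('/data/run.log', 'step 1')
vfs.snapshot('before-step-2')
vfs.write('/data/run.log', 'step 2 went sideways')
vfs.restore('before-step-2') // replay the agent run from a known-good state
```

The same shape is what makes agent runs branchable: snapshot before a risky step, fork if it goes wrong.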
A Real-world Example
An agent watches your team’s Slack #incident channel. A user posts a screenshot of `mirage --help` with the message “the CLI design is confusing and hard to follow”.
Built with the OpenAI Agents SDK, the agent walks Slack, GitHub, and Linear through one bash tool.
Code
Python
TypeScript
```python
from agents import Runner
from agents.run import RunConfig
from agents.sandbox import SandboxAgent, SandboxRunConfig

from mirage import MountMode, Workspace
from mirage.agents.openai_agents import MirageSandboxClient
from mirage.resource.github import GitHubResource
from mirage.resource.linear import LinearResource
from mirage.resource.slack import SlackResource

slack = SlackResource(...)
github = GitHubResource(repo="strukto-ai/mirage")
linear = LinearResource(...)

ws = Workspace({
    "/slack": (slack, MountMode.READ),
    "/github": (github, MountMode.READ),
    "/linear": (linear, MountMode.WRITE),
})

agent = SandboxAgent(
    name="Design feedback triage",
    model="gpt-5.5",
    instructions=ws.file_prompt,
)
config = RunConfig(sandbox=SandboxRunConfig(client=MirageSandboxClient(ws)))

task = (
    "Scan recent messages in the Slack #incident channel. If anyone posts "
    "feedback about Mirage with a screenshot, read the image, locate the "
    "relevant CLI code in the Mirage GitHub repo, and file a design issue "
    "in Linear with the screenshot, the user's feedback, and links to the "
    "offending source files."
)

result = await Runner.run(agent, task, run_config=config)
print(result.final_output)
```
```typescript
import {
  GitHubResource,
  LinearResource,
  MountMode,
  SlackResource,
  Workspace,
} from '@struktoai/mirage-node'
import { Agent, run, shellTool } from '@openai/agents'
import { MirageShell, buildSystemPrompt } from '@struktoai/mirage-agents/openai'

const slack = new SlackResource({ token: process.env.SLACK_BOT_TOKEN! })
const github = new GitHubResource({
  token: process.env.GITHUB_TOKEN!,
  owner: 'strukto-ai',
  repo: 'mirage',
})
const linear = new LinearResource({ apiKey: process.env.LINEAR_API_KEY! })

const ws = new Workspace(
  { '/slack': slack, '/github': github, '/linear': linear },
  {
    mode: MountMode.READ,
    modeOverrides: { '/linear': MountMode.WRITE },
  },
)

const agent = new Agent({
  name: 'Design feedback triage',
  model: 'gpt-5.5',
  instructions: buildSystemPrompt({ workspace: ws }),
  tools: [shellTool({ shell: new MirageShell(ws) })],
})

const task =
  'Scan recent messages in the Slack #incident channel. If anyone posts ' +
  'feedback about Mirage with a screenshot, read the image, locate the ' +
  'relevant CLI code in the Mirage GitHub repo, and file a design issue ' +
  "in Linear with the user's feedback, the screenshot, and links to the " +
  'offending source files.'

const result = await run(agent, task)
console.log(result.finalOutput)
```
API preview. Runnable TypeScript example coming soon. See multi_resource_agent.ts for the same shell-tool pattern over Slack + S3 today.
Walk-through
Inside the workspace shell, the agent runs three steps:
```shell
# 1. Read the latest #incident message + list its attachments
$ cat /slack/channels/incident__C0B0DB9K11T/2026-04-28/chat.jsonl
$ ls /slack/channels/incident__C0B0DB9K11T/2026-04-28/files/
# image__F0B01A3R171.png  <- the agent reads this via the model's vision input

# 2. Find the CLI source the screenshot is complaining about
$ rg -n "Mirage daemon CLI|workspace|session|provision" /github/typescript
$ cat /github/typescript/packages/cli/src/main.ts

# 3. File a design issue in Linear with the feedback + code refs
$ linear-issue-create --team_id <team-id> \
    --title "[Design] Rework Mirage CLI top-level command surface" \
    --description "$(cat <<'EOF'
... feedback, screenshot summary, and links to the offending files ...
EOF
)"
```
The agent files a new issue in Linear with the user’s feedback and links to the relevant source files.
Use Cases
Mirage shows up wherever an agent needs to read, write, or stitch together data that doesn’t already live on a local disk.
Cross-app triage agents
Watch Slack, search GitHub, file Linear issues. One shell, no per-service wiring.
SWE agents on remote data
Point Claude Code or Codex at S3, Postgres, or SSH hosts as if they were files.
RAG with writes
READ + WRITE mounts on Notion, Drive, and Linear so the agent can edit and comment back.
Ops & observability copilots
tail, grep, jq over remote logs, metrics, and config without per-source plugins.
Sandbox-native workflows
Embed inside Daytona, E2B, Modal, or Vercel sandboxes as the data plane.
Reproducible agent runs
Snapshot, restore, and clone workspaces for branchable, replayable agent runs.
FAQ
Do I need FUSE?
No. Mirage runs the workspace in-process: ws.execute(…) parses and dispatches commands without mounting anything on the host. FUSE is an optional layer for when you also want host tools (editors, language servers, rg) to see the workspace; see the FUSE guide for that integration.
Where does the shell run? Is it really bash?
The shell runs in your Mirage process. It’s a tree-sitter bash parser plus a custom executor that routes commands to per-mount handlers, so there is no subshell to /bin/bash and no os.system. Most common Unix verbs work (ls, cat, grep, find, head, wc, jq, …) plus pipes, redirects, globs, and &&/||. See Shell limitations for the gaps.
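The routing idea, stripped of the real tree-sitter parsing, can be sketched in a few lines. This is a toy dispatcher, not Mirage's executor; the handler shape and return values are made up for illustration:

```typescript
// Toy sketch of command routing (the real executor uses a tree-sitter bash
// parser): dispatch a command line to the handler owning the mount that the
// first path argument falls under. No /bin/bash is ever invoked.
type Handler = (cmd: string, args: string[]) => string

const handlers: Record<string, Handler> = {
  '/slack': (cmd, args) => `slack backend: ${cmd} ${args.join(' ')}`,
  '/s3': (cmd, args) => `s3 backend: ${cmd} ${args.join(' ')}`,
}

function dispatch(line: string): string {
  const [cmd, ...args] = line.trim().split(/\s+/)
  // Route on the first absolute-path argument.
  const pathArg = args.find((a) => a.startsWith('/'))
  const mount = Object.keys(handlers).find(
    (m) => pathArg === m || pathArg?.startsWith(m + '/'),
  )
  if (!mount) throw new Error(`no handler for: ${line}`)
  return handlers[mount](cmd, args)
}

const a = dispatch('ls /s3/reports')            // routed to the s3 handler
const b = dispatch('grep -r release /slack/eng') // routed to the slack handler
```

A real executor also has to handle pipes, redirects, and globs, which is where the bash grammar earns its keep.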
Is it sandboxed?
The shell only sees your mounted resources: no arbitrary host filesystem, no shelling out to host binaries. For untrusted code you’d still want a real sandbox (Daytona, E2B, Modal); Mirage embeds inside those rather than replacing them.
What about latency and cost?
Backend-bound. cat /s3/… is one GetObject; find /postgres/… is a SQL query. Reads cache per session, and mirage provision returns a dry-run estimate (network bytes, cache hits, projected cost) before you commit to an expensive operation.
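"Reads cache per session" can be pictured as memoization keyed by path, with the cache's lifetime scoped to the session. The sketch below is an assumption about the behavior, not Mirage's actual caching policy:

```typescript
// Sketch of a per-session read cache (assumed behavior, not Mirage's
// implementation): the first read costs one backend call, repeats are free.
let backendCalls = 0

async function backendGet(path: string): Promise<string> {
  backendCalls++ // stands in for one GetObject / one SQL query
  return `contents of ${path}`
}

function makeSessionCache(fetch: (p: string) => Promise<string>) {
  // Caching the promise (not the value) also dedupes concurrent reads
  // of the same path before the first one resolves.
  const cache = new Map<string, Promise<string>>()
  return (path: string) => {
    if (!cache.has(path)) cache.set(path, fetch(path))
    return cache.get(path)!
  }
}

const read = makeSessionCache(backendGet)
await read('/s3/reports/q3.csv') // one backend call
await read('/s3/reports/q3.csv') // served from the session cache
```

Dropping the cache when the session ends is what keeps the semantics simple: staleness is bounded by the session, not by a TTL you have to tune.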
Hosted or self-hosted?
Self-hosted. Mirage is a library plus a thin local daemon. The daemon lives in your process or sandbox; data only leaves your network if a mount you configured already does (e.g. an S3 read).
Which agent frameworks are supported?
Anything that exposes a shell tool: Claude Code, Codex / OpenAI Agents SDK, Cursor, OpenHands, Pydantic deepagents. Direct SDK integrations live under Python agents and TypeScript agents.
Python or TypeScript?
Both. The Python package is the reference implementation; the TypeScript SDK (@struktoai/mirage-node) ships the same Workspace/execute surface and most resources. Some agent integrations land on Python first.
What if a resource I want isn't supported yet?
Resources are pluggable. The “Add a resource” guide walks through the read/write/stat surface a new backend implements. PRs welcome.
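A custom backend, per the guide's description, implements a read/write/stat surface. The interface below is a hypothetical stand-in (the exact signatures are assumptions, not Mirage's actual contract); an in-memory store is the "hello world" of custom resources:

```typescript
// Hypothetical resource interface for illustration; Mirage's actual
// read/write/stat signatures may differ - see the "Add a resource" guide.
interface Resource {
  read(path: string): Promise<Uint8Array>
  write(path: string, data: Uint8Array): Promise<void>
  stat(path: string): Promise<{ size: number; isDir: boolean }>
}

// Minimal in-memory backend: every path maps to a byte buffer.
class MemoryResource implements Resource {
  private store = new Map<string, Uint8Array>()

  async read(path: string) {
    const data = this.store.get(path)
    if (!data) throw new Error(`ENOENT: ${path}`)
    return data
  }
  async write(path: string, data: Uint8Array) {
    this.store.set(path, data)
  }
  async stat(path: string) {
    const data = this.store.get(path)
    if (!data) throw new Error(`ENOENT: ${path}`)
    return { size: data.byteLength, isDir: false }
  }
}

const mem = new MemoryResource()
await mem.write('/notes.txt', new TextEncoder().encode('hello'))
```

A real backend does the same dance against an API: read becomes a GET, write a PUT, stat a metadata call.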
Explore Mirage
Python Quickstart
Create a workspace, mount resources, and run shell commands in minutes.
TypeScript Quickstart
Same Workspace API in Node, browser, and edge runtimes.
CLI
Drive workspaces from the shell: create, execute, snapshot, and restore.
Resource Matrix
Compare resources by mount mode, setup path, and common use cases.
Community & Support
Need Help?
Setup failures, credential issues, and FUSE gotchas.
Book a Call
Talk to the team directly if you are blocked.
Join our Discord
Chat with the community and get help from the team.