

deepagents is LangChain's framework for long-horizon coding agents. It accepts a pluggable backend for filesystem and shell operations, and Mirage ships one.

Install

uv add 'mirage-ai[deepagents]' langchain-anthropic
This pulls in deepagents>=0.4.12. Bring your own LangChain chat model (langchain-anthropic, langchain-openai, etc.).

Usage

from deepagents import create_deep_agent
from langchain_anthropic import ChatAnthropic

from mirage import MountMode, Workspace
from mirage.agents.langchain import (
    LangchainWorkspace,
    build_system_prompt,
    extract_text,
)
from mirage.resource.ram import RAMResource

ws = Workspace({"/": RAMResource()}, mode=MountMode.WRITE)

agent = create_deep_agent(
    model=ChatAnthropic(model="claude-sonnet-4-20250514"),
    system_prompt=build_system_prompt(
        mount_info={"/": "In-memory filesystem (read/write)"},
    ),
    backend=LangchainWorkspace(ws),
)

result = agent.invoke({
    "messages": [{"role": "user", "content": "Create /report.md and summarize."}],
})
for text in extract_text(result["messages"]):
    print(text)
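For illustration, the role `extract_text` plays here can be sketched with a hypothetical helper that walks LangChain-style messages and yields their text content. The message shapes below (plain-string content vs. a list of `{"type": "text", ...}` blocks) are assumptions; the real `mirage.agents.langchain.extract_text` may handle more cases.

```python
# Hypothetical sketch of extract_text-style behavior; the real
# export in mirage.agents.langchain may differ.
def extract_text_sketch(messages):
    """Yield plain-text content from LangChain-style messages.

    Handles both string content and the list-of-blocks content
    format (assumed shapes, not the library's actual types).
    """
    for message in messages:
        # Accept both message objects (with a .content attribute)
        # and plain dicts (with a "content" key).
        content = getattr(message, "content", None)
        if content is None and isinstance(message, dict):
            content = message.get("content")
        if isinstance(content, str):
            yield content
        elif isinstance(content, list):
            for block in content:
                if isinstance(block, dict) and block.get("type") == "text":
                    yield block["text"]

msgs = [
    {"role": "user", "content": "Create /report.md and summarize."},
    {"role": "assistant", "content": [{"type": "text", "text": "Done."}]},
]
print(list(extract_text_sketch(msgs)))
# → ['Create /report.md and summarize.', 'Done.']
```

This is why the loop in the usage example can simply `print(text)`: tool calls and non-text blocks are filtered out, leaving only readable model output.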

Exports

| Symbol | Purpose |
| --- | --- |
| `LangchainWorkspace` | Backend implementation for deepagents; wires reads, writes, edits, and shell. |
| `extract_text` | Pulls the text content out of LangChain messages. |
| `build_system_prompt` | Generates a system prompt that describes mounted paths to the model. |
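To make the `mount_info` parameter concrete, here is a hypothetical sketch of what a `build_system_prompt`-style helper could produce from a mount map. The exact prompt wording is an assumption; only the parameter shape (a path-to-description dict) comes from the usage example above.

```python
# Hypothetical sketch; the actual build_system_prompt output will differ.
def build_system_prompt_sketch(mount_info: dict[str, str]) -> str:
    """Render a mount map as a system-prompt fragment for the model."""
    lines = ["You have access to a workspace with the following mounts:"]
    for path, description in sorted(mount_info.items()):
        lines.append(f"- {path}: {description}")
    return "\n".join(lines)

prompt = build_system_prompt_sketch({"/": "In-memory filesystem (read/write)"})
print(prompt)
# → You have access to a workspace with the following mounts:
# → - /: In-memory filesystem (read/write)
```

The point of the helper is that the model only sees prose, never the `Workspace` object itself, so the descriptions should say what the agent is allowed to do at each path.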

Examples