CLI
Commands, flags, and exit codes.
For a guided, end-to-end introduction, start with Quickstart.
Entry point: jaunt = "jaunt.cli:main" (see pyproject.toml in the repo root).
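That entry point corresponds to a console-script declaration along these lines (standard packaging conventions; the exact surrounding pyproject.toml content is not shown here):

```toml
# pyproject.toml (repo root): console-script entry point for the CLI
[project.scripts]
jaunt = "jaunt.cli:main"
```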
All generation commands respect [agent].engine. Jaunt now defaults to
aider; if you switch back to legacy, the same CLI commands apply.
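A minimal jaunt.toml fragment selecting the engine (the [agent].engine key and the aider/legacy values come from the text above; all other settings omitted):

```toml
# jaunt.toml fragment -- engine selection only
[agent]
engine = "aider"   # or "legacy" to use the previous runtime
```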
Init
Create a starter jaunt.toml in the target directory:
uv run jaunt init
uv run jaunt init --root /tmp/myproj
uv run jaunt init --force
uv run jaunt init --json
Build
Generate implementation modules for @jaunt.magic specs:
uv run jaunt build
uv run jaunt build --force
uv run jaunt build --jobs 16
uv run jaunt build --target my_app.specs
uv run jaunt build --no-infer-deps
uv run jaunt build --json
If an upstream module's exported API changed, jaunt build also regenerates its stale dependents.
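The dependent-regeneration behavior can be pictured as a transitive-closure walk over the module dependency graph. A minimal illustrative sketch, not Jaunt's actual implementation:

```python
def stale_closure(changed, dependents):
    """Given modules whose exported API changed and a map of
    module -> direct dependents, return every module that needs
    a rebuild (the changed modules plus all transitive dependents)."""
    stale = set(changed)
    frontier = list(changed)
    while frontier:
        mod = frontier.pop()
        for dep in dependents.get(mod, ()):
            if dep not in stale:
                stale.add(dep)
                frontier.append(dep)
    return stale

# Example: b depends on a, c depends on b; changing a rebuilds all three.
deps = {"a": {"b"}, "b": {"c"}}
print(sorted(stale_closure({"a"}, deps)))  # ['a', 'b', 'c']
```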
Test
Generate tests for @jaunt.test specs and run pytest:
uv run jaunt test
uv run jaunt test --no-build
uv run jaunt test --no-run
uv run jaunt test --pytest-args=-k --pytest-args email
uv run jaunt test --json
Eval
Run the built-in eval suite against one provider/model or compare multiple.
uv run jaunt eval
uv run jaunt eval --suite codegen
uv run jaunt eval --suite agent
uv run jaunt eval --provider anthropic --model claude-sonnet-4-5-20250929
uv run jaunt eval --compare openai:gpt-5.2 anthropic:claude-haiku-4-5
uv run jaunt eval --json
--suite codegen: the original built-in code generation suite
--suite agent: the end-to-end Aider/skills-oriented suite
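The --compare arguments follow a PROVIDER:MODEL shape, as in the examples above. A small hypothetical parser for that shape (illustrative only; the real CLI does its own parsing):

```python
def parse_compare_spec(spec):
    """Split a PROVIDER:MODEL spec such as 'openai:gpt-5.2'
    at the first colon; raise on malformed input."""
    provider, sep, model = spec.partition(":")
    if not sep or not provider or not model:
        raise ValueError(f"expected PROVIDER:MODEL, got {spec!r}")
    return provider, model

print(parse_compare_spec("anthropic:claude-haiku-4-5"))
```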
Status
Show stale/fresh modules:
uv run jaunt status
uv run jaunt status --target my_app.specs
uv run jaunt status --json
jaunt status should reflect the same dependency-driven freshness model as jaunt build, including modules invalidated by an upstream dependency API change.
Clean
Remove generated directories under configured source/test roots:
uv run jaunt clean
uv run jaunt clean --dry-run
uv run jaunt clean --json
Watch
Watch source/test roots and rebuild on relevant .py changes:
uv run jaunt watch
uv run jaunt watch --test
uv run jaunt watch --json
When --json is enabled, watch emits one JSON object per rebuild cycle (the watch envelope), not nested build/test JSON documents.
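If the per-cycle envelopes are emitted one JSON object per line (an assumption; the exact framing is not specified here), a consumer can stream them like this:

```python
import io
import json

def iter_watch_envelopes(stream):
    """Yield one parsed envelope dict per rebuild cycle, assuming
    line-delimited JSON output from `jaunt watch --json`."""
    for line in stream:
        line = line.strip()
        if line:
            yield json.loads(line)

# Example with a fake two-cycle stream:
fake = io.StringIO('{"cycle": 1}\n{"cycle": 2}\n')
print([env["cycle"] for env in iter_watch_envelopes(fake)])  # [1, 2]
```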
MCP
Start the stdio MCP server:
uv run jaunt mcp serve
uv run jaunt mcp serve --root /path/to/project
mcp requires the optional dependency: pip install jaunt[mcp].
Skills
Manage project-local skills under .agents/skills/:
uv run jaunt skill list
uv run jaunt skill show rich
uv run jaunt skill add rich --description "Rich usage notes" --lib rich
uv run jaunt skill remove rich -f
uv run jaunt skill import --from /path/to/skills-dir
uv run jaunt skill refresh
uv run jaunt skill build rich
Two skill commands matter most when you are using the Aider runtime:
jaunt skill build <name>: expand or refine a user-managed, checked-in skill
jaunt skill refresh: refresh Jaunt-managed, auto-generated skills
Both commands use the configured internal runtime, so agent.engine = "aider"
applies here too.
Note: jaunt skill build <name> expects the skill to exist already and to have
library metadata, usually from jaunt skill add <name> --lib <package>.
Common Flags
--root /path/to/project: override project discovery (otherwise Jaunt searches upward for jaunt.toml).
--config /path/to/jaunt.toml: override config path.
--target MODULE[:QUALNAME]: restrict work to one or more modules (currently module-level; :QUALNAME is ignored for filtering).
--no-infer-deps: disable best-effort dependency inference (explicit deps= still applies).
--json: emit structured JSON output for agent/CI workflows.
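The --root flag overrides the upward search for jaunt.toml; the discovery itself can be sketched as a walk through parent directories (illustrative, not Jaunt's code):

```python
from pathlib import Path

def find_project_root(start):
    """Return the first ancestor directory (including start itself)
    containing a jaunt.toml, or None if discovery fails."""
    start = Path(start).resolve()
    for d in (start, *start.parents):
        if (d / "jaunt.toml").is_file():
            return d
    return None
```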
Exit Codes
0: success
2: config/discovery/dependency-cycle errors
3: generation errors (LLM/backend/validation/import)
4: pytest failure (only when jaunt test actually runs pytest)
Note: jaunt test runs pytest only on the generated test files it just wrote (not the entire suite). Run pytest separately for a full test run.
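For CI scripting, the documented exit codes can be mapped to messages; a small helper whose mapping mirrors the list above (the helper itself is illustrative, not part of Jaunt):

```python
# Documented jaunt exit codes -> human-readable outcomes.
EXIT_MEANINGS = {
    0: "success",
    2: "config/discovery/dependency-cycle error",
    3: "generation error (LLM/backend/validation/import)",
    4: "pytest failure",
}

def describe_exit(code):
    """Human-readable label for a jaunt exit code."""
    return EXIT_MEANINGS.get(code, f"unknown exit code {code}")

print(describe_exit(3))  # generation error (LLM/backend/validation/import)
```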
Eval Results Snapshot (2026-02-15 UTC)
The table below captures the eval runs executed during provider bring-up and reasoning-control testing.
| Run (UTC) | Mode | Target | Reasoning | Passed | Failed | Skipped | Total | Notes |
|---|---|---|---|---|---|---|---|---|
| 2026-02-15T21-34-58Z | single | cerebras:gpt-oss-120b | none | 0 | 10 | 0 | 10 | Missing cerebras-cloud-sdk dependency |
| 2026-02-15T21-35-17Z | single | cerebras:gpt-oss-120b | none | 0 | 10 | 0 | 10 | Cerebras 402 payment_required quota/billing error |
| 2026-02-15T21-36-54Z | single | cerebras:gpt-oss-120b | none | 10 | 0 | 0 | 10 | All eval cases passed |
| 2026-02-15T22-01-24Z-custom-compare | compare | cerebras:gpt-oss-120b | low | 10 | 0 | 0 | 10 | All eval cases passed |
| 2026-02-15T22-01-24Z-custom-compare | compare | openai:gpt-5.2 | none | 10 | 0 | 0 | 10 | All eval cases passed |
| 2026-02-15T22-01-24Z-custom-compare | compare | anthropic:opus-4.6 | none | 0 | 10 | 0 | 10 | Anthropic 404 not_found_error for model name |
| 2026-02-15T22-04-19Z-custom-compare | compare | cerebras:gpt-oss-120b | low | 10 | 0 | 0 | 10 | All eval cases passed |
| 2026-02-15T22-04-19Z-custom-compare | compare | openai:gpt-5.2 | none | 10 | 0 | 0 | 10 | All eval cases passed |
| 2026-02-15T22-04-19Z-custom-compare | compare | anthropic:claude-haiku-4-5 | none | 9 | 1 | 0 | 10 | One assertion failure (example_slugify_smoke) |
Raw artifacts are under examples/expr_eval/.jaunt/evals/ in the repository.
Next: Configuration.