Jaunt
Reference

CLI

Commands, flags, and exit codes.

For a guided, end-to-end introduction, start with Quickstart.

Entry point: jaunt = "jaunt.cli:main" (see pyproject.toml in the repo root).

All generation commands respect [agent].engine. Jaunt defaults to the aider engine; if you switch back to the legacy engine, the CLI commands below are unchanged.
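
For example, a minimal jaunt.toml might pin the engine explicitly (only the [agent].engine key is documented above; the comment values are illustrative):

```toml
# jaunt.toml — engine selection
[agent]
engine = "aider"   # or "legacy"; the CLI commands are the same either way
```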

Init

Create a starter jaunt.toml in the target directory:

uv run jaunt init
uv run jaunt init --root /tmp/myproj
uv run jaunt init --force
uv run jaunt init --json

Build

Generate implementation modules for @jaunt.magic specs:

uv run jaunt build
uv run jaunt build --force
uv run jaunt build --jobs 16
uv run jaunt build --target my_app.specs
uv run jaunt build --no-infer-deps
uv run jaunt build --json

If an upstream module's exported API has changed, jaunt build also regenerates its stale dependents.
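
The dependency-driven freshness model can be sketched roughly as follows. This is a simplification, not Jaunt's actual implementation: assume each generated module records a hash of every upstream module's exported API at build time, and a module is stale when any recorded hash no longer matches.

```python
import hashlib

def api_hash(exported_api: dict) -> str:
    """Hash a module's exported API (name -> signature), order-independent."""
    canonical = repr(sorted(exported_api.items()))
    return hashlib.sha256(canonical.encode()).hexdigest()

def stale_modules(deps, recorded, current_apis):
    """deps: module -> upstream modules; recorded: (module, upstream) -> hash
    captured at last build; current_apis: upstream -> exported API now."""
    stale = set()
    for module, upstreams in deps.items():
        for up in upstreams:
            if recorded.get((module, up)) != api_hash(current_apis[up]):
                stale.add(module)
    return stale

# my_app.cli depends on my_app.core; core's API gained a function, so cli
# is stale and a build would regenerate it (module names are hypothetical).
old_core = {"slugify": "(s: str) -> str"}
new_core = {"slugify": "(s: str) -> str", "truncate": "(s: str, n: int) -> str"}
deps = {"my_app.cli": ["my_app.core"]}
recorded = {("my_app.cli", "my_app.core"): api_hash(old_core)}
print(stale_modules(deps, recorded, {"my_app.core": new_core}))
```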

Test

Generate tests for @jaunt.test specs and run pytest:

uv run jaunt test
uv run jaunt test --no-build
uv run jaunt test --no-run
uv run jaunt test --pytest-args=-k --pytest-args email
uv run jaunt test --json
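
--pytest-args is repeatable, and each occurrence contributes one argv token, so the example above passes -k email to pytest. A sketch of how such an accumulating flag behaves (using argparse directly; this is not Jaunt's code):

```python
import argparse

parser = argparse.ArgumentParser(prog="jaunt test")
# Each --pytest-args occurrence appends one token to the pytest argv.
parser.add_argument("--pytest-args", action="append", default=[],
                    dest="pytest_args", metavar="ARG")

ns = parser.parse_args(["--pytest-args=-k", "--pytest-args", "email"])
print(ns.pytest_args)  # ['-k', 'email']
```

The `=` form (`--pytest-args=-k`) is needed for tokens that start with a dash, so the parser does not mistake them for flags.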

Eval

Run the built-in eval suite against one provider/model or compare multiple.

uv run jaunt eval
uv run jaunt eval --suite codegen
uv run jaunt eval --suite agent
uv run jaunt eval --provider anthropic --model claude-sonnet-4-5-20250929
uv run jaunt eval --compare openai:gpt-5.2 anthropic:claude-haiku-4-5
uv run jaunt eval --json

  • --suite codegen: the original built-in code generation suite
  • --suite agent: the end-to-end, Aider/skills-oriented suite
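
Each --compare argument is a provider:model pair. A minimal parser sketch (illustrative, not Jaunt's code; it tolerates colons inside the model name):

```python
def parse_spec(spec: str) -> tuple[str, str]:
    """Split a 'provider:model' eval spec at the first colon."""
    provider, sep, model = spec.partition(":")
    if not sep or not provider or not model:
        raise ValueError(f"expected provider:model, got {spec!r}")
    return provider, model

print(parse_spec("openai:gpt-5.2"))             # ('openai', 'gpt-5.2')
print(parse_spec("anthropic:claude-haiku-4-5"))
```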

Status

Show stale/fresh modules:

uv run jaunt status
uv run jaunt status --target my_app.specs
uv run jaunt status --json

jaunt status reflects the same dependency-driven freshness model as jaunt build, including modules invalidated by an upstream dependency's API change.

Clean

Remove generated directories under configured source/test roots:

uv run jaunt clean
uv run jaunt clean --dry-run
uv run jaunt clean --json
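
The --dry-run behavior amounts to computing the removal set without deleting it. A rough sketch (the generated-directory name and signature here are hypothetical, not Jaunt's actual layout):

```python
import shutil
from pathlib import Path

def clean(roots: list[Path], generated_dirname: str = "_jaunt_generated",
          dry_run: bool = False) -> list[Path]:
    """Find (and, unless dry_run, remove) generated dirs under each root."""
    found = [p for root in roots
             for p in root.rglob(generated_dirname) if p.is_dir()]
    if not dry_run:
        for p in found:
            shutil.rmtree(p)
    return found  # with dry_run=True: reported, but nothing is deleted
```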

Watch

Watch source/test roots and rebuild on relevant .py changes:

uv run jaunt watch
uv run jaunt watch --test
uv run jaunt watch --json

When --json is enabled, watch emits one JSON object per rebuild cycle (the watch envelope), not nested build/test JSON documents.
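
A consumer can therefore treat watch --json output as one JSON document per line. A sketch of producing and parsing such a stream (the envelope field names here are illustrative, not Jaunt's actual schema):

```python
import json

def emit_cycle(seq: int, changed: list[str], rebuilt: list[str]) -> str:
    """One watch envelope per rebuild cycle, as a single JSON line."""
    return json.dumps({"cycle": seq, "changed": changed, "rebuilt": rebuilt})

stream = "\n".join([
    emit_cycle(1, ["my_app/specs.py"], ["my_app/_generated.py"]),
    emit_cycle(2, ["my_app/other.py"], []),
])
cycles = [json.loads(line) for line in stream.splitlines()]
print([c["cycle"] for c in cycles])  # [1, 2]
```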

MCP

Start the stdio MCP server:

uv run jaunt mcp serve
uv run jaunt mcp serve --root /path/to/project

mcp requires the optional dependency: pip install "jaunt[mcp]" (quote the extra in shells such as zsh).

Skills

Manage project-local skills under .agents/skills/:

uv run jaunt skill list
uv run jaunt skill show rich
uv run jaunt skill add rich --description "Rich usage notes" --lib rich
uv run jaunt skill remove rich -f
uv run jaunt skill import --from /path/to/skills-dir
uv run jaunt skill refresh
uv run jaunt skill build rich

Two skill commands matter most when you are using the Aider runtime:

  • jaunt skill build <name>: expand or refine a user-managed checked-in skill
  • jaunt skill refresh: refresh Jaunt-managed auto-generated skills

Both commands use the configured internal runtime, so agent.engine = "aider" applies here too.

Note: jaunt skill build <name> expects the skill to exist already and to have library metadata, usually from jaunt skill add <name> --lib <package>.
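
Project-local skills live under .agents/skills/, so listing them is essentially directory enumeration. A rough sketch, assuming one subdirectory per skill (the per-skill layout is an assumption, not documented behavior):

```python
from pathlib import Path

def list_skills(project_root: Path) -> list[str]:
    """List skill names as subdirectories of .agents/skills/ (assumed layout)."""
    skills_dir = project_root / ".agents" / "skills"
    if not skills_dir.is_dir():
        return []
    return sorted(p.name for p in skills_dir.iterdir() if p.is_dir())
```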

Common Flags

  • --root /path/to/project: override project discovery (otherwise Jaunt searches upward for jaunt.toml).
  • --config /path/to/jaunt.toml: override config path.
  • --target MODULE[:QUALNAME]: restrict work to one or more modules (currently module-level; :QUALNAME is ignored for filtering).
  • --no-infer-deps: disable best-effort dependency inference (explicit deps= still applies).
  • --json: emit structured JSON output for agent/CI workflows.

Exit Codes

  • 0: success
  • 2: config/discovery/dependency-cycle errors
  • 3: generation errors (LLM/backend/validation/import)
  • 4: pytest failure (only when jaunt test actually runs pytest)

Note: jaunt test runs pytest only on the generated test files it just wrote (not the entire suite). Run pytest separately for a full test run.
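
In CI, the exit code alone distinguishes failure classes. A small wrapper sketch (the category strings mirror the list above; run_jaunt is a hypothetical helper):

```python
import subprocess

EXIT_CATEGORIES = {
    0: "success",
    2: "config/discovery/dependency-cycle error",
    3: "generation error",
    4: "pytest failure",
}

def classify(code: int) -> str:
    """Map a jaunt exit code to a human-readable failure class."""
    return EXIT_CATEGORIES.get(code, f"unexpected exit code {code}")

def run_jaunt(*args: str) -> str:
    """Run a jaunt subcommand and classify its exit status."""
    proc = subprocess.run(["uv", "run", "jaunt", *args])
    return classify(proc.returncode)

print(classify(3))  # generation error
```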

Eval Results Snapshot (2026-02-15 UTC)

The table below captures the eval runs executed during provider bring-up and reasoning-control testing.

| Run (UTC) | Mode | Target | Reasoning | Passed | Failed | Skipped | Total | Notes |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 2026-02-15T21-34-58Z | single | cerebras:gpt-oss-120b | none | 0 | 10 | 0 | 10 | Missing cerebras-cloud-sdk dependency |
| 2026-02-15T21-35-17Z | single | cerebras:gpt-oss-120b | none | 0 | 10 | 0 | 10 | Cerebras 402 payment_required quota/billing error |
| 2026-02-15T21-36-54Z | single | cerebras:gpt-oss-120b | none | 10 | 0 | 0 | 10 | All eval cases passed |
| 2026-02-15T22-01-24Z-custom-compare | compare | cerebras:gpt-oss-120b | low | 10 | 0 | 0 | 10 | All eval cases passed |
| 2026-02-15T22-01-24Z-custom-compare | compare | openai:gpt-5.2 | none | 10 | 0 | 0 | 10 | All eval cases passed |
| 2026-02-15T22-01-24Z-custom-compare | compare | anthropic:opus-4.6 | none | 0 | 10 | 0 | 10 | Anthropic 404 not_found_error for model name |
| 2026-02-15T22-04-19Z-custom-compare | compare | cerebras:gpt-oss-120b | low | 10 | 0 | 0 | 10 | All eval cases passed |
| 2026-02-15T22-04-19Z-custom-compare | compare | openai:gpt-5.2 | none | 10 | 0 | 0 | 10 | All eval cases passed |
| 2026-02-15T22-04-19Z-custom-compare | compare | anthropic:claude-haiku-4-5 | none | 9 | 1 | 0 | 10 | One assertion failure (example_slugify_smoke) |

Raw artifacts are under examples/expr_eval/.jaunt/evals/ in the repository.

Next: Configuration.