Quickstart
Zero to generated code in 5 minutes (copy-pasteable).
This is the shortest path to an "aha": write one tiny spec, run jaunt build, and call real generated code like normal Python.
jaunt build calls your configured LLM provider (default: OpenAI) and will spend tokens. Set OPENAI_API_KEY first.
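Before a build, a quick sanity check that the key is actually set can save a failed run. A minimal sketch (plain Python, not part of Jaunt):

```python
import os

def provider_key_is_set(env_var: str = "OPENAI_API_KEY") -> bool:
    """True if the given provider API key env var is set and non-empty."""
    return bool(os.environ.get(env_var))

if not provider_key_is_set():
    print("OPENAI_API_KEY is not set; set it before running `jaunt build`.")
```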
Project Layout
Create a small "src layout" project. The __init__.py files can be empty:
myproj/
  jaunt.toml
  src/
    my_app/
      __init__.py       # empty
      specs.py          # your @jaunt.magic specs
  tests/
    __init__.py         # empty
    specs_email.py      # your @jaunt.test specs

1) Install
Jaunt is an MVP and is typically installed from a local checkout.
If you’re working from the Jaunt repo:
uv sync
export OPENAI_API_KEY=...
uv run jaunt --version

If you’re adding Jaunt to your own project, install it however you vend it (local path, internal index, or git). You also need pytest for jaunt test:
# Example (from package index):
uv add "jaunt[openai]" # or jaunt[anthropic], jaunt[cerebras]
uv add pytest
export OPENAI_API_KEY=...

If you install Jaunt from a local editable path, also install the provider SDK
and Aider runtime you plan to use in that environment. The package extras above
already bundle both. For a manual local setup, add the matching provider SDK
plus aider-chat.
If you prefer Jaunt's legacy direct-SDK runtime instead of the default Aider runtime,
set this in jaunt.toml:
[agent]
engine = "legacy"

2) Create jaunt.toml
myproj/jaunt.toml:
version = 1
[paths]
source_roots = ["src"]
test_roots = ["tests"]
generated_dir = "__generated__"

3) Write Your First Spec (@jaunt.magic)
myproj/src/my_app/specs.py:
from __future__ import annotations
import jaunt
@jaunt.magic()
def normalize_email(raw: str) -> str:
"""
Normalize an email address for stable comparisons.
Rules:
- Strip surrounding whitespace.
- Lowercase the whole string.
- Must contain exactly one "@".
- Local-part and domain must both be non-empty after splitting.
Errors:
- Raise ValueError if `raw` is invalid by the rules above.
"""
raise RuntimeError("spec stub (generated at build time)")This docstring is the contract. The type hints matter. The body is a placeholder.
4) Generate Code
From myproj/:
jaunt build

Jaunt reads your spec, sends it to OpenAI, validates the output, and writes a real module to:

src/my_app/__generated__/specs.py

5) Use It (Runtime Forwarding)
You import the spec module. The decorator forwards your call into __generated__/.
PYTHONPATH=src python - <<'PY'
from my_app.specs import normalize_email
print(normalize_email(" [email protected] ")) # -> "[email protected]"
PY

If you call it before building, you’ll get jaunt.JauntNotBuiltError with a hint to run jaunt build.
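Conceptually, the forwarding can be pictured like this. This is a simplified sketch of the idea only, not Jaunt's actual implementation (the module-naming scheme and the lack of error handling are assumptions):

```python
import importlib
from functools import wraps

def magic():
    """Simplified idea: forward calls on a spec into its __generated__ sibling."""
    def decorate(spec_fn):
        @wraps(spec_fn)
        def forward(*args, **kwargs):
            # e.g. my_app.specs -> my_app.__generated__.specs
            pkg, _, mod = spec_fn.__module__.rpartition(".")
            gen = importlib.import_module(f"{pkg}.__generated__.{mod}")
            return getattr(gen, spec_fn.__name__)(*args, **kwargs)
        return forward
    return decorate
```

The key point is that your spec module stays importable and the call is resolved against the generated module at call time, which is why an unbuilt project can raise a clear error instead of running the stub.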
6) Add A Test Spec (@jaunt.test)
myproj/tests/specs_email.py:
from __future__ import annotations
import jaunt
@jaunt.test()
def test_normalize_email__lowercases_and_strips() -> None:
"""
Assert normalize_email:
- strips surrounding whitespace
- lowercases
- rejects invalid inputs like "no-at-sign" (ValueError)
Examples:
- normalize_email(" [email protected] ") == "[email protected]"
"""
raise AssertionError("spec stub (generated at test time)")Now generate tests and run pytest:
PYTHONPATH=src jaunt test

Jaunt writes generated tests under:
tests/__generated__/specs_email.py
7) Optional: Switch To Aider Mode
The commands stay the same. You only change the internal runtime in
jaunt.toml:
[agent]
engine = "aider"
[aider]
build_mode = "architect"
test_mode = "code"
skill_mode = "code"
editor_model = ""
map_tokens = 0
save_traces = false

Then keep using:
jaunt build
PYTHONPATH=src jaunt test

If you use Aider, the safest high-throughput setup is to leave
llm.api_key_env on the provider's canonical env var name, such as
OPENAI_API_KEY.
What Next
Next: How It Works.