# OpenAI Backend

How the default backend calls OpenAI and how prompts are loaded.
This page covers Jaunt's legacy direct OpenAI backend. If you are using `agent.engine = "aider"`, see Aider Runtime instead.
The default backend is `jaunt.generate.openai_backend.OpenAIBackend`:

- reads the API key from `os.environ[llm.api_key_env]`
- requires the OpenAI SDK to be installed (`pip install jaunt[openai]`)
- uses the OpenAI Python SDK (`openai.AsyncOpenAI`)
- calls `chat.completions.create(model=..., messages=[...])`
- passes `reasoning_effort` when `llm.reasoning_effort` is configured
- prefers structured output (`response_format` with `json_schema`) for module generation
- strips a single top-level markdown fence (`` ``` ... ``` ``) when fallback text mode is used
- retries transient API failures with exponential backoff
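The last two behaviors can be sketched without the OpenAI SDK. The function and parameter names below are hypothetical illustrations, not Jaunt's actual internals; they show the general shape of fence stripping and exponential-backoff retry described above.

```python
import asyncio
import random


def strip_markdown_fence(text: str) -> str:
    """Remove one top-level ```...``` fence, if present (hypothetical helper)."""
    lines = text.strip().splitlines()
    if len(lines) >= 2 and lines[0].startswith("```") and lines[-1].strip() == "```":
        return "\n".join(lines[1:-1])
    return text


async def with_backoff(call, retries=3, base_delay=1.0):
    """Retry an async callable, doubling the delay after each failure."""
    for attempt in range(retries):
        try:
            return await call()
        except Exception:
            if attempt == retries - 1:
                raise  # out of attempts: surface the last error
            # exponential backoff with a little jitter
            await asyncio.sleep(base_delay * 2**attempt + random.random() * 0.1)
```

For example, `strip_markdown_fence('```json\n{"a": 1}\n```')` returns the bare JSON payload, while text without a fence passes through unchanged.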
Prompt templates live in `src/jaunt/prompts/` and are packaged with the wheel/sdist.
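Because the templates ship inside the package, they can be read with the standard-library `importlib.resources` API rather than filesystem paths. The helper below is a hypothetical sketch, not Jaunt's actual loader:

```python
from importlib import resources


def load_text_resource(package: str, name: str) -> str:
    """Read a text file packaged inside `package` (hypothetical helper).

    Jaunt's prompts would live under the `jaunt` package's `prompts/`
    directory, e.g. load_text_resource("jaunt", "prompts/<template name>").
    """
    return (resources.files(package) / name).read_text(encoding="utf-8")
```

Reading via `importlib.resources` works whether the package is installed from a wheel, an sdist, or an editable install.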
Next: Limitations.