
Limitations / Gotchas

Current behavioral constraints in the MVP implementation.

Jaunt is an MVP. These are the sharp edges you should know about.

Provider Support

The CLI supports three values for llm.provider: "openai", "anthropic", and "cerebras".
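For orientation, a minimal config sketch using those keys (the key names llm.provider and llm.api_key_env come from this page; the TOML layout and values are illustrative, so adjust to your config file's actual shape):

```toml
[llm]
provider = "anthropic"              # one of "openai", "anthropic", "cerebras"
# api_key_env = "MY_ANTHROPIC_KEY"  # optional; see the Aider section below
```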

Practical gotcha: provider SDKs are optional dependencies. You need the matching extra installed in the environment that runs Jaunt:

  • pip install jaunt[openai]
  • pip install jaunt[anthropic]
  • pip install jaunt[cerebras]
  • or pip install jaunt[all]

Those provider extras now include Jaunt's default Aider runtime too. If you manage dependencies manually, make sure the environment includes both the provider SDK and aider-chat.

Known gotcha: aider-chat 0.86.2 currently pins numpy==1.26.4. That means jaunt[all] can fail to resolve in environments that already require numpy>=2.

If you hit that conflict:

  • prefer running Jaunt in an isolated tool environment, for example: uv tool run --from 'jaunt[all]' jaunt build --root .
  • or install jaunt[all-sdk] instead, which keeps the same Jaunt CLI and provider/tooling extras but omits aider-chat

Hardcoded __generated__ Dir Name

Runtime forwarding for @jaunt.magic reads JAUNT_GENERATED_DIR (default: __generated__). jaunt build/jaunt test set this env var in-process, but that does not persist across separate shell sessions or app runtimes.

If you use a custom paths.generated_dir, set JAUNT_GENERATED_DIR in the environment where your app runs. Otherwise forwarding will still look under __generated__.
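The lookup behaves roughly like this (a simplified sketch of the behavior described above, not Jaunt's actual code):

```python
import os

# Runtime forwarding reads JAUNT_GENERATED_DIR and falls back to the
# hardcoded default when the variable is unset.
def generated_dir() -> str:
    return os.environ.get("JAUNT_GENERATED_DIR", "__generated__")

# With a custom paths.generated_dir, export the same value in the
# environment that runs your app; otherwise the default wins.
os.environ["JAUNT_GENERATED_DIR"] = "_gen"
```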

Prompt Overrides

The [prompts] config keys (build_system, build_module, test_system, test_module) are treated as file paths by the backend. If you set them, those files must exist on disk and contain the full prompt text — they're not inline strings.

If you want to tweak prompts, copy the defaults from src/jaunt/prompts/, modify them, and version the prompt files alongside your repo.
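A sketch of what a file-path override might look like (the four key names come from this page; the paths are examples, not defaults):

```toml
[prompts]
build_system = "prompts/build_system.md"   # must exist on disk
build_module = "prompts/build_module.md"
test_system  = "prompts/test_system.md"
test_module  = "prompts/test_module.md"
```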

Dependency Context Plumbing

The dependency DAG, ordering, and staleness propagation all work correctly. However, the backend does not currently pass rich "here is the generated dependency source" context to dependents. The LLM generating a dependent spec doesn't see what its dependencies actually implemented.

In practice, this means you may need to be more explicit in docstrings (or add prompt= context) when a spec depends on non-trivial behavior from another generated spec. Ordering and digest-based staleness still work — it's only the prompt context that's limited.
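For example, a spec that leans on another generated spec can restate that dependency's contract in its own docstring. This is a sketch only: the @jaunt.magic decorator name comes from this page, the stub below stands in for the real jaunt package so the snippet is self-contained, and parse_line is a hypothetical dependency.

```python
class _JauntStub:
    # Stand-in for the real jaunt package (illustration only).
    @staticmethod
    def magic(fn):
        return fn

jaunt = _JauntStub()

@jaunt.magic
def summarize_scores(lines: list[str]) -> dict[str, float]:
    """Turn 'name:score' lines into a name -> score mapping.

    Depends on parse_line(), whose generated source is NOT shown to this
    spec's LLM, so its contract is restated here: parse_line(line) returns
    (name, score) with score already parsed as a float in [0.0, 1.0], and
    raises ValueError on malformed input.
    """
    ...
```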

Auto-Generated PyPI Skills

The skills system adds extra network calls (to PyPI for README fetches) and extra OpenAI calls (to generate skill docs) during jaunt build. These are best-effort: failures emit a warning and the build continues without the missing skills.

This means build output can vary based on your environment, network connectivity, and what's installed in the active venv. If reproducibility matters, commit the generated .agents/skills/ directory.

Aider + Custom API Key Env Names

The Aider runtime currently expects the provider's canonical API key env var name (OPENAI_API_KEY, ANTHROPIC_API_KEY, or CEREBRAS_API_KEY).

Jaunt supports llm.api_key_env pointing at a different variable name, but for agent.engine = "aider" it has to remap that key through os.environ while the task runs. To keep concurrent Aider tasks from clobbering each other's credentials, Jaunt protects that remap with a process-wide lock.

Effectively:

  • canonical provider env var name: parallel Aider tasks can run concurrently
  • custom llm.api_key_env name: Aider tasks stay correct, but they serialize
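The remap described above can be sketched roughly like this (illustrative only, not Jaunt's actual code; the names are generic):

```python
import os
import threading
from contextlib import contextmanager

_ENV_LOCK = threading.Lock()  # process-wide: custom-key tasks serialize here

@contextmanager
def remapped_api_key(canonical_name: str, custom_name: str):
    """Temporarily expose the custom env var under the provider's canonical
    name, holding a lock for the task's duration so concurrent tasks don't
    clobber each other's credentials."""
    with _ENV_LOCK:
        previous = os.environ.get(canonical_name)
        os.environ[canonical_name] = os.environ[custom_name]
        try:
            yield
        finally:
            # Restore whatever was there before (or remove it entirely).
            if previous is None:
                os.environ.pop(canonical_name, None)
            else:
                os.environ[canonical_name] = previous
```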

This is a known limitation, not the intended long-term design. The follow-up is to pass the resolved API key directly into the Aider/litellm model path so the global env remap can be removed.

Next: Development.
