pi (pi-coding-agent)

Use Northern Inference with @mariozechner/pi-coding-agent

pi (@mariozechner/pi-coding-agent, source badlogic/pi-mono) is an MIT-licensed terminal coding harness. It speaks OpenAI Chat Completions natively, so NI works as a custom provider with no patching.

Install

npm install -g @mariozechner/pi-coding-agent

The binary is named pi.

Config file

~/.pi/agent/models.json (this exact path; pi reads it on startup AND on every /model switch):

{
  "providers": {
    "northerninference": {
      "baseUrl": "https://northerninference.ca/v1",
      "api": "openai-completions",
      "apiKey": "ni_live_REPLACE_WITH_YOUR_KEY",
      "models": [
        { "id": "azure/DeepSeek-V3.2" },
        { "id": "azure/DeepSeek-R1" },
        { "id": "azure/gpt-5.4-2026-03-05" }
      ]
    }
  }
}

Three rules that catch most setup mistakes:

  1. **baseUrl is https://northerninference.ca/v1** - apex domain, no trailing slash, no /chat/completions suffix. pi's OpenAI client appends /chat/completions itself.
  2. **apiKey must start with ni_live_** - that's our key prefix. Don't paste it with quotes around it from somewhere else; use the raw string.
  3. **models[].id is the routing key, not a friendly name.** Use the exact strings from https://northerninference.ca/v1/models. The provider prefix is required (azure/..., bedrock/...).
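The three rules above are mechanical enough to check in a few lines. A minimal sketch that validates the provider block (the JSON is inlined here so it runs standalone; in practice, load ~/.pi/agent/models.json instead):

```python
import json

config = json.loads("""
{
  "providers": {
    "northerninference": {
      "baseUrl": "https://northerninference.ca/v1",
      "api": "openai-completions",
      "apiKey": "ni_live_REPLACE_WITH_YOUR_KEY",
      "models": [{"id": "azure/DeepSeek-V3.2"}]
    }
  }
}
""")

ni = config["providers"]["northerninference"]
# Rule 1: base URL exactly, no trailing slash, no /chat/completions suffix
assert ni["baseUrl"] == "https://northerninference.ca/v1"
# Rule 2: NI key prefix, no stray quotes or whitespace
assert ni["apiKey"].startswith("ni_live_")
# Rule 3: every model id carries a provider prefix (azure/..., bedrock/...)
assert all("/" in m["id"] for m in ni["models"])
print("config ok")
```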

Running

pi

Then in pi, run /model and pick a model from the northerninference provider section.

Tier selection

By default, pi sends OpenAI-shape requests with no extra_body. NI falls through to your API key's default_privacy_tier. To set the tier per-key:

  1. NI portal → Keys → edit your key → set Default Privacy Tier to provider_api (T4) or managed_canadian_cloud (T3).
  2. Save. Subsequent pi calls hit that tier automatically.

To override per-call from pi, you'd need an extension that injects extra_body: {privacy_tier: "..."} into the OpenAI request. pi doesn't expose that in the config schema today. Cleanest path: one NI key per tier (e.g. pi-t3 defaulting to T3, pi-t4 defaulting to T4) and switch via /login or by maintaining two ~/.pi/agent/models.json variants.
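What that injection would have to produce is plain JSON: privacy_tier rides at the top level of the request body, which is where the OpenAI SDK's extra_body merges extra fields. A sketch of the body a per-call override would send (model id and tier value are examples from this page; POST it to the /chat/completions endpoint with your Bearer key):

```python
import json

# OpenAI-shape body pi normally sends, plus the extra field NI reads.
body = {
    "model": "azure/DeepSeek-V3.2",
    "messages": [{"role": "user", "content": "ping"}],
    "privacy_tier": "managed_canadian_cloud",  # or "provider_api"
}
print(json.dumps(body, indent=2))
```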

Troubleshooting

"Invalid API key format" or "401 Unauthorized" with no further detail

Run pi with verbose HTTP logging (if the env var works for pi's transport):

NODE_DEBUG=http,https pi

Confirm the request includes Authorization: Bearer ni_live_.... If the header is missing or empty, pi isn't picking up apiKey from the config. Common causes:

- The key field is misnamed in models.json (it's apiKey, camelCase per pi's schema).
- The JSON is malformed. Run python -m json.tool < ~/.pi/agent/models.json to check.
- Another configured provider or a stray environment variable supplies a key, and pi picked one of those instead. Unset them or move the NI provider above the conflicting one.
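Before digging into pi's config handling, it's worth sanity-checking the key string itself, since stray quotes or whitespace from a copy-paste are the usual culprits. A standalone sketch of those checks (paste your own key in place of the placeholder):

```python
key = "ni_live_REPLACE_WITH_YOUR_KEY"  # paste your key here to check locally

# The header value pi should emit as Authorization.
header = f"Bearer {key}"

assert key == key.strip(), "leading/trailing whitespace in key"
assert not key.startswith(('"', "'")), "quotes pasted around the key"
assert key.startswith("ni_live_"), "wrong key prefix"
print(header[:20])
```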

"Tier unavailable" 400 error

Your key's default tier doesn't match where the model is deployed. See Tier selection above.

"Model not found" 400 error

Your models[].id doesn't match a deployed model. Visit https://northerninference.ca/v1/models to see the current routing keys.

Rotating your key

If you've ever pasted your ni_live_... key into a chat, IRC, screenshot, or pair-debug session, treat it as compromised. NI portal → Keys → revoke → create a fresh one → update ~/.pi/agent/models.json.


Source: tests/user_run_tests/integrations/pi.md. Spot a problem? Let us know.