Configuration

Every ANIP service is configured through a service config object. This page covers the key configuration options: storage, authentication, trust level, and runtime behavior.

Storage

ANIP manages its own storage for audit logs, delegation tokens, checkpoints, and internal coordination state. You configure it with a connection string — the runtime handles schema creation, migrations, and all database operations internally.

Supported backends

| Backend | Connection string | Use case |
|---|---|---|
| In-memory | `"memory"` or `":memory:"` | Development, testing |
| SQLite | `"sqlite:///path/to/db"` | Single-instance production |
| PostgreSQL | `"postgres://user:pass@host:5432/db"` | Multi-replica production |

These are the three backends ANIP supports today. Other databases (MySQL, MongoDB, etc.) are not currently supported.

ANIP storage is separate from your application database

ANIP's storage is for protocol state only — audit logs, tokens, checkpoints, and leases. It does not store your application data. Your capability handlers can use any database, ORM, or data layer you want (SQLAlchemy, Prisma, GORM, Hibernate, Entity Framework — anything). The ANIP storage configuration only affects the protocol runtime's own state.

Using ORMs in your capability handlers

Your capability handlers are normal application code — they can use any ORM, database client, or data layer. ANIP doesn't interfere with your application's data access patterns.

```python
from sqlalchemy.orm import Session
from anip_service import Capability

def search_flights_handler(ctx, params):
    # Use your own database connection — ANIP doesn't touch it
    with Session(your_engine) as session:
        flights = session.query(Flight).filter(
            Flight.origin == params["origin"],
            Flight.destination == params["destination"],
        ).all()
        return {"flights": [f.to_dict() for f in flights]}

search_flights = Capability(
    name="search_flights",
    description="Search available flights",
    side_effect="read",
    scope=["travel.search"],
    handler=search_flights_handler,
)
```

The key point: ANIP's storage config is for the protocol runtime only. Your handlers talk to your own database with your own ORM. The two never mix.

In-memory (development)

No persistence — data is lost when the process exits. Good for tests and development.

```python
service = ANIPService(
    service_id="my-service",
    capabilities=[...],
    storage="memory",  # or omit — memory is the default
    authenticate=...,
)
```

SQLite (single-instance production)

Persistent local storage. Good for single-process deployments.

```python
service = ANIPService(
    service_id="my-service",
    capabilities=[...],
    storage="sqlite:///anip.db",
    authenticate=...,
)
```

PostgreSQL (cluster production)

Shared storage for multi-replica deployments. Required for horizontal scaling.

```python
service = ANIPService(
    service_id="my-service",
    capabilities=[...],
    storage="postgres://user:pass@host:5432/anip",
    authenticate=...,
)
```

The runtime creates all required tables automatically on first connection — just point it at an empty PostgreSQL database. With PostgreSQL, multiple replicas can run behind a load balancer with automatic coordination. See Horizontal Scaling for details.

Authentication

ANIP supports multiple authentication methods that can be used simultaneously.

API keys

The simplest path — map bearer strings to principal identities. See Authentication for examples in all languages.
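A minimal sketch of what such a mapping might look like, assuming ANIP accepts a callable that receives the bearer string and returns a principal identifier, or `None` to reject. The exact callback signature may differ; the key names and principals here are made up:

```python
# Hypothetical API-key table: bearer string -> principal identity.
API_KEYS = {
    "sk-alice-123": "human:alice@example.com",
    "sk-batch-ops": "service:batch-runner",
}

def authenticate(bearer_token):
    """Return the principal for a known API key, or None to reject."""
    return API_KEYS.get(bearer_token)
```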

OIDC / OAuth2

Validate external JWTs from any OIDC-compliant identity provider (Keycloak, Auth0, Okta, Azure AD, etc.):

Set environment variables:

```bash
OIDC_ISSUER_URL=https://keycloak.example.com/realms/anip
OIDC_AUDIENCE=my-service   # defaults to service_id
# OIDC_JWKS_URL=...        # optional override
```

The service auto-discovers the OIDC configuration from the issuer URL, validates incoming JWTs, and maps claims to ANIP principals:

  - `email` claim → `human:{email}`
  - `sub` claim → `oidc:{sub}`

API keys continue to work alongside OIDC tokens.
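The claim mapping above can be sketched as a plain function. `principal_from_claims` is not an ANIP API, and the email-before-sub precedence is an assumption for illustration:

```python
def principal_from_claims(claims):
    """Map validated JWT claims to an ANIP principal string.

    Illustrative only; assumes the email claim takes precedence
    over sub when both are present.
    """
    if "email" in claims:
        return f"human:{claims['email']}"
    if "sub" in claims:
        return f"oidc:{claims['sub']}"
    return None  # no usable identity claim
```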

Trust level

The trust setting controls the cryptographic trust posture of the service:

| Level | What it means | When to use |
|---|---|---|
| `"declarative"` | No signing — capabilities are declared but not cryptographically verified | Development, testing |
| `"signed"` | Manifest and tokens are signed with the service's key pair, JWKS published | Production |
| `"anchored"` | Audit checkpoints are anchored to external trust sources | Compliance, regulated environments |

```python
service = ANIPService(
    service_id="my-service",
    capabilities=[...],
    trust="signed",
    key_path="keys/",  # directory for key storage
    authenticate=...,
)
```

When trust is "signed" or higher, the runtime generates an Ed25519 key pair on first run (stored in key_path) and uses it to sign manifests, delegation tokens, and checkpoints.
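For reference, the key material involved can be sketched with the `cryptography` package. This is not ANIP's internal code, just an illustration of an Ed25519 key pair and the corresponding JWKS entry shape (RFC 8037); the `kid` is made up:

```python
import base64

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Generate an Ed25519 key pair, as the runtime does on first run.
private_key = Ed25519PrivateKey.generate()
public_raw = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
)

# An Ed25519 public key in JWKS form (RFC 8037): kty "OKP", crv "Ed25519",
# x = unpadded base64url of the raw 32-byte public key.
jwk = {
    "kty": "OKP",
    "crv": "Ed25519",
    "kid": "example-key-1",  # hypothetical key id
    "x": base64.urlsafe_b64encode(public_raw).rstrip(b"=").decode("ascii"),
}

# Sign and verify, as the runtime does for manifests, tokens, and checkpoints.
signature = private_key.sign(b"manifest bytes")
private_key.public_key().verify(signature, b"manifest bytes")  # raises on failure
```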

Key management

How ANIP keys work (and how they differ from OIDC)

If you've worked with Keycloak, Auth0, or any OAuth2/OIDC identity provider, you're used to JWKS as part of an identity trust chain — the IdP's JWKS proves that tokens were issued by a trusted authority, and relying parties use it to verify identity claims.

ANIP JWKS is different. It's a verification surface, not a trust anchor.

| | OIDC / OAuth2 JWKS | ANIP JWKS |
|---|---|---|
| Scope | Organization-wide identity provider | Per-service |
| What it verifies | "This token was issued by our IdP" | "This manifest/token/checkpoint was signed by this service" |
| Trust source | The IdP is the trust anchor | Trust comes from deployment context, not the key itself |
| Key management | Centralized (IdP manages rotation) | Per-service (each service has its own key pair) |
| Who rotates | IdP admin or platform automation | Service operator or platform automation |

ANIP JWKS answers: which public keys verify artifacts from this service?

It does not answer: should I trust this service at all?

Trust comes from the wider deployment context — transport security, platform policy, service identity, and optionally anchored checkpoints that provide external verification.

Zero-config for development

In development and local use, key management is invisible:

  1. Service starts with trust: "signed" and a key_path
  2. If no key exists at that path, the runtime generates an Ed25519 key pair automatically
  3. The public key is published at /.well-known/jwks.json
  4. The private key signs manifests, tokens, and checkpoints
  5. Keys persist across restarts (same key_path = same keys = consistent verification)

You don't need to generate PEM files, configure certificates, or set up any key infrastructure. Just run the service.
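A hypothetical client-side helper illustrating the document shape served at `/.well-known/jwks.json`. The `kid` and `x` values in the sample are made up, and `ed25519_keys_by_kid` is not part of ANIP:

```python
def ed25519_keys_by_kid(jwks):
    """Index a JWKS document's Ed25519 entries by their kid."""
    return {
        k["kid"]: k["x"]
        for k in jwks.get("keys", [])
        if k.get("kty") == "OKP" and k.get("crv") == "Ed25519"
    }

# A made-up document in the shape served at /.well-known/jwks.json.
sample_jwks = {
    "keys": [
        {"kty": "OKP", "crv": "Ed25519", "kid": "key-2024", "x": "fake-b64url-bytes"},
    ]
}
```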

Production key management

In production, all replicas must use the same signing key (see Horizontal Scaling). Options:

| Approach | How it works | Best for |
|---|---|---|
| Shared file path | All replicas mount the same key directory | Simple deployments |
| Kubernetes Secret | Mount a Secret at `key_path`, shared across the Deployment | Kubernetes |
| KMS-backed | Custom KeyManager delegates signing to AWS KMS, GCP Cloud KMS, or HashiCorp Vault | High-security / regulated |

Key rotation

When rotating keys:

  1. Deploy the new key material to all replicas
  2. During the rollover window, the JWKS publishes both old and new public keys
  3. New artifacts are signed with the new key
  4. Old artifacts remain verifiable with the old key (matched by kid)
  5. Remove the old key after the rollover/retention window

Important: Coordinate key rotation as an atomic configuration change (e.g., update the Kubernetes Secret, then trigger a rolling restart). A rolling deploy that updates one replica at a time creates a window where different replicas sign with different keys.
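During the rollover window (step 2 above), the JWKS endpoint might look like the following. The `kid` values and `x` placeholders are illustrative only:

```json
{
  "keys": [
    {"kty": "OKP", "crv": "Ed25519", "kid": "2024-06-key", "x": "old-public-key-b64url"},
    {"kty": "OKP", "crv": "Ed25519", "kid": "2024-09-key", "x": "new-public-key-b64url"}
  ]
}
```

New artifacts carry the new `kid`; verifiers match each artifact's `kid` against this set, so old artifacts keep verifying until the old entry is removed.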

Operational checklist

| Environment | Key setup | Rotation | Monitoring |
|---|---|---|---|
| Development | Automatic (zero config) | Not needed | Not needed |
| Staging | Shared file or Secret | Manual or automated | Verify JWKS endpoint returns expected `kid` |
| Production | Kubernetes Secret or KMS | Automated with rollover window | Alert on JWKS `kid` mismatch across replicas |

Checkpoint policy

Control how often Merkle checkpoints are generated over the audit log:

```python
from anip_service import CheckpointPolicy

service = ANIPService(
    service_id="my-service",
    capabilities=[...],
    checkpoint_policy=CheckpointPolicy(interval_seconds=60),
    authenticate=...,
)
```
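To make "Merkle checkpoint" concrete, here is a generic Merkle-root computation over audit-entry bytes. It illustrates the idea only; ANIP's actual leaf encoding, odd-node handling, and domain separation may differ:

```python
import hashlib

def merkle_root(leaves):
    """Merkle root (hex) over a list of audit-entry byte strings.

    Generic illustration, not ANIP's exact construction.
    """
    if not leaves:
        return hashlib.sha256(b"").hexdigest()
    # Leaf level: hash each entry.
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    # Pairwise-hash upward until a single root remains.
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate the last node on odd levels
        level = [
            hashlib.sha256(level[i] + level[i + 1]).digest()
            for i in range(0, len(level), 2)
        ]
    return level[0].hex()
```

A checkpoint sealing the log every `interval_seconds` commits to every entry beneath its root, so any later tampering with a logged entry changes the root and is detectable.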

Next steps