# Providers
This guide explains how to configure provider-backed model access for package examples, FastAPI apps, and source demos. It does not compare model quality or recommend one provider over another.
This guide is for developers who want to run real hosted model calls from SwarmForge. It assumes that you know which provider you want to use and can set environment variables or pass configuration in code.
After reading this guide, you should be able to:

- set the required environment variables for your integration path
- configure `ModelConfig(...)`
- understand which variables are used by package examples, FastAPI apps, and the demo UI
The provider layer exposes one OpenAI-compatible client wrapper and one configuration object. Every runnable runtime example in these docs uses this layer, so you should explicitly choose both the provider and the model before you run anything.
## Environment Variables By Service
Use the lists below instead of relying on `.env.example` being open locally.
### Package runtime examples
Applies to Getting Started, Create Your First Agent, Create Your First Multi-Agent Swarm, and Orchestration.
Always set:

- `MODEL_PROVIDER`
- `LLM_MODEL`
Then set the provider-specific auth variable for the provider you chose:

- OpenRouter: `OPENROUTER_API_KEY`
- Gemini: `GEMINI_API_KEY` or `GOOGLE_API_KEY`
- OpenAI-compatible `openai`: `OPENAI_API_KEY`
- any other provider name, such as `anthropic-proxy`: `${PROVIDER_NAME_UPPER}_API_KEY`
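The fallback rule for other provider names can be sketched as follows. `auth_env_var` is an illustrative helper, not part of SwarmForge, and it assumes hyphens map to underscores in the variable name:

```python
def auth_env_var(provider: str) -> str:
    """Map a provider name to its expected API key variable.

    Mirrors the documented convention: known providers use fixed
    variable names, and any other name is upper-cased with hyphens
    replaced by underscores, then suffixed with _API_KEY.
    """
    fixed = {
        "openrouter": "OPENROUTER_API_KEY",
        "gemini": "GEMINI_API_KEY",
        "openai": "OPENAI_API_KEY",
    }
    return fixed.get(provider, provider.upper().replace("-", "_") + "_API_KEY")

print(auth_env_var("anthropic-proxy"))  # ANTHROPIC_PROXY_API_KEY
```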
Optional OpenRouter attribution headers:

- `OPENROUTER_SITE_URL`
- `OPENROUTER_APP_NAME`
### FastAPI server defaults
Applies to API, `examples/fastapi_swarm.py`, and `examples/fastapi_server.py`.
FastAPI reads provider configuration from the server environment or from `default_model_config` in your app wiring. Set the same provider variables as the package runtime examples:

- `MODEL_PROVIDER`
- `LLM_MODEL`
- the matching API key variable for that provider
Optional server bind variables:

- `SWARMFORGE_HOST` (default: `127.0.0.1`)
- `SWARMFORGE_PORT` (default: `8000`)
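The fallback behavior for the bind variables can be sketched like this; `bind_address` is an illustrative helper, not SwarmForge API:

```python
import os

def bind_address(env=os.environ) -> tuple[str, int]:
    """Resolve host/port from the optional SWARMFORGE_* variables,
    falling back to the documented defaults."""
    host = env.get("SWARMFORGE_HOST", "127.0.0.1")
    port = int(env.get("SWARMFORGE_PORT", "8000"))
    return host, port

print(bind_address({}))  # ('127.0.0.1', 8000)
```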
### Demo UI
Applies to the local `demo-ui/` app when it is connected to `examples/fastapi_swarm.py`.
The demo UI reads only these variables from the root `.env`:

- `MODEL_PROVIDER` (optional, defaults the provider selector)
- `LLM_MODEL` (optional, defaults the model field)
- `SWARMFORGE_HOST` (optional, defaults the API host to `127.0.0.1`)
- `SWARMFORGE_PORT` (optional, defaults the API port to `8000`)
The demo UI does not read provider API keys directly. Those stay on the server side.
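A minimal sketch of how reading only that subset could work; `parse_demo_env` and `DEMO_KEYS` are illustrative, not the demo UI's actual loader:

```python
# Keys the demo UI cares about, with the documented defaults.
DEMO_KEYS = {
    "MODEL_PROVIDER": None,
    "LLM_MODEL": None,
    "SWARMFORGE_HOST": "127.0.0.1",
    "SWARMFORGE_PORT": "8000",
}

def parse_demo_env(text: str) -> dict:
    """Parse .env-style text, keeping only the demo UI's keys.

    API keys and any other variables are ignored, so they never
    reach the client side.
    """
    values = dict(DEMO_KEYS)
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        key = key.strip()
        if key in values:
            values[key] = value.strip().strip('"')
    return values

env = parse_demo_env("MODEL_PROVIDER=openrouter\nOPENROUTER_API_KEY=sk-or-...\n")
print(env["MODEL_PROVIDER"], env["SWARMFORGE_PORT"])  # openrouter 8000
```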
### OpenRouter conversation script
Applies to `examples/openrouter_conversation.py`.
Required:

- `MODEL_PROVIDER=openrouter`
- `LLM_MODEL`
- `OPENROUTER_API_KEY`
- `OPENROUTER_SITE_URL`
- `OPENROUTER_APP_NAME`
## Supported Modes
- OpenRouter through `https://openrouter.ai/api/v1`
- Gemini OpenAI-compatible mode through `https://generativelanguage.googleapis.com/v1beta/openai/`
- other OpenAI-compatible endpoints by overriding `base_url`, `api_key`, and `model`
## OpenRouter
Copy the repository example config first:
```bash
cp .env.example .env
```

Then set the OpenRouter values in `.env`:
```dotenv
MODEL_PROVIDER=openrouter
LLM_MODEL=openrouter/auto
OPENROUTER_API_KEY=sk-or-...
OPENROUTER_SITE_URL=https://your-app.example
OPENROUTER_APP_NAME="Your App Name"
```

Shell exports still work, but `.env` is the default path used by the examples and tests. Replace `LLM_MODEL` with the exact OpenRouter model slug you want when you need stable behavior instead of router-selected defaults.
Default OpenRouter settings:

- provider: `openrouter`
- model: `openrouter/auto`
- base URL: `https://openrouter.ai/api/v1`
## Gemini OpenAI-compat
Set Gemini values in `.env`:
```dotenv
MODEL_PROVIDER=gemini
LLM_MODEL=gemini-3-flash-preview
GEMINI_API_KEY=...
```

`GOOGLE_API_KEY` is also supported if you prefer that variable name.
Gemini uses:

- provider: `gemini`
- base URL: `https://generativelanguage.googleapis.com/v1beta/openai/`
## OpenAI-compatible Custom Provider
If you are not using OpenRouter or Gemini, set:
```dotenv
MODEL_PROVIDER=openai
LLM_MODEL=gpt-4.1-mini
OPENAI_API_KEY=...
```

If you construct `ModelConfig(...)` directly in code, you can also override `base_url` there for another OpenAI-compatible endpoint.
## Configuration Surface
`ModelConfig` supports:

- `provider`
- `base_url`
- `api_key`
- `model`
- `temperature`
- `max_tokens`
- `site_url`
- `app_name`
- `default_headers`
- `default_chat_params`
### Minimal client setup
```python
from swarmforge.env import require_env_vars
from swarmforge.evaluation.provider import ModelConfig, OpenAIClientWrapper

env = require_env_vars("MODEL_PROVIDER", "LLM_MODEL")

client = OpenAIClientWrapper(ModelConfig())
response = client.chat_completion(
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {
            "role": "user",
            "content": (
                "Reply with one sentence that names the active provider and model: "
                f"{env['MODEL_PROVIDER']} / {env['LLM_MODEL']}"
            ),
        },
    ]
)
print(response.choices[0].message.content)
```

Switch providers by changing `.env` values rather than editing the code sample.
For tool and handoff examples, choose a model that supports tool or function calling.
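For such models, tool definitions follow the OpenAI-compatible function-calling shape. The sketch below shows one such definition; the function name and parameters are illustrative, not part of SwarmForge, and the `tools=` keyword in the commented call is an assumption about the wrapper's signature:

```python
# One OpenAI-compatible tool definition; the function name and
# parameters here are illustrative, not part of SwarmForge.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name."},
            },
            "required": ["city"],
        },
    },
}

# Presumed usage with the wrapper (keyword name assumed):
# response = client.chat_completion(messages=messages, tools=[weather_tool])
```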
## Wrapper Behavior
`OpenAIClientWrapper.chat_completion(...)` passes through:

- chat messages
- optional tool definitions
- optional timeout override
- optional structured `response_format`
- optional provider-specific `extra_params`
## Notes
- `.env` values are loaded automatically from the current working directory upward
- provider-specific headers are applied for OpenRouter attribution
- extra OpenAI-compatible request fields can be passed through `default_chat_params`
- explicit `api_key` values support `${ENV_VAR}` substitution
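The `${ENV_VAR}` substitution behavior can be sketched as follows; `expand_env` is a hypothetical helper, not SwarmForge's implementation, and the choice to leave unset variables untouched is an assumption:

```python
import os
import re

def expand_env(value: str, env=os.environ) -> str:
    """Replace ${VAR} references in a string with environment values.

    Sketches the documented substitution for explicit api_key values;
    unset variables are left as-is here (an assumption).
    """
    def repl(match: re.Match) -> str:
        name = match.group(1)
        return env.get(name, match.group(0))

    return re.sub(r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}", repl, value)

print(expand_env("${MY_KEY}", {"MY_KEY": "sk-demo"}))  # sk-demo
```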