Providers
Every LLM interaction goes through the LLMProvider interface.
You can stay fully deterministic, choose one provider, or use hybrid
routing once the AI path is worth enabling.
Four concrete providers plus factory auto-detection
Anthropic, OpenAI, Ollama, and custom OpenAI-compatible endpoints are all supported behind one interface.
Keep the core CI loop deterministic first
Add a provider when you want generation, healing, or crew workflows. The product stays useful even when the AI path is off.
Anthropic (Default)
```shell
export ANTHROPIC_API_KEY=sk-ant-...
```
Supports vision (image analysis) and prompt caching. Used by default for complex tasks like test generation and code analysis.
```typescript
import { AnthropicProvider } from '@yasserkhanorg/impact-gate';

const provider = new AnthropicProvider({
  apiKey: process.env.ANTHROPIC_API_KEY,
});
```
OpenAI
```shell
export OPENAI_API_KEY=sk-...
```
Supports GPT models. Configure as the primary provider when you prefer OpenAI's model family.
```typescript
import { OpenAIProvider } from '@yasserkhanorg/impact-gate';

const provider = new OpenAIProvider({
  apiKey: process.env.OPENAI_API_KEY,
});
```
Ollama (Free, Local)
```shell
export OLLAMA_BASE_URL=http://localhost:11434
export OLLAMA_MODEL=deepseek-r1:7b
```
Runs entirely on your machine with no API costs. Install Ollama, pull a model, and point the tool at your local instance.
Custom Provider
Any OpenAI-compatible endpoint works as a custom provider. Useful for self-hosted models, Azure OpenAI, or other API-compatible services.
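"OpenAI-compatible" means the endpoint accepts requests shaped like the `/v1/chat/completions` API. A minimal sketch of what such a request looks like (the helper name and shapes here are illustrative, not part of the package):

```typescript
// Illustrative: builds a chat-completions request for any
// OpenAI-compatible endpoint (self-hosted model, Azure OpenAI, etc.).
interface ChatMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

function buildChatRequest(
  baseUrl: string,
  apiKey: string,
  model: string,
  messages: ChatMessage[],
) {
  return {
    // Strip a trailing slash so the path joins cleanly.
    url: `${baseUrl.replace(/\/$/, '')}/v1/chat/completions`,
    init: {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        // Auth header conventions vary by host; Bearer auth is the common case.
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({ model, messages }),
    },
  };
}

const req = buildChatRequest('http://llm.internal:8000/', 'key', 'my-model', [
  { role: 'user', content: 'ping' },
]);
console.log(req.url); // http://llm.internal:8000/v1/chat/completions
```

Any service that answers this request shape can sit behind the custom provider.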
Auto-Detection
The factory detects which provider to use based on environment variables:
```typescript
import { LLMProviderFactory } from '@yasserkhanorg/impact-gate';

// Checks ANTHROPIC_API_KEY, OPENAI_API_KEY, OLLAMA_BASE_URL in order
const provider = LLMProviderFactory.createFromEnv();
```
Hybrid Mode
Mix local and premium providers when cost matters
Every provider respects the same spend controls
Before every LLM request, the accumulated cost is checked against the
--budget-usd limit, and the request is rejected cleanly if it would push
the total over that limit.
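The mechanics can be sketched roughly like this (a simplified stand-in, not the package's actual implementation):

```typescript
// Simplified sketch of a per-run spend guard.
class BudgetGuard {
  private spentUsd = 0;

  constructor(private readonly budgetUsd: number) {}

  // Called before each LLM request with that call's estimated cost.
  // Rejects cleanly if the request would push the total over budget.
  checkAndReserve(estimatedUsd: number): void {
    if (this.spentUsd + estimatedUsd > this.budgetUsd) {
      throw new Error(
        `Budget exceeded: $${(this.spentUsd + estimatedUsd).toFixed(4)} ` +
          `would pass the $${this.budgetUsd} limit`,
      );
    }
    this.spentUsd += estimatedUsd;
  }
}

const guard = new BudgetGuard(0.05); // e.g. --budget-usd 0.05
guard.checkAndReserve(0.03); // ok: total is $0.03
// guard.checkAndReserve(0.03); // would throw: $0.06 exceeds $0.05
```

The key point is that the check happens before the request is sent, so a run can never overshoot the limit by the cost of one more call.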
Combine a free local provider for routine calls with a premium provider for complex tasks:
- Ollama handles simple classifications and short answers
- Anthropic/OpenAI handles test generation, vision, and complex analysis
The factory supports this through its hybrid configuration, automatically routing based on task complexity.
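The routing decision itself reduces to a simple dispatch on task complexity. A hedged sketch (names are hypothetical; the package's real configuration may differ):

```typescript
// Illustrative hybrid routing: cheap local model for simple tasks,
// premium provider for everything else.
type TaskType = 'classification' | 'analysis' | 'generation' | 'vision';

function pickProvider(task: TaskType): 'ollama' | 'anthropic' {
  switch (task) {
    case 'classification':
      return 'ollama'; // simple yes/no and short answers stay local and free
    default:
      return 'anthropic'; // generation, vision, complex analysis go premium
  }
}

console.log(pickProvider('classification')); // ollama
console.log(pickProvider('generation')); // anthropic
```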
Model Routing
The model router sends different task types to cost-appropriate models:
| Task Type | Model Tier | Examples |
|---|---|---|
| Classification | Fast/cheap | Impact categorization, simple yes/no |
| Analysis | Mid-tier | Flow mapping, gap detection |
| Generation | Capable | Test code generation, healing |
| Vision | Vision-enabled | Screenshot analysis, UI verification |
This routing happens automatically and helps control costs without sacrificing quality where it matters.
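The table above can be read as a lookup from task type to model tier. A purely illustrative sketch (not the router's actual code):

```typescript
// Maps each task type from the table to its model tier.
type Tier = 'fast' | 'mid' | 'capable' | 'vision';

const tierForTask: Record<string, Tier> = {
  classification: 'fast', // impact categorization, simple yes/no
  analysis: 'mid', // flow mapping, gap detection
  generation: 'capable', // test code generation, healing
  vision: 'vision', // screenshot analysis, UI verification
};

function routeModel(task: string): Tier {
  // Unknown task types fall back to the most capable non-vision tier.
  return tierForTask[task] ?? 'capable';
}

console.log(routeModel('classification')); // fast
console.log(routeModel('generation')); // capable
```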