
Provider reference

Every LLM interaction goes through the LLMProvider interface. You can stay fully deterministic, choose one provider, or use hybrid routing once the AI path is worth enabling.

Provider model

Four concrete providers plus factory auto-detection

Anthropic, OpenAI, Ollama, and custom OpenAI-compatible endpoints are all supported behind one interface.

Best practice

Keep the core CI loop deterministic first

Add a provider when you want generation, healing, or crew workflows. The product stays useful even when the AI path is off.

Anthropic (Default)

export ANTHROPIC_API_KEY=sk-ant-...

Supports vision (image analysis) and prompt caching. Used for complex tasks like test generation and code analysis by default.

import { AnthropicProvider } from '@yasserkhanorg/impact-gate';

const provider = new AnthropicProvider({
  apiKey: process.env.ANTHROPIC_API_KEY,
});

OpenAI

export OPENAI_API_KEY=sk-...

Supports GPT models. Configure as the primary provider when you prefer OpenAI’s model family.

import { OpenAIProvider } from '@yasserkhanorg/impact-gate';

const provider = new OpenAIProvider({
  apiKey: process.env.OPENAI_API_KEY,
});

Ollama (Free, Local)

export OLLAMA_BASE_URL=http://localhost:11434
export OLLAMA_MODEL=deepseek-r1:7b

Runs entirely on your machine with no API costs. Install Ollama, pull a model, and point the tool at your local instance.
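Mirroring the Anthropic and OpenAI examples above, instantiation might look like the following sketch. The `OllamaProvider` class name and its option names are assumptions, not confirmed exports of the package:

```typescript
import { OllamaProvider } from '@yasserkhanorg/impact-gate';

// Hypothetical: assumes an OllamaProvider export analogous to
// AnthropicProvider/OpenAIProvider; option names may differ.
const provider = new OllamaProvider({
  baseUrl: process.env.OLLAMA_BASE_URL ?? 'http://localhost:11434',
  model: process.env.OLLAMA_MODEL ?? 'deepseek-r1:7b',
});
```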

Custom Provider

Any OpenAI-compatible endpoint works as a custom provider. Useful for self-hosted models, Azure OpenAI, or other API-compatible services.
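One plausible way to wire this up is to reuse the OpenAI provider with a base-URL override, as sketched below. The `baseUrl` option name and the endpoint URL are assumptions for illustration, not the package's documented API:

```typescript
import { OpenAIProvider } from '@yasserkhanorg/impact-gate';

// Hypothetical configuration: assumes OpenAIProvider accepts a baseUrl
// override for OpenAI-compatible endpoints (e.g. Azure OpenAI, vLLM).
const provider = new OpenAIProvider({
  apiKey: process.env.CUSTOM_API_KEY,
  baseUrl: 'https://my-llm.internal.example.com/v1', // assumed option name
});
```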

Auto-Detection

The factory detects which provider to use based on environment variables:

import { LLMProviderFactory } from '@yasserkhanorg/impact-gate';
// Checks ANTHROPIC_API_KEY, OPENAI_API_KEY, OLLAMA_BASE_URL in order
const provider = LLMProviderFactory.createFromEnv();
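The detection order can be sketched as a simple priority check over environment variables. This helper is illustrative, not the package's internal implementation:

```typescript
// Sketch of the detection order described above: Anthropic first
// (the default), then OpenAI, then a local Ollama instance.
function detectProvider(env: Record<string, string | undefined>): string {
  if (env.ANTHROPIC_API_KEY) return 'anthropic';
  if (env.OPENAI_API_KEY) return 'openai';
  if (env.OLLAMA_BASE_URL) return 'ollama';
  throw new Error('No LLM provider configured');
}

console.log(detectProvider({ OPENAI_API_KEY: 'sk-...' })); // "openai"
```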

Hybrid Mode

Hybrid routing

Mix local and premium providers when cost matters

  • Ollama handles simple classifications and short answers
  • Anthropic / OpenAI handles generation, vision, and complex analysis

Budget enforcement

Every provider respects the same spend controls

Before every LLM request, accumulated cost is checked against the --budget-usd limit, and the request is rejected cleanly if it would push spend past that limit.
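The budget gate can be sketched as a small guard class. This is an illustrative model of the behavior described above, not the library's actual implementation:

```typescript
// Minimal sketch of the budget check: accumulated spend plus the
// estimated cost of the next request must stay within --budget-usd.
class BudgetGuard {
  private spentUsd = 0;

  constructor(private readonly budgetUsd: number) {}

  // Returns true if the request may proceed.
  allow(estimatedCostUsd: number): boolean {
    return this.spentUsd + estimatedCostUsd <= this.budgetUsd;
  }

  // Callers record the actual cost after each completed request.
  record(actualCostUsd: number): void {
    this.spentUsd += actualCostUsd;
  }
}

const guard = new BudgetGuard(1.0);
guard.record(0.95);
console.log(guard.allow(0.04)); // true  (0.99 <= 1.00)
console.log(guard.allow(0.1)); // false (1.05 >  1.00, rejected)
```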

Combine a free local provider for routine calls with a premium provider for complex tasks. The factory supports this split through its hybrid configuration, automatically routing requests based on task complexity.
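The local/premium split can be sketched as a simple selector. The shape below is an assumed illustration of the idea, not the package's actual hybrid configuration API:

```typescript
// Illustrative hybrid routing: cheap local model for simple tasks,
// premium hosted model for everything else.
type Task = 'classification' | 'analysis' | 'generation' | 'vision';

interface ProviderRef {
  name: string;
}

function pickProvider(task: Task, local: ProviderRef, premium: ProviderRef): ProviderRef {
  // Simple classifications stay local and free; complex work goes premium.
  return task === 'classification' ? local : premium;
}

const ollama = { name: 'ollama' };
const anthropic = { name: 'anthropic' };
console.log(pickProvider('generation', ollama, anthropic).name); // "anthropic"
```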

Model Routing

The model router sends different task types to cost-appropriate models:

Task Type        Model Tier        Examples
Classification   Fast/cheap        Impact categorization, simple yes/no
Analysis         Mid-tier          Flow mapping, gap detection
Generation       Capable           Test code generation, healing
Vision           Vision-enabled    Screenshot analysis, UI verification

This routing happens automatically and helps control costs without sacrificing quality where it matters.
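The task-to-tier mapping in the table can be expressed as a lookup. The tier names mirror the table above; the function itself is an illustrative sketch, not the library's router API:

```typescript
// Map each task type to a cost-appropriate model tier (per the table above).
type TaskType = 'classification' | 'analysis' | 'generation' | 'vision';
type ModelTier = 'fast' | 'mid' | 'capable' | 'vision';

const TIER_FOR_TASK: Record<TaskType, ModelTier> = {
  classification: 'fast', // impact categorization, simple yes/no
  analysis: 'mid', // flow mapping, gap detection
  generation: 'capable', // test code generation, healing
  vision: 'vision', // screenshot analysis, UI verification
};

function routeModel(task: TaskType): ModelTier {
  return TIER_FOR_TASK[task];
}

console.log(routeModel('classification')); // "fast"
```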