- Route requests across multiple AI providers
- Implement fallback mechanisms for better reliability
- Monitor and analyze your AI usage
- Cache responses for cost optimization
- Apply rate limiting and usage controls
## Authentication
You need both a Portkey API key and a virtual key for model routing. You can get them from the Portkey dashboard.
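The following is a minimal sketch of supplying credentials via environment variables before creating the model. `PORTKEY_API_KEY` is the default named in the Params table below; the `PORTKEY_VIRTUAL_KEY` name is an assumption, and both values can also be passed directly as `api_key` and `virtual_key` (shown in the examples that follow).

```python
import os

# PORTKEY_API_KEY matches the api_key default in the Params table below.
# PORTKEY_VIRTUAL_KEY is an assumed name used for illustration; you can
# also pass virtual_key directly to the model constructor instead.
os.environ["PORTKEY_API_KEY"] = "your-portkey-api-key"
os.environ["PORTKEY_VIRTUAL_KEY"] = "your-virtual-key"
```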
## Example

Use `Portkey` with your `Agent`:
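Below is a minimal sketch, assuming the `agno` package layout with an `agno.models.portkey.Portkey` class; the import path and the virtual key value are placeholders that may differ in your setup.

```python
from agno.agent import Agent
from agno.models.portkey import Portkey  # assumed import path

agent = Agent(
    model=Portkey(
        id="gpt-4o-mini",                # model id routed through Portkey
        virtual_key="your-virtual-key",  # placeholder virtual key
    ),
    markdown=True,
)

agent.print_response("Share a two-sentence horror story.")
```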
## Advanced Configuration
You can configure Portkey with custom routing and retry policies, as sketched below. View more examples here.
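As a hedged sketch: a routing config created in the Portkey dashboard (for example, one that defines fallbacks and retries) can be referenced by its `config_id`, and a custom `trace_id` can be attached for request tracking. The import path and all id values below are placeholders.

```python
from agno.agent import Agent
from agno.models.portkey import Portkey  # assumed import path

model = Portkey(
    id="gpt-4o-mini",
    virtual_key="your-virtual-key",
    config_id="pc-your-config-id",  # placeholder: routing/retry config from the Portkey dashboard
    trace_id="my-agent-run-001",    # placeholder: custom trace ID for request tracking
)

agent = Agent(model=model)
agent.print_response("Summarize why request fallbacks improve reliability.")
```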
## Params
| Parameter | Type | Default | Description |
|---|---|---|---|
| `id` | `str` | `"gpt-4o-mini"` | The id of the model to use through Portkey |
| `name` | `str` | `"Portkey"` | The name of the model |
| `provider` | `str` | `"Portkey"` | The provider of the model |
| `api_key` | `Optional[str]` | `None` | The API key for Portkey (defaults to the `PORTKEY_API_KEY` environment variable) |
| `base_url` | `str` | `"https://api.portkey.ai/v1"` | The base URL for the Portkey API |
| `virtual_key` | `Optional[str]` | `None` | The virtual key for the underlying provider |
| `trace_id` | `Optional[str]` | `None` | Custom trace ID for request tracking |
| `config_id` | `Optional[str]` | `None` | Configuration ID for Portkey routing |
`Portkey` also supports the params of OpenAI.
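For example, OpenAI-style generation parameters such as `temperature` and `max_tokens` can be passed through. This is a hedged sketch; the exact set of forwarded parameters depends on your version, and the import path and values are placeholders.

```python
from agno.models.portkey import Portkey  # assumed import path

# OpenAI-style params are assumed to be forwarded to the underlying
# provider through Portkey; the values below are illustrative.
model = Portkey(
    id="gpt-4o-mini",
    virtual_key="your-virtual-key",
    temperature=0.2,
    max_tokens=256,
)
```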