How to Use One API Key for All LLMs with OpenRouter.ai
In this tutorial, I'll show you how to simplify your AI development workflow by using a single API key to access multiple language models through OpenRouter.ai. This approach eliminates the need to manage separate API keys for each AI provider, saving you time and reducing complexity in your projects.
Quick Overview of the Process
- Sign up for an OpenRouter.ai account
- Get your unified API key
- Configure your application to use OpenRouter's endpoint
- Access multiple LLMs with a single integration
Why Use OpenRouter.ai for Your LLM Projects?
Before diving into the implementation details, let's explore why OpenRouter.ai is an excellent choice for developers working with multiple language models:
- Single API Key: Access Claude, GPT-4, Llama, and many other models with just one key
- Consistent Interface: Use a standardized API format across all models
- Cost Management: Easily track usage across different models in one dashboard
- Fallback Options: Configure automatic fallbacks if your preferred model is unavailable
- Model Experimentation: Quickly test and compare different models without changing your code
Prerequisites
Before we begin, make sure you have:
- Basic knowledge of API integration
- A project that requires language model capabilities
- An OpenRouter.ai account (you can sign up for free)
Step 1: Create an OpenRouter.ai Account and Get Your API Key
- Visit openrouter.ai and sign up for an account
- After logging in, navigate to the API Keys section
- Create a new API key for your project
- Copy your API key to a secure location
This unique key will allow you to access all supported language models through OpenRouter's unified endpoint.
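Rather than pasting the key directly into source files, it's safer to load it from an environment variable. Here's a minimal sketch; the helper name and the variable name `OPENROUTER_API_KEY` are my own convention, not something OpenRouter requires:

```javascript
// Read the OpenRouter key from the environment instead of hard-coding it.
// OPENROUTER_API_KEY is just a conventional name; use whatever your
// deployment setup prefers.
function getOpenRouterKey(env = process.env) {
  const key = env.OPENROUTER_API_KEY;
  if (!key) {
    throw new Error('OPENROUTER_API_KEY is not set');
  }
  return key;
}
```

This keeps the key out of version control and lets you rotate it without touching code.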
Step 2: Understanding the OpenRouter API Structure
OpenRouter.ai provides a simple, OpenAI-compatible API that works with most existing OpenAI SDK implementations. Here's the basic structure:
```javascript
// Basic OpenRouter API request structure
const response = await fetch('https://openrouter.ai/api/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer your-api-key-here',
    'HTTP-Referer': 'https://your-site.com', // Optional but recommended
    'X-Title': 'Your Application Name' // Optional but recommended
  },
  body: JSON.stringify({
    model: 'anthropic/claude-3-opus', // Specify any supported model
    messages: [
      { role: 'user', content: 'Hello, how are you?' }
    ]
  })
});

const data = await response.json();
console.log(data.choices[0].message.content);
```
Step 3: Integrating OpenRouter into Your Application
Let's look at how to integrate OpenRouter.ai into different types of applications:
JavaScript/Node.js Integration
If you're already using the official OpenAI Node.js SDK (v4+), you can simply point it at OpenRouter's base URL:

```javascript
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: 'your-openrouter-api-key',
  baseURL: 'https://openrouter.ai/api/v1'
});

async function generateResponse() {
  const completion = await openai.chat.completions.create({
    model: 'anthropic/claude-3-haiku', // Or any other supported model
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      { role: 'user', content: 'What is the capital of France?' }
    ]
  });
  console.log(completion.choices[0].message.content);
}

generateResponse();
```

(If you're still on the legacy v3 SDK, the equivalent settings are `apiKey` and `basePath` on its `Configuration` object.)
Python Integration
Similarly, for Python applications using the official `openai` library (v1+):

```python
from openai import OpenAI

client = OpenAI(
    api_key="your-openrouter-api-key",
    base_url="https://openrouter.ai/api/v1",
)

response = client.chat.completions.create(
    model="google/gemini-pro",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain quantum computing in simple terms."},
    ],
)

print(response.choices[0].message.content)
```
Step 4: Accessing Different Models
One of the key benefits of OpenRouter is the ability to easily switch between different language models. Here's how to specify various models:
OpenAI Models
openai/gpt-4-turbo
openai/gpt-4
openai/gpt-3.5-turbo
Anthropic Models
anthropic/claude-3-opus
anthropic/claude-3-sonnet
anthropic/claude-3-haiku
Google Models
google/gemini-pro
google/gemini-pro-1.5
Meta Models
meta-llama/llama-3-70b-instruct
meta-llama/llama-3-8b-instruct
Other Models
mistralai/mistral-7b-instruct
mistralai/mixtral-8x7b-instruct
To use a specific model, simply change the `model` parameter in your API call. Model IDs change as providers release new versions, so check openrouter.ai/models for the current list:

```javascript
const response = await openai.chat.completions.create({
  model: 'meta-llama/llama-3-70b-instruct', // Switch to any supported model
  messages: [
    { role: 'user', content: 'Write a short poem about technology.' }
  ]
});
```
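Since only the `model` string changes between calls, it can help to factor request construction into a small helper. This is a sketch of my own, not part of OpenRouter's API:

```javascript
// Hypothetical helper: build the JSON body for an OpenRouter chat request.
// The model ID is the first argument because it's the part that changes most.
function buildChatRequest(model, userContent, extra = {}) {
  return {
    model,
    messages: [{ role: 'user', content: userContent }],
    ...extra // e.g. temperature, max_tokens
  };
}
```

Pass the result to `JSON.stringify` in the `body` of the fetch call shown earlier.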
Step 5: Advanced Features and Configurations
Setting Model Preferences and Fallbacks
OpenRouter supports fallbacks through the `models` array: list models in priority order, and the router tries each in turn if the previous one is unavailable. (Parameter names here reflect OpenRouter's API at the time of writing; check its routing documentation for the current options.)

```javascript
const response = await fetch('https://openrouter.ai/api/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer your-api-key-here'
  },
  body: JSON.stringify({
    models: ['anthropic/claude-3-opus', 'openai/gpt-4-turbo', 'google/gemini-pro'],
    route: 'fallback',
    messages: [
      { role: 'user', content: 'Explain the theory of relativity.' }
    ]
  })
});
```

This configuration tries Claude 3 Opus first; if it's unavailable or overloaded, OpenRouter automatically falls back to GPT-4 Turbo, and then to Gemini Pro if needed.
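If you prefer to keep fallback logic in your own code (for example, to fall back on any error, not just provider outages), a minimal client-side sketch looks like this; `callModel` is a stand-in for the fetch call shown above:

```javascript
// Try each model in priority order until one call succeeds.
// `callModel` is any async function that takes a model ID and returns a result.
async function withFallbacks(models, callModel) {
  let lastError;
  for (const model of models) {
    try {
      return await callModel(model);
    } catch (error) {
      lastError = error; // remember the failure and try the next model
    }
  }
  throw lastError; // every model failed
}
```

This keeps retry policy entirely under your control, at the cost of an extra round trip per failed model.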
Controlling Costs with Route Preferences
You can also steer routing toward cheaper or faster providers. At the time of writing, OpenRouter supports model ID suffixes such as `:floor` (cheapest available provider) and `:nitro` (highest throughput), plus a `provider` object for finer control; check OpenRouter's provider-routing documentation for the current options:

```javascript
const response = await fetch('https://openrouter.ai/api/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer your-api-key-here'
  },
  body: JSON.stringify({
    model: 'meta-llama/llama-3-70b-instruct:floor', // Route to the cheapest provider
    messages: [
      { role: 'user', content: 'Generate a list of 10 creative business ideas.' }
    ]
  })
});
```
Step 6: Monitoring Usage and Costs
OpenRouter provides a comprehensive dashboard to monitor your usage across different models:
- Log in to your OpenRouter account
- Navigate to the Usage section
- View detailed breakdowns of:
  - Requests per model
  - Token usage
  - Associated costs
  - Usage trends over time
This centralized monitoring makes it easy to track expenses and optimize your model selection based on performance and cost considerations.
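You can also reason about costs in code, for example to warn when a session gets expensive, by estimating spend from the `usage` field returned with each response. The price table below is a placeholder with made-up figures; real per-token prices vary by model and change over time, so pull current rates from the OpenRouter dashboard:

```javascript
// Hypothetical per-million-token prices in USD. These numbers are examples
// only; replace them with current rates from your OpenRouter dashboard.
const PRICES_PER_MILLION = {
  'anthropic/claude-3-haiku': { prompt: 0.25, completion: 1.25 },
  'openai/gpt-3.5-turbo': { prompt: 0.5, completion: 1.5 }
};

// Estimate the cost of one request from its token counts.
// Returns null for models not in the price table.
function estimateCost(model, promptTokens, completionTokens) {
  const price = PRICES_PER_MILLION[model];
  if (!price) return null;
  return (promptTokens * price.prompt + completionTokens * price.completion) / 1e6;
}
```

Feed it `data.usage.prompt_tokens` and `data.usage.completion_tokens` from a response to track running spend per conversation.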
Step 7: Best Practices for OpenRouter Integration
Error Handling
Implement robust error handling to manage potential API issues:
```javascript
try {
  const response = await fetch('https://openrouter.ai/api/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'Bearer your-api-key-here'
    },
    body: JSON.stringify({
      model: 'anthropic/claude-3-opus',
      messages: [
        { role: 'user', content: 'Explain quantum computing.' }
      ]
    })
  });

  if (!response.ok) {
    const errorData = await response.json();
    console.error('OpenRouter API Error:', errorData);
    // Implement fallback logic or a retry mechanism here
  } else {
    const data = await response.json();
    console.log(data.choices[0].message.content);
  }
} catch (error) {
  console.error('Network or parsing error:', error);
  // Handle network errors
}
```
Rate Limiting Considerations
Be aware of rate limits and implement appropriate throttling:
```javascript
// Simple exponential backoff function for retries
async function fetchWithRetry(url, options, maxRetries = 3) {
  let retries = 0;
  while (retries < maxRetries) {
    try {
      const response = await fetch(url, options);
      if (response.status === 429) { // Rate limit exceeded
        // Honor the server's Retry-After header if present, else back off exponentially
        const retryAfter = Number(response.headers.get('Retry-After')) || Math.pow(2, retries);
        console.log(`Rate limited. Retrying after ${retryAfter} seconds...`);
        await new Promise(resolve => setTimeout(resolve, retryAfter * 1000));
        retries++;
      } else {
        return response;
      }
    } catch (error) {
      retries++;
      if (retries >= maxRetries) throw error;
      await new Promise(resolve => setTimeout(resolve, Math.pow(2, retries) * 1000));
    }
  }
  // Still rate limited after exhausting all retries
  throw new Error(`Rate limited after ${maxRetries} retries`);
}
```
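Backoff handles a 429 after it happens; to avoid hitting the limit in the first place, you can also space requests out. A minimal sketch of my own (not an OpenRouter feature):

```javascript
// Run async tasks sequentially with a fixed delay between them,
// to stay under a per-minute request limit.
async function runWithSpacing(tasks, delayMs) {
  const results = [];
  for (const task of tasks) {
    results.push(await task());
    if (delayMs > 0) {
      await new Promise(resolve => setTimeout(resolve, delayMs));
    }
  }
  return results;
}
```

For example, `runWithSpacing(requests, 1100)` keeps you under roughly 60 requests per minute.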
Real-World Use Cases
Multi-Model Chatbot
Create a chatbot that selects the most appropriate model based on the query type:
```javascript
async function intelligentChatbot(userQuery) {
  // Determine query complexity and type
  let selectedModel;
  if (userQuery.length > 500 || userQuery.includes("explain in detail")) {
    selectedModel = 'anthropic/claude-3-opus'; // Use a powerful model for complex queries
  } else if (userQuery.includes("creative") || userQuery.includes("write")) {
    selectedModel = 'openai/gpt-4-turbo'; // Use GPT-4 for creative tasks
  } else {
    selectedModel = 'anthropic/claude-3-haiku'; // Use a faster model for simple queries
  }

  const response = await fetch('https://openrouter.ai/api/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'Bearer your-api-key-here'
    },
    body: JSON.stringify({
      model: selectedModel,
      messages: [
        { role: 'user', content: userQuery }
      ]
    })
  });

  const data = await response.json();
  return data.choices[0].message.content;
}
```
Model Comparison Tool
Build a tool to compare responses from different models:
```javascript
async function compareModels(prompt) {
  const models = [
    'openai/gpt-4-turbo',
    'anthropic/claude-3-opus',
    'google/gemini-pro',
    'meta-llama/llama-3-70b-instruct'
  ];
  const results = {};
  for (const model of models) {
    const start = Date.now();
    const response = await fetch('https://openrouter.ai/api/v1/chat/completions', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': 'Bearer your-api-key-here'
      },
      body: JSON.stringify({
        model: model,
        messages: [
          { role: 'user', content: prompt }
        ]
      })
    });
    const data = await response.json();
    results[model] = {
      response: data.choices[0].message.content,
      tokens: data.usage.total_tokens,
      latencyMs: Date.now() - start // Measured client-side, includes network time
    };
  }
  return results;
}
```
Conclusion
Using OpenRouter.ai to access multiple language models with a single API key significantly simplifies AI development workflows. By eliminating the need to manage separate API keys and integrations for each model provider, you can focus on building better applications rather than dealing with infrastructure complexity.
The ability to easily switch between models, set fallbacks, and monitor usage across all providers in one dashboard makes OpenRouter an invaluable tool for developers working with multiple language models. As the AI landscape continues to evolve rapidly, having this flexibility becomes increasingly important.
Start implementing OpenRouter today and streamline your AI development process!
Additional Resources
Have you built something interesting with OpenRouter? I'd love to see it! Submit your project on RealReview.Space to get valuable feedback from our community of experts.