An OpenAI-compatible API gateway for Claude Code with simple auth and web-based key management & revocation
- 🔄 OpenAI-Compatible: Drop-in replacement for OpenAI API clients
- 🔐 Secure Token Exchange: Convert Claude OAuth tokens to API keys via HTTPS interface
- 🎯 Simple Management: Web UI for creating and managing API keys
- 🚀 Streaming Support: Real-time responses via Server-Sent Events
- 📦 Minimal Size: Lightweight Express.js application
- 🔒 Session-Based Auth: Secure admin access with HTTP-only cookies
- 🎨 Modern UI: Beautiful dark-themed interface with DaisyUI
- 🐳 Docker Ready: Pre-configured for containerized deployment
- Clone the repository:

  ```bash
  git clone https://github.com/cabinlab/claude-code-api
  cd claude-code-api
  ```

- Generate self-signed certificates:

  ```bash
  npm run generate-certs
  ```

- Create a `.env` file:

  ```bash
  cp .env.example .env
  # Edit .env and set your ADMIN_PASSWORD
  ```

- Start the service:

  ```bash
  docker-compose up -d
  ```

- Generate an API key:
  - Visit https://localhost:8443
  - Enter the admin password
  - Paste your Claude OAuth token (get it with `claude get-token`)
  - Copy the generated API key
```bash
# Install dependencies
npm install

# Generate certificates
npm run generate-certs

# Set environment variables
export ADMIN_PASSWORD=your-admin-password
export CLAUDE_CODE_OAUTH_TOKEN=sk-ant-oat01-...

# Run development server
npm run dev
```
```python
from openai import OpenAI

client = OpenAI(
    api_key="your-generated-api-key-here",  # Your generated API key
    base_url="http://localhost:8000/v1"
)

response = client.chat.completions.create(
    model="sonnet",  # or "opus", "haiku", "gpt-4", "gpt-3.5-turbo"
    messages=[
        {"role": "user", "content": "Hello!"}
    ],
    stream=True  # Optional: enable streaming
)

for chunk in response:
    # delta.content can be None on some chunks, so fall back to ""
    print(chunk.choices[0].delta.content or "", end="")
```
```bash
curl http://localhost:8000/v1/chat/completions \
  -H "Authorization: Bearer your-generated-api-key-here" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "sonnet",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```
| Variable | Description | Default |
|---|---|---|
| `ADMIN_PASSWORD` | Password for the admin interface | `changeme` |
| `CLAUDE_CODE_OAUTH_TOKEN` | Your Claude OAuth token | Required |
| `PORT` | HTTP API port | `8000` |
| `HTTPS_PORT` | HTTPS admin port | `8443` |
| `NODE_ENV` | Environment mode | `production` |
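The defaults above can be sketched as a small config loader. This is purely illustrative — `loadConfig` and `GatewayConfig` are hypothetical names, not part of the project's actual `server.ts`:

```typescript
// Hypothetical sketch of how the documented environment variables and
// their defaults map to runtime configuration.
interface GatewayConfig {
  adminPassword: string;
  oauthToken: string | undefined; // required; no default
  port: number;
  httpsPort: number;
  nodeEnv: string;
}

function loadConfig(env: Record<string, string | undefined>): GatewayConfig {
  return {
    adminPassword: env.ADMIN_PASSWORD ?? "changeme",
    oauthToken: env.CLAUDE_CODE_OAUTH_TOKEN,
    port: Number(env.PORT ?? 8000),
    httpsPort: Number(env.HTTPS_PORT ?? 8443),
    nodeEnv: env.NODE_ENV ?? "production",
  };
}
```

At startup you would call `loadConfig(process.env)` and fail fast (or warn) if `oauthToken` is undefined, since it has no usable default.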
The API supports both OpenAI and Claude model names:
| OpenAI Model | Claude Model |
|---|---|
| `gpt-4` | `opus` |
| `gpt-4-turbo` | `sonnet` |
| `gpt-3.5-turbo` | `claude-3-5-haiku-20241022` |

You can also use Claude model names directly: `opus`, `sonnet`, `haiku`.
- `GET /v1/models` - List available models
- `POST /v1/chat/completions` - Create a chat completion
- `GET /v1/health` - Health check

- `GET /` - OAuth token exchange interface
- `GET /admin` - API key management
- `POST /auth/exchange` - Exchange an OAuth token for an API key
- `DELETE /admin/keys/:apiKey` - Delete an API key
- OAuth tokens are never exposed to API clients
- Admin interface requires HTTPS
- API keys use OpenAI-compatible format for better client support
- All key mappings are stored locally (no external database)
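The points above can be pictured as a simple in-memory store that resolves incoming API keys to OAuth tokens server-side, so the tokens never reach clients. This is an illustrative sketch only — `KeyStore` is a hypothetical class, not the actual `keyManager.ts` implementation:

```typescript
// Hypothetical sketch of the local key-to-token mapping. API clients only
// ever see the generated key; the OAuth token is resolved on the server.
class KeyStore {
  private keys = new Map<string, string>(); // apiKey -> OAuth token

  issue(apiKey: string, oauthToken: string): void {
    this.keys.set(apiKey, oauthToken);
  }

  // Resolve an incoming Bearer key to its stored OAuth token, or null.
  resolve(apiKey: string): string | null {
    return this.keys.get(apiKey) ?? null;
  }

  // Revoking the key severs access without touching the OAuth token itself.
  revoke(apiKey: string): boolean {
    return this.keys.delete(apiKey);
  }
}
```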
For production HTTPS without managing certificates:
- Create a Cloudflare Tunnel
- Add to `docker-compose.yml`:

  ```yaml
  cloudflared:
    image: cloudflare/cloudflared:latest
    command: tunnel --no-autoupdate run
    environment:
      - TUNNEL_TOKEN=${CLOUDFLARE_TUNNEL_TOKEN}
  ```
Configure nginx/Caddy to handle SSL termination and proxy to the API.
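As a rough example, an nginx server block for SSL termination might look like the following. The hostname and certificate paths are placeholders; `proxy_buffering off` matters so that streaming (SSE) responses are not held back by the proxy:

```nginx
server {
    listen 443 ssl;
    server_name api.example.com;

    ssl_certificate     /etc/ssl/certs/example.pem;
    ssl_certificate_key /etc/ssl/private/example.key;

    location / {
        proxy_pass http://localhost:8000;
        proxy_set_header Host $host;
        # Do not buffer Server-Sent Events
        proxy_buffering off;
    }
}
```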
```
src/
├── server.ts           # Main Express server
├── routes/
│   ├── auth.ts         # OAuth token exchange
│   ├── admin.ts        # Key management UI
│   └── api.ts          # OpenAI-compatible API
├── services/
│   ├── claude.ts       # Claude SDK integration
│   └── keyManager.ts   # API key management
└── middleware/
    └── security.ts     # Auth & HTTPS middleware
```
- New Models: Update the model mapping in `claude.ts`
- Rate Limiting: Modify the `security.ts` middleware
- Custom Endpoints: Add routes in `api.ts`
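Conceptually, the model mapping in `claude.ts` amounts to a lookup table like the one below. This is a hypothetical sketch that mirrors the model-mapping table documented above, not the file's actual contents:

```typescript
// Hypothetical alias table matching the documented OpenAI -> Claude mapping.
const MODEL_MAP: Record<string, string> = {
  "gpt-4": "opus",
  "gpt-4-turbo": "sonnet",
  "gpt-3.5-turbo": "claude-3-5-haiku-20241022",
};

// OpenAI names are translated; Claude names pass through unchanged.
function mapModel(requested: string): string {
  return MODEL_MAP[requested] ?? requested;
}
```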
```python
from openai import OpenAI

client = OpenAI(
    api_key="your-generated-api-key-here",
    base_url="http://localhost:8000/v1"
)

response = client.chat.completions.create(
    model="gpt-4",  # or "opus", "sonnet", "haiku"
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True  # Optional: enable streaming
)

for chunk in response:
    # delta.content can be None on some chunks, so fall back to ""
    print(chunk.choices[0].delta.content or "", end="")
```
```typescript
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: 'your-generated-api-key-here',
  baseURL: 'http://localhost:8000/v1',
});

const completion = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello!' }],
});

console.log(completion.choices[0].message.content);
```
```go
package main

import (
	"context"
	"fmt"

	openai "github.com/sashabaranov/go-openai"
)

func main() {
	config := openai.DefaultConfig("your-generated-api-key-here")
	config.BaseURL = "http://localhost:8000/v1"
	client := openai.NewClientWithConfig(config)

	resp, err := client.CreateChatCompletion(
		context.Background(),
		openai.ChatCompletionRequest{
			Model: openai.GPT4,
			Messages: []openai.ChatCompletionMessage{
				{
					Role:    openai.ChatMessageRoleUser,
					Content: "Hello!",
				},
			},
		},
	)
	if err != nil {
		panic(err)
	}

	fmt.Println(resp.Choices[0].Message.Content)
}
```
```ruby
require "openai"

client = OpenAI::Client.new(
  access_token: "your-generated-api-key-here",
  uri_base: "http://localhost:8000/v1"
)

response = client.chat(
  parameters: {
    model: "gpt-4",
    messages: [{ role: "user", content: "Hello!" }]
  }
)

puts response.dig("choices", 0, "message", "content")
```
- API Reference - Complete endpoint documentation
- Architecture - System design and components
- Development - Local development setup
- Security - Security considerations
- Ensure your token starts with `sk-ant-oat01-`
- Check that the token hasn't expired at https://claude.ai/settings/oauth
- Generate a new token with `claude get-token`
- Verify the token is properly saved in the admin interface
- Run `npm run generate-certs` to create certificates
- Check that port 8443 is not in use: `lsof -i :8443`
- Ensure Docker has the certs volume mounted
- Accept the self-signed certificate warning in your browser
- Check that your client supports Server-Sent Events (SSE)
- Ensure no proxy is buffering responses
- Add an `X-Accel-Buffering: no` header for nginx
- Verify streaming is enabled in your request (`"stream": true`)
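When debugging a stream by hand, each SSE event is a `data:` line carrying a JSON chunk, with `[DONE]` marking the end of the stream. A minimal parser sketch, assuming the OpenAI streaming chunk format (`choices[0].delta.content`):

```typescript
// Minimal sketch: extract delta text from a raw SSE line, or null if the
// line carries no content (comments, [DONE], non-data lines).
function parseSseLine(line: string): string | null {
  if (!line.startsWith("data: ")) return null;
  const payload = line.slice("data: ".length).trim();
  if (payload === "[DONE]") return null;
  const chunk = JSON.parse(payload);
  return chunk.choices?.[0]?.delta?.content ?? null;
}
```

If `parseSseLine` returns `null` for every line, the stream is likely being buffered or rewritten by a proxy in between.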
- Ensure the key starts with `sk-` and is complete
- Check that the key exists in the admin dashboard
- Verify the associated OAuth token is still valid
- Look for rate limit errors (HTTP 429)
MIT
Contributions welcome! Please read our contributing guidelines before submitting PRs.