OpenAI API Proxy

Use one aliapi.me Base URL for OpenAI-compatible requests. Keep the SDK you already use while adding project keys, request logs, usage analytics and future multi-model routing.

Setup

Configure the OpenAI API proxy in three steps.

This page answers the questions developers actually have: where to put the Base URL, how to update your SDK setup, and how to debug failed requests.

1. Create an API key

Create a separate key for every project so usage can be limited and audited later.

Authorization: Bearer YOUR_API_KEY
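Every request then carries that key in a standard bearer header. A minimal Python sketch of building the headers; the `ALIAPI_KEY` environment variable name is only an illustration:

```python
import os

# Read the project key from an environment variable rather than hard-coding it.
api_key = os.environ.get("ALIAPI_KEY", "YOUR_API_KEY")

headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json",
}
```

Keeping the key out of source code makes it easy to rotate or disable per project later.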

2. Replace Base URL

Set your SDK or tool to the aliapi.me OpenAI-compatible endpoint.

https://api.aliapi.me/v1

3. Keep request shape

Continue sending Chat Completions requests with the familiar OpenAI model and messages fields.

/v1/chat/completions

Capabilities
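Put together, a proxied request differs from a direct OpenAI call only in its host. A stdlib-only sketch that builds (but does not send) such a request; the model name is illustrative:

```python
import json
import os
import urllib.request

BASE_URL = "https://api.aliapi.me/v1"

payload = {
    "model": "gpt-4o-mini",  # illustrative; use any model your gateway exposes
    "messages": [{"role": "user", "content": "Hello"}],
}

req = urllib.request.Request(
    BASE_URL + "/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": "Bearer " + os.environ.get("ALIAPI_KEY", "YOUR_API_KEY"),
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would send it; the response body is standard
# OpenAI-format JSON with the reply under choices[0].message.content.
```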

Why use a gateway instead of scattered API keys?

| Capability | Direct integration | With aliapi.me |
| --- | --- | --- |
| Base URL | Configured per provider | One entry point for future routing |
| Keys | Spread across projects and env vars | Create, disable, limit and audit per project |
| Debugging | Implemented by each app | Central logs for status, latency, tokens and errors |
| Model switching | Requires code or config changes | Can be moved into gateway policy |

FAQ

OpenAI API Proxy FAQ

Do I need to rewrite my OpenAI SDK code?

No. The common path is to keep the SDK and replace only the Base URL and API key.

Can this work with Dify, FastGPT, Cursor and LangChain?

Yes, in most cases. These tools support a custom OpenAI-compatible Base URL, so they can route requests to different models through one gateway.

How should the service be described?

Use clear terms such as API relay, unified access, API gateway, and OpenAI-compatible API. Avoid vague or exaggerated claims.

Expand from OpenAI proxy setup to multi-model access.

Browse Claude, Gemini, DeepSeek, Qwen, and other model groups in the model directory, or check the FAQ library for Base URL, SDK, and tool-setup questions.

View model directory