AnLink

AnLink developer access

Unified access to selected Chinese models through a single API surface.

Sign in once, issue a key, pick a model, and track usage from a single console without juggling separate upstream integrations.

  • OpenAI-compatible request shape
  • Trial credit for first calls
  • Usage visibility and cost control
curl https://api.anlinkai.com/api/v1/chat/completions \
  -H "Authorization: Bearer ak_your_key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen-flash",
    "messages": [
      {"role": "user", "content": "Write a product launch tagline."}
    ]
  }'
3: models currently exposed in the MVP catalog
1000 credit: default trial balance in the current onboarding flow
OpenAI-compatible: first release focuses on the chat completions path

Available now

Model catalog

Current rollout keeps the model set intentionally small so routing, billing, and support stay predictable.

Qwen Flash (active)

  • Model code: qwen-flash
  • Provider: aliyun-bailian
  • Upstream: qwen-flash
  • Input / 1K tokens: 0.000000 credit
  • Output / 1K tokens: 0.000000 credit

Qwen 3.5 Flash (active)

  • Model code: qwen3.5-flash
  • Provider: aliyun-bailian
  • Upstream: qwen3.5-flash
  • Input / 1K tokens: 0.000000 credit
  • Output / 1K tokens: 0.000000 credit

DeepSeek Chat (active)

  • Model code: deepseek-chat
  • Provider: deepseek
  • Upstream: deepseek-chat
  • Input / 1K tokens: 0.000000 credit
  • Output / 1K tokens: 0.000000 credit

Quickstart

From console signup to first request

  1. Sign in to the console with your AnLink account.
  2. Create an API key from the dashboard.
  3. Choose one of the enabled model codes below.
  4. Call POST /chat/completions against the AnLink base URL.
  5. Check request logs and remaining credit in the console.

Base URL

https://api.anlinkai.com/api/v1

Auth

Authorization: Bearer ak_...

Primary path

POST /chat/completions

Current MVP

What is ready now

The first release is intentionally narrow. It is built to validate access, key issuance, request logging, and basic model operations before broader expansion.

Available

  • Account signup and login
  • API key creation in the console
  • Model catalog and current status
  • Usage log visibility in the console
  • OpenAI-compatible chat completions path

Not in this release

  • Self-service payment and recharge
  • Large model catalog expansion
  • Advanced traffic routing policies
  • Enterprise org and team controls
  • Public SLA and support workflow

Operational note

  • Initial onboarding uses trial credit
  • Model pricing is controlled centrally
  • Enabled models may change during validation
  • Upstream providers remain abstracted behind one base URL
  • Support and iteration are currently high-touch

Pricing

Simple rollout pricing for the MVP

The first release keeps pricing visible and centralized. Trial onboarding uses credit, and live model prices are shown in the catalog and console.

Trial balance

  • New accounts start with 1000 credit
  • Current internal baseline is 100 credit = 1 USD
  • Trial credit is intended for first integration and validation
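At the stated internal baseline of 100 credit = 1 USD, the conversion is simple arithmetic; a minimal sketch (the baseline may change during MVP validation):

```python
# Internal baseline from the pricing notes above: 100 credit = 1 USD.
CREDITS_PER_USD = 100

def credits_to_usd(credits: float) -> float:
    """Convert a credit amount to its USD equivalent at the current baseline."""
    return credits / CREDITS_PER_USD

# The 1000-credit trial balance corresponds to 10 USD at this baseline.
```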

Usage accounting

  • Successful requests record prompt, completion, and total tokens
  • Recorded cost is written into usage logs for each request
  • Balance checks are enforced before broader payment features are added
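The accounting described above can be illustrated with a small sketch. The record shape and field names here are hypothetical, chosen to mirror the description; the console's real schema may differ.

```python
from dataclasses import dataclass

@dataclass
class UsageRecord:
    """Illustrative usage-log entry: tokens plus recorded cost per request."""
    request_id: str
    prompt_tokens: int
    completion_tokens: int
    cost_credits: float

    @property
    def total_tokens(self) -> int:
        return self.prompt_tokens + self.completion_tokens

def cost_in_credits(prompt_tokens: int, completion_tokens: int,
                    input_per_1k: float, output_per_1k: float) -> float:
    """Cost in credits from the per-1K-token prices shown in the catalog."""
    return (prompt_tokens / 1000) * input_per_1k + (completion_tokens / 1000) * output_per_1k
```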

Operational note

  • Displayed model prices may change during MVP validation
  • Model exposure remains intentionally small during the first rollout
  • Self-service recharge is not part of the current release

Before traffic

Integration checklist before you send real calls

  1. Confirm the model code you plan to use is marked active.
  2. Store your API key in server-side environment variables, not client bundles.
  3. Validate remaining credit in the console before test runs.
  4. Record request IDs from failures so you can match them with usage logs.
  5. Start with non-streaming chat completions during MVP validation.
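Item 2 of the checklist, keeping the key server-side, can be enforced with a small loader that fails fast when the key is missing. `ANLINK_API_KEY` is an illustrative variable name; the `ak_` prefix check matches the key format shown in the auth example above.

```python
import os

def load_api_key() -> str:
    """Read the AnLink key from a server-side environment variable.

    ANLINK_API_KEY is an illustrative name, not a mandated one; the point
    is that the key is injected at deploy time and never shipped in
    client bundles.
    """
    key = os.environ.get("ANLINK_API_KEY")
    if not key:
        raise RuntimeError("Set ANLINK_API_KEY in the server environment")
    if not key.startswith("ak_"):
        raise RuntimeError("Unexpected key format: AnLink keys start with ak_")
    return key
```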

Console

https://console.anlinkai.com

Quickstart page

https://www.anlinkai.com/quickstart

Primary endpoint

https://api.anlinkai.com/api/v1/chat/completions

Troubleshooting

https://www.anlinkai.com/error-codes