AI API Proxy Phase 1 (Core Chat) Implementation Plan
For Claude: REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.
Goal: Build the core AI chat proxy — data models, provider adapters (OpenAI-compat + Anthropic), SSE streaming, conversation management, and a frontend chat page.
Architecture: Extend the existing go-zero monolith. 7 new GORM models, a Strategy-pattern provider layer (internal/ai/provider/), billing module (internal/ai/billing/), new .api definitions with goctl-generated handlers/logic, and a React chat page with SSE streaming.
Tech Stack: Go 1.25 + go-zero + GORM + github.com/sashabaranov/go-openai + github.com/anthropics/anthropic-sdk-go | React 19 + TypeScript + Tailwind CSS v4
Task 1: Add Go SDK dependencies
Files:
- Modify: backend/go.mod
- Modify: backend/go.sum
Step 1: Install openai Go SDK
```shell
cd D:\APPS\base\backend
go get github.com/sashabaranov/go-openai@latest
```
Step 2: Install anthropic Go SDK
```shell
cd D:\APPS\base\backend
go get github.com/anthropics/anthropic-sdk-go@latest
```
Step 3: Verify go.mod
Run: cd D:\APPS\base\backend && go mod tidy
Expected: no errors, go.mod updated with both dependencies
Step 4: Commit
```shell
cd D:\APPS\base\backend
git add go.mod go.sum
git commit -m "chore: add openai and anthropic Go SDKs"
```
Task 2: Create 7 GORM entity models
Files:
- Create: backend/model/ai_provider_entity.go
- Create: backend/model/ai_model_entity.go
- Create: backend/model/ai_api_key_entity.go
- Create: backend/model/ai_conversation_entity.go
- Create: backend/model/ai_chat_message_entity.go
- Create: backend/model/ai_usage_record_entity.go
- Create: backend/model/ai_user_quota_entity.go
Step 1: Create all 7 entity files
Follow the existing entity pattern from backend/model/file_entity.go:
- Use `gorm:"column:...;type:...;..."` tags
- Use `json:"camelCase"` tags
- Implement a `TableName()` method
- Use `time.Time` for timestamps with `autoCreateTime`/`autoUpdateTime`
backend/model/ai_provider_entity.go:
```go
package model

import "time"

type AIProvider struct {
    Id          int64     `gorm:"column:id;primaryKey;autoIncrement" json:"id"`
    Name        string    `gorm:"column:name;type:varchar(50);uniqueIndex;not null" json:"name"`
    DisplayName string    `gorm:"column:display_name;type:varchar(100);not null" json:"displayName"`
    BaseUrl     string    `gorm:"column:base_url;type:varchar(255)" json:"baseUrl"`
    SdkType     string    `gorm:"column:sdk_type;type:varchar(20);default:'openai_compat'" json:"sdkType"`
    Protocol    string    `gorm:"column:protocol;type:varchar(20);default:'openai'" json:"protocol"`
    IsActive    bool      `gorm:"column:is_active;default:true" json:"isActive"`
    SortOrder   int       `gorm:"column:sort_order;default:0" json:"sortOrder"`
    CreatedAt   time.Time `gorm:"column:created_at;autoCreateTime" json:"createdAt"`
    UpdatedAt   time.Time `gorm:"column:updated_at;autoUpdateTime" json:"updatedAt"`
}

func (AIProvider) TableName() string { return "ai_provider" }
```
backend/model/ai_model_entity.go:
```go
package model

import "time"

type AIModel struct {
    Id             int64     `gorm:"column:id;primaryKey;autoIncrement" json:"id"`
    ProviderId     int64     `gorm:"column:provider_id;index;not null" json:"providerId"`
    ModelId        string    `gorm:"column:model_id;type:varchar(100);not null" json:"modelId"`
    DisplayName    string    `gorm:"column:display_name;type:varchar(100)" json:"displayName"`
    InputPrice     float64   `gorm:"column:input_price;type:decimal(10,6);default:0" json:"inputPrice"`
    OutputPrice    float64   `gorm:"column:output_price;type:decimal(10,6);default:0" json:"outputPrice"`
    MaxTokens      int       `gorm:"column:max_tokens;default:4096" json:"maxTokens"`
    ContextWindow  int       `gorm:"column:context_window;default:128000" json:"contextWindow"`
    SupportsStream bool      `gorm:"column:supports_stream;default:true" json:"supportsStream"`
    SupportsVision bool      `gorm:"column:supports_vision;default:false" json:"supportsVision"`
    IsActive       bool      `gorm:"column:is_active;default:true" json:"isActive"`
    SortOrder      int       `gorm:"column:sort_order;default:0" json:"sortOrder"`
    CreatedAt      time.Time `gorm:"column:created_at;autoCreateTime" json:"createdAt"`
    UpdatedAt      time.Time `gorm:"column:updated_at;autoUpdateTime" json:"updatedAt"`
}

func (AIModel) TableName() string { return "ai_model" }
```
backend/model/ai_api_key_entity.go:
```go
package model

import "time"

type AIApiKey struct {
    Id         int64     `gorm:"column:id;primaryKey;autoIncrement" json:"id"`
    ProviderId int64     `gorm:"column:provider_id;index;not null" json:"providerId"`
    UserId     int64     `gorm:"column:user_id;index;default:0" json:"userId"`
    KeyValue   string    `gorm:"column:key_value;type:text;not null" json:"-"`
    IsActive   bool      `gorm:"column:is_active;default:true" json:"isActive"`
    Remark     string    `gorm:"column:remark;type:varchar(255)" json:"remark"`
    CreatedAt  time.Time `gorm:"column:created_at;autoCreateTime" json:"createdAt"`
    UpdatedAt  time.Time `gorm:"column:updated_at;autoUpdateTime" json:"updatedAt"`
}

func (AIApiKey) TableName() string { return "ai_api_key" }
```
backend/model/ai_conversation_entity.go:
```go
package model

import "time"

type AIConversation struct {
    Id          int64     `gorm:"column:id;primaryKey;autoIncrement" json:"id"`
    UserId      int64     `gorm:"column:user_id;index;not null" json:"userId"`
    Title       string    `gorm:"column:title;type:varchar(200);default:'新对话'" json:"title"`
    ModelId     string    `gorm:"column:model_id;type:varchar(100)" json:"modelId"`
    ProviderId  int64     `gorm:"column:provider_id;default:0" json:"providerId"`
    TotalTokens int64     `gorm:"column:total_tokens;default:0" json:"totalTokens"`
    TotalCost   float64   `gorm:"column:total_cost;type:decimal(10,6);default:0" json:"totalCost"`
    IsArchived  bool      `gorm:"column:is_archived;default:false" json:"isArchived"`
    CreatedAt   time.Time `gorm:"column:created_at;autoCreateTime" json:"createdAt"`
    UpdatedAt   time.Time `gorm:"column:updated_at;autoUpdateTime" json:"updatedAt"`
}

func (AIConversation) TableName() string { return "ai_conversation" }
```
backend/model/ai_chat_message_entity.go:
```go
package model

import "time"

type AIChatMessage struct {
    Id             int64     `gorm:"column:id;primaryKey;autoIncrement" json:"id"`
    ConversationId int64     `gorm:"column:conversation_id;index;not null" json:"conversationId"`
    Role           string    `gorm:"column:role;type:varchar(20);not null" json:"role"`
    Content        string    `gorm:"column:content;type:longtext" json:"content"`
    TokenCount     int       `gorm:"column:token_count;default:0" json:"tokenCount"`
    Cost           float64   `gorm:"column:cost;type:decimal(10,6);default:0" json:"cost"`
    ModelId        string    `gorm:"column:model_id;type:varchar(100)" json:"modelId"`
    LatencyMs      int       `gorm:"column:latency_ms;default:0" json:"latencyMs"`
    CreatedAt      time.Time `gorm:"column:created_at;autoCreateTime" json:"createdAt"`
}

func (AIChatMessage) TableName() string { return "ai_chat_message" }
```
backend/model/ai_usage_record_entity.go:
```go
package model

import "time"

type AIUsageRecord struct {
    Id           int64     `gorm:"column:id;primaryKey;autoIncrement" json:"id"`
    UserId       int64     `gorm:"column:user_id;index;not null" json:"userId"`
    ProviderId   int64     `gorm:"column:provider_id;index" json:"providerId"`
    ModelId      string    `gorm:"column:model_id;type:varchar(100)" json:"modelId"`
    InputTokens  int       `gorm:"column:input_tokens;default:0" json:"inputTokens"`
    OutputTokens int       `gorm:"column:output_tokens;default:0" json:"outputTokens"`
    Cost         float64   `gorm:"column:cost;type:decimal(10,6);default:0" json:"cost"`
    ApiKeyId     int64     `gorm:"column:api_key_id;default:0" json:"apiKeyId"`
    Status       string    `gorm:"column:status;type:varchar(20);default:'ok'" json:"status"`
    LatencyMs    int       `gorm:"column:latency_ms;default:0" json:"latencyMs"`
    ErrorMessage string    `gorm:"column:error_message;type:text" json:"errorMessage"`
    CreatedAt    time.Time `gorm:"column:created_at;autoCreateTime" json:"createdAt"`
}

func (AIUsageRecord) TableName() string { return "ai_usage_record" }
```
backend/model/ai_user_quota_entity.go:
```go
package model

import "time"

type AIUserQuota struct {
    Id             int64     `gorm:"column:id;primaryKey;autoIncrement" json:"id"`
    UserId         int64     `gorm:"column:user_id;uniqueIndex;not null" json:"userId"`
    Balance        float64   `gorm:"column:balance;type:decimal(10,4);default:0" json:"balance"`
    TotalRecharged float64   `gorm:"column:total_recharged;type:decimal(10,4);default:0" json:"totalRecharged"`
    TotalConsumed  float64   `gorm:"column:total_consumed;type:decimal(10,4);default:0" json:"totalConsumed"`
    FrozenAmount   float64   `gorm:"column:frozen_amount;type:decimal(10,4);default:0" json:"frozenAmount"`
    UpdatedAt      time.Time `gorm:"column:updated_at;autoUpdateTime" json:"updatedAt"`
}

func (AIUserQuota) TableName() string { return "ai_user_quota" }
```
Step 2: Add AutoMigrate in servicecontext.go
Modify backend/internal/svc/servicecontext.go:67 — add 7 new models to the existing db.AutoMigrate() call:
```go
err = db.AutoMigrate(
    &model.User{}, &model.Profile{}, &model.File{},
    &model.Menu{}, &model.Role{}, &model.RoleMenu{},
    &model.Organization{}, &model.UserOrganization{},
    // AI models
    &model.AIProvider{}, &model.AIModel{}, &model.AIApiKey{},
    &model.AIConversation{}, &model.AIChatMessage{},
    &model.AIUsageRecord{}, &model.AIUserQuota{},
)
```
Step 3: Verify compilation
Run: cd D:\APPS\base\backend && go build ./...
Expected: no errors
Step 4: Commit
```shell
cd D:\APPS\base
git add backend/model/ai_*.go backend/internal/svc/servicecontext.go
git commit -m "feat: add 7 AI entity models with AutoMigrate"
```
Task 3: Create model CRUD functions
Files:
- Create: backend/model/ai_provider_model.go
- Create: backend/model/ai_model_model.go
- Create: backend/model/ai_api_key_model.go
- Create: backend/model/ai_conversation_model.go
- Create: backend/model/ai_chat_message_model.go
- Create: backend/model/ai_usage_record_model.go
- Create: backend/model/ai_user_quota_model.go
Follow the existing pattern from backend/model/file_model.go. Each model file should have: Insert, FindOne (by ID), FindList (paginated), Update, Delete functions.
Key special functions:
- ai_provider_model.go: `AIProviderFindByName(ctx, db, name)`, `AIProviderFindAllActive(ctx, db)`
- ai_model_model.go: `AIModelFindByModelId(ctx, db, modelId)`, `AIModelFindByProvider(ctx, db, providerId)`, `AIModelFindAllActive(ctx, db)`
- ai_api_key_model.go: `AIApiKeyFindByProviderAndUser(ctx, db, providerId, userId)`, `AIApiKeyFindSystemKeys(ctx, db, providerId)`
- ai_conversation_model.go: `AIConversationFindByUser(ctx, db, userId, page, pageSize)`, plus standard CRUD
- ai_chat_message_model.go: `AIChatMessageFindByConversation(ctx, db, conversationId)` — returns all messages ordered by created_at ASC
- ai_usage_record_model.go: `AIUsageRecordInsert(ctx, db, record)`, `AIUsageRecordFindByUser(ctx, db, userId, page, pageSize)`
- ai_user_quota_model.go: `AIUserQuotaFindByUser(ctx, db, userId)`, `AIUserQuotaEnsure(ctx, db, userId)` (find-or-create), `AIUserQuotaFreeze(ctx, db, userId, amount)`, `AIUserQuotaSettle(ctx, db, userId, frozenAmount, actualCost)`, `AIUserQuotaUnfreeze(ctx, db, userId, amount)`
The freeze/settle/unfreeze must use db.Model(&AIUserQuota{}).Where("user_id = ?", userId).Updates(...) with GORM expressions for atomic updates:
```go
// Freeze: deduct from balance and move into frozen_amount in one atomic statement
db.Model(&AIUserQuota{}).
    Where("user_id = ? AND balance >= ?", userId, amount).
    Updates(map[string]interface{}{
        "balance":       gorm.Expr("balance - ?", amount),
        "frozen_amount": gorm.Expr("frozen_amount + ?", amount),
    })
```
Step 1: Create all 7 model files
(Complete code for each — see entity patterns above)
Step 2: Verify compilation
Run: cd D:\APPS\base\backend && go build ./...
Expected: no errors
Step 3: Commit
```shell
cd D:\APPS\base
git add backend/model/ai_*_model.go
git commit -m "feat: add AI model CRUD functions (7 models)"
```
Task 4: Create Provider abstraction layer
Files:
- Create: backend/internal/ai/provider/types.go
- Create: backend/internal/ai/provider/provider.go
- Create: backend/internal/ai/provider/openai.go
- Create: backend/internal/ai/provider/anthropic.go
- Create: backend/internal/ai/provider/factory.go
Step 1: Create types.go — shared request/response types
```go
package provider

type ChatMessage struct {
    Role    string `json:"role"` // user, assistant, system
    Content string `json:"content"`
}

type ChatRequest struct {
    Model       string        `json:"model"`
    Messages    []ChatMessage `json:"messages"`
    MaxTokens   int           `json:"max_tokens,omitempty"`
    Temperature float64       `json:"temperature,omitempty"`
    Stream      bool          `json:"stream"`
}

type ChatResponse struct {
    Content      string `json:"content"`
    Model        string `json:"model"`
    InputTokens  int    `json:"input_tokens"`
    OutputTokens int    `json:"output_tokens"`
    FinishReason string `json:"finish_reason"`
}

type StreamChunk struct {
    Content      string `json:"content,omitempty"`
    FinishReason string `json:"finish_reason,omitempty"`
    // Set on final chunk
    InputTokens  int  `json:"input_tokens,omitempty"`
    OutputTokens int  `json:"output_tokens,omitempty"`
    Done         bool `json:"done"`
}
```
Step 2: Create provider.go — interface definition
```go
package provider

import "context"

type AIProvider interface {
    Chat(ctx context.Context, req *ChatRequest) (*ChatResponse, error)
    ChatStream(ctx context.Context, req *ChatRequest) (<-chan *StreamChunk, error)
    Name() string
}
```
Step 3: Create openai.go — OpenAI-compatible provider
Uses github.com/sashabaranov/go-openai. This handles OpenAI, Qwen, Zhipu, DeepSeek (all OpenAI-compatible).
Key: Create openai.ClientConfig with custom BaseURL for each platform. Implement Chat() with client.CreateChatCompletion() and ChatStream() with client.CreateChatCompletionStream(). The stream method reads chunks, sends to channel, and tracks token counts.
Step 4: Create anthropic.go — Anthropic/Claude provider
Uses github.com/anthropics/anthropic-sdk-go. Implement Chat() with client.Messages.New() and ChatStream() with client.Messages.NewStreaming().
Step 5: Create factory.go — provider factory
```go
package provider

import "fmt"

func NewProvider(sdkType, baseUrl, apiKey string) (AIProvider, error) {
    switch sdkType {
    case "openai_compat":
        return NewOpenAIProvider(baseUrl, apiKey), nil
    case "anthropic":
        return NewAnthropicProvider(baseUrl, apiKey), nil
    default:
        return nil, fmt.Errorf("unsupported sdk_type: %s", sdkType)
    }
}
```
Step 6: Verify compilation
Run: cd D:\APPS\base\backend && go build ./...
Expected: no errors
Step 7: Commit
```shell
cd D:\APPS\base
git add backend/internal/ai/
git commit -m "feat: add AI provider abstraction (OpenAI-compat + Anthropic)"
```
Task 5: Create billing module
Files:
- Create: backend/internal/ai/billing/quota.go
- Create: backend/internal/ai/billing/usage.go
Step 1: Create quota.go
QuotaService with methods:
- `CheckAndFreeze(ctx, db, userId, estimatedCost) error` — checks the balance and freezes the amount
- `Settle(ctx, db, userId, frozenAmount, actualCost) error` — unfreezes and deducts the actual cost
- `Unfreeze(ctx, db, userId, amount) error` — full unfreeze on error
- `IsUserKey(apiKeyId int64) bool` — if the user supplied their own key, skip billing
Uses model.AIUserQuotaFreeze/Settle/Unfreeze functions.
Step 2: Create usage.go
UsageService with:
- `Record(ctx, db, record *model.AIUsageRecord) error` — inserts a usage record
- `UpdateConversationStats(ctx, db, convId, tokens, cost)` — updates conversation totals
Step 3: Verify compilation
Run: cd D:\APPS\base\backend && go build ./...
Step 4: Commit
```shell
cd D:\APPS\base
git add backend/internal/ai/billing/
git commit -m "feat: add AI billing module (quota freeze/settle + usage recording)"
```
Task 6: Create API definitions (ai.api)
Files:
- Create: backend/api/ai.api
- Modify: backend/base.api — add `import "api/ai.api"`
Step 1: Create ai.api
Define all types and routes for Phase 1. Follow the existing pattern from backend/api/file.api.
Types needed:
- `AIChatCompletionRequest` — messages array, model, stream, max_tokens, temperature, conversation_id (optional)
- `AIChatCompletionResponse` — id, object, choices, usage
- `AIConversationInfo`, `AIConversationListResponse`
- `AIConversationCreateRequest`, `AIConversationUpdateRequest`
- `AIModelInfo`, `AIModelListResponse`
- `AIQuotaInfo`
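As a sketch, the core request type in go-zero .api syntax — field and type names here are assumptions to reconcile with the final design; `,optional` follows go-zero tag conventions:

```
type ChatMessageItem {
    Role    string `json:"role"`
    Content string `json:"content"`
}

type AIChatCompletionRequest {
    Model          string            `json:"model"`
    Messages       []ChatMessageItem `json:"messages"`
    Stream         bool              `json:"stream,optional"`
    MaxTokens      int               `json:"max_tokens,optional"`
    Temperature    float64           `json:"temperature,optional"`
    ConversationId int64             `json:"conversation_id,optional"`
}
```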
Routes (3 @server blocks):
```
// AI Chat — Cors,Log,Auth
@server(prefix: /api/v1, group: ai, middleware: Cors,Log,Auth)
POST   /ai/chat/completions (AIChatCompletionRequest)
GET    /ai/conversations (AIConversationListRequest) returns (AIConversationListResponse)
POST   /ai/conversation (AIConversationCreateRequest) returns (AIConversationInfo)
GET    /ai/conversation/:id (AIConversationGetRequest) returns (AIConversationDetailResponse)
PUT    /ai/conversation/:id (AIConversationUpdateRequest) returns (AIConversationInfo)
DELETE /ai/conversation/:id (AIConversationDeleteRequest) returns (Response)
GET    /ai/models returns (AIModelListResponse)
GET    /ai/quota/me returns (AIQuotaInfo)
```
Step 2: Add import to base.api
Add import "api/ai.api" after the existing imports in backend/base.api.
Step 3: Run goctl to generate handlers/types
```shell
cd D:\APPS\base\backend
goctl api go -api base.api -dir .
```
This generates:
- `internal/types/types.go` — updated with AI types
- `internal/handler/ai/*.go` — handler stubs
- `internal/handler/routes.go` — updated with AI routes
Step 4: Verify compilation
Run: cd D:\APPS\base\backend && go build ./...
Expected: compilation errors in the generated logic stubs — these are implemented in the following tasks
Step 5: Commit
```shell
cd D:\APPS\base
git add backend/api/ai.api backend/base.api backend/internal/types/types.go backend/internal/handler/
git commit -m "feat: add AI API definitions and goctl-generated handlers"
```
Task 7: Implement core chat logic (SSE streaming)
Files:
- Modify: backend/internal/handler/ai/aichatcompletionshandler.go — custom SSE handler (NOT the goctl default)
- Create: backend/internal/logic/ai/aichatcompletionslogic.go
This is the core task. The handler must:
- Parse the request body manually (since SSE bypasses the standard response path)
- Determine whether `stream: true` was requested
- If streaming: set SSE headers, call `logic.ChatStream()`, and loop over the channel writing `data: {...}\n\n` frames
- If not streaming: call `logic.Chat()` and return JSON
Step 1: Replace the goctl-generated handler with custom SSE handler
The handler should:
```go
func AiChatCompletionsHandler(svcCtx *svc.ServiceContext) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        var req types.AIChatCompletionRequest
        if err := httpx.ParseJsonBody(r, &req); err != nil {
            httpx.ErrorCtx(r.Context(), w, err)
            return
        }
        l := ai.NewAiChatCompletionsLogic(r.Context(), svcCtx)
        if req.Stream {
            // SSE mode
            w.Header().Set("Content-Type", "text/event-stream")
            w.Header().Set("Cache-Control", "no-cache")
            w.Header().Set("Connection", "keep-alive")
            w.Header().Set("Access-Control-Allow-Origin", "*")
            flusher, ok := w.(http.Flusher)
            if !ok {
                http.Error(w, "streaming not supported", http.StatusInternalServerError)
                return
            }
            streamChan, err := l.ChatStream(&req)
            if err != nil {
                // Marshal the error so quotes in the message cannot break the JSON frame
                payload, _ := json.Marshal(map[string]string{"error": err.Error()})
                fmt.Fprintf(w, "data: %s\n\n", payload)
                flusher.Flush()
                return
            }
            for chunk := range streamChan {
                data, _ := json.Marshal(chunk)
                fmt.Fprintf(w, "data: %s\n\n", data)
                flusher.Flush()
            }
            fmt.Fprint(w, "data: [DONE]\n\n")
            flusher.Flush()
        } else {
            // Normal (non-streaming) mode
            resp, err := l.Chat(&req)
            if err != nil {
                httpx.ErrorCtx(r.Context(), w, err)
                return
            }
            httpx.OkJsonCtx(r.Context(), w, resp)
        }
    }
}
```
Step 2: Implement AiChatCompletionsLogic
The logic must:
- Get userId from context
- Look up model → get provider info
- Select API key (user key > system key)
- If using system key: check balance, freeze estimated cost
- Build provider via factory
- For Chat(): call provider.Chat(), record usage, settle billing, save messages
- For ChatStream(): call provider.ChatStream(), return channel — after stream ends (in a goroutine), record usage, settle billing, save messages
Step 3: Verify compilation
Run: cd D:\APPS\base\backend && go build ./...
Step 4: Commit
```shell
cd D:\APPS\base
git add backend/internal/handler/ai/ backend/internal/logic/ai/
git commit -m "feat: implement AI chat completions with SSE streaming"
```
Task 8: Implement conversation CRUD logic
Files:
- Modify: backend/internal/logic/ai/aiconversationlistlogic.go
- Modify: backend/internal/logic/ai/aiconversationcreatelogic.go
- Modify: backend/internal/logic/ai/aiconversationgetlogic.go
- Modify: backend/internal/logic/ai/aiconversationupdatelogic.go
- Modify: backend/internal/logic/ai/aiconversationdeletelogic.go
Step 1: Implement all 5 conversation logic files
These follow the standard pattern from backend/internal/logic/file/. Each gets userId from context and only operates on conversations belonging to that user.
- List: `model.AIConversationFindByUser(ctx, db, userId, page, pageSize)`, ordered by updated_at DESC
- Create: creates a new conversation with title "新对话" ("New Conversation") and the specified model
- Get (detail): returns the conversation plus all messages via `model.AIChatMessageFindByConversation()`
- Update: updates the title only
- Delete: deletes (soft or hard) the conversation and its messages
Step 2: Implement models list logic
Return all active models with their provider info via model.AIModelFindAllActive().
Step 3: Implement quota/me logic
Return current user's quota via model.AIUserQuotaEnsure() (find-or-create).
Step 4: Verify compilation
Run: cd D:\APPS\base\backend && go build ./...
Step 5: Commit
```shell
cd D:\APPS\base
git add backend/internal/logic/ai/
git commit -m "feat: implement conversation CRUD + model list + quota logic"
```
Task 9: Seed data (providers, models, Casbin policies, AI menu)
Files:
- Modify: backend/internal/svc/servicecontext.go
Step 1: Add seedAIProviders function
Seed 5 providers: openai, claude, qwen, zhipu, deepseek. Use find-or-create pattern (check by name).
Step 2: Add seedAIModels function
Seed 9 models as specified in the design doc. Use find-or-create pattern (check by model_id).
Step 3: Add AI Casbin policies to seedCasbinPolicies
Add all AI policies from the design doc to the existing policies slice:
```go
// AI: all authenticated users
{"user", "/api/v1/ai/chat/completions", "POST"},
{"user", "/api/v1/ai/conversations", "GET"},
{"user", "/api/v1/ai/conversation", "POST"},
{"user", "/api/v1/ai/conversation/:id", "GET"},
{"user", "/api/v1/ai/conversation/:id", "PUT"},
{"user", "/api/v1/ai/conversation/:id", "DELETE"},
{"user", "/api/v1/ai/models", "GET"},
{"user", "/api/v1/ai/quota/me", "GET"},
```
Step 4: Add AI menu to seedMenus
Add "AI 对话" menu item:
```go
{Name: "AI 对话", Path: "/ai/chat", Icon: "Bot", Type: "config", SortOrder: 5, Visible: true, Status: 1},
```
Step 5: Call seedAIProviders and seedAIModels in NewServiceContext
Add after existing seed calls:
```go
seedAIProviders(db)
seedAIModels(db)
```
Step 6: Verify backend starts
Run: cd D:\APPS\base\backend && go run base.go -f etc/base-api.yaml
Expected: logs show "AI Providers seeded", "AI Models seeded", no errors
Step 7: Commit
```shell
cd D:\APPS\base
git add backend/internal/svc/servicecontext.go
git commit -m "feat: seed AI providers, models, Casbin policies, and menu"
```
Task 10: Backend integration test
Step 1: Start backend and test endpoints with curl
```shell
# Login
TOKEN=$(curl -s -X POST http://localhost:8888/api/v1/login \
  -H "Content-Type: application/json" \
  -d '{"account":"admin","password":"admin123"}' | jq -r '.token')

# Get available models
curl -s http://localhost:8888/api/v1/ai/models \
  -H "Authorization: Bearer $TOKEN" | jq

# Get my quota
curl -s http://localhost:8888/api/v1/ai/quota/me \
  -H "Authorization: Bearer $TOKEN" | jq

# Create conversation
curl -s -X POST http://localhost:8888/api/v1/ai/conversation \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"modelId":"gpt-4o","title":"Test"}' | jq

# List conversations
curl -s http://localhost:8888/api/v1/ai/conversations \
  -H "Authorization: Bearer $TOKEN" | jq
```
Expected: All return 200 with valid JSON. Chat completions will fail without API key (expected — returns error about no key).
Step 2: Commit any fixes
Task 11: Frontend types + API client
Files:
- Modify: frontend/react-shadcn/pc/src/types/index.ts
- Modify: frontend/react-shadcn/pc/src/services/api.ts
Step 1: Add AI types to types/index.ts
```typescript
// AI Types
export interface AIProviderInfo {
  id: number
  name: string
  displayName: string
  baseUrl: string
  sdkType: string
  isActive: boolean
  sortOrder: number
}

export interface AIModelInfo {
  id: number
  providerId: number
  modelId: string
  displayName: string
  inputPrice: number
  outputPrice: number
  maxTokens: number
  contextWindow: number
  supportsStream: boolean
  supportsVision: boolean
  isActive: boolean
  providerName?: string
}

export interface AIConversation {
  id: number
  userId: number
  title: string
  modelId: string
  providerId: number
  totalTokens: number
  totalCost: number
  isArchived: boolean
  createdAt: string
  updatedAt: string
}

export interface AIChatMessage {
  id: number
  conversationId: number
  role: 'user' | 'assistant' | 'system'
  content: string
  tokenCount: number
  cost: number
  modelId: string
  latencyMs: number
  createdAt: string
}

export interface AIQuotaInfo {
  userId: number
  balance: number
  totalRecharged: number
  totalConsumed: number
  frozenAmount: number
}

export interface AIChatCompletionRequest {
  model: string
  messages: { role: string; content: string }[]
  stream?: boolean
  max_tokens?: number
  temperature?: number
  conversation_id?: number
}
```
Step 2: Add AI methods to api.ts
```typescript
// AI Chat (SSE streaming)
async *chatStream(req: AIChatCompletionRequest): AsyncGenerator<string> {
  const url = `${API_BASE_URL}/ai/chat/completions`
  const response = await fetch(url, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${this.token}`,
    },
    body: JSON.stringify({ ...req, stream: true }),
  })
  if (!response.ok || !response.body) {
    throw new Error(`chat stream failed: ${response.status}`)
  }
  const reader = response.body.getReader()
  const decoder = new TextDecoder()
  let buffer = ''
  while (true) {
    const { done, value } = await reader.read()
    if (done) break
    buffer += decoder.decode(value, { stream: true })
    // Split into complete lines; keep the trailing partial line in the buffer
    const lines = buffer.split('\n')
    buffer = lines.pop() || ''
    for (const line of lines) {
      if (line.startsWith('data: ') && line !== 'data: [DONE]') {
        yield line.slice(6)
      }
    }
  }
}
```
```typescript
// AI Models
async getAIModels(): Promise<ApiResponse<{ list: AIModelInfo[] }>> { ... }

// AI Conversations
async getAIConversations(page?: number, pageSize?: number): Promise<ApiResponse<{ list: AIConversation[], total: number }>> { ... }
async createAIConversation(modelId: string, title?: string): Promise<ApiResponse<AIConversation>> { ... }
async getAIConversation(id: number): Promise<ApiResponse<{ conversation: AIConversation, messages: AIChatMessage[] }>> { ... }
async updateAIConversation(id: number, title: string): Promise<ApiResponse<AIConversation>> { ... }
async deleteAIConversation(id: number): Promise<ApiResponse<void>> { ... }

// AI Quota
async getAIQuota(): Promise<ApiResponse<AIQuotaInfo>> { ... }
```
Step 3: Verify TypeScript compilation
Run: cd D:\APPS\base\frontend\react-shadcn\pc && npm run build
Expected: no type errors
Step 4: Commit
```shell
cd D:\APPS\base
git add frontend/react-shadcn/pc/src/types/index.ts frontend/react-shadcn/pc/src/services/api.ts
git commit -m "feat: add AI types and API client methods (incl SSE streaming)"
```
Task 12: Create AIChatPage (frontend)
Files:
- Create: frontend/react-shadcn/pc/src/pages/AIChatPage.tsx
Step 1: Build the full chat page
The page has 3 areas:
- Left sidebar (~280px): conversation list, "新对话" button, current balance
- Center chat area: message list with markdown rendering, auto-scroll
- Bottom input: textarea (Shift+Enter newline, Enter send) + model selector + send button
Key implementation details:
- Use the `apiClient.chatStream()` AsyncGenerator for SSE streaming
- Accumulate streamed content into a "typing" message that updates in real time
- Use `useState` for messages, conversations, current conversation, selected model
- Render messages with role-based styling (user = right-aligned sky bubble, assistant = left-aligned)
- Code blocks: use `<pre><code>` with a monospace font
- Loading state: show animated dots while waiting for the first chunk
- Auto-scroll to bottom on new messages
- On conversation switch, load messages from the API
- Model selector dropdown at the top of the chat area
Follow existing page patterns from FileManagementPage.tsx for Card/Button/Input usage and Tailwind classes. Use the same bg-card, text-foreground, border-border semantic classes.
Step 2: Verify renders
Run: cd D:\APPS\base\frontend\react-shadcn\pc && npm run dev
Navigate to /ai/chat — the layout should render (API errors are expected until the backend is running)
Step 3: Commit
```shell
cd D:\APPS\base
git add frontend/react-shadcn/pc/src/pages/AIChatPage.tsx
git commit -m "feat: add AI Chat page with SSE streaming support"
```
Task 13: Register route in App.tsx
Files:
- Modify: frontend/react-shadcn/pc/src/App.tsx
Step 1: Add import and route
Add the import:
```tsx
import { AIChatPage } from './pages/AIChatPage'
```
Add the route inside the protected layout routes (after organizations):
```tsx
<Route path="ai/chat" element={<RouteGuard><AIChatPage /></RouteGuard>} />
```
Step 2: Verify navigation
Run dev server, login, click "AI 对话" in sidebar → should navigate to /ai/chat
Step 3: Commit
```shell
cd D:\APPS\base
git add frontend/react-shadcn/pc/src/App.tsx
git commit -m "feat: register AI Chat route in App.tsx"
```
Task 14: End-to-end verification
Step 1: Start backend
```shell
cd D:\APPS\base\backend
go run base.go -f etc/base-api.yaml
```
Check logs for:
- "AI Providers seeded"
- "AI Models seeded"
- No migration errors
Step 2: Start frontend
```shell
cd D:\APPS\base\frontend\react-shadcn\pc
npm run dev
```
Step 3: Verify full flow via Playwright MCP
- Login as admin/admin123
- Verify "AI 对话" appears in sidebar menu
- Navigate to /ai/chat
- Verify conversation list shows (initially empty)
- Create a new conversation
- Verify quota shows (0.00 balance for new user)
- Try sending a message (will fail with "no API key" — this is expected without real keys)
- Test conversation CRUD (create, rename, delete)
Step 4: Add system API key for testing (optional)
If you have a real API key, add one for testing. Note: the Phase 1 routes above do not define an /ai/key endpoint, so either add a key-management route or insert the key directly into the ai_api_key table. Assuming such an endpoint exists:
```shell
curl -X POST http://localhost:8888/api/v1/ai/key \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"providerId":5,"keyValue":"sk-xxx","remark":"test deepseek key"}'
```
Then test actual chat with streaming.
Step 5: Final commit
```shell
cd D:\APPS\base
git add -A
git commit -m "feat: AI API proxy Phase 1 — core chat with SSE streaming"
```
Summary
| Task | Description | Files | Est. |
|---|---|---|---|
| 1 | Add Go SDK deps | go.mod, go.sum | 2 min |
| 2 | 7 entity models + AutoMigrate | 7 new + 1 mod | 10 min |
| 3 | 7 model CRUD functions | 7 new | 15 min |
| 4 | Provider abstraction (OpenAI + Anthropic) | 5 new | 20 min |
| 5 | Billing module (quota + usage) | 2 new | 10 min |
| 6 | API definitions + goctl generate | 1 new + 1 mod + generated | 10 min |
| 7 | Core chat logic (SSE streaming) | 1 mod + 1 new | 25 min |
| 8 | Conversation CRUD + models + quota logic | 7 mod | 15 min |
| 9 | Seed data (providers, models, Casbin, menu) | 1 mod | 10 min |
| 10 | Backend integration test | - | 5 min |
| 11 | Frontend types + API client | 2 mod | 10 min |
| 12 | AIChatPage with SSE streaming | 1 new | 30 min |
| 13 | Route registration | 1 mod | 2 min |
| 14 | E2E verification | - | 10 min |
Total: 14 tasks, ~35 new files, ~8 modified files