reinit
@@ -1,17 +0,0 @@
# Environment
ENV=development
DEBUG=True

# PostgreSQL
POSTGRES_USER=admin
POSTGRES_PASSWORD=password
POSTGRES_DB=postgres

# Redis
REDIS_URL=redis://redis:6379/0

# gRPC
GRPC_PORT=50051

# HTTP
HTTP_PORT=8000
8 asset_helper_backend/.gitignore vendored
@@ -1,8 +0,0 @@
.env
__pycache__
.venv
*.pyc
dist/
build/
*.log
docker-compose.override.yml
@@ -1,27 +0,0 @@
.PHONY: all build up down logs proto-gen init-db test lint

all: build up

build:
	docker-compose build

up:
	docker-compose up -d

down:
	docker-compose down

logs:
	docker-compose logs -f

proto-gen:
	chmod +x ./scripts/proto-gen.sh
	./scripts/proto-gen.sh

init-db:
	chmod +x ./scripts/init-db.sh
	./scripts/init-db.sh

test:
	docker-compose run --rm gateway python -m pytest

lint:
	docker-compose run --rm gateway python -m ruff check .
@@ -1,104 +0,0 @@
# Asset Helper Backend Startup and Verification Guide

This document walks through starting and verifying the Asset Helper Backend project locally, requiring nothing beyond Docker and Docker Compose.

## I. Prerequisites

### 1. Install Docker
- **Windows**: download and install [Docker Desktop for Windows](https://www.docker.com/products/docker-desktop)
- **macOS**: download and install [Docker Desktop for Mac](https://www.docker.com/products/docker-desktop)
- **Linux**: install Docker following the [official documentation](https://docs.docker.com/engine/install/)

### 2. Install Docker Compose
- Docker Desktop already bundles Docker Compose; no separate install is needed
- On Linux without Docker Compose, follow the [official documentation](https://docs.docker.com/compose/install/)

## II. Starting the Project

### 1. Enter the project directory
```bash
cd asset_helper_backend
```

### 2. Configure environment variables
Copy the environment variable template to the actual environment file:
```bash
cp .env.example .env
```

### 3. Start the services
Start everything with a single Makefile command:
```bash
make up
```

This command will:
- Build the Docker images for all services (on the first run, when no images exist yet)
- Start the PostgreSQL, Redis, user-service, and gateway services
- Wire up the networks and dependencies between services

## III. Verifying the Services

### 1. Check service status
Verify that all services are running:
```bash
docker-compose ps
```
Expected result: every service shows status `Up`

### 2. Verify the Gateway HTTP service
- Open `http://localhost:8000` in a browser
- Expected result: `{"message": "Asset Helper Gateway is running"}`
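As an alternative to the browser check, the expected response can be validated from Python. This is an illustrative sketch (the helper names are not part of the project); the URL assumes the default `HTTP_PORT` from `.env.example`:

```python
import json
from urllib.request import urlopen

GATEWAY_URL = "http://localhost:8000"  # default HTTP_PORT from .env.example

def is_gateway_ok(payload: dict) -> bool:
    # The gateway root endpoint returns exactly this message
    return payload.get("message") == "Asset Helper Gateway is running"

def check_gateway(url: str = GATEWAY_URL) -> bool:
    # Call this against a running stack; it performs a real HTTP request
    with urlopen(url) as resp:
        return is_gateway_ok(json.load(resp))
```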
### 3. Verify the Swagger docs
- Open `http://localhost:8000/docs` in a browser
- Expected result: the Swagger UI generated automatically by FastAPI, listing the user-related HTTP endpoints

### 4. Verify the WebSocket connection
Connect with a WebSocket client tool (e.g. [WebSocket Echo Test](https://www.websocket.org/echo.html)):
- WebSocket URL: `ws://localhost:8000/ws`
- Send a message; expected result: a reply from the server

### 5. Verify gRPC connectivity
Test user-service with a gRPC client tool such as `grpcurl`:
```bash
docker run --network asset_helper_backend_asset_helper_net --rm fullstorydev/grpcurl -plaintext user-service:50051 list user.UserService
```
Expected result: all methods of UserService are listed (note: `grpcurl list` requires gRPC server reflection to be enabled on user-service)

## IV. Troubleshooting

### 1. Services fail to start
- **Check the logs**: run `make logs` to inspect service logs
- **Check port usage**: make sure ports 8000, 50051, 5432, and 6379 are free
- **Check environment variables**: make sure the .env file is configured correctly

### 2. Database connection failures
- Run `make init-db` to initialize the database
- Check that the PostgreSQL service is running

### 3. gRPC call failures
- Check that user-service started successfully
- Check the network connectivity between services

### 4. WebSocket connection failures
- Check that the gateway service is running
- Check that the WebSocket endpoint is correct

## V. Stopping the Services

When verification is complete, stop all services:
```bash
make down
```

## VI. Production Deployment

Start with the production configuration file:
```bash
docker-compose -f docker-compose.prod.yml up -d
```

---

With the steps above you can start and verify every feature of the Asset Helper Backend project. If you run into problems, consult the troubleshooting section or check the service logs for details.
@@ -1,78 +0,0 @@
version: '3.8'

networks:
  asset_helper_net:
    driver: bridge

volumes:
  postgres_data:
  redis_data:

services:
  postgres:
    image: postgres:18.3-alpine3.23
    environment:
      POSTGRES_USER: ${POSTGRES_USER:-admin}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-password}
      POSTGRES_DB: ${POSTGRES_DB:-postgres}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U admin"]
      interval: 30s
      timeout: 3s
      start_period: 5s
      retries: 3
    restart: always
    networks:
      - asset_helper_net

  redis:
    image: redis:8.6.2-alpine
    volumes:
      - redis_data:/data
    healthcheck:
      test: ["CMD-SHELL", "redis-cli ping"]
      interval: 30s
      timeout: 3s
      start_period: 5s
      retries: 3
    restart: always
    networks:
      - asset_helper_net

  user-service:
    build:
      context: ./services/user-service
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    environment:
      - POSTGRES_USER=${POSTGRES_USER:-admin}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-password}
      - REDIS_URL=${REDIS_URL:-redis://redis:6379/0}
      - GRPC_PORT=${GRPC_PORT:-50051}
    restart: always
    networks:
      - asset_helper_net

  gateway:
    build:
      context: ./services/gateway
    ports:
      - "8000:8000"
    depends_on:
      user-service:
        condition: service_started
      redis:
        condition: service_healthy
    environment:
      - REDIS_URL=${REDIS_URL:-redis://redis:6379/0}
      - HTTP_PORT=${HTTP_PORT:-8000}
      - DEBUG=False
      - ENV=production
    restart: always
    networks:
      - asset_helper_net
@@ -1,78 +0,0 @@
version: '3.8'

networks:
  asset_helper_net:
    driver: bridge

volumes:
  postgres_data:
  redis_data:

services:
  postgres:
    image: postgres:18.3-alpine3.23
    environment:
      POSTGRES_USER: ${POSTGRES_USER:-admin}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-password}
      POSTGRES_DB: ${POSTGRES_DB:-postgres}
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U admin"]
      interval: 30s
      timeout: 3s
      start_period: 5s
      retries: 3
    networks:
      - asset_helper_net

  redis:
    image: redis:8.6.2-alpine
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data
    healthcheck:
      test: ["CMD-SHELL", "redis-cli ping"]
      interval: 30s
      timeout: 3s
      start_period: 5s
      retries: 3
    networks:
      - asset_helper_net

  user-service:
    build:
      context: ./services/user-service
    ports:
      - "50051:50051"
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    environment:
      - POSTGRES_USER=${POSTGRES_USER:-admin}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-password}
      - REDIS_URL=${REDIS_URL:-redis://redis:6379/0}
      - GRPC_PORT=${GRPC_PORT:-50051}
    networks:
      - asset_helper_net

  gateway:
    build:
      context: ./services/gateway
    ports:
      - "8000:8000"
    depends_on:
      user-service:
        condition: service_started
      redis:
        condition: service_healthy
    environment:
      - REDIS_URL=${REDIS_URL:-redis://redis:6379/0}
      - HTTP_PORT=${HTTP_PORT:-8000}
    networks:
      - asset_helper_net
@@ -1,49 +0,0 @@
# Nginx configuration

user nginx;
worker_processes auto;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;
    error_log /var/log/nginx/error.log warn;

    sendfile on;
    keepalive_timeout 65;

    upstream gateway {
        server gateway:8000;
    }

    server {
        listen 80;
        server_name localhost;

        location / {
            proxy_pass http://gateway;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }

        location /ws {
            proxy_pass http://gateway;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
        }
    }
}
@@ -1,17 +0,0 @@
-- Create the service user and database
CREATE USER user_service WITH PASSWORD 'password';
CREATE DATABASE user_db OWNER user_service;
GRANT ALL PRIVILEGES ON DATABASE user_db TO user_service;

-- Connect to user_db and create the tables (psql meta-command, no semicolon)
\c user_db

CREATE TABLE IF NOT EXISTS users (
    id SERIAL PRIMARY KEY,
    username VARCHAR(255) UNIQUE NOT NULL,
    password_hash VARCHAR(255) NOT NULL,
    created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
    updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);

CREATE INDEX IF NOT EXISTS idx_users_username ON users(username);
@@ -1,21 +0,0 @@
# Redis configuration

# Bind address
bind 0.0.0.0

# Port
port 6379

# Persistence
appendonly yes
appendfsync everysec

# Memory limit
maxmemory 128mb
maxmemory-policy allkeys-lru

# Log level
loglevel notice

# Log file
logfile "/var/log/redis/redis.log"
@@ -1,41 +0,0 @@
[project]
name = "asset_helper_backend"
version = "0.1.0"
description = "Asset Helper Backend Monorepo"
authors = [
    { name = "Author", email = "author@example.com" }
]
requires-python = ">=3.13.7"
dependencies = [
    "fastapi",
    "uvicorn",
    "grpcio",
    "grpcio-tools",
    "pydantic",
    "pydantic-settings",
    "sqlalchemy",
    "asyncpg",
    "redis",
    "python-dotenv",
    "loguru",
]

[dependency-groups]
dev = [
    "ruff",
    "pytest",
    "pytest-asyncio",
    "pre-commit",
]

# uv reads workspace members from [tool.uv.workspace] (singular)
[tool.uv.workspace]
members = [
    "shared",
    "services/gateway",
    "services/user-service"
]

[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"
@@ -1,15 +0,0 @@
#!/bin/bash

# Database initialization script

echo "Initializing database..."

# Give the PostgreSQL service time to start
sleep 5

# Connect to PostgreSQL and create the service user and database
# (-T: no pseudo-TTY, so the commands also work from scripts/CI)
docker-compose exec -T postgres psql -U admin -c "CREATE USER user_service WITH PASSWORD 'password';"
docker-compose exec -T postgres psql -U admin -c "CREATE DATABASE user_db OWNER user_service;"
docker-compose exec -T postgres psql -U admin -c "GRANT ALL PRIVILEGES ON DATABASE user_db TO user_service;"

echo "Database initialization complete!"
@@ -1,24 +0,0 @@
#!/bin/bash

# Compile the gRPC proto files

echo "Compiling gRPC proto files..."

# Create the output directories
mkdir -p ./services/gateway/app/grpc_generated
mkdir -p ./services/user-service/app/grpc_generated

# Compile user.proto
python -m grpc_tools.protoc \
    --proto_path=./shared/src/shared/proto \
    --python_out=./services/gateway/app/grpc_generated \
    --grpc_python_out=./services/gateway/app/grpc_generated \
    ./shared/src/shared/proto/user.proto

python -m grpc_tools.protoc \
    --proto_path=./shared/src/shared/proto \
    --python_out=./services/user-service/app/grpc_generated \
    --grpc_python_out=./services/user-service/app/grpc_generated \
    ./shared/src/shared/proto/user.proto

# Mark the output directories as packages so `from app.grpc_generated import ...` works
touch ./services/gateway/app/grpc_generated/__init__.py
touch ./services/user-service/app/grpc_generated/__init__.py

echo "gRPC proto files compiled!"
@@ -1,19 +0,0 @@
FROM python:3.13.7-alpine3.22

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY app/ .
# NOTE: paths outside the build context only resolve if the compose build
# context is the repository root, not ./services/gateway
COPY ../../shared/ /shared/

# Install the shared package
RUN pip install -e /shared

EXPOSE 8000

HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD python -c "import socket; s = socket.socket(socket.AF_INET, socket.SOCK_STREAM); s.connect(('localhost', 8000)); s.close(); print('Healthy')" || exit 1

CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000", "--reload"]
@@ -1,40 +0,0 @@
from fastapi import APIRouter, Depends, HTTPException
from google.protobuf.json_format import MessageToDict
from app.grpc_generated import user_pb2, user_pb2_grpc
from app.dependencies import get_user_service_client

router = APIRouter()

@router.get("/users/{user_id}")
async def get_user(user_id: int, client: user_pb2_grpc.UserServiceStub = Depends(get_user_service_client)):
    request = user_pb2.GetUserRequest(id=user_id)
    response = await client.GetUser(request)
    if not response.user.id:
        raise HTTPException(status_code=404, detail="User not found")
    # protobuf messages are not JSON-serializable; convert before returning
    return MessageToDict(response.user)

@router.post("/users")
async def create_user(username: str, password_hash: str, client: user_pb2_grpc.UserServiceStub = Depends(get_user_service_client)):
    request = user_pb2.CreateUserRequest(username=username, password_hash=password_hash)
    response = await client.CreateUser(request)
    return MessageToDict(response.user)

@router.put("/users/{user_id}")
async def update_user(user_id: int, username: str, password_hash: str, client: user_pb2_grpc.UserServiceStub = Depends(get_user_service_client)):
    request = user_pb2.UpdateUserRequest(id=user_id, username=username, password_hash=password_hash)
    response = await client.UpdateUser(request)
    if not response.user.id:
        raise HTTPException(status_code=404, detail="User not found")
    return MessageToDict(response.user)

@router.delete("/users/{user_id}")
async def delete_user(user_id: int, client: user_pb2_grpc.UserServiceStub = Depends(get_user_service_client)):
    request = user_pb2.DeleteUserRequest(id=user_id)
    await client.DeleteUser(request)
    return {"message": "User deleted successfully"}

@router.get("/users")
async def list_users(page: int = 1, page_size: int = 10, client: user_pb2_grpc.UserServiceStub = Depends(get_user_service_client)):
    request = user_pb2.ListUsersRequest(page=page, page_size=page_size)
    response = await client.ListUsers(request)
    return {"users": [MessageToDict(u) for u in response.users], "total": response.total}
@@ -1,7 +0,0 @@
from shared.utils.config import Settings

class GatewaySettings(Settings):
    # Inherits the base settings; service-specific settings go here
    service_name: str = "gateway"

settings = GatewaySettings()
@@ -1,7 +0,0 @@
from shared.utils.grpc_client import GrpcClient
from app.grpc_generated import user_pb2_grpc

async def get_user_service_client():
    with GrpcClient("user-service", 50051) as channel:
        client = user_pb2_grpc.UserServiceStub(channel)
        yield client
@@ -1,42 +0,0 @@
from fastapi import FastAPI, WebSocket
from app.api.v1 import users
from app.ws.handlers import websocket_handler
from app.core.config import settings
from shared.middleware import CorrelationIdMiddleware, LoggingMiddleware, ExceptionMiddleware

app = FastAPI(
    title="Asset Helper Gateway",
    version="0.1.0",
    description="Asset Helper Backend Gateway"
)

# Register middleware
app.add_middleware(CorrelationIdMiddleware)
app.add_middleware(LoggingMiddleware)
app.add_middleware(ExceptionMiddleware)

# Register routes
app.include_router(users.router, prefix="/api/v1")

# WebSocket endpoint
@app.websocket("/ws")
async def websocket_endpoint(websocket: WebSocket):
    await websocket_handler(websocket)

@app.get("/")
async def root():
    return {"message": "Asset Helper Gateway is running"}

@app.get("/health")
async def health_check():
    return {"status": "healthy"}

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(
        "main:app",
        host="0.0.0.0",
        port=settings.http_port,
        reload=True
    )
@@ -1,22 +0,0 @@
from fastapi import WebSocket, WebSocketDisconnect
from app.ws.manager import manager
import uuid

async def websocket_handler(websocket: WebSocket):
    client_id = str(uuid.uuid4())
    await manager.connect(websocket, client_id)

    try:
        while True:
            data = await websocket.receive()

            if "text" in data:
                message = data["text"]
                await manager.send_personal_message(f"You said: {message}", client_id)
                await manager.broadcast(f"Client {client_id} said: {message}")
            elif "bytes" in data:
                # Echo binary messages back unchanged
                await websocket.send_bytes(data["bytes"])
    except WebSocketDisconnect:
        manager.disconnect(client_id)
        await manager.broadcast(f"Client {client_id} disconnected")
@@ -1,24 +0,0 @@
from fastapi import WebSocket
from typing import Dict

class ConnectionManager:
    def __init__(self):
        self.active_connections: Dict[str, WebSocket] = {}

    async def connect(self, websocket: WebSocket, client_id: str):
        await websocket.accept()
        self.active_connections[client_id] = websocket

    def disconnect(self, client_id: str):
        if client_id in self.active_connections:
            del self.active_connections[client_id]

    async def send_personal_message(self, message: str, client_id: str):
        if client_id in self.active_connections:
            await self.active_connections[client_id].send_text(message)

    async def broadcast(self, message: str):
        for connection in self.active_connections.values():
            await connection.send_text(message)

manager = ConnectionManager()
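The manager's behavior can be exercised without FastAPI at all. The sketch below re-declares a minimal copy of the manager and uses a hypothetical `FakeWebSocket` stand-in that records sent messages; it is illustrative, not project code:

```python
import asyncio
from typing import Dict

class FakeWebSocket:
    """Stand-in for fastapi.WebSocket that records sent text messages."""
    def __init__(self):
        self.sent = []
    async def accept(self):
        pass
    async def send_text(self, message: str):
        self.sent.append(message)

class ConnectionManager:
    # Mirrors the manager above, minus the FastAPI import
    def __init__(self):
        self.active_connections: Dict[str, FakeWebSocket] = {}
    async def connect(self, websocket, client_id: str):
        await websocket.accept()
        self.active_connections[client_id] = websocket
    def disconnect(self, client_id: str):
        self.active_connections.pop(client_id, None)
    async def send_personal_message(self, message: str, client_id: str):
        if client_id in self.active_connections:
            await self.active_connections[client_id].send_text(message)
    async def broadcast(self, message: str):
        for connection in self.active_connections.values():
            await connection.send_text(message)

async def demo():
    manager = ConnectionManager()
    a, b = FakeWebSocket(), FakeWebSocket()
    await manager.connect(a, "a")
    await manager.connect(b, "b")
    await manager.send_personal_message("hello a", "a")  # only a
    await manager.broadcast("hello all")                 # a and b
    manager.disconnect("b")
    await manager.broadcast("a only")                    # b is gone
    return a.sent, b.sent

a_sent, b_sent = asyncio.run(demo())
```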
@@ -1,13 +0,0 @@
fastapi==0.104.1
uvicorn[standard]==0.24.0  # the standard extras include WebSocket support
grpcio==1.59.0
grpcio-tools==1.59.0
pydantic==2.5.0
pydantic-settings==2.1.0
sqlalchemy==2.0.23
asyncpg==0.28.0
redis==5.0.1
python-dotenv==1.0.0
loguru==0.7.2
passlib==1.7.4
bcrypt==4.0.1  # passlib 1.7.4 cannot read the version of bcrypt >= 4.1
@@ -1,19 +0,0 @@
FROM python:3.13.7-alpine3.22

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY app/ .
# NOTE: paths outside the build context only resolve if the compose build
# context is the repository root, not ./services/user-service
COPY ../../shared/ /shared/

# Install the shared package
RUN pip install -e /shared

EXPOSE 50051

HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD python -c "import socket; s = socket.socket(socket.AF_INET, socket.SOCK_STREAM); s.connect(('localhost', 50051)); s.close(); print('Healthy')" || exit 1

CMD ["python", "main.py"]
@@ -1,112 +0,0 @@
import grpc
from app.grpc_generated import user_pb2, user_pb2_grpc
from app.db.models import User
from app.db.session import AsyncSessionLocal
from sqlalchemy import select, func

def _user_to_pb(user: User) -> user_pb2.User:
    # Shared ORM-to-protobuf conversion, used by every RPC below
    return user_pb2.User(
        id=user.id,
        username=user.username,
        password_hash=user.password_hash,
        created_at=user.created_at.isoformat(),
        updated_at=user.updated_at.isoformat()
    )

class UserService(user_pb2_grpc.UserServiceServicer):
    async def GetUser(self, request, context):
        async with AsyncSessionLocal() as session:
            result = await session.execute(select(User).where(User.id == request.id))
            user = result.scalar_one_or_none()
            if not user:
                context.set_code(grpc.StatusCode.NOT_FOUND)
                context.set_details("User not found")
                return user_pb2.UserResponse()

            return user_pb2.UserResponse(user=_user_to_pb(user))

    async def CreateUser(self, request, context):
        async with AsyncSessionLocal() as session:
            user = User(
                username=request.username,
                password_hash=request.password_hash
            )
            session.add(user)
            await session.commit()
            await session.refresh(user)

            return user_pb2.UserResponse(user=_user_to_pb(user))

    async def UpdateUser(self, request, context):
        async with AsyncSessionLocal() as session:
            result = await session.execute(select(User).where(User.id == request.id))
            user = result.scalar_one_or_none()
            if not user:
                context.set_code(grpc.StatusCode.NOT_FOUND)
                context.set_details("User not found")
                return user_pb2.UserResponse()

            user.username = request.username
            user.password_hash = request.password_hash
            await session.commit()
            await session.refresh(user)

            return user_pb2.UserResponse(user=_user_to_pb(user))

    async def DeleteUser(self, request, context):
        async with AsyncSessionLocal() as session:
            result = await session.execute(select(User).where(User.id == request.id))
            user = result.scalar_one_or_none()
            if not user:
                context.set_code(grpc.StatusCode.NOT_FOUND)
                context.set_details("User not found")
                return user_pb2.EmptyResponse()

            await session.delete(user)
            await session.commit()

            return user_pb2.EmptyResponse()

    async def ListUsers(self, request, context):
        async with AsyncSessionLocal() as session:
            offset = (request.page - 1) * request.page_size
            result = await session.execute(
                select(User).offset(offset).limit(request.page_size)
            )
            users = result.scalars().all()

            # Count in the database instead of loading every row
            total = await session.scalar(select(func.count()).select_from(User))

            return user_pb2.UsersResponse(
                users=[_user_to_pb(user) for user in users],
                total=total
            )
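`ListUsers` maps a 1-based page number onto SQL offset/limit. Isolated as a helper (the function name is illustrative, and it adds an input guard the service itself does not yet have), the arithmetic looks like this:

```python
def page_to_offset_limit(page: int, page_size: int) -> tuple:
    """Map a 1-based page number to SQL OFFSET/LIMIT, as ListUsers does."""
    if page < 1 or page_size < 1:
        # Without this guard, page 0 would produce a negative OFFSET
        raise ValueError("page and page_size must be >= 1")
    return (page - 1) * page_size, page_size

# page 1 starts at offset 0; page 3 of size 10 skips the first 20 rows
```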
@@ -1,7 +0,0 @@
from shared.utils.config import Settings

class UserServiceSettings(Settings):
    # Inherits the base settings; service-specific settings go here
    service_name: str = "user-service"

settings = UserServiceSettings()
@@ -1,8 +0,0 @@
from shared.models import BaseDBModel
from sqlalchemy import Column, String

class User(BaseDBModel):
    __tablename__ = "users"

    username = Column(String, unique=True, index=True, nullable=False)
    password_hash = Column(String, nullable=False)
@@ -1,16 +0,0 @@
from sqlalchemy.ext.asyncio import create_async_engine, async_sessionmaker, AsyncSession

# Note: user-service uses the dedicated user_db database
DATABASE_URL = "postgresql+asyncpg://user_service:password@postgres:5432/user_db"

engine = create_async_engine(DATABASE_URL)
AsyncSessionLocal = async_sessionmaker(engine, class_=AsyncSession, expire_on_commit=False)

async def get_db():
    # The context manager closes the session on exit
    async with AsyncSessionLocal() as session:
        yield session
@@ -1,29 +0,0 @@
import grpc
from app.api.user_service import UserService
from app.grpc_generated import user_pb2_grpc
from app.core.config import settings
from app.db.session import engine
from shared.models import BaseDBModel
import asyncio
from loguru import logger

async def init_db():
    # Create the database tables
    async with engine.begin() as conn:
        await conn.run_sync(BaseDBModel.metadata.create_all)

async def serve():
    server = grpc.aio.server()
    user_pb2_grpc.add_UserServiceServicer_to_server(UserService(), server)
    server.add_insecure_port(f"0.0.0.0:{settings.grpc_port}")

    # Initialize the database
    await init_db()

    logger.info(f"Starting gRPC server on port {settings.grpc_port}")
    await server.start()
    await server.wait_for_termination()

if __name__ == "__main__":
    asyncio.run(serve())
@@ -1,13 +0,0 @@
fastapi==0.104.1
uvicorn==0.24.0
grpcio==1.59.0
grpcio-tools==1.59.0
pydantic==2.5.0
pydantic-settings==2.1.0
sqlalchemy==2.0.23
asyncpg==0.28.0
redis==5.0.1
python-dotenv==1.0.0
loguru==0.7.2
passlib==1.7.4
bcrypt==4.0.1  # passlib 1.7.4 cannot read the version of bcrypt >= 4.1
@@ -1,23 +0,0 @@
[project]
name = "asset_helper_shared"
version = "0.1.0"
description = "Asset Helper Shared Package"
authors = [
    { name = "Author", email = "author@example.com" }
]
requires-python = ">=3.13.7"
dependencies = [
    "fastapi",
    "pydantic",
    "pydantic-settings",
    "grpcio",
    "sqlalchemy",
    "asyncpg",
    "redis",
    "loguru",
    "python-dotenv",
]

[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"
@@ -1,10 +0,0 @@
from .base import (
    AppException,
    NotFoundError,
    UnauthorizedError,
    ForbiddenError,
    BadRequestError,
    InternalError,
    exception_handler,
)

# exception_handler is re-exported because shared.middleware imports it from here
__all__ = [
    "AppException",
    "NotFoundError",
    "UnauthorizedError",
    "ForbiddenError",
    "BadRequestError",
    "InternalError",
    "exception_handler",
]
@@ -1,33 +0,0 @@
from fastapi import Request, status
from fastapi.responses import JSONResponse

class AppException(Exception):
    def __init__(self, status_code: int, detail: str):
        self.status_code = status_code
        self.detail = detail

class NotFoundError(AppException):
    def __init__(self, detail: str = "Resource not found"):
        super().__init__(status.HTTP_404_NOT_FOUND, detail)

class UnauthorizedError(AppException):
    def __init__(self, detail: str = "Unauthorized"):
        super().__init__(status.HTTP_401_UNAUTHORIZED, detail)

class ForbiddenError(AppException):
    def __init__(self, detail: str = "Forbidden"):
        super().__init__(status.HTTP_403_FORBIDDEN, detail)

class BadRequestError(AppException):
    def __init__(self, detail: str = "Bad request"):
        super().__init__(status.HTTP_400_BAD_REQUEST, detail)

class InternalError(AppException):
    def __init__(self, detail: str = "Internal server error"):
        super().__init__(status.HTTP_500_INTERNAL_SERVER_ERROR, detail)

async def exception_handler(request: Request, exc: AppException):
    return JSONResponse(
        status_code=exc.status_code,
        content={"detail": exc.detail}
    )
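A stand-alone sketch of how the hierarchy above is meant to be used: handlers raise a specific subclass, while middleware catches the common base class. The HTTP status code is inlined and `load_user` is a hypothetical handler, so the snippet runs without FastAPI:

```python
class AppException(Exception):
    # Mirrors shared.exceptions.base.AppException
    def __init__(self, status_code: int, detail: str):
        self.status_code = status_code
        self.detail = detail

class NotFoundError(AppException):
    def __init__(self, detail: str = "Resource not found"):
        super().__init__(404, detail)  # status.HTTP_404_NOT_FOUND inlined

def load_user(user_id: int) -> dict:
    # Any handler can raise the specific subclass...
    raise NotFoundError(f"user {user_id} does not exist")

try:
    load_user(42)
except AppException as exc:
    # ...while middleware catches the common base class
    result = {"status_code": exc.status_code, "detail": exc.detail}
```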
@@ -1,5 +0,0 @@
from .correlation_id import CorrelationIdMiddleware
from .logging import LoggingMiddleware
from .exception import ExceptionMiddleware

__all__ = ["CorrelationIdMiddleware", "LoggingMiddleware", "ExceptionMiddleware"]
@@ -1,10 +0,0 @@
from starlette.middleware.base import BaseHTTPMiddleware
from fastapi import Request
import uuid

class CorrelationIdMiddleware(BaseHTTPMiddleware):
    # app.add_middleware() expects a Starlette-style middleware class,
    # so this must subclass BaseHTTPMiddleware and implement dispatch()
    async def dispatch(self, request: Request, call_next):
        correlation_id = request.headers.get("X-Correlation-ID", str(uuid.uuid4()))
        request.state.correlation_id = correlation_id
        response = await call_next(request)
        response.headers["X-Correlation-ID"] = correlation_id
        return response
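The header fallback this middleware implements can be isolated and checked without Starlette. The helper name is illustrative, and a plain dict stands in for the request headers:

```python
import uuid

def resolve_correlation_id(headers: dict) -> str:
    """Reuse the caller's X-Correlation-ID, or mint a fresh UUID4."""
    return headers.get("X-Correlation-ID", str(uuid.uuid4()))

# A caller-supplied ID propagates unchanged; otherwise a new one is minted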
@@ -1,23 +0,0 @@
|
||||
from fastapi import Request, HTTPException
|
||||
from fastapi.responses import JSONResponse
|
||||
from loguru import logger
|
||||
from shared.exceptions import AppException, exception_handler
|
||||
|
||||
class ExceptionMiddleware:
|
||||
async def __call__(self, request: Request, call_next):
|
||||
try:
|
||||
response = await call_next(request)
|
||||
return response
|
||||
except AppException as exc:
|
||||
return await exception_handler(request, exc)
|
||||
except HTTPException as exc:
|
||||
return JSONResponse(
|
||||
status_code=exc.status_code,
|
||||
content={"detail": exc.detail}
|
||||
)
|
||||
except Exception as exc:
|
||||
logger.error(f"Unhandled exception: {exc}")
|
||||
return JSONResponse(
|
||||
status_code=500,
|
||||
content={"detail": "服务器内部错误"}
|
||||
)
|
||||
@@ -1,29 +0,0 @@
|
||||
from fastapi import Request
|
||||
from loguru import logger
|
||||
import time
|
||||
|
||||
class LoggingMiddleware:
|
||||
async def __call__(self, request: Request, call_next):
|
||||
start_time = time.time()
|
||||
correlation_id = getattr(request.state, "correlation_id", "-")
|
||||
|
||||
logger.info(
|
||||
f"Request started",
|
||||
method=request.method,
|
||||
url=request.url.path,
|
||||
correlation_id=correlation_id
|
||||
)
|
||||
|
||||
response = await call_next(request)
|
||||
|
||||
process_time = time.time() - start_time
|
||||
logger.info(
|
||||
f"Request completed",
|
||||
method=request.method,
|
||||
url=request.url.path,
|
||||
status_code=response.status_code,
|
||||
process_time=process_time,
|
||||
correlation_id=correlation_id
|
||||
)
|
||||
|
||||
return response
|
||||
@@ -1,3 +0,0 @@
|
||||
from .base import BaseModel, BaseDBModel
|
||||
|
||||
__all__ = ["BaseModel", "BaseDBModel"]
|
||||
@@ -1,17 +0,0 @@
|
||||
from pydantic import BaseModel as PydanticBaseModel
|
||||
from sqlalchemy.ext.declarative import declarative_base
|
||||
from sqlalchemy import Column, Integer, DateTime
|
||||
from sqlalchemy.sql import func
|
||||
|
||||
class BaseModel(PydanticBaseModel):
|
||||
class Config:
|
||||
from_attributes = True
|
||||
|
||||
Base = declarative_base()
|
||||
|
||||
class BaseDBModel(Base):
|
||||
__abstract__ = True
|
||||
|
||||
id = Column(Integer, primary_key=True, index=True)
|
||||
created_at = Column(DateTime(timezone=True), server_default=func.now())
|
||||
updated_at = Column(DateTime(timezone=True), server_default=func.now(), onupdate=func.now())
|
||||
@@ -1,55 +0,0 @@
syntax = "proto3";

package user;

message User {
  int32 id = 1;
  string username = 2;
  string password_hash = 3;
  string created_at = 4;
  string updated_at = 5;
}

message GetUserRequest {
  int32 id = 1;
}

message CreateUserRequest {
  string username = 1;
  string password_hash = 2;
}

message UpdateUserRequest {
  int32 id = 1;
  string username = 2;
  string password_hash = 3;
}

message DeleteUserRequest {
  int32 id = 1;
}

message ListUsersRequest {
  int32 page = 1;
  int32 page_size = 2;
}

message UserResponse {
  User user = 1;
}

message UsersResponse {
  repeated User users = 1;
  int32 total = 2;
}

message EmptyResponse {
}

service UserService {
  rpc GetUser(GetUserRequest) returns (UserResponse);
  rpc CreateUser(CreateUserRequest) returns (UserResponse);
  rpc UpdateUser(UpdateUserRequest) returns (UserResponse);
  rpc DeleteUser(DeleteUserRequest) returns (EmptyResponse);
  rpc ListUsers(ListUsersRequest) returns (UsersResponse);
}
@@ -1,16 +0,0 @@
from .config import settings
from .logger import logger
from .security import create_access_token, verify_token, get_password_hash, verify_password
from .grpc_client import GrpcClient
from .redis_client import RedisClient

__all__ = [
    "settings",
    "logger",
    "create_access_token",
    "verify_token",
    "get_password_hash",
    "verify_password",
    "GrpcClient",
    "RedisClient",
]
@@ -1,33 +0,0 @@
from pydantic_settings import BaseSettings


class Settings(BaseSettings):
    # Environment
    env: str = "development"
    debug: bool = True

    # PostgreSQL
    postgres_user: str = "admin"
    postgres_password: str = "password"
    postgres_db: str = "postgres"
    postgres_host: str = "postgres"
    postgres_port: int = 5432

    # Redis
    redis_url: str = "redis://redis:6379/0"

    # gRPC
    grpc_port: int = 50051

    # HTTP
    http_port: int = 8000

    @property
    def database_url(self) -> str:
        return f"postgresql+asyncpg://{self.postgres_user}:{self.postgres_password}@{self.postgres_host}:{self.postgres_port}/{self.postgres_db}"

    class Config:
        env_file = ".env"
        case_sensitive = False


settings = Settings()
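The `database_url` property simply interpolates the five PostgreSQL settings into an asyncpg-flavoured SQLAlchemy DSN. A minimal standalone sketch of that composition (a plain function, no pydantic), using the default values above:

```python
def database_url(user: str, password: str, host: str, port: int, db: str) -> str:
    # Mirrors Settings.database_url: postgresql+asyncpg://user:pass@host:port/db
    return f"postgresql+asyncpg://{user}:{password}@{host}:{port}/{db}"

# With the defaults from Settings:
print(database_url("admin", "password", "postgres", 5432, "postgres"))
# → postgresql+asyncpg://admin:password@postgres:5432/postgres
```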
@@ -1,15 +0,0 @@
import grpc


class GrpcClient:
    def __init__(self, host: str, port: int):
        self.host = host
        self.port = port
        self.channel = None

    def __enter__(self):
        self.channel = grpc.insecure_channel(f"{self.host}:{self.port}")
        return self.channel

    def __exit__(self, exc_type, exc_val, exc_tb):
        if self.channel:
            self.channel.close()
@@ -1,20 +0,0 @@
from loguru import logger
import sys

# Configure logging: a colorized console sink plus a rotating file sink
logger.remove()
logger.add(
    sys.stdout,
    level="INFO",
    format="{time:YYYY-MM-DD HH:mm:ss} | {level: <8} | {name}:{function}:{line} | {message}",
    colorize=True
)

logger.add(
    "app.log",
    rotation="500 MB",
    compression="zip",
    level="DEBUG",
    format="{time:YYYY-MM-DD HH:mm:ss} | {level: <8} | {name}:{function}:{line} | {message}"
)
@@ -1,34 +0,0 @@
from typing import Optional

import redis.asyncio as redis

from shared.utils.config import settings


class RedisClient:
    def __init__(self):
        self.redis_url = settings.redis_url
        self.pool = None

    async def connect(self):
        if not self.pool:
            self.pool = redis.from_url(self.redis_url, encoding="utf-8", decode_responses=True)
        return self.pool

    async def disconnect(self):
        if self.pool:
            await self.pool.close()
            self.pool = None

    async def get(self, key: str):
        pool = await self.connect()
        return await pool.get(key)

    async def set(self, key: str, value: str, expire: Optional[int] = None):
        pool = await self.connect()
        if expire:
            await pool.set(key, value, ex=expire)
        else:
            await pool.set(key, value)

    async def delete(self, key: str):
        pool = await self.connect()
        await pool.delete(key)


redis_client = RedisClient()
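The get/set/delete call pattern above can be exercised without a Redis server by substituting a minimal in-memory stand-in. `FakeRedis` below is purely illustrative (it ignores expiry), not part of the project:

```python
import asyncio

class FakeRedis:
    # In-memory stand-in exposing the same async get/set/delete surface.
    def __init__(self):
        self._data = {}

    async def get(self, key):
        return self._data.get(key)

    async def set(self, key, value, ex=None):
        # `ex` accepted for signature parity; expiry itself is not simulated.
        self._data[key] = value

    async def delete(self, key):
        self._data.pop(key, None)

async def main():
    r = FakeRedis()
    await r.set("greeting", "hello", ex=60)
    print(await r.get("greeting"))  # → hello
    await r.delete("greeting")
    print(await r.get("greeting"))  # → None

asyncio.run(main())
```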
@@ -1,32 +0,0 @@
from datetime import datetime, timedelta, timezone
from typing import Optional

import jwt
from passlib.context import CryptContext

# NOTE: in production this should come from settings, not a hardcoded literal
SECRET_KEY = "your-secret-key"
ALGORITHM = "HS256"
ACCESS_TOKEN_EXPIRE_MINUTES = 30

pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")


def create_access_token(data: dict, expires_delta: Optional[timedelta] = None):
    to_encode = data.copy()
    if expires_delta:
        expire = datetime.now(timezone.utc) + expires_delta
    else:
        expire = datetime.now(timezone.utc) + timedelta(minutes=ACCESS_TOKEN_EXPIRE_MINUTES)
    to_encode.update({"exp": expire})
    encoded_jwt = jwt.encode(to_encode, SECRET_KEY, algorithm=ALGORITHM)
    return encoded_jwt


def verify_token(token: str):
    try:
        payload = jwt.decode(token, SECRET_KEY, algorithms=[ALGORITHM])
        return payload
    except jwt.PyJWTError:
        return None


def get_password_hash(password: str):
    return pwd_context.hash(password)


def verify_password(plain_password: str, hashed_password: str):
    return pwd_context.verify(plain_password, hashed_password)
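For intuition, this is roughly what `jwt.encode(..., algorithm="HS256")` produces under the hood: `base64url(header).base64url(payload).base64url(HMAC-SHA256(signing input))`. The stdlib-only sketch below is illustrative, not a replacement for PyJWT (it skips `exp` handling and claim validation):

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWT uses unpadded base64url
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def hs256_encode(payload: dict, secret: str) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}, separators=(",", ":")).encode())
    body = b64url(json.dumps(payload, separators=(",", ":")).encode())
    signing_input = f"{header}.{body}".encode()
    sig = b64url(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def hs256_verify(token: str, secret: str) -> bool:
    header, body, sig = token.split(".")
    expected = b64url(hmac.new(secret.encode(), f"{header}.{body}".encode(), hashlib.sha256).digest())
    # Constant-time comparison, as jwt.decode does internally
    return hmac.compare_digest(sig, expected)

token = hs256_encode({"sub": "alice"}, "your-secret-key")
print(hs256_verify(token, "your-secret-key"))  # → True
print(hs256_verify(token, "wrong-secret"))     # → False
```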
@@ -1,192 +0,0 @@
# asset_helper_backend Microservice Base Architecture (V1)

Generate the complete asset_helper_backend monorepo microservice backend project in one pass, strictly following every requirement below. All files must compile and run directly, the project must start locally with one Docker Compose command with no errors, the code style must be uniform with complete type annotations, and the result must suit a solo-developer + AI (Trae) workflow. It must also start on a machine with no extra environment (only Docker and Docker Compose installed) and include clear startup verification steps so I can confirm the project runs. **This is the first-version base architecture: scaffold only, with no business logic of any kind.**

## I. Strictly Locked Technology Stack Versions (verbatim, not to be changed)

1. Gateway: FastAPI + WebSocket (the sole external entry point; forwards HTTP requests and handles WebSocket real-time communication; built on Starlette for async high concurrency, supporting text and binary messages)

2. Inter-service communication: FastAPI + gRPC (high-performance cross-service calls over HTTP/2, with bidirectional streaming, header compression, and multiplexing; higher throughput than traditional REST APIs)

3. Database: PostgreSQL 18.3-alpine3.23 (an independent DB instance per microservice, with data persistence and init-script execution; health checks ensure availability)

4. Cache/messaging: Redis 8.6.2-alpine (lightweight cache supporting high-concurrency access, with a health check)

5. Containerization: Docker + Docker Compose (service isolation and fast deployment, with separate local-development and production configurations)

6. Python: 3.13.7-alpine3.22 (small base image suited to containerized deployment, with async I/O support)

7. Dependency management: pyproject.toml + uv (globally unified dependency management with uv workspaces enabled, consolidating all sub-service dependencies)

8. Development model: solo developer + AI (Trae), monorepo architecture, microservices independently deployable and extensible
## II. Required Directory Structure (verbatim; every file must be generated, none omitted)

asset_helper_backend/
├── docker-compose.yml           # Local development config (all services plus health checks)
├── docker-compose.prod.yml      # Production config (no mounts; restart policies configured)
├── .env.example                 # Environment variable template (all required variables; copy to .env to use)
├── Makefile                     # Common command entry point for solo + AI development (all core commands, directly runnable)
├── .gitignore                   # Avoid AI commit conflicts (ignore .env, virtualenvs, compiled files, etc.)
├── pyproject.toml               # Global Python dependency spec (uv workspaces enabled, unified versions)
├── scripts/                     # Directly executable scripts (executable permission preset, no manual changes)
│   ├── init-db.sh               # Initialize databases: an independent DB, user, and grants per microservice
│   └── proto-gen.sh             # Compile gRPC proto files into each service's designated directory
├── shared/                      # Shared core package (pure utilities, no business logic, usable by all services, matching the V1 base-architecture scope)
│   ├── src/shared/
│   │   ├── models/              # Pydantic base model and SQLAlchemy base class
│   │   ├── exceptions/          # Unified exception classes and exception handlers
│   │   ├── middleware/          # Common middleware (correlation ID, logging, exception handling)
│   │   ├── utils/               # Utilities (config loading, logging, security, gRPC/Redis clients)
│   │   └── proto/               # gRPC protocol definitions (user.proto first)
│   ├── pyproject.toml           # Shared package dependency config
│   └── Dockerfile               # Shared package build config
├── services/                    # Microservices (solo + AI development; unlimited additions)
│   ├── gateway/                 # External entry point: HTTP + WebSocket; framework only, no business handling
│   │   ├── app/
│   │   │   ├── api/             # HTTP routes (forwarded to gRPC services)
│   │   │   ├── ws/              # WebSocket handling (connection management, message handling)
│   │   │   ├── core/            # Config, startup, dependency injection
│   │   │   └── main.py          # FastAPI app startup (HTTP and WebSocket)
│   │   ├── Dockerfile           # Built on python:3.13.7-alpine3.22, launches uvicorn
│   │   └── requirements.txt     # Gateway dependencies
│   └── user-service/            # User microservice: pure gRPC, no HTTP; framework only, no business logic
│       ├── app/
│       │   ├── api/             # gRPC service implementation (all methods defined in the proto)
│       │   ├── core/            # Config (inherits shared.config)
│       │   ├── db/              # Async SQLAlchemy session and User model
│       │   └── main.py          # gRPC server startup
│       ├── Dockerfile           # Built on python:3.13.7-alpine3.22, launches the gRPC service
│       └── requirements.txt     # User service dependencies
└── infra/                       # Infrastructure (database, cache, reverse proxy configs)
    ├── postgres/                # PostgreSQL config (data persistence, initialization)
    ├── redis/                   # Redis config (data persistence)
    └── nginx/                   # Nginx reverse proxy config
## III. Specific Requirements for Core Files (implement fully, nothing omitted)

### 1. Root-level files

(1) pyproject.toml: require Python 3.13.7; global dependencies include fastapi, uvicorn, grpcio, grpcio-tools, pydantic, sqlalchemy, asyncpg, redis, python-dotenv, and loguru; dev dependencies include ruff, pytest, pytest-asyncio, and pre-commit; enable uv workspaces covering shared/, services/gateway/, and services/user-service/.

(2) Makefile: must implement the all, build, up, down, logs, proto-gen, init-db, test, and lint targets, directly runnable with no manual edits.

(3) .gitignore: ignore .env, __pycache__, .venv, *.pyc, dist/, build/, *.log, and docker-compose.override.yml.

(4) scripts/proto-gen.sh: automatically compile shared/src/shared/proto/**/*.proto into each service's app/grpc_generated/ directory; executable permission set, directly runnable.

(5) scripts/init-db.sh: create an independent PostgreSQL database, user, and grants for each microservice; executable permission set, directly runnable; supports DB init-script execution.

(6) .env.example: include the required environment variables such as ENV, DEBUG, POSTGRES_USER, POSTGRES_PASSWORD, REDIS_URL, GRPC_PORT, and HTTP_PORT, with default values noted.
### 2. shared package

(1) shared/pyproject.toml: named asset_helper_shared; dependencies include fastapi, pydantic, grpcio, sqlalchemy, asyncpg, redis, and loguru.

(2) shared.models: implement BaseModel (Pydantic base class) and BaseDBModel (SQLAlchemy base class with id, created_at, and updated_at fields).

(3) shared.exceptions: implement AppException (base class) plus concrete exceptions such as NotFoundError, UnauthorizedError, ForbiddenError, BadRequestError, and InternalError, along with a global exception handler the gateway can use directly.

(4) shared.middleware: implement CorrelationIdMiddleware, LoggingMiddleware, and ExceptionMiddleware, adapted to FastAPI's async request flow.

(5) shared.utils: implement config.py (Pydantic Settings loading environment variables), logger.py (global loguru configuration), security.py (JWT creation/verification and password hashing), grpc_client.py (gRPC client base class supporting service calls), and redis_client.py (Redis connection pool with async operations).

(6) shared/proto/user.proto: define a UserService with the GetUser, CreateUser, UpdateUser, DeleteUser, and ListUsers interfaces; proto3 syntax; request and response message structures with sensibly assigned field numbers; interface templates only, with no business rules or business field design, in keeping with the base-architecture scope.
### 3. user-service (pure gRPC microservice)

(1) Stack and ports: Python 3.13.7-alpine3.22; gRPC service listening on 0.0.0.0:50051; independent PostgreSQL database user_db.

(2) Layout: app/main.py starts the gRPC server; app/core/config.py extends shared.config; app/db/session.py implements an async SQLAlchemy session; app/db/models.py implements the User model (extending shared.models.BaseDBModel) with base fields only and no business fields; app/api/user_service.py implements every UserService gRPC method (basic simulated database CRUD only, with no business logic or rules); app/grpc_generated/ holds the auto-generated gRPC code.

(3) Dockerfile: based on python:3.13.7-alpine3.22; install dependencies, copy code, start the gRPC service, and configure a health check.
### 4. gateway (external entry point)

(1) Stack and ports: FastAPI + WebSocket; HTTP service on 0.0.0.0:8000; WebSocket endpoint at /ws; internally calls user-service:50051 over gRPC.

(2) Layout: app/main.py creates the FastAPI app integrating HTTP and WebSocket; app/core/config.py holds service settings; app/api/v1/users.py implements user HTTP CRUD routes (pure forwarding to the gRPC service, with no business handling); app/ws/manager.py implements the WebSocket connection manager (multiple connections); app/ws/handlers.py implements user message handling (text and binary, with no business-specific message logic); app/dependencies.py implements gRPC client injection.

(3) Dockerfile: based on python:3.13.7-alpine3.22; install dependencies, copy code, launch uvicorn (WebSocket-capable), and configure a health check.
### 5. Docker Compose configuration

(1) docker-compose.yml (local): includes four services: postgres (18.3-alpine3.23), redis (8.6.2-alpine), user-service, and gateway; all joined to the asset_helper_net network; health-check-based service dependencies (pg_isready for postgres, redis-cli ping for redis); environment variables read from .env; persistent data volumes; no local Python, PostgreSQL, or Redis required, only Docker and Docker Compose; include pre-start notes (install Docker and Docker Compose first, then copy .env.example to .env).

(2) docker-compose.prod.yml (production): no local file mounts; restart policy set to always; only gateway:8000 exposed, with no other service ports published; hardened security configuration with development-only parameters removed.
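Requirement (1) above could be sketched as a fragment of docker-compose.yml. The service names, image tags, network name, and health-check commands follow the spec; everything else (intervals, retries, volume name) is an assumption:

```yaml
# Hypothetical docker-compose.yml fragment; intervals and volume names are assumptions.
services:
  postgres:
    image: postgres:18.3-alpine3.23
    env_file: .env
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER}"]
      interval: 5s
      timeout: 3s
      retries: 5
    volumes:
      - pg_data:/var/lib/postgresql/data
    networks:
      - asset_helper_net

  redis:
    image: redis:8.6.2-alpine
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5
    networks:
      - asset_helper_net

  user-service:
    build: ./services/user-service
    env_file: .env
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    networks:
      - asset_helper_net

volumes:
  pg_data:

networks:
  asset_helper_net:
```

The `condition: service_healthy` form of `depends_on` is what makes the health checks gate startup ordering rather than merely reporting status.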
## IV. Final Delivery Requirements (all must be met, otherwise rework is required)

1. All technology stack versions match exactly, with no deviation;

2. The directory structure matches the spec exactly, with nothing missing or extra;

3. Running make proto-gen generates the gRPC code with no compilation errors;

4. Running make init-db initializes each microservice's independent database;

5. Running make up starts all containers in one step, with every service healthy and no errors;

6. Visiting gateway:8000/docs opens the Swagger UI showing the HTTP endpoints;

7. The gateway WebSocket endpoint /ws accepts connections and exchanges messages;

8. The user-service gRPC port 50051 is reachable, and the gateway can call its gRPC interface;

9. The shared package imports cleanly in every service, with no import errors;

10. Code is fully type-annotated and follows Python conventions, with no syntax errors or dead code;

11. Every script runs directly, with no manual permission or configuration changes;

12. New microservices can be added later, flexibly, without affecting existing services;

13. The project starts on a machine without Python, PostgreSQL, or Redis, needing only Docker and Docker Compose; Trae must generate a clear local startup and verification document at the project root, named STARTUP_VERIFY.md;

14. STARTUP_VERIFY.md must include: minimal Docker and Docker Compose installation pointers, the step of copying .env.example to .env, the make up start command, post-start verification steps (Swagger access, WebSocket connection, and gRPC connectivity, each with concrete operations and expected results), and troubleshooting for common startup errors;

15. The verification steps must be simple enough to follow without a technical background, directly confirming the startup result, with an explicit command and expected feedback at every step;

16. Strictly keep to the "first-version base architecture" scope throughout: architecture and utility layers only, with no business logic, business rules, or business fields, and no business-specific code or configuration;

17. The user-service User model contains only generic base fields (such as id, created_at, updated_at, username, and password_hash), with no business fields (roles, permissions, business associations, etc.); the gRPC methods implement basic CRUD only, with no business-logic branching.

> (Note: parts of this document may have been AI-generated.)