feat(infra): Trae completes asset_helper_backend microservice base architecture V1 initialization

Core implementation: set up the Monorepo layout and the base frameworks for the shared package, gateway, and user-service
Technical delivery: pinned the specified tech-stack versions and configured Docker, gRPC, FastAPI, PostgreSQL, Redis, etc., establishing basic inter-service connectivity
Supporting files: generated the Makefile, the environment-variable template, database/script initialization files, and the startup verification document
Architectural scope: skeleton only, with no business logic, business rules, or business-specific fields, serving as a foundation for subsequent feature development
fish
2026-03-27 20:38:10 +08:00
parent 1ad8ec9be5
commit 3f4165fe78
44 changed files with 1407 additions and 0 deletions


@@ -0,0 +1,17 @@
# Environment
ENV=development
DEBUG=True
# PostgreSQL
POSTGRES_USER=admin
POSTGRES_PASSWORD=password
POSTGRES_DB=postgres
# Redis
REDIS_URL=redis://redis:6379/0
# gRPC
GRPC_PORT=50051
# HTTP
HTTP_PORT=8000

asset_helper_backend/.gitignore vendored Normal file

@@ -0,0 +1,8 @@
.env
__pycache__
.venv
*.pyc
dist/
build/
*.log
docker-compose.override.yml


@@ -0,0 +1,27 @@
.PHONY: all build up down logs proto-gen init-db test lint

all: build up

build:
	docker-compose build

up:
	docker-compose up -d

down:
	docker-compose down

logs:
	docker-compose logs -f

proto-gen:
	chmod +x ./scripts/proto-gen.sh
	./scripts/proto-gen.sh

init-db:
	chmod +x ./scripts/init-db.sh
	./scripts/init-db.sh

test:
	docker-compose run --rm gateway python -m pytest

lint:
	docker-compose run --rm gateway python -m ruff check .


@@ -0,0 +1,104 @@
# Asset Helper Backend Startup Verification Guide
This document walks through starting and verifying the Asset Helper Backend project locally. With only Docker and Docker Compose installed, and no other environment setup, you can bring up the project and verify that it works.
## 1. Environment Preparation
### 1. Install Docker
- **Windows**: download and install [Docker Desktop for Windows](https://www.docker.com/products/docker-desktop)
- **macOS**: download and install [Docker Desktop for Mac](https://www.docker.com/products/docker-desktop)
- **Linux**: install Docker following the [official documentation](https://docs.docker.com/engine/install/)
### 2. Install Docker Compose
- Docker Desktop already bundles Docker Compose; no separate installation is needed
- On Linux without Docker Compose, install it following the [official documentation](https://docs.docker.com/compose/install/)
## 2. Project Startup
### 1. Enter the project directory
```bash
cd asset_helper_backend
```
### 2. Configure environment variables
Copy the environment-variable template to an actual environment file:
```bash
cp .env.example .env
```
### 3. Start the services
Start every service with a single Makefile command:
```bash
make up
```
This command will:
- build the Docker images for all services
- start the PostgreSQL, Redis, user-service, and gateway services
- wire up the inter-service networking and dependency ordering
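Local tweaks do not need to touch the base compose file: Docker Compose automatically merges a `docker-compose.override.yml` if one exists, and the project's .gitignore already excludes that file. A minimal hypothetical sketch (not part of the repo), for exposing Postgres to the host while debugging:

```yaml
# docker-compose.override.yml (local-only example; git-ignored)
services:
  postgres:
    ports:
      - "5432:5432"
```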
## 3. Service Verification
### 1. Check service status
Verify that all services are running:
```bash
docker-compose ps
```
Expected result: every service reports a status of `Up`
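Besides `docker-compose ps`, a quick stdlib-only sketch can confirm that the published ports are actually accepting TCP connections. The ports assume the defaults from `.env`, and `port_open` is a helper introduced here for illustration, not part of the project:

```python
import socket


def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    # Default published ports from .env / docker-compose.yml
    for name, port in [("gateway", 8000), ("postgres", 5432), ("redis", 6379)]:
        print(f"{name:<10} {'open' if port_open('localhost', port) else 'CLOSED'}")
```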
### 2. Verify the gateway HTTP service
- Open `http://localhost:8000` in a browser
- Expected result: `{"message": "Asset Helper Gateway is running"}`
### 3. Verify the Swagger docs
- Open `http://localhost:8000/docs` in a browser
- Expected result: the FastAPI-generated Swagger UI, listing the user-related HTTP endpoints
### 4. Verify the WebSocket connection
Connect with a WebSocket client tool (for example the [WebSocket Echo Test](https://www.websocket.org/echo.html)):
- WebSocket URL: `ws://localhost:8000/ws`
- Send a message; expected result: a reply from the server
### 5. Verify gRPC connectivity
Test user-service with a gRPC client tool such as `grpcurl`:
```bash
docker run --network asset_helper_backend_asset_helper_net --rm fullstorydev/grpcurl -plaintext user-service:50051 list user.UserService
```
Expected result: all methods of UserService are listed
## 4. Troubleshooting
### 1. Services fail to start
- **Check the logs**: inspect service logs with `make logs`
- **Check for port conflicts**: make sure ports 8000, 50051, 5432, and 6379 are free
- **Check environment variables**: make sure the .env file is configured correctly
### 2. Database connection failures
- Run `make init-db` to initialize the database
- Check that the PostgreSQL service is running
### 3. gRPC call failures
- Check that user-service started successfully
- Check the network connectivity between services
### 4. WebSocket connection failures
- Check that the gateway service is running
- Check that the WebSocket endpoint URL is correct
## 5. Stopping the Services
Once verification is complete, stop all services with:
```bash
make down
```
## 6. Production Deployment
Start with the production compose file:
```bash
docker-compose -f docker-compose.prod.yml up -d
```
---
Following the steps above, you can start and verify every function of the Asset Helper Backend project. If you run into problems, consult the troubleshooting section or the service logs for details.


@@ -0,0 +1,78 @@
version: '3.8'

networks:
  asset_helper_net:
    driver: bridge

volumes:
  postgres_data:
  redis_data:

services:
  postgres:
    image: postgres:18.3-alpine3.23
    environment:
      POSTGRES_USER: ${POSTGRES_USER:-admin}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-password}
      POSTGRES_DB: ${POSTGRES_DB:-postgres}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      # Interpolated by Compose, so the check follows the configured user
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-admin}"]
      interval: 30s
      timeout: 3s
      start_period: 5s
      retries: 3
    restart: always
    networks:
      - asset_helper_net

  redis:
    image: redis:8.6.2-alpine
    volumes:
      - redis_data:/data
    healthcheck:
      test: ["CMD-SHELL", "redis-cli ping"]
      interval: 30s
      timeout: 3s
      start_period: 5s
      retries: 3
    restart: always
    networks:
      - asset_helper_net

  user-service:
    build:
      # Build from the repo root so the Dockerfile can copy the shared package
      context: .
      dockerfile: ./services/user-service/Dockerfile
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    environment:
      - POSTGRES_USER=${POSTGRES_USER:-admin}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-password}
      - REDIS_URL=${REDIS_URL:-redis://redis:6379/0}
      - GRPC_PORT=${GRPC_PORT:-50051}
    restart: always
    networks:
      - asset_helper_net

  gateway:
    build:
      context: .
      dockerfile: ./services/gateway/Dockerfile
    ports:
      - "8000:8000"
    depends_on:
      user-service:
        condition: service_started
      redis:
        condition: service_healthy
    environment:
      - REDIS_URL=${REDIS_URL:-redis://redis:6379/0}
      - HTTP_PORT=${HTTP_PORT:-8000}
      - DEBUG=False
      - ENV=production
    restart: always
    networks:
      - asset_helper_net


@@ -0,0 +1,78 @@
version: '3.8'

networks:
  asset_helper_net:
    driver: bridge

volumes:
  postgres_data:
  redis_data:

services:
  postgres:
    image: postgres:18.3-alpine3.23
    environment:
      POSTGRES_USER: ${POSTGRES_USER:-admin}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-password}
      POSTGRES_DB: ${POSTGRES_DB:-postgres}
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      # Interpolated by Compose, so the check follows the configured user
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-admin}"]
      interval: 30s
      timeout: 3s
      start_period: 5s
      retries: 3
    networks:
      - asset_helper_net

  redis:
    image: redis:8.6.2-alpine
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data
    healthcheck:
      test: ["CMD-SHELL", "redis-cli ping"]
      interval: 30s
      timeout: 3s
      start_period: 5s
      retries: 3
    networks:
      - asset_helper_net

  user-service:
    build:
      # Build from the repo root so the Dockerfile can copy the shared package
      context: .
      dockerfile: ./services/user-service/Dockerfile
    ports:
      - "50051:50051"
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    environment:
      - POSTGRES_USER=${POSTGRES_USER:-admin}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-password}
      - REDIS_URL=${REDIS_URL:-redis://redis:6379/0}
      - GRPC_PORT=${GRPC_PORT:-50051}
    networks:
      - asset_helper_net

  gateway:
    build:
      context: .
      dockerfile: ./services/gateway/Dockerfile
    ports:
      - "8000:8000"
    depends_on:
      user-service:
        condition: service_started
      redis:
        condition: service_healthy
    environment:
      - REDIS_URL=${REDIS_URL:-redis://redis:6379/0}
      - HTTP_PORT=${HTTP_PORT:-8000}
    networks:
      - asset_helper_net


@@ -0,0 +1,49 @@
# Nginx reverse-proxy configuration
user nginx;
worker_processes auto;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    error_log /var/log/nginx/error.log warn;

    sendfile on;
    keepalive_timeout 65;

    upstream gateway {
        server gateway:8000;
    }

    server {
        listen 80;
        server_name localhost;

        location / {
            proxy_pass http://gateway;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }

        location /ws {
            proxy_pass http://gateway;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
        }
    }
}


@@ -0,0 +1,17 @@
-- Create the service user and database
CREATE USER user_service WITH PASSWORD 'password';
CREATE DATABASE user_db OWNER user_service;
GRANT ALL PRIVILEGES ON DATABASE user_db TO user_service;

-- Connect to user_db and create the tables
\c user_db
CREATE TABLE IF NOT EXISTS users (
    id SERIAL PRIMARY KEY,
    username VARCHAR(255) UNIQUE NOT NULL,
    password_hash VARCHAR(255) NOT NULL,
    created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
    updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);
-- The table is created by the superuser, so hand it over to user_service
ALTER TABLE users OWNER TO user_service;
CREATE INDEX IF NOT EXISTS idx_users_username ON users(username);


@@ -0,0 +1,21 @@
# Redis configuration
# Bind address
bind 0.0.0.0
# Port
port 6379
# Persistence
appendonly yes
appendfsync everysec
# Memory limit
maxmemory 128mb
maxmemory-policy allkeys-lru
# Log level
loglevel notice
# Log to stdout; the official container image ships no /var/log/redis directory
logfile ""


@@ -0,0 +1,41 @@
[project]
name = "asset_helper_backend"
version = "0.1.0"
description = "Asset Helper Backend Monorepo"
authors = [
    { name = "Author", email = "author@example.com" }
]
requires-python = ">=3.13.7"
dependencies = [
    "fastapi",
    "uvicorn",
    "grpcio",
    "grpcio-tools",
    "pydantic",
    "pydantic-settings",
    "sqlalchemy",
    "asyncpg",
    "redis",
    "python-dotenv",
    "loguru",
]

# uv expects the singular [tool.uv.workspace] table
[tool.uv.workspace]
members = [
    "shared",
    "services/gateway",
    "services/user-service",
]

[dependency-groups]
dev = [
    "ruff",
    "pytest",
    "pytest-asyncio",
    "pre-commit",
]

[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"


@@ -0,0 +1,15 @@
#!/bin/bash
# Database initialization script
set -e

echo "Initializing database..."
# Wait for the PostgreSQL service to come up
sleep 5
# Create the service user and database (mirrors postgres/init.sql for pre-existing volumes)
docker-compose exec postgres psql -U admin -c "CREATE USER user_service WITH PASSWORD 'password';"
docker-compose exec postgres psql -U admin -c "CREATE DATABASE user_db OWNER user_service;"
docker-compose exec postgres psql -U admin -c "GRANT ALL PRIVILEGES ON DATABASE user_db TO user_service;"
echo "Database initialization complete!"


@@ -0,0 +1,24 @@
#!/bin/bash
# Compile the gRPC protocol files
set -e

echo "Compiling gRPC protocol files..."
# Create the output directories and make them importable packages
mkdir -p ./services/gateway/app/grpc_generated
mkdir -p ./services/user-service/app/grpc_generated
touch ./services/gateway/app/grpc_generated/__init__.py
touch ./services/user-service/app/grpc_generated/__init__.py
# Compile user.proto for each service
python -m grpc_tools.protoc \
    --proto_path=./shared/src/shared/proto \
    --python_out=./services/gateway/app/grpc_generated \
    --grpc_python_out=./services/gateway/app/grpc_generated \
    ./shared/src/shared/proto/user.proto
python -m grpc_tools.protoc \
    --proto_path=./shared/src/shared/proto \
    --python_out=./services/user-service/app/grpc_generated \
    --grpc_python_out=./services/user-service/app/grpc_generated \
    ./shared/src/shared/proto/user.proto
echo "gRPC protocol files compiled!"


@@ -0,0 +1,19 @@
FROM python:3.13.7-alpine3.22
WORKDIR /app
# Built with the repository root as the build context (see docker-compose.yml):
# COPY cannot reference paths above the context, so ../../shared would fail.
COPY services/gateway/requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY shared/ /shared/
# Install the shared package
RUN pip install -e /shared
COPY services/gateway/app/ ./app/
EXPOSE 8000
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD python -c "import socket; s = socket.socket(socket.AF_INET, socket.SOCK_STREAM); s.connect(('localhost', 8000)); s.close(); print('Healthy')" || exit 1
# app.main:app keeps the `app` package importable for the `from app...` imports
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]


@@ -0,0 +1,40 @@
import grpc
from fastapi import APIRouter, Depends, HTTPException

from app.dependencies import get_user_service_client
from app.grpc_generated import user_pb2, user_pb2_grpc

router = APIRouter()


def _grpc_to_http(exc: grpc.aio.AioRpcError) -> HTTPException:
    # Map gRPC status codes raised by user-service onto HTTP errors
    if exc.code() == grpc.StatusCode.NOT_FOUND:
        return HTTPException(status_code=404, detail="User not found")
    return HTTPException(status_code=502, detail=exc.details())


@router.get("/users/{user_id}")
async def get_user(user_id: int, client: user_pb2_grpc.UserServiceStub = Depends(get_user_service_client)):
    try:
        response = await client.GetUser(user_pb2.GetUserRequest(id=user_id))
    except grpc.aio.AioRpcError as exc:
        raise _grpc_to_http(exc)
    return response.user


@router.post("/users")
async def create_user(username: str, password_hash: str, client: user_pb2_grpc.UserServiceStub = Depends(get_user_service_client)):
    response = await client.CreateUser(user_pb2.CreateUserRequest(username=username, password_hash=password_hash))
    return response.user


@router.put("/users/{user_id}")
async def update_user(user_id: int, username: str, password_hash: str, client: user_pb2_grpc.UserServiceStub = Depends(get_user_service_client)):
    try:
        response = await client.UpdateUser(user_pb2.UpdateUserRequest(id=user_id, username=username, password_hash=password_hash))
    except grpc.aio.AioRpcError as exc:
        raise _grpc_to_http(exc)
    return response.user


@router.delete("/users/{user_id}")
async def delete_user(user_id: int, client: user_pb2_grpc.UserServiceStub = Depends(get_user_service_client)):
    try:
        await client.DeleteUser(user_pb2.DeleteUserRequest(id=user_id))
    except grpc.aio.AioRpcError as exc:
        raise _grpc_to_http(exc)
    return {"message": "User deleted successfully"}


@router.get("/users")
async def list_users(page: int = 1, page_size: int = 10, client: user_pb2_grpc.UserServiceStub = Depends(get_user_service_client)):
    response = await client.ListUsers(user_pb2.ListUsersRequest(page=page, page_size=page_size))
    return {"users": response.users, "total": response.total}


@@ -0,0 +1,7 @@
from shared.utils.config import Settings


class GatewaySettings(Settings):
    # Inherits the base settings; add gateway-specific settings here
    service_name: str = "gateway"


settings = GatewaySettings()


@@ -0,0 +1,7 @@
import grpc

from app.grpc_generated import user_pb2_grpc


async def get_user_service_client():
    # The routes await the stub calls, so an asyncio (grpc.aio) channel is required;
    # the synchronous shared.utils.grpc_client.GrpcClient would not work here.
    async with grpc.aio.insecure_channel("user-service:50051") as channel:
        yield user_pb2_grpc.UserServiceStub(channel)


@@ -0,0 +1,42 @@
from fastapi import FastAPI, WebSocket

from app.api.v1 import users
from app.core.config import settings
from app.ws.handlers import websocket_handler
from shared.middleware import CorrelationIdMiddleware, LoggingMiddleware, ExceptionMiddleware

app = FastAPI(
    title="Asset Helper Gateway",
    version="0.1.0",
    description="Asset Helper Backend Gateway"
)

# Register middleware
app.add_middleware(CorrelationIdMiddleware)
app.add_middleware(LoggingMiddleware)
app.add_middleware(ExceptionMiddleware)

# Register routers
app.include_router(users.router, prefix="/api/v1")


# WebSocket endpoint
@app.websocket("/ws")
async def websocket_endpoint(websocket: WebSocket):
    await websocket_handler(websocket)


@app.get("/")
async def root():
    return {"message": "Asset Helper Gateway is running"}


@app.get("/health")
async def health_check():
    return {"status": "healthy"}


if __name__ == "__main__":
    import uvicorn

    uvicorn.run("main:app", host="0.0.0.0", port=settings.http_port, reload=True)


@@ -0,0 +1,22 @@
import uuid

from fastapi import WebSocket, WebSocketDisconnect

from app.ws.manager import manager


async def websocket_handler(websocket: WebSocket):
    client_id = str(uuid.uuid4())
    await manager.connect(websocket, client_id)
    try:
        while True:
            data = await websocket.receive()
            if "text" in data:
                message = data["text"]
                await manager.send_personal_message(f"You said: {message}", client_id)
                await manager.broadcast(f"Client {client_id} said: {message}")
            elif "bytes" in data:
                # Echo binary messages back unchanged
                await websocket.send_bytes(data["bytes"])
    except WebSocketDisconnect:
        manager.disconnect(client_id)
        await manager.broadcast(f"Client {client_id} disconnected")


@@ -0,0 +1,24 @@
from typing import Dict

from fastapi import WebSocket


class ConnectionManager:
    def __init__(self):
        self.active_connections: Dict[str, WebSocket] = {}

    async def connect(self, websocket: WebSocket, client_id: str):
        await websocket.accept()
        self.active_connections[client_id] = websocket

    def disconnect(self, client_id: str):
        self.active_connections.pop(client_id, None)

    async def send_personal_message(self, message: str, client_id: str):
        if client_id in self.active_connections:
            await self.active_connections[client_id].send_text(message)

    async def broadcast(self, message: str):
        for connection in list(self.active_connections.values()):
            await connection.send_text(message)


manager = ConnectionManager()


@@ -0,0 +1,13 @@
fastapi==0.104.1
uvicorn==0.24.0
grpcio==1.59.0
grpcio-tools==1.59.0
pydantic==2.5.0
pydantic-settings==2.1.0
sqlalchemy==2.0.23
asyncpg==0.28.0
redis==5.0.1
python-dotenv==1.0.0
loguru==0.7.2
passlib==1.7.4
bcrypt==4.1.2


@@ -0,0 +1,19 @@
FROM python:3.13.7-alpine3.22
WORKDIR /app
# Built with the repository root as the build context (see docker-compose.yml):
# COPY cannot reference paths above the context, so ../../shared would fail.
COPY services/user-service/requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY shared/ /shared/
# Install the shared package
RUN pip install -e /shared
COPY services/user-service/app/ ./app/
EXPOSE 50051
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD python -c "import socket; s = socket.socket(socket.AF_INET, socket.SOCK_STREAM); s.connect(('localhost', 50051)); s.close(); print('Healthy')" || exit 1
# Run as a module so the `from app...` imports resolve
CMD ["python", "-m", "app.main"]


@@ -0,0 +1,112 @@
import grpc
from sqlalchemy import func, select

from app.db.models import User
from app.db.session import AsyncSessionLocal
from app.grpc_generated import user_pb2, user_pb2_grpc


def _user_to_proto(user: User) -> user_pb2.User:
    # Shared ORM-to-protobuf conversion for all RPCs
    return user_pb2.User(
        id=user.id,
        username=user.username,
        password_hash=user.password_hash,
        created_at=user.created_at.isoformat(),
        updated_at=user.updated_at.isoformat(),
    )


class UserService(user_pb2_grpc.UserServiceServicer):
    async def GetUser(self, request, context):
        async with AsyncSessionLocal() as session:
            result = await session.execute(select(User).where(User.id == request.id))
            user = result.scalar_one_or_none()
            if not user:
                context.set_code(grpc.StatusCode.NOT_FOUND)
                context.set_details("User not found")
                return user_pb2.UserResponse()
            return user_pb2.UserResponse(user=_user_to_proto(user))

    async def CreateUser(self, request, context):
        async with AsyncSessionLocal() as session:
            user = User(username=request.username, password_hash=request.password_hash)
            session.add(user)
            await session.commit()
            await session.refresh(user)
            return user_pb2.UserResponse(user=_user_to_proto(user))

    async def UpdateUser(self, request, context):
        async with AsyncSessionLocal() as session:
            result = await session.execute(select(User).where(User.id == request.id))
            user = result.scalar_one_or_none()
            if not user:
                context.set_code(grpc.StatusCode.NOT_FOUND)
                context.set_details("User not found")
                return user_pb2.UserResponse()
            user.username = request.username
            user.password_hash = request.password_hash
            await session.commit()
            await session.refresh(user)
            return user_pb2.UserResponse(user=_user_to_proto(user))

    async def DeleteUser(self, request, context):
        async with AsyncSessionLocal() as session:
            result = await session.execute(select(User).where(User.id == request.id))
            user = result.scalar_one_or_none()
            if not user:
                context.set_code(grpc.StatusCode.NOT_FOUND)
                context.set_details("User not found")
                return user_pb2.EmptyResponse()
            await session.delete(user)
            await session.commit()
            return user_pb2.EmptyResponse()

    async def ListUsers(self, request, context):
        async with AsyncSessionLocal() as session:
            offset = (request.page - 1) * request.page_size
            result = await session.execute(select(User).offset(offset).limit(request.page_size))
            users = result.scalars().all()
            # Count rows in the database instead of loading every user into memory
            total = (await session.execute(select(func.count()).select_from(User))).scalar_one()
            return user_pb2.UsersResponse(users=[_user_to_proto(u) for u in users], total=total)


@@ -0,0 +1,7 @@
from shared.utils.config import Settings


class UserServiceSettings(Settings):
    # Inherits the base settings; add service-specific settings here
    service_name: str = "user-service"


settings = UserServiceSettings()


@@ -0,0 +1,8 @@
from sqlalchemy import Column, String

from shared.models import BaseDBModel


class User(BaseDBModel):
    __tablename__ = "users"
    username = Column(String, unique=True, index=True, nullable=False)
    password_hash = Column(String, nullable=False)


@@ -0,0 +1,16 @@
from sqlalchemy.ext.asyncio import async_sessionmaker, create_async_engine

# Note: this connects to the dedicated user_db created by postgres/init.sql,
# not to the default database from the shared settings.
DATABASE_URL = "postgresql+asyncpg://user_service:password@postgres:5432/user_db"
engine = create_async_engine(DATABASE_URL)
AsyncSessionLocal = async_sessionmaker(engine, expire_on_commit=False)


async def get_db():
    # The async context manager closes the session on exit
    async with AsyncSessionLocal() as session:
        yield session


@@ -0,0 +1,29 @@
import asyncio

import grpc
from loguru import logger

from app.api.user_service import UserService
from app.core.config import settings
from app.db.session import engine
from app.grpc_generated import user_pb2_grpc
from shared.models import BaseDBModel


async def init_db():
    # Create the database tables on startup
    async with engine.begin() as conn:
        await conn.run_sync(BaseDBModel.metadata.create_all)


async def serve():
    server = grpc.aio.server()
    user_pb2_grpc.add_UserServiceServicer_to_server(UserService(), server)
    server.add_insecure_port(f"0.0.0.0:{settings.grpc_port}")
    # Initialize the database before accepting traffic
    await init_db()
    logger.info(f"Starting gRPC server on port {settings.grpc_port}")
    await server.start()
    await server.wait_for_termination()


if __name__ == "__main__":
    asyncio.run(serve())


@@ -0,0 +1,13 @@
fastapi==0.104.1
uvicorn==0.24.0
grpcio==1.59.0
grpcio-tools==1.59.0
pydantic==2.5.0
pydantic-settings==2.1.0
sqlalchemy==2.0.23
asyncpg==0.28.0
redis==5.0.1
python-dotenv==1.0.0
loguru==0.7.2
passlib==1.7.4
bcrypt==4.1.2


@@ -0,0 +1,23 @@
[project]
name = "asset_helper_shared"
version = "0.1.0"
description = "Asset Helper Shared Package"
authors = [
    { name = "Author", email = "author@example.com" }
]
requires-python = ">=3.13.7"
dependencies = [
    "fastapi",
    "pydantic",
    "pydantic-settings",
    "grpcio",
    "sqlalchemy",
    "asyncpg",
    "redis",
    "loguru",
    "python-dotenv",
]

[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"


@@ -0,0 +1,10 @@
from .base import (
    AppException,
    BadRequestError,
    ForbiddenError,
    InternalError,
    NotFoundError,
    UnauthorizedError,
    exception_handler,
)

__all__ = [
    "AppException",
    "NotFoundError",
    "UnauthorizedError",
    "ForbiddenError",
    "BadRequestError",
    "InternalError",
    "exception_handler",
]


@@ -0,0 +1,33 @@
from fastapi import Request, status
from fastapi.responses import JSONResponse


class AppException(Exception):
    def __init__(self, status_code: int, detail: str):
        self.status_code = status_code
        self.detail = detail


class NotFoundError(AppException):
    def __init__(self, detail: str = "Resource not found"):
        super().__init__(status.HTTP_404_NOT_FOUND, detail)


class UnauthorizedError(AppException):
    def __init__(self, detail: str = "Unauthorized"):
        super().__init__(status.HTTP_401_UNAUTHORIZED, detail)


class ForbiddenError(AppException):
    def __init__(self, detail: str = "Forbidden"):
        super().__init__(status.HTTP_403_FORBIDDEN, detail)


class BadRequestError(AppException):
    def __init__(self, detail: str = "Invalid request parameters"):
        super().__init__(status.HTTP_400_BAD_REQUEST, detail)


class InternalError(AppException):
    def __init__(self, detail: str = "Internal server error"):
        super().__init__(status.HTTP_500_INTERNAL_SERVER_ERROR, detail)


async def exception_handler(request: Request, exc: AppException):
    return JSONResponse(status_code=exc.status_code, content={"detail": exc.detail})


@@ -0,0 +1,5 @@
from .correlation_id import CorrelationIdMiddleware
from .logging import LoggingMiddleware
from .exception import ExceptionMiddleware
__all__ = ["CorrelationIdMiddleware", "LoggingMiddleware", "ExceptionMiddleware"]


@@ -0,0 +1,10 @@
import uuid

from fastapi import Request
from starlette.middleware.base import BaseHTTPMiddleware


class CorrelationIdMiddleware(BaseHTTPMiddleware):
    """Attach an X-Correlation-ID header to every request/response pair.

    Subclassing BaseHTTPMiddleware makes the class usable with
    app.add_middleware(), which a bare __call__-based class is not.
    """

    async def dispatch(self, request: Request, call_next):
        correlation_id = request.headers.get("X-Correlation-ID", str(uuid.uuid4()))
        request.state.correlation_id = correlation_id
        response = await call_next(request)
        response.headers["X-Correlation-ID"] = correlation_id
        return response


@@ -0,0 +1,23 @@
from fastapi import HTTPException, Request
from fastapi.responses import JSONResponse
from loguru import logger
from starlette.middleware.base import BaseHTTPMiddleware

from shared.exceptions import AppException, exception_handler


class ExceptionMiddleware(BaseHTTPMiddleware):
    """Translate application and HTTP exceptions into JSON responses."""

    async def dispatch(self, request: Request, call_next):
        try:
            return await call_next(request)
        except AppException as exc:
            return await exception_handler(request, exc)
        except HTTPException as exc:
            return JSONResponse(status_code=exc.status_code, content={"detail": exc.detail})
        except Exception as exc:
            logger.error(f"Unhandled exception: {exc}")
            return JSONResponse(status_code=500, content={"detail": "Internal server error"})


@@ -0,0 +1,29 @@
import time

from fastapi import Request
from loguru import logger
from starlette.middleware.base import BaseHTTPMiddleware


class LoggingMiddleware(BaseHTTPMiddleware):
    """Log request start/completion with timing and correlation id."""

    async def dispatch(self, request: Request, call_next):
        start_time = time.time()
        correlation_id = getattr(request.state, "correlation_id", "-")
        logger.bind(correlation_id=correlation_id).info(
            f"Request started: {request.method} {request.url.path}"
        )
        response = await call_next(request)
        process_time = time.time() - start_time
        logger.bind(correlation_id=correlation_id).info(
            f"Request completed: {request.method} {request.url.path} "
            f"status={response.status_code} time={process_time:.3f}s"
        )
        return response


@@ -0,0 +1,3 @@
from .base import BaseModel, BaseDBModel
__all__ = ["BaseModel", "BaseDBModel"]


@@ -0,0 +1,17 @@
from pydantic import BaseModel as PydanticBaseModel
from sqlalchemy import Column, DateTime, Integer
from sqlalchemy.orm import declarative_base  # sqlalchemy.ext.declarative is deprecated in 2.0
from sqlalchemy.sql import func


class BaseModel(PydanticBaseModel):
    class Config:
        from_attributes = True


Base = declarative_base()


class BaseDBModel(Base):
    __abstract__ = True
    id = Column(Integer, primary_key=True, index=True)
    created_at = Column(DateTime(timezone=True), server_default=func.now())
    updated_at = Column(DateTime(timezone=True), server_default=func.now(), onupdate=func.now())


@@ -0,0 +1,55 @@
syntax = "proto3";

package user;

message User {
  int32 id = 1;
  string username = 2;
  string password_hash = 3;
  string created_at = 4;
  string updated_at = 5;
}

message GetUserRequest {
  int32 id = 1;
}

message CreateUserRequest {
  string username = 1;
  string password_hash = 2;
}

message UpdateUserRequest {
  int32 id = 1;
  string username = 2;
  string password_hash = 3;
}

message DeleteUserRequest {
  int32 id = 1;
}

message ListUsersRequest {
  int32 page = 1;
  int32 page_size = 2;
}

message UserResponse {
  User user = 1;
}

message UsersResponse {
  repeated User users = 1;
  int32 total = 2;
}

message EmptyResponse {
}

service UserService {
  rpc GetUser(GetUserRequest) returns (UserResponse);
  rpc CreateUser(CreateUserRequest) returns (UserResponse);
  rpc UpdateUser(UpdateUserRequest) returns (UserResponse);
  rpc DeleteUser(DeleteUserRequest) returns (EmptyResponse);
  rpc ListUsers(ListUsersRequest) returns (UsersResponse);
}


@@ -0,0 +1,16 @@
from .config import settings
from .grpc_client import GrpcClient
from .logger import logger
from .redis_client import RedisClient
from .security import create_access_token, get_password_hash, verify_password, verify_token

__all__ = [
    "settings",
    "logger",
    "create_access_token",
    "verify_token",
    "get_password_hash",
    "verify_password",
    "GrpcClient",
    "RedisClient",
]


@@ -0,0 +1,33 @@
from pydantic_settings import BaseSettings, SettingsConfigDict


class Settings(BaseSettings):
    # pydantic-settings v2 uses model_config instead of a nested Config class
    model_config = SettingsConfigDict(env_file=".env", case_sensitive=False)

    # Environment
    env: str = "development"
    debug: bool = True
    # PostgreSQL
    postgres_user: str = "admin"
    postgres_password: str = "password"
    postgres_db: str = "postgres"
    postgres_host: str = "postgres"
    postgres_port: int = 5432
    # Redis
    redis_url: str = "redis://redis:6379/0"
    # gRPC
    grpc_port: int = 50051
    # HTTP
    http_port: int = 8000

    @property
    def database_url(self) -> str:
        return (
            f"postgresql+asyncpg://{self.postgres_user}:{self.postgres_password}"
            f"@{self.postgres_host}:{self.postgres_port}/{self.postgres_db}"
        )


settings = Settings()


@@ -0,0 +1,15 @@
import grpc


class GrpcClient:
    """Synchronous insecure-channel helper; async callers should use grpc.aio instead."""

    def __init__(self, host: str, port: int):
        self.host = host
        self.port = port
        self.channel = None

    def __enter__(self):
        self.channel = grpc.insecure_channel(f"{self.host}:{self.port}")
        return self.channel

    def __exit__(self, exc_type, exc_val, exc_tb):
        if self.channel:
            self.channel.close()


@@ -0,0 +1,20 @@
import sys

from loguru import logger

# Logging configuration
logger.remove()
logger.add(
    sys.stdout,
    level="INFO",
    format="{time:YYYY-MM-DD HH:mm:ss} | {level: <8} | {name}:{function}:{line} | {message}",
    colorize=True,
)
logger.add(
    "app.log",
    rotation="500 MB",
    compression="zip",
    level="DEBUG",
    format="{time:YYYY-MM-DD HH:mm:ss} | {level: <8} | {name}:{function}:{line} | {message}",
)


@@ -0,0 +1,34 @@
import redis.asyncio as redis

from shared.utils.config import settings


class RedisClient:
    def __init__(self):
        self.redis_url = settings.redis_url
        self.pool = None

    async def connect(self):
        if not self.pool:
            self.pool = redis.from_url(self.redis_url, encoding="utf-8", decode_responses=True)
        return self.pool

    async def disconnect(self):
        if self.pool:
            await self.pool.close()
            self.pool = None

    async def get(self, key: str):
        pool = await self.connect()
        return await pool.get(key)

    async def set(self, key: str, value: str, expire: int | None = None):
        pool = await self.connect()
        # ex=None means no expiry, so one call covers both cases
        await pool.set(key, value, ex=expire)

    async def delete(self, key: str):
        pool = await self.connect()
        await pool.delete(key)


redis_client = RedisClient()


@@ -0,0 +1,32 @@
from datetime import datetime, timedelta, timezone

import jwt
from passlib.context import CryptContext

# NOTE: scaffold only; load the secret from configuration in real deployments
SECRET_KEY = "your-secret-key"
ALGORITHM = "HS256"
ACCESS_TOKEN_EXPIRE_MINUTES = 30

pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")


def create_access_token(data: dict, expires_delta: timedelta | None = None):
    to_encode = data.copy()
    # datetime.utcnow() is deprecated; use an aware UTC timestamp
    expire = datetime.now(timezone.utc) + (
        expires_delta or timedelta(minutes=ACCESS_TOKEN_EXPIRE_MINUTES)
    )
    to_encode.update({"exp": expire})
    return jwt.encode(to_encode, SECRET_KEY, algorithm=ALGORITHM)


def verify_token(token: str):
    try:
        return jwt.decode(token, SECRET_KEY, algorithms=[ALGORITHM])
    except jwt.PyJWTError:
        return None


def get_password_hash(password: str):
    return pwd_context.hash(password)


def verify_password(plain_password: str, hashed_password: str):
    return pwd_context.verify(plain_password, hashed_password)