Initial commit of Deep Research Mode

commit cb97f7c49a by xyz, 2026-01-07 11:02:05 +08:00
29 changed files with 1674 additions and 0 deletions

.env.example (new file, 11 lines)

@@ -0,0 +1,11 @@
# Copy this file to .env and fill in your API key
# API keys are loaded automatically from environment variables; no need to enter them in the UI
# AIHubMix API key (used by default)
AIHUBMIX_API_KEY=sk-your-api-key-here
# Anthropic API key (optional)
ANTHROPIC_API_KEY=your-anthropic-api-key-here
# OpenAI API key (optional)
OPENAI_API_KEY=your-openai-api-key-here

Project_Design.md (new file, 156 lines)

@@ -0,0 +1,156 @@
# Multi-Agent Decision Workshop
## 🎯 One-Sentence Description
**Multi-Agent Decision Workshop** is an AI-assisted decision tool for **product managers, team leads, and founders**. It simulates multiple roles (CEO, CTO, CFO, user advocate, risk analyst, and more) debating a proposal from different perspectives, giving the user well-rounded decision insight.
---
## 👤 Target Users and Pain Points
| User role | Real pain point |
|---------|---------|
| Product manager | Proposal reviews easily lock into a single viewpoint, missing the balance between tech, cost, and UX |
| Founder | Deciding alone, without diverse feedback, breeds blind optimism or excessive caution |
| Team lead | Meetings rarely let everyone speak; the loudest voice dominates the decision |
| Student / individual | Major life decisions (career, investments) lack expert perspectives |
---
## 🔧 Core Features (MVP - the 3 must-haves)
### 1. 📝 Decision Topic Input
- User enters the question or proposal to be decided
- Optional decision type (product proposal, business decision, tech selection, personal planning)
- Optional upload of background material
### 2. 🎭 Multi-Role Debate Simulation
- The system assigns 4-6 agents with distinct perspectives
- Each agent argues from its role's standpoint
- Agents can challenge and respond to one another (multi-round debate)
### 3. 📊 Decision Report Generation
- Aggregates each side's supporting and opposing arguments
- Distills key decision points and risks
- Proposes a decision framework and next actions
---
## 🎭 Built-in Agent Role Library
| Role | Perspective | Focus |
|------|---------|--------|
| 🧑‍💼 CEO | Strategic big picture | Vision, market opportunity, competitive landscape |
| 👨‍💻 CTO | Technical feasibility | Difficulty, resource needs, tech debt |
| 💰 CFO | Financial health | ROI, cost, cash flow, business model |
| 👥 User advocate | User experience | User needs, pain points, usage scenarios |
| ⚠️ Risk analyst | Risk control | Potential risks, failure modes, contingency plans |
| 🚀 Growth hacker | Rapid validation | MVP mindset, growth levers, data-driven experiments |
| 🎨 Product designer | Product experience | Interaction design, user journey, differentiation |
| 📈 Market analyst | Market insight | Market size, trends, competitor analysis |
---
## 🔄 User Interaction Flow
```
User opens the app
    ↓
[Pick decision type]  Product proposal / Business decision / Tech selection / Personal planning
    ↓
[Enter the topic]  "Should we ship the AI assistant feature in Q2?"
    ↓
[Pick participants]  ☑ CEO  ☑ CTO  ☑ CFO  ☑ User advocate  (customizable)
    ↓
[Start debate] → watch the agents debate in real time (streamed output)
    ↓
[Generate report] → download the decision summary as PDF / Markdown
```
---
## 🏗️ Technical Architecture
```
┌──────────────────────────────────────────────────────────────┐
│                     Frontend (Streamlit)                     │
│   Topic input   |   Debate view    |   Decision report view  │
└────────────────────────────┬─────────────────────────────────┘
┌──────────────────────────────────────────────────────────────┐
│                       Backend (Python)                       │
│  Agent manager  |  Debate orchestrator  |  Report generator  │
└────────────────────────────┬─────────────────────────────────┘
┌──────────────────────────────────────────────────────────────┐
│                        LLM API Layer                         │
│            Claude API / OpenAI API / local models            │
└──────────────────────────────────────────────────────────────┘
```
---
## 📁 Project Layout
```
multi_agent_workshop/
├── app.py                    # Streamlit entry point
├── config.py                 # configuration (API keys, model settings)
├── requirements.txt          # dependencies
├── agents/                   # agent code
│   ├── __init__.py
│   ├── base_agent.py         # agent base class
│   ├── agent_factory.py      # agent factory (creates role instances)
│   └── agent_profiles.py     # role definitions and prompt templates
├── orchestrator/             # debate orchestration
│   ├── __init__.py
│   ├── debate_manager.py     # debate flow management
│   └── turn_strategy.py      # speaking-order strategy
├── report/                   # report generation
│   ├── __init__.py
│   ├── summarizer.py         # opinion aggregation
│   └── report_generator.py   # report output
├── ui/                       # UI components
│   ├── __init__.py
│   ├── input_panel.py        # input panel
│   ├── debate_panel.py       # debate view
│   └── report_panel.py       # report view
└── utils/                    # helpers
    ├── __init__.py
    └── llm_client.py         # LLM API wrapper
```
---
## ⏱️ Development Milestones
| Phase | Goal | Estimate |
|------|------|---------|
| Phase 1 | Single-agent Q&A (validate API calls) | 30 min |
| Phase 2 | Multiple agents speaking in sequence | 1 h |
| Phase 3 | Interactive agent debate | 1.5 h |
| Phase 4 | Decision report generation | 1 h |
| Phase 5 | UI polish + export | 1 h |
---
## 🚀 Stretch Features (Nice to Have)
- [ ] Custom agent roles
- [ ] Save past decision sessions
- [ ] Decision tracking (verify outcomes later)
- [ ] Team mode (multiple live participants)
- [ ] Knowledge-base integration (decide on top of internal docs)
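The multi-round debate described above is, at its core, two nested loops over rounds and roles, with every speech appended to a shared transcript. A minimal sketch, with `fake_model` standing in for a real LLM call (all names here are illustrative, not the project's actual API):

```python
# Sketch of the round-based debate flow; fake_model stands in for an LLM call.
def fake_model(role: str, topic: str, transcript: list) -> str:
    return f"[{role}] view on '{topic}' after {len(transcript)} prior speeches"

def run_debate(topic: str, roles: list, max_rounds: int = 2) -> list:
    transcript = []
    for round_num in range(1, max_rounds + 1):  # every round ...
        for role in roles:                      # ... lets every role speak once
            transcript.append(fake_model(role, topic, transcript))
    return transcript

transcript = run_debate("Ship the AI assistant in Q2?", ["CEO", "CTO"], max_rounds=2)
print(len(transcript))  # 2 roles x 2 rounds = 4 speeches
```

Because later calls see the running transcript, agents in round 2 can respond to what was said in round 1, which is exactly what the "agents can challenge and respond" feature relies on.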


agents/__init__.py (new file, 17 lines)

@@ -0,0 +1,17 @@
"""Agents 模块"""
from agents.base_agent import BaseAgent, AgentMessage
from agents.agent_profiles import (
AGENT_PROFILES,
get_agent_profile,
get_all_agents,
get_recommended_agents
)
__all__ = [
"BaseAgent",
"AgentMessage",
"AGENT_PROFILES",
"get_agent_profile",
"get_all_agents",
"get_recommended_agents"
]


agents/agent_profiles.py (new file, 195 lines)

@@ -0,0 +1,195 @@
"""
Agent role profiles - each role's perspective and prompt template
"""
AGENT_PROFILES = {
"ceo": {
"name": "CEO 战略顾问",
"emoji": "🧑‍💼",
"perspective": "战略全局视角",
"focus_areas": ["愿景对齐", "市场机会", "竞争格局", "资源分配", "长期价值"],
"system_prompt": """你是一位经验丰富的 CEO 战略顾问,擅长从全局视角分析决策。
你的思考维度:
- 这个决策是否符合公司/个人的长期愿景?
- 市场时机是否合适?竞争对手在做什么?
- 资源投入是否值得?机会成本是什么?
- 这个决策的战略杠杆点在哪里?
沟通风格:
- 高屋建瓴,关注大局
- 用数据和案例支撑观点
- 敢于提出尖锐问题
- 简洁有力,直击要害"""
},
"cto": {
"name": "CTO 技术专家",
"emoji": "👨‍💻",
"perspective": "技术可行性视角",
"focus_areas": ["技术难度", "资源需求", "技术债务", "可扩展性", "技术趋势"],
"system_prompt": """你是一位资深的 CTO 技术专家,擅长评估技术方案的可行性和风险。
你的思考维度:
- 技术实现难度如何?需要什么技术栈?
- 团队是否具备相关能力?需要多少开发资源?
- 会引入哪些技术债务?如何控制复杂度?
- 系统的可扩展性和可维护性如何?
- 是否符合技术发展趋势?
沟通风格:
- 技术视角,务实分析
- 明确指出技术风险和挑战
- 提供具体的技术建议
- 用技术语言,但确保非技术人员能理解"""
},
"cfo": {
"name": "CFO 财务顾问",
"emoji": "💰",
"perspective": "财务健康视角",
"focus_areas": ["投资回报", "成本结构", "现金流", "盈利模式", "财务风险"],
"system_prompt": """你是一位精明的 CFO 财务顾问,擅长从财务角度评估决策的可行性。
你的思考维度:
- 预期投资回报率(ROI)是多少?回收期多长?
- 成本结构如何?固定成本和变动成本分别是多少?
- 对现金流有什么影响?是否会造成资金压力?
- 盈利模式是否清晰可行?
- 财务风险敞口有多大?
沟通风格:
- 数据驱动,用数字说话
- 关注投入产出比
- 提醒隐藏成本和财务风险
- 理性客观,不被情怀裹挟"""
},
"user_advocate": {
"name": "用户代言人",
"emoji": "👥",
"perspective": "用户体验视角",
"focus_areas": ["用户需求", "使用场景", "痛点解决", "用户旅程", "竞品对比"],
"system_prompt": """你是用户的代言人,始终站在用户角度思考问题。
你的思考维度:
- 用户真的需要这个吗?解决的是真痛点还是伪需求?
- 用户会在什么场景下使用?使用频率如何?
- 用户体验是否流畅?有没有不必要的摩擦?
- 相比现有方案,用户为什么要选择我们?
- 用户愿意为此付费吗?付多少?
沟通风格:
- 始终以用户视角发言
- 用用户的语言描述问题
- 善于讲用户故事和场景
- 对伪需求保持警惕"""
},
"risk_analyst": {
"name": "风险分析师",
"emoji": "⚠️",
"perspective": "风险控制视角",
"focus_areas": ["潜在风险", "失败模式", "应急预案", "依赖关系", "最坏情况"],
"system_prompt": """你是一位专业的风险分析师,擅长识别和评估潜在风险。
你的思考维度:
- 可能出现哪些失败情况?概率和影响如何?
- 有哪些关键依赖?如果依赖失效会怎样?
- 最坏情况是什么?我们能承受吗?
- 有没有应急预案?Plan B 是什么?
- 如何降低风险?哪些风险是可接受的?
沟通风格:
- 思维缜密,考虑周全
- 善于发现隐藏风险
- 不是否定派,而是帮助做好准备
- 提供风险缓解建议"""
},
"growth_hacker": {
"name": "增长黑客",
"emoji": "🚀",
"perspective": "快速验证视角",
"focus_areas": ["MVP思维", "增长杠杆", "数据驱动", "迭代速度", "病毒传播"],
"system_prompt": """你是一位增长黑客,信奉快速验证和数据驱动。
你的思考维度:
- 最小可行产品(MVP)是什么?如何最快验证假设?
- 增长杠杆在哪里?有没有病毒传播的可能?
- 如何设计实验?成功/失败的衡量标准是什么?
- 迭代周期能压缩到多短?
- 有没有低成本快速试错的方法?
沟通风格:
- 行动导向,反对过度分析
- 强调快速迭代和验证
- 用数据说话,关注转化漏斗
- 推崇精益创业方法论"""
},
"product_designer": {
"name": "产品设计师",
"emoji": "🎨",
"perspective": "产品体验视角",
"focus_areas": ["交互设计", "用户旅程", "视觉体验", "差异化", "情感连接"],
"system_prompt": """你是一位产品设计师,追求极致的产品体验。
你的思考维度:
- 产品的核心体验是什么?如何让用户"哇"一下?
- 用户旅程是否流畅?有没有惊喜时刻?
- 视觉和交互设计是否一致且有品位?
- 产品有什么独特的差异化特征?
- 用户会对这个产品产生情感连接吗?
沟通风格:
- 关注细节和体验
- 用场景和故事表达
- 追求简洁和优雅
- 善于发现设计机会"""
},
"market_analyst": {
"name": "市场分析师",
"emoji": "📈",
"perspective": "市场洞察视角",
"focus_areas": ["市场规模", "行业趋势", "竞品分析", "定位策略", "进入时机"],
"system_prompt": """你是一位市场分析师,擅长市场研究和竞争分析。
你的思考维度:
- 目标市场规模有多大?增长趋势如何?
- 行业有什么新趋势?我们是否踩中了?
- 竞争对手在做什么?我们的差异化在哪?
- 市场定位是否清晰?目标客群是谁?
- 进入时机是否合适?先发优势 vs 后发优势?
沟通风格:
- 数据驱动,引用市场研究
- 关注趋势和变化
- 善于对比分析
- 提供市场策略建议"""
}
}
# Recommended agent combinations per decision type
RECOMMENDED_AGENTS = {
"product": ["ceo", "cto", "user_advocate", "product_designer", "growth_hacker"],
"business": ["ceo", "cfo", "market_analyst", "risk_analyst", "growth_hacker"],
"tech": ["cto", "ceo", "risk_analyst", "growth_hacker", "user_advocate"],
"personal": ["ceo", "risk_analyst", "user_advocate", "growth_hacker", "cfo"]
}
def get_agent_profile(agent_id: str) -> dict:
"""Return the profile for the given agent id, or None if unknown"""
return AGENT_PROFILES.get(agent_id)
def get_all_agents() -> list:
"""Return all available agents as (id, name, emoji) dicts"""
return [
{"id": k, "name": v["name"], "emoji": v["emoji"]}
for k, v in AGENT_PROFILES.items()
]
def get_recommended_agents(decision_type: str) -> list:
"""Return the recommended agent combination for a decision type"""
return RECOMMENDED_AGENTS.get(decision_type, list(AGENT_PROFILES.keys())[:5])
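The fallback in `get_recommended_agents` relies on dict insertion order: an unknown decision type returns the first five profiles in definition order. A standalone reproduction, with the profile bodies stubbed out:

```python
# Reproduces the fallback behaviour of get_recommended_agents with stub data.
AGENT_PROFILES = {key: {} for key in [
    "ceo", "cto", "cfo", "user_advocate", "risk_analyst",
    "growth_hacker", "product_designer", "market_analyst",
]}
RECOMMENDED_AGENTS = {
    "tech": ["cto", "ceo", "risk_analyst", "growth_hacker", "user_advocate"],
}

def get_recommended_agents(decision_type: str) -> list:
    # unknown types fall back to the first five defined profiles
    return RECOMMENDED_AGENTS.get(decision_type, list(AGENT_PROFILES.keys())[:5])

print(get_recommended_agents("tech")[0])  # cto
print(get_recommended_agents("legal"))    # first five profile ids, in definition order
```

Since Python 3.7 dicts preserve insertion order, so the `[:5]` slice is deterministic; on that guarantee the fallback is safe.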

agents/base_agent.py (new file, 131 lines)

@@ -0,0 +1,131 @@
"""
Agent base class - defines basic agent behaviour
"""
from dataclasses import dataclass
from typing import Generator
from agents.agent_profiles import get_agent_profile
@dataclass
class AgentMessage:
"""Agent 发言消息"""
agent_id: str
agent_name: str
emoji: str
content: str
round_num: int
class BaseAgent:
"""Agent 基类"""
def __init__(self, agent_id: str, llm_client):
"""
Initialize the agent
Args:
agent_id: agent identifier (e.g. 'ceo', 'cto')
llm_client: LLM client instance
"""
self.agent_id = agent_id
self.llm_client = llm_client
profile = get_agent_profile(agent_id)
if not profile:
raise ValueError(f"Unknown agent id: {agent_id}")
self.name = profile["name"]
self.emoji = profile["emoji"]
self.perspective = profile["perspective"]
self.focus_areas = profile["focus_areas"]
self.system_prompt = profile["system_prompt"]
# conversation history
self.conversation_history = []
def generate_response(
self,
topic: str,
context: str = "",
previous_speeches: list = None,
round_num: int = 1
) -> Generator[str, None, None]:
"""
Generate this agent's speech (streamed)
Args:
topic: the discussion topic
context: background information
previous_speeches: prior speeches from the other agents
round_num: current round number
Yields:
str: streamed text fragments
"""
# build the conversation prompt
user_prompt = self._build_user_prompt(topic, context, previous_speeches, round_num)
# call the LLM to generate the reply
full_response = ""
for chunk in self.llm_client.chat_stream(
system_prompt=self.system_prompt,
user_prompt=user_prompt
):
full_response += chunk
yield chunk
# save to history
self.conversation_history.append({
"round": round_num,
"content": full_response
})
def _build_user_prompt(
self,
topic: str,
context: str,
previous_speeches: list,
round_num: int
) -> str:
"""构建用户 prompt"""
prompt_parts = [f"## 讨论议题\n{topic}"]
if context:
prompt_parts.append(f"\n## 背景信息\n{context}")
if previous_speeches and len(previous_speeches) > 0:
prompt_parts.append("\n## 其他人的观点")
for speech in previous_speeches:
prompt_parts.append(
f"\n**{speech['emoji']} {speech['name']}**:\n{speech['content']}"
)
if round_num == 1:
prompt_parts.append(
f"\n## 你的任务\n"
f"作为 {self.name},请从你的专业视角({self.perspective})对这个议题发表看法。\n"
f"重点关注:{', '.join(self.focus_areas)}\n"
f"请给出 2-3 个核心观点,每个观点用 1-2 句话阐述。保持简洁有力。"
)
else:
prompt_parts.append(
f"\n## 你的任务\n"
f"这是第 {round_num} 轮讨论。请针对其他人的观点进行回应:\n"
f"- 你同意或反对哪些观点?为什么?\n"
f"- 有没有被忽略的重要问题?\n"
f"- 你的立场有没有调整?\n"
f"请保持简洁,聚焦于最重要的 1-2 个点。"
)
return "\n".join(prompt_parts)
def get_summary(self) -> str:
"""获取该 Agent 所有发言的摘要"""
if not self.conversation_history:
return "暂无发言"
return "\n---\n".join([
f"{h['round']} 轮: {h['content']}"
for h in self.conversation_history
])
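The streaming pattern in `generate_response` (yield each chunk to the caller while accumulating the full text for history) can be isolated with a stub client. `StubClient` is a hypothetical stand-in, not the project's real `LLMClient`:

```python
from typing import Generator

class StubClient:
    """Hypothetical stand-in for the real LLM client."""
    def chat_stream(self, system_prompt: str, user_prompt: str) -> Generator[str, None, None]:
        yield from ["Two ", "core ", "points."]

def generate_response(client, history: list) -> Generator[str, None, None]:
    full_response = ""
    for chunk in client.chat_stream("system", "user"):
        full_response += chunk  # accumulate for history
        yield chunk             # stream to the caller for incremental rendering
    history.append(full_response)  # runs only after the stream is fully consumed

history = []
text = "".join(generate_response(StubClient(), history))
print(text)     # Two core points.
print(history)  # ['Two core points.']
```

One caveat of this pattern: the history append only runs if the caller drains the generator; abandoning the stream midway silently skips it.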

agents/research_agent.py (new file, 44 lines)

@@ -0,0 +1,44 @@
from typing import Generator, List, Dict
from utils.llm_client import LLMClient
import config
class ResearchAgent:
"""研究模式专用 Agent"""
def __init__(self, role: str, llm_client: LLMClient):
self.role = role
self.llm_client = llm_client
self.role_config = config.RESEARCH_MODEL_ROLES.get(role, {})
self.name = self.role_config.get("name", role.capitalize())
def _get_system_prompt(self, context: str = "") -> str:
if self.role == "planner":
return f"""You are a Senior Research Planner.
Your goal is to break down a complex user topic into a structured research plan.
You must create a clear, step-by-step plan that covers different angles of the topic.
Format your output as a Markdown list of steps.
Context: {context}"""
elif self.role == "researcher":
return f"""You are a Deep Researcher.
Your goal is to execute a specific research step and provide detailed, in-depth analysis.
Use your vast knowledge to provide specific facts, figures, and logical reasoning.
Do not be superficial. Go deep.
Context: {context}"""
elif self.role == "writer":
return f"""You are a Senior Report Writer.
Your goal is to synthesize multiple research findings into a cohesive, high-quality report.
The report should be well-structured, easy to read, and provide actionable insights.
Context: {context}"""
else:
return "You are a helpful assistant."
def generate(self, prompt: str, context: str = "") -> Generator[str, None, None]:
"""Generate response stream"""
system_prompt = self._get_system_prompt(context)
yield from self.llm_client.chat_stream(
system_prompt=system_prompt,
user_prompt=prompt
)

app.py (new file, 556 lines)

@@ -0,0 +1,556 @@
"""
Multi-Agent Decision Workshop - main application
Helps users make better decisions through multi-role agent debates
"""
import streamlit as st
import os
from dotenv import load_dotenv
# load environment variables
load_dotenv()
from agents import get_all_agents, get_recommended_agents, AGENT_PROFILES
from orchestrator import DebateManager, DebateConfig
from orchestrator.research_manager import ResearchManager, ResearchConfig
from report import ReportGenerator
from utils import LLMClient
import config
# ==================== Page config ====================
st.set_page_config(
page_title="🎭 多 Agent 决策工作坊",
page_icon="🎭",
layout="wide",
initial_sidebar_state="expanded"
)
# ==================== Styles ====================
st.markdown("""
<style>
.agent-card {
padding: 1rem;
border-radius: 0.5rem;
margin-bottom: 0.5rem;
border-left: 4px solid #4A90A4;
background-color: #f8f9fa;
}
.speech-bubble {
background-color: #f0f2f6;
padding: 1rem;
border-radius: 0.5rem;
margin: 0.5rem 0;
}
.round-header {
background: linear-gradient(90deg, #667eea 0%, #764ba2 100%);
color: white;
padding: 0.5rem 1rem;
border-radius: 0.5rem;
margin: 1rem 0;
}
.custom-agent-form {
background-color: #e8f4f8;
padding: 1rem;
border-radius: 0.5rem;
margin: 0.5rem 0;
}
.research-step {
border-left: 3px solid #FF4B4B;
padding-left: 10px;
margin-bottom: 10px;
}
</style>
""", unsafe_allow_html=True)
# ==================== Constants ====================
# read the API key from environment variables (kept out of the UI, in .env)
DEFAULT_API_KEY = os.getenv("AIHUBMIX_API_KEY", "")
# supported model list
AVAILABLE_MODELS = {
"gpt-4o": "GPT-4o (推荐)",
"gpt-4o-mini": "GPT-4o Mini (快速)",
"gpt-4-turbo": "GPT-4 Turbo",
"gpt-3.5-turbo": "GPT-3.5 Turbo (经济)",
"claude-3-5-sonnet-20241022": "Claude 3.5 Sonnet",
"claude-3-opus-20240229": "Claude 3 Opus",
"claude-3-haiku-20240307": "Claude 3 Haiku (快速)",
"deepseek-chat": "DeepSeek Chat",
"deepseek-coder": "DeepSeek Coder",
"gemini-1.5-pro": "Gemini 1.5 Pro",
"gemini-1.5-flash": "Gemini 1.5 Flash",
"qwen-turbo": "通义千问 Turbo",
"qwen-plus": "通义千问 Plus",
"glm-4": "智谱 GLM-4",
"moonshot-v1-8k": "Moonshot (月之暗面)",
}
# decision types
DECISION_TYPES = {
"product": "产品方案",
"business": "商业决策",
"tech": "技术选型",
"personal": "个人规划"
}
# ==================== Session state init ====================
if "mode" not in st.session_state:
st.session_state.mode = "Deep Research"
# Debate State
if "debate_started" not in st.session_state:
st.session_state.debate_started = False
if "debate_finished" not in st.session_state:
st.session_state.debate_finished = False
if "speeches" not in st.session_state:
st.session_state.speeches = []
if "report" not in st.session_state:
st.session_state.report = ""
if "custom_agents" not in st.session_state:
st.session_state.custom_agents = {}
# Research State
if "research_plan" not in st.session_state:
st.session_state.research_plan = ""
if "research_started" not in st.session_state:
st.session_state.research_started = False
if "research_output" not in st.session_state:
st.session_state.research_output = "" # Final report
if "research_steps_output" not in st.session_state:
st.session_state.research_steps_output = [] # List of step results
# ==================== Sidebar: settings ====================
with st.sidebar:
st.header("⚙️ 设置")
# global API key settings
with st.expander("🔑 API Key 设置", expanded=True):
use_custom_key = st.checkbox("使用自定义 API Key")
if use_custom_key:
api_key = st.text_input(
"API Key",
type="password",
help="留空则使用环境变量中的 Key"
)
else:
api_key = DEFAULT_API_KEY
st.divider()
# mode selection
mode = st.radio(
"📊 选择模式",
["Deep Research", "Debate Workshop"],
index=0 if st.session_state.mode == "Deep Research" else 1
)
st.session_state.mode = mode
st.divider()
if mode == "Deep Research":
st.subheader("🧪 研究模型配置")
# per-role model config for the 3 research roles
roles_config = {}
for role_key, role_info in config.RESEARCH_MODEL_ROLES.items():
roles_config[role_key] = st.selectbox(
f"{role_info['name']} ({role_info['description']})",
options=list(AVAILABLE_MODELS.keys()),
index=list(AVAILABLE_MODELS.keys()).index(role_info['default_model']) if role_info['default_model'] in AVAILABLE_MODELS else 0,
key=f"model_{role_key}"
)
else: # Debate Workshop
# model selection
model = st.selectbox(
"🤖 选择通用模型",
options=list(AVAILABLE_MODELS.keys()),
format_func=lambda x: AVAILABLE_MODELS[x],
index=0,
help="选择用于辩论的 AI 模型"
)
# debate settings
max_rounds = st.slider(
"🔄 辩论轮数",
min_value=1,
max_value=4,
value=2,
help="每轮所有 Agent 都会发言一次"
)
st.divider()
# ==================== Custom roles (debate only) ====================
st.subheader("✨ 自定义角色")
with st.expander(" 添加新角色", expanded=False):
new_agent_name = st.text_input("角色名称", placeholder="如:法务顾问", key="new_agent_name")
new_agent_emoji = st.text_input("角色 Emoji", value="🎯", max_chars=2, key="new_agent_emoji")
new_agent_perspective = st.text_input("视角定位", placeholder="如:法律合规视角", key="new_agent_perspective")
new_agent_focus = st.text_input("关注点(逗号分隔)", placeholder="如:合规风险, 法律条款", key="new_agent_focus")
new_agent_prompt = st.text_area("角色设定 Prompt", placeholder="描述这个角色的思考方式...", height=100, key="new_agent_prompt")
if st.button("✅ 添加角色", use_container_width=True):
if new_agent_name and new_agent_prompt:
agent_id = f"custom_{len(st.session_state.custom_agents)}"
st.session_state.custom_agents[agent_id] = {
"name": new_agent_name,
"emoji": new_agent_emoji,
"perspective": new_agent_perspective or "自定义视角",
"focus_areas": [f.strip() for f in new_agent_focus.split(",") if f.strip()],
"system_prompt": new_agent_prompt
}
st.success(f"已添加角色: {new_agent_emoji} {new_agent_name}")
st.rerun()
else:
st.warning("请至少填写角色名称和 Prompt")
# show custom roles added so far
if st.session_state.custom_agents:
st.markdown("**已添加的自定义角色:**")
for agent_id, agent_info in list(st.session_state.custom_agents.items()):
col1, col2 = st.columns([3, 1])
with col1:
st.markdown(f"{agent_info['emoji']} {agent_info['name']}")
with col2:
if st.button("🗑️", key=f"del_{agent_id}"):
del st.session_state.custom_agents[agent_id]
st.rerun()
# ==================== Main view logic ====================
if mode == "Deep Research":
st.title("🧪 Deep Research Mode")
st.markdown("*深度研究模式:规划 -> 研究 -> 报告*")
# Input
research_topic = st.text_area("研究主题", placeholder="请输入你想深入研究的主题...", height=100)
research_context = st.text_area("补充背景 (可选)", placeholder="任何额外的背景信息...", height=80)
generate_plan_btn = st.button("📝 生成研究计划", type="primary", disabled=not research_topic)
if generate_plan_btn and research_topic:
st.session_state.research_started = False
st.session_state.research_output = ""
st.session_state.research_steps_output = []
manager = ResearchManager(api_key=api_key)
config_obj = ResearchConfig(
topic=research_topic,
context=research_context,
planner_model=roles_config['planner'],
researcher_model=roles_config['researcher'],
writer_model=roles_config['writer']
)
manager.create_agents(config_obj)
with st.spinner("正在制定研究计划..."):
plan_text = ""
for chunk in manager.generate_plan(research_topic, research_context):
plan_text += chunk
st.session_state.research_plan = plan_text
# Plan Review & Edit
if st.session_state.research_plan:
st.divider()
st.subheader("📋 研究计划确认")
edited_plan = st.text_area("请审查并编辑计划 (Markdown格式)", value=st.session_state.research_plan, height=300)
st.session_state.research_plan = edited_plan
start_research_btn = st.button("🚀 开始深度研究", type="primary")
if start_research_btn:
st.session_state.research_started = True
st.session_state.research_steps_output = [] # Reset steps
# Parse plan lines to get steps (simple heuristic: lines starting with - or 1.)
steps = [line.strip() for line in edited_plan.split('\n') if line.strip().startswith(('-', '*', '1.', '2.', '3.', '4.', '5.'))]
if not steps:
steps = [edited_plan] # Fallback if no list format
manager = ResearchManager(api_key=api_key)
config_obj = ResearchConfig(
topic=research_topic,
context=research_context,
planner_model=roles_config['planner'],
researcher_model=roles_config['researcher'],
writer_model=roles_config['writer']
)
manager.create_agents(config_obj)
# Execute Steps
previous_findings = ""
st.divider()
st.subheader("🔍 研究进行中...")
step_progress = st.container()
for i, step in enumerate(steps):
with step_progress:
with st.status(f"正在研究: {step}", expanded=True):
findings_text = ""
placeholder = st.empty()
for chunk in manager.execute_step(step, previous_findings):
findings_text += chunk
placeholder.markdown(findings_text)
st.session_state.research_steps_output.append(f"### {step}\n{findings_text}")
previous_findings += f"\n\nFinding for '{step}':\n{findings_text}"
# Final Report
st.divider()
st.subheader("📄 最终报告生成中...")
report_placeholder = st.empty()
final_report = ""
for chunk in manager.generate_report(research_topic, previous_findings):
final_report += chunk
report_placeholder.markdown(final_report)
st.session_state.research_output = final_report
st.success("✅ 研究完成")
# Show Final Report if available
if st.session_state.research_output:
st.divider()
st.subheader("📄 最终研究报告")
st.markdown(st.session_state.research_output)
st.download_button("📥 下载报告", st.session_state.research_output, "research_report.md")
elif mode == "Debate Workshop":
# ==================== Original debate UI logic ====================
st.title("🎭 多 Agent 决策工作坊")
st.markdown("*让多个 AI 角色从不同视角辩论,帮助你做出更全面的决策*")
# ==================== Input area ====================
col1, col2 = st.columns([2, 1])
with col1:
st.subheader("📝 决策议题")
# decision type selector
decision_type = st.selectbox(
"决策类型",
options=list(DECISION_TYPES.keys()),
format_func=lambda x: DECISION_TYPES[x],
index=0
)
# topic input
topic = st.text_area(
"请描述你的决策议题",
placeholder="例如:我们是否应该在 Q2 推出 AI 助手功能?\n\n或者:我应该接受这份新工作 offer 吗?",
height=120
)
# background info (optional); initialized up front so it always exists
context = ""
with st.expander(" 添加背景信息(可选)"):
context = st.text_area(
"背景信息",
placeholder="提供更多上下文信息,如:\n- 当前状况\n- 已有的资源和限制\n- 相关数据和事实",
height=100
)
with col2:
st.subheader("🎭 选择参与角色")
# fetch the recommended roles
recommended = get_recommended_agents(decision_type)
all_agents = get_all_agents()
# preset role selection
st.markdown("**预设角色:**")
selected_agents = []
for agent in all_agents:
is_recommended = agent["id"] in recommended
default_checked = is_recommended
if st.checkbox(
f"{agent['emoji']} {agent['name']}",
value=default_checked,
key=f"agent_{agent['id']}"
):
selected_agents.append(agent["id"])
# custom role selection
if st.session_state.custom_agents:
st.markdown("**自定义角色:**")
for agent_id, agent_info in st.session_state.custom_agents.items():
if st.checkbox(
f"{agent_info['emoji']} {agent_info['name']}",
value=True,
key=f"agent_{agent_id}"
):
selected_agents.append(agent_id)
# role-count hint
if len(selected_agents) < 2:
st.warning("请至少选择 2 个角色")
elif len(selected_agents) > 6:
st.warning("建议不超过 6 个角色")
else:
st.info(f"已选择 {len(selected_agents)} 个角色")
# ==================== Debate controls ====================
st.divider()
col_btn1, col_btn2, col_btn3 = st.columns([1, 1, 2])
with col_btn1:
start_btn = st.button(
"🚀 开始辩论",
disabled=(not topic or len(selected_agents) < 2 or not api_key),
type="primary",
use_container_width=True
)
with col_btn2:
reset_btn = st.button(
"🔄 重置",
use_container_width=True
)
if reset_btn:
st.session_state.debate_started = False
st.session_state.debate_finished = False
st.session_state.speeches = []
st.session_state.report = ""
st.rerun()
# ==================== Debate view ====================
if start_btn and topic and len(selected_agents) >= 2:
st.session_state.debate_started = True
st.session_state.speeches = []
st.divider()
st.subheader("🎬 辩论进行中...")
# temporarily register custom roles in agent_profiles
from agents import agent_profiles
original_profiles = dict(agent_profiles.AGENT_PROFILES)
agent_profiles.AGENT_PROFILES.update(st.session_state.custom_agents)
try:
# initialize the client and debate manager
# NOTE: the provider selector previously lived in "Advanced Settings";
# until it is restored to the sidebar, debate mode defaults to aihubmix
llm_client = LLMClient(
provider="aihubmix",
api_key=api_key,
base_url="https://aihubmix.com/v1",
model=model
)
debate_manager = DebateManager(llm_client)
# configure the debate
debate_config = DebateConfig(
topic=topic,
context=context,
agent_ids=selected_agents,
max_rounds=max_rounds
)
debate_manager.setup_debate(debate_config)
# run the debate (streamed)
current_round = 0
speech_placeholder = None
for event in debate_manager.run_debate_stream():
if event["type"] == "round_start":
current_round = event["round"]
st.markdown(
f'<div class="round-header">📢 第 {current_round} 轮讨论</div>',
unsafe_allow_html=True
)
elif event["type"] == "speech_start":
st.markdown(f"**{event['emoji']} {event['agent_name']}**")
speech_placeholder = st.empty()
current_content = ""
elif event["type"] == "speech_chunk":
current_content += event["chunk"]
speech_placeholder.markdown(current_content)
elif event["type"] == "speech_end":
st.session_state.speeches.append({
"agent_id": event["agent_id"],
"content": event["content"],
"round": current_round
})
st.divider()
elif event["type"] == "debate_end":
st.session_state.debate_finished = True
st.success("✅ 辩论结束!正在生成决策报告...")
# generate the report
if st.session_state.debate_finished:
report_generator = ReportGenerator(llm_client)
speeches = debate_manager.get_all_speeches()
st.subheader("📊 决策报告")
report_placeholder = st.empty()
report_content = ""
for chunk in report_generator.generate_report_stream(
topic=topic,
speeches=speeches,
context=context
):
report_content += chunk
report_placeholder.markdown(report_content)
st.session_state.report = report_content
# download button
st.download_button(
label="📥 下载报告 (Markdown)",
data=report_content,
file_name="decision_report.md",
mime="text/markdown"
)
except Exception as e:
st.error(f"发生错误: {str(e)}")
import traceback
st.code(traceback.format_exc())
st.info("请检查你的 API Key 和模型设置是否正确")
finally:
# restore the original role profiles
agent_profiles.AGENT_PROFILES = original_profiles
# ==================== Previous report ====================
elif st.session_state.report and not start_btn:
st.divider()
st.subheader("📊 上次的决策报告")
st.markdown(st.session_state.report)
st.download_button(
label="📥 下载报告 (Markdown)",
data=st.session_state.report,
file_name="decision_report.md",
mime="text/markdown"
)
# ==================== Footer ====================
st.divider()
col_footer1, col_footer2, col_footer3 = st.columns(3)
with col_footer2:
st.markdown(
"<div style='text-align: center; color: #888;'>"
"🎭 Multi-Agent Decision Workshop<br>多 Agent 决策工作坊"
"</div>",
unsafe_allow_html=True
)
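The plan-parsing heuristic from the Deep Research flow (keep lines that start with a list marker, otherwise fall back to treating the whole plan as one step) is easy to isolate and test:

```python
def parse_plan(plan: str) -> list:
    """Split an edited Markdown plan into steps via the same simple heuristic."""
    markers = ('-', '*', '1.', '2.', '3.', '4.', '5.')
    steps = [line.strip() for line in plan.split('\n') if line.strip().startswith(markers)]
    return steps or [plan]  # fall back if the plan has no list formatting

print(parse_plan("Intro\n- step one\n- step two"))  # ['- step one', '- step two']
print(parse_plan("free text plan"))                 # ['free text plan']
```

As written it silently drops numbered items beyond `5.`, a limitation worth keeping in mind if the planner emits longer lists.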

config.py (new file, 50 lines)

@@ -0,0 +1,50 @@
"""
Configuration - API keys and model settings
"""
import os
from dotenv import load_dotenv
load_dotenv()
# API configuration
ANTHROPIC_API_KEY = os.getenv("ANTHROPIC_API_KEY", "")
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY", "")
AIHUBMIX_API_KEY = os.getenv("AIHUBMIX_API_KEY", "")
# AIHubMix settings
AIHUBMIX_BASE_URL = "https://aihubmix.com/v1"
# model settings
DEFAULT_MODEL = "gpt-4o"  # a model supported by AIHubMix
LLM_PROVIDER = "aihubmix"  # default provider
# debate settings
MAX_DEBATE_ROUNDS = 3  # maximum number of debate rounds
MAX_AGENTS = 6  # maximum number of participating agents
# model-role config for research mode
RESEARCH_MODEL_ROLES = {
"planner": {
"name": "Planner",
"default_model": "gpt-4o",
"description": "负责拆解问题,制定研究计划"
},
"researcher": {
"name": "Researcher",
"default_model": "gemini-1.5-pro",
"description": "负责执行具体的研究步骤,深度分析"
},
"writer": {
"name": "Writer",
"default_model": "claude-3-5-sonnet-20241022",
"description": "负责汇总信息,撰写最终报告"
}
}
# decision types
DECISION_TYPES = {
"product": "产品方案",
"business": "商业决策",
"tech": "技术选型",
"personal": "个人规划"
}
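Since the .env.example promises that keys load from the environment, a stricter variant of the lookup above fails fast when the key is missing rather than shipping a hardcoded default. A sketch; `require_key` is not part of the project:

```python
import os

def require_key(name: str) -> str:
    """Return the named environment variable or raise with setup guidance."""
    value = os.getenv(name, "")
    if not value:
        raise RuntimeError(f"{name} is not set; copy .env.example to .env and fill it in")
    return value

os.environ["AIHUBMIX_API_KEY"] = "sk-demo"  # simulate a configured environment
print(require_key("AIHUBMIX_API_KEY"))      # sk-demo
```

Failing fast at import time turns a confusing mid-debate API error into an immediate, actionable message.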

orchestrator/__init__.py (new file, 4 lines)

@@ -0,0 +1,4 @@
"""Orchestrator 模块"""
from orchestrator.debate_manager import DebateManager, DebateConfig, SpeechRecord
__all__ = ["DebateManager", "DebateConfig", "SpeechRecord"]


orchestrator/debate_manager.py (new file, 160 lines)

@@ -0,0 +1,160 @@
"""
Debate manager - orchestrates the multi-agent debate flow
"""
from typing import List, Generator, Callable
from dataclasses import dataclass
from agents.base_agent import BaseAgent
from agents.agent_profiles import get_agent_profile
from utils.llm_client import LLMClient
import config
@dataclass
class DebateConfig:
"""辩论配置"""
topic: str
context: str = ""
agent_ids: List[str] = None
max_rounds: int = 2
@dataclass
class SpeechRecord:
"""发言记录"""
agent_id: str
agent_name: str
emoji: str
content: str
round_num: int
class DebateManager:
"""辩论管理器"""
def __init__(self, llm_client: LLMClient = None):
"""
Initialize the debate manager
Args:
llm_client: LLM client instance
"""
self.llm_client = llm_client or LLMClient()
self.agents: List[BaseAgent] = []
self.speech_records: List[SpeechRecord] = []
self.current_round = 0
def setup_debate(self, debate_config: DebateConfig) -> None:
"""
Set up the debate
Args:
debate_config: the debate configuration
"""
self.config = debate_config
self.agents = []
self.speech_records = []
self.current_round = 0
# create the participating agents
for agent_id in debate_config.agent_ids:
agent = BaseAgent(agent_id, self.llm_client)
self.agents.append(agent)
def run_debate_stream(
self,
on_speech_start: Callable = None,
on_speech_chunk: Callable = None,
on_speech_end: Callable = None,
on_round_end: Callable = None
) -> Generator[dict, None, None]:
"""
Run the debate (streamed)
Args:
on_speech_start: callback when a speech starts
on_speech_chunk: callback for each streamed chunk
on_speech_end: callback when a speech ends
on_round_end: callback when a round ends
Yields:
dict: event payloads
"""
for round_num in range(1, self.config.max_rounds + 1):
self.current_round = round_num
yield {
"type": "round_start",
"round": round_num,
"total_rounds": self.config.max_rounds
}
for agent in self.agents:
# previous speeches (excluding this agent's own)
previous_speeches = [
{
"name": r.agent_name,
"emoji": r.emoji,
"content": r.content
}
for r in self.speech_records
if r.agent_id != agent.agent_id
]
yield {
"type": "speech_start",
"agent_id": agent.agent_id,
"agent_name": agent.name,
"emoji": agent.emoji,
"round": round_num
}
# stream the speech
full_content = ""
for chunk in agent.generate_response(
topic=self.config.topic,
context=self.config.context,
previous_speeches=previous_speeches,
round_num=round_num
):
full_content += chunk
yield {
"type": "speech_chunk",
"agent_id": agent.agent_id,
"chunk": chunk
}
# save the speech record
record = SpeechRecord(
agent_id=agent.agent_id,
agent_name=agent.name,
emoji=agent.emoji,
content=full_content,
round_num=round_num
)
self.speech_records.append(record)
yield {
"type": "speech_end",
"agent_id": agent.agent_id,
"content": full_content
}
yield {
"type": "round_end",
"round": round_num
}
yield {"type": "debate_end"}
def get_all_speeches(self) -> List[SpeechRecord]:
"""获取所有发言记录"""
return self.speech_records
def get_speeches_by_round(self, round_num: int) -> List[SpeechRecord]:
"""获取指定轮次的发言"""
return [r for r in self.speech_records if r.round_num == round_num]
def get_speeches_by_agent(self, agent_id: str) -> List[SpeechRecord]:
"""获取指定 Agent 的所有发言"""
return [r for r in self.speech_records if r.agent_id == agent_id]
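A consumer of the event stream above only needs to dispatch on `event["type"]`. A minimal sketch against stubbed events that mirror the shapes yielded by `run_debate_stream`:

```python
def consume(events) -> list:
    """Collect completed speeches from a debate event stream."""
    speeches, buffer = [], ""
    for event in events:
        if event["type"] == "speech_start":
            buffer = ""                    # new speaker: reset the buffer
        elif event["type"] == "speech_chunk":
            buffer += event["chunk"]       # accumulate streamed text
        elif event["type"] == "speech_end":
            speeches.append(buffer)        # speech complete
    return speeches

stub_events = [
    {"type": "round_start", "round": 1},
    {"type": "speech_start", "agent_id": "ceo"},
    {"type": "speech_chunk", "chunk": "Go "},
    {"type": "speech_chunk", "chunk": "for it."},
    {"type": "speech_end", "agent_id": "ceo"},
    {"type": "debate_end"},
]
print(consume(stub_events))  # ['Go for it.']
```

This is the same structure the Streamlit UI uses: `speech_chunk` drives incremental rendering, `speech_end` finalizes the record.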

orchestrator/research_manager.py (new file, 51 lines)

@@ -0,0 +1,51 @@
from typing import Dict, Generator, Optional
from dataclasses import dataclass
from agents.research_agent import ResearchAgent
from utils.llm_client import LLMClient


@dataclass
class ResearchConfig:
    topic: str
    context: str = ""
    planner_model: str = "gpt-4o"
    researcher_model: str = "gemini-1.5-pro"
    writer_model: str = "claude-3-5-sonnet-20241022"


class ResearchManager:
    """Manages the Deep Research workflow."""

    def __init__(self, api_key: str, base_url: Optional[str] = None, provider: str = "aihubmix"):
        self.api_key = api_key
        self.base_url = base_url
        self.provider = provider
        self.agents: Dict[str, ResearchAgent] = {}

    def _get_client(self, model: str) -> LLMClient:
        return LLMClient(
            provider=self.provider,
            api_key=self.api_key,
            base_url=self.base_url,
            model=model
        )

    def create_agents(self, config: ResearchConfig):
        """Initialize one agent per role, each with its own model."""
        self.agents["planner"] = ResearchAgent("planner", self._get_client(config.planner_model))
        self.agents["researcher"] = ResearchAgent("researcher", self._get_client(config.researcher_model))
        self.agents["writer"] = ResearchAgent("writer", self._get_client(config.writer_model))

    def generate_plan(self, topic: str, context: str) -> Generator[str, None, None]:
        """Step 1: generate the research plan."""
        prompt = (
            f"Please create a comprehensive research plan for the topic: '{topic}'.\n"
            "Break it down into 3-5 distinct, actionable steps."
        )
        yield from self.agents["planner"].generate(prompt, context)

    def execute_step(self, step: str, previous_findings: str) -> Generator[str, None, None]:
        """Step 2: execute a single research step."""
        prompt = f"Execute this research step: '{step}'.\nPrevious findings: {previous_findings}"
        yield from self.agents["researcher"].generate(prompt)

    def generate_report(self, topic: str, all_findings: str) -> Generator[str, None, None]:
        """Step 3: generate the final report."""
        prompt = f"Write a final comprehensive report on '{topic}' based on these findings:\n{all_findings}"
        yield from self.agents["writer"].generate(prompt)

4
report/__init__.py Normal file

@ -0,0 +1,4 @@
"""Report 模块"""
from report.report_generator import ReportGenerator
__all__ = ["ReportGenerator"]


143
report/report_generator.py Normal file

@ -0,0 +1,143 @@
"""
报告生成器 - 汇总辩论内容并生成决策报告
"""
from typing import List, Optional

from orchestrator.debate_manager import SpeechRecord
from utils.llm_client import LLMClient


class ReportGenerator:
    """Decision report generator."""

    def __init__(self, llm_client: Optional[LLMClient] = None):
        self.llm_client = llm_client or LLMClient()
    def generate_report(
        self,
        topic: str,
        speeches: List[SpeechRecord],
        context: str = ""
    ) -> str:
        """
        Generate the decision report.

        Args:
            topic: the topic under discussion
            speeches: all speech records
            context: background information

        Returns:
            str: the decision report in Markdown format
        """
        # Build a digest of the speeches
        speeches_text = self._format_speeches(speeches)
        system_prompt = """你是一位专业的决策分析师，擅长汇总多方观点并生成结构化的决策报告。
你的任务是根据多位专家的讨论，生成一份清晰、可操作的决策报告。
报告格式要求：
1. 使用 Markdown 格式
2. 结构清晰，重点突出
3. 提炼核心要点，不要罗列原文
4. 给出明确的建议和下一步行动"""
        user_prompt = f"""## 讨论议题
{topic}
{"## 背景信息" + chr(10) + context if context else ""}
## 专家讨论记录
{speeches_text}
## 你的任务
请生成一份决策报告，包含以下部分：
### 📋 议题概述
（1-2句话总结讨论的核心问题）
### ✅ 支持观点汇总
（列出支持该决策的主要理由，注明来源角色）
### ❌ 反对/风险观点汇总
（列出反对意见和风险点，注明来源角色）
### 🔑 关键决策要点
（3-5个需要重点考虑的因素）
### 💡 建议与下一步行动
（给出明确的建议，以及具体的下一步行动项）
### ⚖️ 决策框架
（提供一个简单的决策框架或检查清单，帮助做出最终决策）
"""
        return self.llm_client.chat(
            system_prompt=system_prompt,
            user_prompt=user_prompt,
            max_tokens=2048
        )
    def _format_speeches(self, speeches: List[SpeechRecord]) -> str:
        """Format the speech records as Markdown."""
        formatted = []
        current_round = 0
        for speech in speeches:
            if speech.round_num != current_round:
                current_round = speech.round_num
                formatted.append(f"\n### 第 {current_round} 轮讨论\n")
            formatted.append(
                f"**{speech.emoji} {speech.agent_name}**:\n{speech.content}\n"
            )
        return "\n".join(formatted)
    def generate_report_stream(
        self,
        topic: str,
        speeches: List[SpeechRecord],
        context: str = ""
    ):
        """Generate the decision report as a stream."""
        speeches_text = self._format_speeches(speeches)
        system_prompt = """你是一位专业的决策分析师，擅长汇总多方观点并生成结构化的决策报告。"""
        user_prompt = f"""## 讨论议题
{topic}
{"## 背景信息" + chr(10) + context if context else ""}
## 专家讨论记录
{speeches_text}
## 你的任务
请生成一份决策报告，包含以下部分：
### 📋 议题概述
（1-2句话总结讨论的核心问题）
### ✅ 支持观点汇总
（列出支持该决策的主要理由，注明来源角色）
### ❌ 反对/风险观点汇总
（列出反对意见和风险点，注明来源角色）
### 🔑 关键决策要点
（3-5个需要重点考虑的因素）
### 💡 建议与下一步行动
（给出明确的建议，以及具体的下一步行动项）
### ⚖️ 决策框架
（提供一个简单的决策框架或检查清单）
"""
        yield from self.llm_client.chat_stream(
            system_prompt=system_prompt,
            user_prompt=user_prompt,
            max_tokens=2048
        )
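The round-grouping logic in `_format_speeches` emits a `### 第 N 轮讨论` heading whenever the round number changes, relying on the records arriving in round order. A self-contained sketch of that behavior, reproducing the same logic with a stand-in `SpeechRecord`:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SpeechRecord:  # stand-in mirroring the fields _format_speeches reads
    agent_id: str
    agent_name: str
    emoji: str
    content: str
    round_num: int

def format_speeches(speeches: List[SpeechRecord]) -> str:
    # Same logic as ReportGenerator._format_speeches: emit a round heading
    # whenever round_num changes, then each speech as "**emoji name**: content".
    formatted = []
    current_round = 0
    for s in speeches:
        if s.round_num != current_round:
            current_round = s.round_num
            formatted.append(f"\n### 第 {current_round} 轮讨论\n")
        formatted.append(f"**{s.emoji} {s.agent_name}**:\n{s.content}\n")
    return "\n".join(formatted)

md = format_speeches([
    SpeechRecord("ceo", "CEO", "🧑‍💼", "Go for Q2.", 1),
    SpeechRecord("cfo", "CFO", "💰", "Budget is tight.", 1),
])
print("### 第 1 轮讨论" in md)  # → True
```

Note that if the input list is ever sorted by agent rather than by round, the same round heading would be emitted multiple times; the debate manager appends records in round order, so this holds in practice.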

7
requirements.txt Normal file

@ -0,0 +1,7 @@
# Multi-Agent Decision Workshop Dependencies
streamlit>=1.28.0
anthropic>=0.18.0
openai>=1.12.0
python-dotenv>=1.0.0
pydantic>=2.0.0

4
utils/__init__.py Normal file

@ -0,0 +1,4 @@
"""Utils 模块"""
from utils.llm_client import LLMClient
__all__ = ["LLMClient"]


141
utils/llm_client.py Normal file

@ -0,0 +1,141 @@
"""
LLM 客户端封装 - 统一 Anthropic/OpenAI/AIHubMix 接口
"""
from typing import Generator, Optional
import os


class LLMClient:
    """Unified LLM API client."""

    def __init__(
        self,
        provider: Optional[str] = None,
        api_key: Optional[str] = None,
        base_url: Optional[str] = None,
        model: Optional[str] = None
    ):
        """
        Initialize the LLM client.

        Args:
            provider: 'anthropic', 'openai', 'aihubmix', or 'custom'
            api_key: API key
            base_url: custom API endpoint (for aihubmix/custom)
            model: model name
        """
        self.provider = provider or "aihubmix"
        if self.provider == "anthropic":
            from anthropic import Anthropic
            self.client = Anthropic(api_key=api_key)
            # Anthropic requires a Claude model name, not the OpenAI default
            self.model = model or "claude-3-5-sonnet-20241022"
        elif self.provider == "openai":
            from openai import OpenAI
            self.client = OpenAI(api_key=api_key)
            self.model = model or "gpt-4o"
        elif self.provider == "aihubmix":
            # AIHubMix is compatible with the OpenAI API format
            from openai import OpenAI
            self.client = OpenAI(
                api_key=api_key,
                base_url=base_url or "https://aihubmix.com/v1"
            )
            self.model = model or "gpt-4o"
        elif self.provider == "custom":
            # Custom OpenAI-compatible endpoint (vLLM, Ollama, TGI, etc.)
            from openai import OpenAI
            self.client = OpenAI(
                api_key=api_key or "not-needed",
                base_url=base_url or "http://localhost:8000/v1"
            )
            self.model = model or "local-model"
        else:
            raise ValueError(f"Unsupported provider: {self.provider}")
    def chat_stream(
        self,
        system_prompt: str,
        user_prompt: str,
        max_tokens: int = 1024
    ) -> Generator[str, None, None]:
        """
        Streaming chat.

        Args:
            system_prompt: system prompt
            user_prompt: user input
            max_tokens: maximum output tokens

        Yields:
            str: streamed text fragments
        """
        if self.provider == "anthropic":
            yield from self._anthropic_stream(system_prompt, user_prompt, max_tokens)
        else:
            yield from self._openai_stream(system_prompt, user_prompt, max_tokens)
    def _anthropic_stream(
        self,
        system_prompt: str,
        user_prompt: str,
        max_tokens: int
    ) -> Generator[str, None, None]:
        """Streaming call via the Anthropic SDK."""
        with self.client.messages.stream(
            model=self.model,
            max_tokens=max_tokens,
            system=system_prompt,
            messages=[{"role": "user", "content": user_prompt}]
        ) as stream:
            for text in stream.text_stream:
                yield text
    def _openai_stream(
        self,
        system_prompt: str,
        user_prompt: str,
        max_tokens: int
    ) -> Generator[str, None, None]:
        """Streaming call via an OpenAI-compatible API (AIHubMix, vLLM, etc.)."""
        try:
            stream = self.client.chat.completions.create(
                model=self.model,
                max_tokens=max_tokens,
                stream=True,
                messages=[
                    {"role": "system", "content": system_prompt},
                    {"role": "user", "content": user_prompt}
                ]
            )
            for chunk in stream:
                # Extract content defensively: some chunks carry no choices
                # or an empty delta (e.g. role-only or usage-only chunks)
                if chunk.choices and len(chunk.choices) > 0:
                    delta = chunk.choices[0].delta
                    if delta and hasattr(delta, 'content') and delta.content:
                        yield delta.content
        except Exception as e:
            yield f"\n\n[错误: {str(e)}]"
    def chat(
        self,
        system_prompt: str,
        user_prompt: str,
        max_tokens: int = 1024
    ) -> str:
        """
        Non-streaming chat.

        Args:
            system_prompt: system prompt
            user_prompt: user input
            max_tokens: maximum output tokens

        Returns:
            str: the complete response text
        """
        return "".join(self.chat_stream(system_prompt, user_prompt, max_tokens))