feat: add auto expert generation, decision-scenario templates, and user feedback

xyz 2026-01-09 09:25:02 +08:00
parent 0de5aa038b
commit 27ec6b3d85
12 changed files with 693 additions and 78 deletions

.gitignore (10 additions)

@@ -0,0 +1,10 @@
# Python bytecode cache
__pycache__/
*.py[cod]
*$py.class
# Project-specific storage/cache folders
.storage/
# Environment variable file (usually contains secrets)
.env

API configuration file (filename not shown)

@@ -1,6 +1,6 @@
{
"provider": "DeepSeek",
"api_key": "sk-ca812c913baa474182f6d4e83e078302",
"base_url": "https://api.deepseek.com",
"provider": "AIHubMix",
"api_key": "sk-yd8Tik0nFW5emKYcBdFc433b7c8b4dC182848f76819bBe73",
"base_url": "https://aihubmix.com/v1",
"language": "Chinese"
}

README.md (109 changes)

@@ -1,76 +1,91 @@
# Multi-Agent Council & Debate Workshop (V4)
# 🍎 Intelligent Decision Workshop (Multi-Agent Council V4)
A minimal yet powerful multi-agent decision-support system.
**V4** evolves the traditional "linear research" pipeline into a **"Multi-Model Council (Council V4)"**, with multi-round discussion, dynamic expert assembly, and multi-provider API access.
An AI-driven multi-agent decision-analysis system, powered by a multi-model council
## ✨ Core Features (V4 Update)
## ✨ Core Features
### 1. 🧪 Multi-Model Council V4
Gone is the single "plan-then-execute" pipeline; the system now runs a true **round-table meeting**:
* **Multi-round discussion**: Experts no longer work in isolation; as in a real meeting, they hold several rounds of round-robin dialogue, critiquing and building on each other's views.
* **Dynamic expert assembly**: Define **2-5** custom experts (e.g., CEO, CTO, Legal).
* **Custom model assignment**: Give each expert the model it is best at (e.g., DeepSeek-Coder as the technical expert, GPT-4o as the product expert).
* **Final decision synthesis**: After the discussion, the last expert (the Synthesizer) integrates all viewpoints into a final decision plan and draws a **Mermaid roadmap**.
### 🧪 Multi-Model Council V4 (Council Mode)
- **Multi-round discussion**: Experts converse over several rounds like a real meeting, critiquing and complementing each other (see the sketch below)
- **Dynamic expert assembly**: Configure 2-5 experts and assign each the model it is best at
- **🪄 Auto expert generation**: AI recommends the most suitable expert roles for your topic
- **Final decision synthesis**: The last expert integrates all viewpoints, produces the plan, and draws a Mermaid roadmap
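The round-robin mechanic above is easy to picture in code. Below is a minimal, hypothetical sketch — the real orchestration lives in `orchestrator/research_manager.py`, and the `ask` callback stands in for an LLM call rather than any actual project function:

```python
# Hypothetical sketch of the Council round-robin loop (illustrative only).
def run_council(experts, topic, rounds, ask):
    """experts: list of (name, model) pairs; ask(model, prompt) -> str."""
    transcript = []  # (expert_name, reply) pairs shared across all rounds
    for _ in range(rounds):
        for name, model in experts:
            history = "\n".join(f"{n}: {t}" for n, t in transcript)
            prompt = (f"Topic: {topic}\nDiscussion so far:\n{history}\n"
                      f"As {name}, critique and extend the views above.")
            transcript.append((name, ask(model, prompt)))
    # The last expert acts as the Synthesizer and produces the final plan
    name, model = experts[-1]
    plan = ask(model, f"As {name}, synthesize a final decision plan "
                      f"(with a Mermaid roadmap) from:\n{transcript}")
    return transcript, plan
```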
### 2. 🎭 Debate Workshop
The classic debate mode: AI plays roles with opposing stances (pro, con, judge) and helps you untangle the pros and cons of a complex decision through vigorous debate.
### 🎯 Built-in Decision Scenarios
The system ships with 4 typical decision scenarios, each configured with professional typical questions:
### 3. 🌐 Multi-Provider Support
No longer locked to one platform; the system natively supports several API sources and switches between them freely:
* **DeepSeek Official**: connects directly to `api.deepseek.com`
* **SiliconFlow**: connects to `api.siliconflow.cn`
* **AIHubMix**: aggregator platform
* **OpenAI / Custom**: standard OpenAI-compatible APIs, or local vLLM/Ollama
| Scenario | Description |
|------|------|
| 🚀 New Product Launch Review | Assess product feasibility, market potential, and the implementation plan |
| 💰 Investment Approval | Analyze a project's ROI, risks, and strategic value |
| 🤝 Partner Evaluation | Assess partner fit and the value of the partnership |
| 📦 Supplier Evaluation | Compare suppliers' overall capabilities |
### 🎭 Debate Workshop
AI plays roles with different stances and helps clarify the pros and cons of complex decisions through debate
### 💬 User Feedback
A built-in feedback system collects feature suggestions and usage-experience reports
### 🌐 多平台支持
- **DeepSeek**: V3, R1, Coder
- **OpenAI**: GPT-4o, GPT-4o-mini
- **Anthropic**: Claude 3.5 Sonnet
- **Google**: Gemini 1.5/2.0
- **SiliconFlow / AIHubMix**: aggregator platforms
---
## 🛠️ Installation
```bash
# 1. Clone the project
# Clone the project
git clone https://github.com/HomoDeusss/multi-agent.git
cd multi-agent
# 2. Install dependencies
pip install -r requirements.txt
# Initialize the uv project (first use only)
uv init
# Install dependencies
uv add streamlit openai anthropic python-dotenv
# Or sync existing dependencies
uv sync
```
## 🚀 Quick Start
### 1. Launch the app
```bash
streamlit run app.py
uv run streamlit run app.py
```
### 2. Configure the API (new in V4)
No need to edit the `.env` file by hand (it stays optional); configure everything from the web sidebar:
1. In the sidebar, choose an **"API Provider"** (e.g., `DeepSeek` or `SiliconFlow`).
2. Enter the matching **API Key**.
3. The system fills in the Base URL automatically (see the sketch below).
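For reference, that last step boils down to a provider-to-endpoint lookup like the one below (a sketch: the real table lives in the project's `config.py`, and only endpoints already named in this README are listed):

```python
# Hypothetical provider -> Base URL lookup; actual values live in config.py.
PROVIDER_BASE_URLS = {
    "DeepSeek": "https://api.deepseek.com",
    "SiliconFlow": "https://api.siliconflow.cn",
    "AIHubMix": "https://aihubmix.com/v1",
}

def resolve_base_url(provider: str, custom_url: str = "") -> str:
    """Return the endpoint for a known provider, else fall back to a custom URL."""
    return PROVIDER_BASE_URLS.get(provider, custom_url)
```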
### Usage Steps
### 3. Use Council V4 mode
1. Select **"Deep Research" (now upgraded to Council V4)**.
2. **Set up experts**: Pick the number of experts (e.g., 3), then name each one and assign it a model.
* *Tip: give the last expert a model with strong reasoning (e.g., Claude 3.5 Sonnet) to act as the decision maker.*
3. **Set the rounds**: Choose the number of discussion rounds (2-3 recommended).
4. Enter a topic and click start. Watch the experts talk to each other!
### 4. Use Debate mode
1. Switch to **"Debate Workshop"**.
2. Enter a topic (e.g., "Should I quit to become a full-time indie developer?").
3. Pick the roles joining the debate.
4. Click start and watch the verbal sparring.
1. **Configure the API**: choose a Provider in the sidebar and enter your API Key
2. **Pick a scenario**: click a preset decision scenario or enter a custom topic
3. **Generate experts**: click **"🪄 根据主题自动生成专家"** (auto-generate experts from the topic) or configure them manually
4. **Start deciding**: watch the experts discuss each other's views and produce a synthesized plan
---
## 🤖 Supported Models (V4 Expanded)
## 📁 Project Structure
The latest model configurations are built in and can be selected directly in the UI:
* **DeepSeek**: V3 (`deepseek-chat`), R1 (`deepseek-reasoner`), Coder V2
* **OpenAI**: GPT-4o, GPT-4o-mini
* **Anthropic**: Claude 3.5 Sonnet, Claude 3 Opus
* **Google**: Gemini 1.5 Pro/Flash
* **Meta/Alibaba**: Llama 3.3, Qwen 2.5
```
multi_agent_workshop/
├── app.py                        # Streamlit main app
├── config.py                     # Configuration
├── agents/                       # Agent definitions
│   ├── agent_profiles.py         # Preset role profiles
│   ├── base_agent.py             # Base agent class
│   └── research_agent.py         # Research agent
├── orchestrator/                 # Orchestrators
│   ├── debate_manager.py         # Debate manager
│   └── research_manager.py       # Council manager
├── utils/
│   ├── llm_client.py             # LLM client wrapper
│   ├── storage.py                # Storage manager
│   └── auto_agent_generator.py   # Auto expert generation
└── report/                       # Report generation
```
## 📝 License
[MIT License](LICENSE)


app.py (499 changes)

@@ -17,6 +17,7 @@ from report import ReportGenerator
from report import ReportGenerator
from utils import LLMClient
from utils.storage import StorageManager
from utils.auto_agent_generator import generate_experts_for_topic
import config
# ==================== Page config ====================
@@ -30,37 +31,146 @@ st.set_page_config(
# ==================== Styles ====================
st.markdown("""
<style>
/* Blue-violet gradient theme, modeled on the reference UI */
.stApp {
background: linear-gradient(180deg, #E8EEFF 0%, #F5F7FF 100%);
}
/* Heading gradient: blue-violet */
.stApp h1 {
background: linear-gradient(135deg, #4A5CDB 0%, #667eea 50%, #764ba2 100%);
-webkit-background-clip: text;
-webkit-text-fill-color: transparent;
background-clip: text;
font-weight: 700;
}
.stApp h2, .stApp h3 {
background: linear-gradient(90deg, #4A5CDB 0%, #667eea 100%);
-webkit-background-clip: text;
-webkit-text-fill-color: transparent;
background-clip: text;
font-weight: 600;
}
/* Keep body text dark for readability */
.stApp .stMarkdown p, .stApp .stMarkdown li {
color: #333;
}
/* Main card */
.main-card {
background: white;
border-radius: 1rem;
padding: 2rem;
box-shadow: 0 4px 20px rgba(74, 92, 219, 0.1);
margin: 1rem 0;
border: 1px solid rgba(74, 92, 219, 0.1);
}
/* Scenario card */
.scenario-card {
background: white;
border-radius: 0.75rem;
padding: 1.5rem;
margin: 0.5rem 0;
border-left: 4px solid #4A5CDB;
box-shadow: 0 2px 10px rgba(0,0,0,0.05);
}
.scenario-card h4 {
color: #4A5CDB;
margin-bottom: 0.5rem;
font-weight: 600;
}
.scenario-card p {
color: #666;
font-size: 0.9rem;
}
/* Typical-questions list */
.typical-questions {
background: #F8F9FF;
border-radius: 0.5rem;
padding: 1rem;
margin-top: 0.5rem;
}
.typical-questions strong {
color: #4A5CDB;
}
/* Status indicator */
.status-indicator {
display: inline-flex;
align-items: center;
gap: 0.5rem;
background: #E8FFE8;
padding: 0.5rem 1rem;
border-radius: 0.5rem;
border: 1px solid #4CAF50;
}
.status-dot {
width: 10px;
height: 10px;
background: #4CAF50;
border-radius: 50%;
animation: pulse 2s infinite;
}
@keyframes pulse {
0%, 100% { opacity: 1; }
50% { opacity: 0.5; }
}
/* Pre-existing styles kept below */
.agent-card {
padding: 1rem;
border-radius: 0.5rem;
margin-bottom: 0.5rem;
border-left: 4px solid #4A90A4;
background-color: #f8f9fa;
border-left: 4px solid #4A5CDB;
background-color: #F8F9FF;
}
.speech-bubble {
background-color: #f0f2f6;
background-color: #F8F9FF;
padding: 1rem;
border-radius: 0.5rem;
margin: 0.5rem 0;
}
.round-header {
background: linear-gradient(90deg, #667eea 0%, #764ba2 100%);
background: linear-gradient(90deg, #4A5CDB 0%, #667eea 50%, #764ba2 100%);
color: white;
padding: 0.5rem 1rem;
border-radius: 0.5rem;
margin: 1rem 0;
}
.custom-agent-form {
background-color: #e8f4f8;
background-color: #F8F9FF;
padding: 1rem;
border-radius: 0.5rem;
margin: 0.5rem 0;
}
.research-step {
border-left: 3px solid #FF4B4B;
border-left: 3px solid #4A5CDB;
padding-left: 10px;
margin-bottom: 10px;
}
/* Button style enhancements */
.stButton > button {
border-radius: 0.5rem;
font-weight: 500;
}
/* Divider */
hr {
border: none;
height: 1px;
background: linear-gradient(90deg, transparent, #4A5CDB, transparent);
margin: 1.5rem 0;
}
</style>
""", unsafe_allow_html=True)
@@ -122,6 +232,8 @@ if "research_output" not in st.session_state:
st.session_state.research_output = "" # Final report
if "research_steps_output" not in st.session_state:
st.session_state.research_steps_output = [] # List of step results
if "generated_experts" not in st.session_state:
st.session_state.generated_experts = None # Auto-generated expert configs
# ==================== Sidebar: configuration ====================
@@ -210,7 +322,7 @@ with st.sidebar:
save_current_config()
if not api_key:
st.warning("请配置 API Key 以继续")
st.warning("⚠️ 请配置 API Key 以启用 AI 功能 (仍可查看历史档案)")
# Output Language Selection
lang_options = config.SUPPORTED_LANGUAGES
@@ -251,8 +363,8 @@ with st.sidebar:
# Mode selection
mode = st.radio(
"📊 选择模式",
["Council V4 (Deep Research)", "Debate Workshop", "📜 History Archives"],
index=0 if st.session_state.mode == "Deep Research" else (1 if st.session_state.mode == "Debate Workshop" else 2)
["Council V4 (Deep Research)", "Debate Workshop", "📜 History Archives", "💬 用户反馈"],
index=0 if st.session_state.mode == "Deep Research" else (1 if st.session_state.mode == "Debate Workshop" else (2 if st.session_state.mode == "History Archives" else 3))
)
# Map selection back to internal mode string
@@ -260,8 +372,10 @@ with st.sidebar:
st.session_state.mode = "Deep Research"
elif mode == "Debate Workshop":
st.session_state.mode = "Debate Workshop"
else:
elif mode == "📜 History Archives":
st.session_state.mode = "History Archives"
else:
st.session_state.mode = "Feedback"
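# Editorial sketch (not part of this commit): as modes accumulate, a single
# label -> mode table keeps the radio index expression and this elif chain
# from drifting apart:
#
#     MODE_LABELS = {
#         "Council V4 (Deep Research)": "Deep Research",
#         "Debate Workshop": "Debate Workshop",
#         "📜 History Archives": "History Archives",
#         "💬 用户反馈": "Feedback",
#     }
#     st.session_state.mode = MODE_LABELS[mode]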
st.divider()
@@ -341,18 +455,167 @@ if st.session_state.get("bg_image_data_url"):
# ==================== Main UI logic ====================
if st.session_state.mode == "Deep Research":
st.title("🧪 Multi-Model Council V4")
st.markdown("*多模型智囊团:自定义 N 个专家进行多轮对话讨论,最后由最后一位专家决策*")
# ==================== Main title area ====================
st.markdown("""
<div style="text-align: center; padding: 1rem 0;">
<h1 style="font-size: 2.5rem;">🍎 智能决策工作坊</h1>
<p style="color: #666; font-size: 1.1rem;">AI驱动的多智能体决策分析系统 - 基于多模型智囊团</p>
</div>
""", unsafe_allow_html=True)
# Status indicator and language selector
col_status, col_lang = st.columns([2, 1])
with col_status:
if api_key:
st.markdown("""
<div class="status-indicator">
<div class="status-dot"></div>
<span style="color: #4CAF50;"> 已连接到服务器</span>
</div>
""", unsafe_allow_html=True)
else:
st.warning("⚠️ 请在侧边栏配置 API Key")
with col_lang:
st.markdown(f"**语言/Language:** {output_language}")
st.divider()
# ==================== Start-decision button ====================
st.markdown("""
<div class="main-card" style="text-align: center;">
<h3>🚀 开始决策</h3>
<p style="color: #666;">选择场景或自定义主题开始多专家协作分析</p>
</div>
""", unsafe_allow_html=True)
st.divider()
# ==================== Supported decision scenarios ====================
st.markdown("""
<div class="main-card">
<h2>📋 支持的决策场景</h2>
<p style="color: #666; margin-bottom: 1.5rem;">系统支持以下决策场景每个场景都配置了专业的AI专家团队</p>
</div>
""", unsafe_allow_html=True)
# Decision scenario templates with typical questions
DECISION_SCENARIOS = {
"🚀 新产品发布评审": {
"topic": "新产品发布评审:评估产品功能完备性、市场准备度、发布时机和潜在风险",
"description": "评估新产品概念的可行性、市场潜力和实施计划",
"example": "我们计划在下个季度发布AI助手功能需要评估技术准备度、市场时机和竞争态势",
"questions": [
"这个产品的核心价值主张是什么?",
"目标用户群体是谁?需求是否真实存在?",
"技术实现难度如何?团队是否具备能力?",
"竞争对手有类似产品吗?我们的差异化在哪?"
]
},
"💰 投资审批决策": {
"topic": "投资审批决策:评估投资项目的财务回报、战略价值、风险因素和执行可行性",
"description": "分析投资项目的ROI、风险和战略价值",
"example": "公司考虑投资1000万用于数据中台建设需要评估ROI、技术风险和业务价值",
"questions": [
"预期投资回报率(ROI)是多少?",
"投资回收期需要多长时间?",
"主要风险因素有哪些?如何缓解?",
"是否有更优的替代方案?"
]
},
"🤝 合作伙伴评估": {
"topic": "合作伙伴评估:分析潜在合作方的能力、信誉、战略协同和合作风险",
"description": "评估潜在合作伙伴的匹配度和合作价值",
"example": "评估与XX公司建立战略合作的可行性包括技术互补性、市场协同和风险",
"questions": [
"合作方的核心能力是什么?",
"双方资源如何互补?",
"合作的战略协同效应有多大?",
"合作失败的风险和退出机制是什么?"
]
},
"📦 供应商评估": {
"topic": "供应商评估:评估供应商的质量、成本、交付能力、稳定性和合作风险",
"description": "对比分析供应商的综合能力",
"example": "评估更换核心零部件供应商的利弊,包括成本对比、质量风险和切换成本",
"questions": [
"供应商的质量控制体系如何?",
"价格竞争力与行业均值对比?",
"交付能力和响应速度如何?",
"供应商的财务稳定性如何?"
]
}
}
# Display scenario cards with typical questions
for scenario_name, scenario_data in DECISION_SCENARIOS.items():
st.markdown(f"""
<div class="scenario-card">
<h4>{scenario_name}</h4>
<p>{scenario_data['description']}</p>
<div class="typical-questions">
<strong>典型问题</strong>
<ul style="margin: 0.5rem 0; padding-left: 1.5rem; color: #555;">
{''.join([f'<li>{q}</li>' for q in scenario_data['questions']])}
</ul>
</div>
</div>
""", unsafe_allow_html=True)
if st.button(f"使用此场景", key=f"use_{scenario_name}", use_container_width=True):
st.session_state.selected_scenario = scenario_data
st.session_state.prefill_topic = scenario_data['topic']
st.rerun()
st.divider()
# Get prefilled topic if available
prefill_topic = st.session_state.get("prefill_topic", "")
if st.session_state.get("selected_scenario"):
prefill_topic = prefill_topic or st.session_state.selected_scenario.get("topic", "")
col1, col2 = st.columns([3, 1])
with col1:
research_topic = st.text_area("研究/决策主题", placeholder="请输入你想深入研究或决策的主题...", height=100)
research_topic = st.text_area("研究/决策主题", value=prefill_topic, placeholder="请输入你想深入研究或决策的主题...", height=100)
with col2:
max_rounds = st.number_input("讨论轮数", min_value=1, max_value=5, value=2, help="专家们进行对话的轮数")
# Expert Configuration
st.subheader("👥 专家配置")
num_experts = st.number_input("专家数量", min_value=2, max_value=5, value=3)
# Auto-generate experts row
col_num, col_auto = st.columns([2, 3])
with col_num:
num_experts = st.number_input("专家数量", min_value=2, max_value=5, value=3)
with col_auto:
st.write("") # Spacing
auto_gen_btn = st.button(
"🪄 根据主题自动生成专家",
disabled=(not research_topic or not api_key),
help="AI 将根据您的主题自动推荐合适的专家角色"
)
# Handle auto-generation
if auto_gen_btn and research_topic and api_key:
with st.spinner("🤖 AI 正在分析主题并生成专家配置..."):
try:
temp_client = LLMClient(
provider=provider_id,
api_key=api_key,
base_url=base_url,
model="gpt-4o-mini" # Use fast model for generation
)
generated = generate_experts_for_topic(
topic=research_topic,
num_experts=num_experts,
llm_client=temp_client,
language=output_language
)
st.session_state.generated_experts = generated
st.success(f"✅ 已生成 {len(generated)} 位专家配置!")
st.rerun()
except Exception as e:
st.error(f"生成失败: {e}")
experts_config = []
cols = st.columns(num_experts)
@@ -360,11 +623,20 @@ if st.session_state.mode == "Deep Research":
for i in range(num_experts):
with cols[i]:
default_model_key = list(AVAILABLE_MODELS.keys())[i % len(AVAILABLE_MODELS)]
st.markdown(f"**Expert {i+1}**")
# Default names
default_name = f"Expert {i+1}"
if i == num_experts - 1:
default_name = f"Expert {i+1} (Synthesizer)"
# Use generated expert name if available
if st.session_state.generated_experts and i < len(st.session_state.generated_experts):
gen_expert = st.session_state.generated_experts[i]
default_name = gen_expert.get("name", f"Expert {i+1}")
perspective = gen_expert.get("perspective", "")
st.markdown(f"**{default_name}**")
if perspective:
st.caption(f"_{perspective}_")
else:
default_name = f"Expert {i+1}"
if i == num_experts - 1:
default_name = f"Expert {i+1} (Synthesizer)"
st.markdown(f"**Expert {i+1}**")
expert_name = st.text_input(f"名称 #{i+1}", value=default_name, key=f"expert_name_{i}")
expert_model = st.selectbox(f"模型 #{i+1}", options=list(AVAILABLE_MODELS.keys()), index=list(AVAILABLE_MODELS.keys()).index(default_model_key), key=f"expert_model_{i}")
@@ -376,12 +648,58 @@ if st.session_state.mode == "Deep Research":
research_context = st.text_area("补充背景 (可选)", placeholder="任何额外的背景信息...", height=80)
start_research_btn = st.button("🚀 开始多模型协作", type="primary", disabled=not research_topic)
start_research_btn = st.button("🚀 开始多模型协作", type="primary", disabled=(not research_topic or not api_key))
if not api_key:
st.info("💡 请先在侧边栏配置 API Key 才能开始任务")
# ==================== Session resume logic ====================
# Try to load cached session
cached_session = st.session_state.storage.load_session_state("council_cache")
# If we have a cached session, and we are NOT currently running one (research_started is False)
if cached_session and not st.session_state.research_started:
st.info(f"🔍 检测到上次未完成的会话: {cached_session.get('topic', 'Unknown Topic')}")
col_res1, col_res2 = st.columns([1, 4])
with col_res1:
if st.button("🔄 恢复会话", type="primary"):
# Restore state
st.session_state.research_started = True
st.session_state.research_output = "" # Usually empty if unfinished
st.session_state.research_steps_output = cached_session.get("steps_output", [])
# Restore inputs if possible (tricky with widgets, but we can set defaults or just rely on cache for display)
# For simplicity, we restore the viewing state. Continuing generation is harder without rebuilding the exact generator state.
# Currently, "Resume" means "Restore View". To continue adding to it would require skipping done steps in manager.
st.rerun()
with col_res2:
if st.button("🗑️ 放弃", type="secondary"):
st.session_state.storage.clear_session_state("council_cache")
st.rerun()
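# Editorial sketch (hypothetical, not part of this commit): truly continuing
# a run, rather than just restoring the view, would mean telling the manager
# which steps already finished and skipping them during replay:
#
#     done = {s["step"] for s in cached_session.get("steps_output", [])}
#     for event in manager.run(topic):                  # hypothetical run() API
#         if event["type"] == "step_start" and event.get("step") in done:
#             continue                                  # skip finished turns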
# ==================== History rendering (always visible once started) ====================
if st.session_state.research_started and st.session_state.research_steps_output and not start_research_btn:
st.subheader("🗣️ 智囊团讨论历史")
for step in st.session_state.research_steps_output:
step_name = step.get('step', 'Unknown')
content = step.get('output', '')
role_type = "assistant"
with st.chat_message(role_type, avatar="🤖"):
st.markdown(f"**{step_name}**")
st.markdown(content)
st.divider()
# ==================== Execution (triggered by the button) ====================
if start_research_btn and research_topic:
st.session_state.research_started = True
st.session_state.research_output = ""
st.session_state.research_steps_output = []
# Clear any old cache when starting fresh
st.session_state.storage.clear_session_state("council_cache")
# Use the global page background (if one was uploaded)
research_bg_path = st.session_state.get("bg_image_path")
if st.session_state.get("bg_image_data_url"):
@@ -404,9 +722,7 @@ if st.session_state.mode == "Deep Research":
)
manager.create_agents(config_obj)
st.divider()
st.subheader("🗣️ 智囊团讨论中...")
chat_container = st.container()
try:
@@ -418,10 +734,11 @@ if st.session_state.mode == "Deep Research":
# Create a chat message block
with chat_container:
st.markdown(f"#### {current_step_name}")
st.caption(f"🤖 {current_agent} ({current_model})")
message_placeholder = st.empty()
current_content = ""
with st.chat_message("assistant", avatar="🤖"):
st.markdown(f"**{current_step_name}**")
st.caption(f"({current_model})")
message_placeholder = st.empty()
current_content = ""
elif event["type"] == "content":
current_content += event["content"]
@@ -433,7 +750,18 @@ if st.session_state.mode == "Deep Research":
"step": current_step_name,
"output": event["output"]
})
st.divider() # Separator between turns
# === AUTO-SAVE CACHE ===
# Save current progress to session cache
cache_data = {
"topic": research_topic,
"context": research_context,
"steps_output": st.session_state.research_steps_output,
"experts_config": experts_config,
"max_rounds": max_rounds
}
st.session_state.storage.save_session_state("council_cache", cache_data)
# =======================
# The last step output is the final plan
if st.session_state.research_steps_output:
@@ -456,6 +784,10 @@ if st.session_state.mode == "Deep Research":
content=final_plan,
metadata=metadata
)
# Clear session cache as we finished successfully
st.session_state.storage.clear_session_state("council_cache")
st.toast("✅ 记录已保存到历史档案")
except Exception as e:
@@ -624,6 +956,8 @@ elif st.session_state.mode == "Debate Workshop":
type="primary",
use_container_width=True
)
if not api_key:
st.caption("🔒 需配置 API Key")
with col_btn2:
reset_btn = st.button(
@@ -863,6 +1197,117 @@ elif st.session_state.mode == "History Archives":
file_name=f"{record['type']}_{record['id']}.md"
)
# ==================== User feedback page ====================
elif st.session_state.mode == "Feedback":
st.title("💬 用户反馈")
st.markdown("*您的反馈帮助我们不断改进产品*")
# Feedback form
st.subheader("📝 提交反馈")
feedback_type = st.selectbox(
"反馈类型",
["功能建议", "Bug 报告", "使用体验", "其他"],
help="选择您要反馈的类型"
)
# Rating
st.markdown("**整体满意度**")
rating = st.slider("", 1, 5, 4, format="%d")
rating_labels = {1: "😞 非常不满意", 2: "😕 不满意", 3: "😐 一般", 4: "😊 满意", 5: "🤩 非常满意"}
st.caption(rating_labels.get(rating, ""))
# Feedback content
feedback_content = st.text_area(
"详细描述",
placeholder="请描述您的反馈内容...\n\n例如:\n- 您遇到了什么问题?\n- 您希望增加什么功能?\n- 您对哪些方面有改进建议?",
height=200
)
# Feature requests for Council V4
st.subheader("🎯 功能需求调研")
st.markdown("您最希望看到哪些新功能?(可多选)")
feature_options = {
"more_scenarios": "📋 更多决策场景模板",
"export_pdf": "📄 导出 PDF 报告",
"voice_input": "🎤 语音输入支持",
"realtime_collab": "👥 多人实时协作",
"custom_prompts": "✏️ 自定义专家 Prompt",
"api_access": "🔌 API 接口支持",
"mobile_app": "📱 移动端应用"
}
selected_features = []
cols = st.columns(3)
for idx, (key, label) in enumerate(feature_options.items()):
with cols[idx % 3]:
if st.checkbox(label, key=f"feature_{key}"):
selected_features.append(key)
# Contact info (optional)
st.subheader("📧 联系方式(可选)")
contact_email = st.text_input("邮箱", placeholder="your@email.com")
# Submit button
st.divider()
if st.button("📤 提交反馈", type="primary", use_container_width=True):
if feedback_content.strip():
# Save feedback
feedback_data = {
"type": feedback_type,
"rating": rating,
"content": feedback_content,
"features": selected_features,
"email": contact_email,
"timestamp": st.session_state.storage._get_timestamp() if hasattr(st.session_state.storage, '_get_timestamp') else ""
}
# Save to storage
try:
import json
import os
feedback_dir = os.path.join(st.session_state.storage.base_dir, "feedback")
os.makedirs(feedback_dir, exist_ok=True)
from datetime import datetime
filename = f"feedback_{datetime.now().strftime('%Y%m%d_%H%M%S')}.json"
filepath = os.path.join(feedback_dir, filename)
with open(filepath, 'w', encoding='utf-8') as f:
json.dump(feedback_data, f, ensure_ascii=False, indent=2)
st.success("🎉 感谢您的反馈!我们会认真阅读并持续改进产品。")
st.balloons()
except Exception as e:
st.error(f"保存反馈时出错: {e}")
else:
st.warning("请填写反馈内容")
# Show previous feedback summary
st.divider()
with st.expander("📊 我的反馈历史"):
try:
import os
import json
feedback_dir = os.path.join(st.session_state.storage.base_dir, "feedback")
if os.path.exists(feedback_dir):
files = sorted(os.listdir(feedback_dir), reverse=True)[:5]
if files:
for f in files:
filepath = os.path.join(feedback_dir, f)
with open(filepath, 'r', encoding='utf-8') as file:
data = json.load(file)
st.markdown(f"**{data.get('timestamp', 'Unknown')}** | {data.get('type', '')} | {'' * data.get('rating', 0)}")
st.caption(data.get('content', '')[:100] + "...")
st.divider()
else:
st.info("暂无反馈记录")
else:
st.info("暂无反馈记录")
except Exception:
st.info("暂无反馈记录")
# ==================== Footer ====================
st.divider()
col_footer1, col_footer2, col_footer3 = st.columns(3)

config.py

@@ -96,6 +96,9 @@ MAX_AGENTS = 6  # Maximum number of participating agents
# Supported output languages
SUPPORTED_LANGUAGES = ["Chinese", "English", "Japanese", "Spanish", "French", "German"]
# Generation settings
MAX_OUTPUT_TOKENS = 300  # Cap single-reply length to keep output concise
# Research-mode model role configuration
RESEARCH_MODEL_ROLES = {
"expert_a": {

utils/auto_agent_generator.py

@@ -0,0 +1,108 @@
"""
Auto Agent Generator - automatically generates expert configurations for a topic
Uses LLM to analyze the topic and suggest appropriate expert agents.
"""
import json
import re
from typing import List, Dict
from utils.llm_client import LLMClient
EXPERT_GENERATION_PROMPT = """You are an expert team composition advisor. Given a research/decision topic, you need to suggest the most appropriate team of experts to analyze it.
Instructions:
1. Analyze the topic carefully to understand its domain and key aspects
2. Generate {num_experts} distinct expert roles that would provide the most valuable perspectives
3. Each expert should have a unique focus area relevant to the topic
4. The LAST expert should always be a "Synthesizer" role who can integrate all perspectives
Output Format (MUST be valid JSON array):
[
{{"name": "Expert Name", "perspective": "Brief description of their viewpoint", "focus": "Key areas they analyze"}},
...
]
Examples of good expert names based on topic:
- For "Should we launch an e-commerce platform?": "市场渠道分析师", "电商运营专家", "供应链顾问", "数字化转型综合师"
- For "Career transition to AI field": "职业发展顾问", "AI行业专家", "技能评估分析师", "综合规划师"
IMPORTANT:
- Use {language} for all names and descriptions
- Make names specific to the topic, not generic like "Expert 1"
- The last expert MUST be a synthesizer/integrator type
Topic: {topic}
Generate exactly {num_experts} experts as a JSON array:"""
def generate_experts_for_topic(
topic: str,
num_experts: int,
llm_client: LLMClient,
language: str = "Chinese"
) -> List[Dict[str, str]]:
"""
Use LLM to generate appropriate expert configurations based on the topic.
Args:
topic: The research/decision topic
num_experts: Number of experts to generate (2-5)
llm_client: LLM client instance for API calls
language: Output language (Chinese/English)
Returns:
List of expert dicts: [{"name": "...", "perspective": "...", "focus": "..."}, ...]
"""
if not topic.strip():
return []
prompt = EXPERT_GENERATION_PROMPT.format(
topic=topic,
num_experts=num_experts,
language=language
)
try:
response = llm_client.chat(
system_prompt="You are a helpful assistant that generates JSON output only. No markdown, no explanation.",
user_prompt=prompt,
max_tokens=800
)
# Extract JSON from response (handle potential markdown wrapping)
json_match = re.search(r'\[[\s\S]*\]', response)
if json_match:
experts = json.loads(json_match.group())
# Validate structure
if isinstance(experts, list) and len(experts) >= 1:
validated = []
for exp in experts[:num_experts]:
if isinstance(exp, dict) and "name" in exp:
validated.append({
"name": exp.get("name", "Expert"),
"perspective": exp.get("perspective", ""),
"focus": exp.get("focus", "")
})
return validated
except Exception as e:  # json.JSONDecodeError is already a subclass of Exception
print(f"[AutoAgentGenerator] Error parsing LLM response: {e}")
# Fallback: return generic experts
fallback = []
for i in range(num_experts):
if i == num_experts - 1:
fallback.append({"name": f"综合分析师", "perspective": "整合视角", "focus": "综合决策"})
else:
fallback.append({"name": f"专家 {i+1}", "perspective": "分析视角", "focus": "专业分析"})
return fallback
def get_default_model_for_expert(expert_index: int, total_experts: int, available_models: list) -> str:
"""
Assign a default model to an expert based on their position.
Spreads experts across available models for diversity.
"""
if not available_models:
return "gpt-4o"
return available_models[expert_index % len(available_models)]
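# Usage example: three available models spread across five experts in
# round-robin order (model names illustrative):
#
#     >>> models = ["deepseek-chat", "gpt-4o", "claude-3-5-sonnet"]
#     >>> [get_default_model_for_expert(i, 5, models) for i in range(5)]
#     ['deepseek-chat', 'gpt-4o', 'claude-3-5-sonnet', 'deepseek-chat', 'gpt-4o']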

utils/llm_client.py

@@ -5,6 +5,8 @@ from typing import Generator
import os
import config
class LLMClient:
"""LLM API 统一客户端"""
@@ -62,7 +64,7 @@
self,
system_prompt: str,
user_prompt: str,
max_tokens: int = 1024
max_tokens: int = config.MAX_OUTPUT_TOKENS
) -> Generator[str, None, None]:
"""
Streaming chat
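# Editorial note: the new default above is evaluated once at import time, so
# changing config.MAX_OUTPUT_TOKENS afterwards will not affect calls that omit
# the argument. A late-binding sketch (not part of this commit):
#
#     max_tokens: int | None = None   # in the signature
#     max_tokens = max_tokens if max_tokens is not None else config.MAX_OUTPUT_TOKENS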

utils/storage.py

@@ -150,3 +150,35 @@ class StorageManager:
return json.load(f)
except Exception:
return None
# ==================== Session Cache (Resume Functionality) ====================
def save_session_state(self, key: str, data: Dict[str, Any]):
"""Save temporary session state for recovery"""
try:
# We use a dedicated cache file per key
cache_file = self.root_dir / f"{key}_cache.json"
data["_timestamp"] = int(time.time())
with open(cache_file, 'w', encoding='utf-8') as f:
json.dump(data, f, indent=2, ensure_ascii=False)
except Exception as e:
print(f"Error saving session cache: {e}")
def load_session_state(self, key: str) -> Dict[str, Any]:
"""Load temporary session state"""
cache_file = self.root_dir / f"{key}_cache.json"
if not cache_file.exists():
return None
try:
with open(cache_file, 'r', encoding='utf-8') as f:
return json.load(f)
except Exception:
return None
def clear_session_state(self, key: str):
"""Clear temporary session state"""
cache_file = self.root_dir / f"{key}_cache.json"
if cache_file.exists():
try:
os.remove(cache_file)
except Exception:
pass
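# Usage sketch for the new session-cache API (assumes a StorageManager
# instance `storage`, as created in app.py and kept in st.session_state):
#
#     storage.save_session_state("council_cache", {"topic": "...", "steps_output": []})
#     cached = storage.load_session_state("council_cache")   # dict, or None if absent
#     storage.clear_session_state("council_cache")           # removes the cache file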