Chat (Planned)¶
The chat component will provide an LLM-powered conversational interface for querying the knowledge base and working with the task system.
Planned Features¶
Action Item Extraction¶
Extract tasks from meeting notes and transcripts:
Input: "In today's meeting, John agreed to review the API documentation
by Friday, and Sarah will update the deployment scripts."
Output:
- Todo: "Review API documentation" (assigned context: John, due: Friday)
- Todo: "Update deployment scripts" (assigned context: Sarah)
Knowledge-Task Linking¶
Automatically suggest links between todos and relevant knowledge items:
Todo: "Implement authentication middleware"
Suggested Knowledge:
- Authentication Guide (score: 0.92)
- Security Best Practices (score: 0.87)
- API Design Patterns (score: 0.71)
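The scoring mechanism is not pinned down yet; one plausible approach is cosine similarity between embeddings of the todo title and each knowledge item. In this sketch, `suggest_knowledge` and `embed` are hypothetical names, with `embed` standing in for whatever embedding model the RAG layer already uses:

# Sketch: rank knowledge items by embedding similarity (illustrative names).
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def suggest_knowledge(todo_title: str, items, embed, top_k: int = 3):
    # items: iterable of (title, embedding) pairs; embed: text -> vector
    query_vec = embed(todo_title)
    scored = [(title, cosine(query_vec, vec)) for title, vec in items]
    return sorted(scored, key=lambda s: s[1], reverse=True)[:top_k]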
Conversational Search¶
Natural language queries against the knowledge base:
User: "What's our approach to database migrations?"
Assistant: Based on the Engineering docs, migrations follow these steps:
1. Create migration script in /migrations
2. Test locally with `migrate test`
3. Apply in staging before production
[Source: Database Operations Guide, chunk 3]
Planned Architecture¶
┌─────────────────────────────────────────────────────────────┐
│ Chat Interface │
│ ┌─────────────────────────────────────────────────────┐ │
│ │ Conversation History │ │
│ │ ┌──────┐ ┌────────────────────────────────────┐ │ │
│ │ │ User │ │ How do I configure the database? │ │ │
│ │ └──────┘ └────────────────────────────────────┘ │ │
│ │ ┌──────┐ ┌────────────────────────────────────┐ │ │
│ │ │ Bot │ │ Based on the config guide... │ │ │
│ │ └──────┘ └────────────────────────────────────┘ │ │
│ └─────────────────────────────────────────────────────┘ │
│ ┌─────────────────────────────────────────────────────┐ │
│ │ Message Input │ │
│ └─────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ Chat Service │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────────────┐ │
│ │ Context │ │ RAG │ │ LLM API │ │
│ │ Assembly │ │ Retrieval │ │ (Ollama/OpenAI) │ │
│ └─────────────┘ └─────────────┘ └─────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
LLM Provider Options¶
Ollama (Local)¶
Run models locally for privacy:
Advantages:
- Complete privacy (no data leaves machine)
- No API costs
- Works offline
OpenAI API¶
Cloud-based for higher capability:
Advantages:
- More capable models
- No local compute requirements
- Faster responses
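Since the provider is meant to be swappable, both backends could sit behind a single `complete` interface. A minimal sketch, assuming Ollama's local `/api/generate` endpoint and OpenAI's chat completions API; the class names and default models are illustrative:

# Sketch: interchangeable LLM providers behind one `complete` method.
import httpx

class OllamaProvider:
    def __init__(self, model: str = "llama3", host: str = "http://localhost:11434"):
        self.model, self.host = model, host

    async def complete(self, prompt: str) -> str:
        async with httpx.AsyncClient() as client:
            r = await client.post(
                f"{self.host}/api/generate",
                json={"model": self.model, "prompt": prompt, "stream": False},
            )
            r.raise_for_status()
            return r.json()["response"]

class OpenAIProvider:
    def __init__(self, api_key: str, model: str = "gpt-4o-mini"):
        self.api_key, self.model = api_key, model

    async def complete(self, prompt: str) -> str:
        async with httpx.AsyncClient() as client:
            r = await client.post(
                "https://api.openai.com/v1/chat/completions",
                headers={"Authorization": f"Bearer {self.api_key}"},
                json={"model": self.model,
                      "messages": [{"role": "user", "content": prompt}]},
            )
            r.raise_for_status()
            return r.json()["choices"][0]["message"]["content"]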
Planned API Endpoints¶
Send Message¶
POST /api/v1/chat/message
Content-Type: application/json
{
  "conversation_id": "uuid",
  "message": "What security practices should I follow?",
  "include_sources": true
}
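The response shape is not finalized; an illustrative reply with sources attached might look like:

{
  "conversation_id": "uuid",
  "response": "Based on the Security Best Practices doc...",
  "sources": [
    {"knowledge_id": "uuid", "title": "Security Best Practices", "chunk": 2}
  ]
}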
Extract Tasks¶
POST /api/v1/chat/extract-tasks
Content-Type: application/json
{
  "text": "Meeting notes content...",
  "auto_create": false
}
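With `auto_create: false`, the endpoint would presumably return candidate todos for review rather than creating them. An illustrative response (field names are not final):

{
  "extracted": [
    {"title": "Review API documentation", "context": "John", "due": "Friday"},
    {"title": "Update deployment scripts", "context": "Sarah"}
  ],
  "created": []
}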
Get Suggestions¶
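The request shape for this endpoint has not been specified yet; a plausible form, mirroring the endpoints above (the path and fields here are illustrative only):

POST /api/v1/chat/suggest-links
Content-Type: application/json
{
  "todo_id": "uuid",
  "limit": 5
}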
RAG Integration¶
The chat system will use the existing RAG infrastructure:
1. User query arrives
2. RAG search retrieves relevant chunks
3. Chunks assembled into context
4. LLM generates response with citations
5. Response includes source references
# Planned flow
async def process_message(query: str):
    # Retrieve relevant context
    chunks = await rag.search(query, n_results=5)

    # Build prompt with context
    context = format_context(chunks)
    prompt = f"""Answer based on this context:

{context}

Question: {query}"""

    # Generate response
    response = await llm.complete(prompt)

    return {
        "response": response,
        "sources": [c.knowledge_id for c in chunks],
    }
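`format_context` is referenced above but not defined; one plausible shape, assuming each chunk exposes `text` and a `source_title` (both attribute names are assumptions):

def format_context(chunks) -> str:
    # Number chunks so the model can cite them; `source_title` and `text`
    # are assumed attribute names on the retrieved chunk objects.
    return "\n\n".join(
        f"[{i}] ({c.source_title}) {c.text}" for i, c in enumerate(chunks, 1)
    )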
Implementation Timeline¶
This feature is planned for a future release. Current priorities:
- Core todo and knowledge management (complete)
- RAG indexing and search (complete)
- Frontend knowledge integration (in progress)
- Chat interface and LLM integration (planned)