AI Chat

Cync provides AI-powered chat interfaces for querying data using natural language.

Interfaces

Rhea Chat (/protected/rhea)

General-purpose AI assistant with access to:

  • Stickies
  • Links
  • News articles

Fireflies Chat

Embedded in transcript viewer for meeting-specific queries.

How It Works

┌────────────┐    ┌────────────┐    ┌────────────┐
│    User    │───▶│ OpenRouter │───▶│    MCP     │
│   Query    │    │    LLM     │    │   Tools    │
└────────────┘    └────────────┘    └──────┬─────┘
                                           │
              ┌────────────────────────────┼──────────────────────┐
              ▼                            ▼                      ▼
         ┌─────────┐                  ┌─────────┐            ┌─────────┐
         │ Stickies│                  │  Links  │            │   News  │
         └─────────┘                  └─────────┘            └─────────┘

MCP Tools

The chat uses Model Context Protocol (MCP) tools:

Tool                      Description
get_stickies              Retrieve user's notes
create_sticky             Create a new note
edit_sticky               Update an existing note
search_news               Search news articles
get_weekly_news_digest    Summarize recent news
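
Each tool pairs a name and description (what the model sees) with a handler that runs against the user's data. Below is a minimal sketch of how getMcpTools might assemble one of these definitions; the tool names come from the table above, but the parameter schema and the db helper are assumptions for illustration:

// Hypothetical sketch of a tool definition; the schema shape and
// the `db` data-access helper are assumptions, not the app's actual API
import { db } from "@/lib/db";

export function getMcpTools(userId: string) {
  return [
    {
      name: "get_stickies",
      description: "Retrieve user's notes",
      parameters: {
        type: "object",
        properties: {
          tag: { type: "string", description: "Optional tag filter" },
        },
      },
      execute: async (args: { tag?: string }) =>
        db.stickies.findMany({ userId, tag: args.tag }),
    },
    // ...create_sticky, edit_sticky, search_news, get_weekly_news_digest
  ];
}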

Example Queries

Stickies

"Show me my notes from last week"
"Create a sticky about the Q4 planning meeting"
"Find notes tagged with 'urgent'"

News

"What's happening in the tech industry?"
"Summarize the news from this week"
"Find articles about AI regulation"

Meetings (Fireflies Chat)

"What were the action items from this meeting?"
"Who talked the most?"
"What did we decide about the budget?"

Multi-Model Support

OpenRouter provides access to multiple AI models:

# Choose your model in .env.local
OPENROUTER_MODEL=anthropic/claude-3.5-sonnet
# OR
OPENROUTER_MODEL=openai/gpt-4o
# OR
OPENROUTER_MODEL=meta-llama/llama-3.1-70b-instruct
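
The chat route reads this variable at request time. A one-line sketch, assuming a hard-coded fallback when the variable is unset (the fallback choice is illustrative, not necessarily what the app does):

// Resolve the model per request; the fallback value is an assumption
const model = process.env.OPENROUTER_MODEL ?? "anthropic/claude-3.5-sonnet";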

Feedback System

Chat interfaces include thumbs-up/thumbs-down buttons for feedback:

  • Currently logs to console (see the sketch after this list)
  • Future: Store feedback for model improvement
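
A minimal sketch of the current log-only behavior; the handler name, prop names, and the Feedback type are assumptions:

// Hypothetical handler matching the current behavior: log to console only
type Feedback = { messageId: string; rating: "up" | "down" };

export function handleFeedback({ messageId, rating }: Feedback) {
  // Today: log only. Later: POST to an API route and persist for model improvement.
  console.log(`feedback: message=${messageId} rating=${rating}`);
}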

Architecture

Chat API Route

// /api/chat/route.ts
export async function POST(request: Request) {
  const { messages } = await request.json();

  // Get user context from session
  const user = await getUser();

  // Call OpenRouter with tools
  const response = await openrouter.chat({
    model: process.env.OPENROUTER_MODEL,
    messages,
    tools: getMcpTools(user.id),
  });

  return Response.json(response);
}
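
A client component would call this route with the running message list. A minimal usage sketch; the request shape mirrors the route above, but is otherwise an assumption:

// Hypothetical client call to /api/chat
const res = await fetch("/api/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    messages: [{ role: "user", content: "Show me my notes from last week" }],
  }),
});
const reply = await res.json();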

Tool Execution

// When the LLM decides to use a tool
if (response.tool_calls) {
  for (const call of response.tool_calls) {
    const result = await executeTool(call.name, call.arguments);
    // Feed result back to LLM
  }
}
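
The "feed result back" step means appending each tool result to the conversation and calling the model again until it stops requesting tools. A sketch of that full round trip, assuming OpenAI-style tool-call messages (which OpenRouter forwards) and the same openrouter/executeTool helpers as above:

// Sketch of the complete tool loop; the message shapes follow OpenAI-style
// tool calling, which is an assumption about the openrouter helper
type Message = { role: string; [key: string]: unknown };
type Tool = { name: string; [key: string]: unknown };

async function runWithTools(messages: Message[], tools: Tool[]) {
  const model = process.env.OPENROUTER_MODEL;
  let response = await openrouter.chat({ model, messages, tools });

  while (response.tool_calls?.length) {
    // Record the assistant turn that requested the tools
    messages.push({ role: "assistant", tool_calls: response.tool_calls });

    for (const call of response.tool_calls) {
      const result = await executeTool(call.name, call.arguments);
      // Return each result as a tool message keyed by its call id
      messages.push({
        role: "tool",
        tool_call_id: call.id,
        content: JSON.stringify(result),
      });
    }

    // Ask the model again, now with tool results in context
    response = await openrouter.chat({ model, messages, tools });
  }

  return response; // final natural-language answer
}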