Multi-model agent showcase

AI Agent

One Slack interface, multiple AI models, persistent conversation memory. A team-facing AI agent that switches providers on demand, maintains context across channels, and gives every team member access to the right model for their task.

Built on OpenClaw with a Slack integration layer. The agent supports dynamic model switching between Claude, GPT-4, and other providers mid-conversation, preserves session context, and exposes team-level configuration through Slack slash commands.

Models available

5+

Claude, GPT-4, Grok, Gemini, local models

Context window

Persistent

Session memory across messages and channels

Switch latency

<1s

Model change takes effect on next message

Business framing

Why this mattered

The team was using multiple AI products in parallel — switching tabs between ChatGPT, Claude, and other tools depending on the task. Context was lost every time they switched. There was no shared history, no team-level configuration, and no way to run the same prompt against different models for comparison. The agent made AI a single interface inside the tool the team already used all day.

Observed pain

  • Switching between AI providers meant losing conversation context every time.
  • Different tasks genuinely benefit from different models — the team needed to choose, not just default to one.
  • Slack was already the team's primary workspace; adding another tool increased friction instead of reducing it.

Guided walkthrough

Each block shows the business reason, the system move, and the operational implication.

Slide 01

One interface for all models

The agent lives in Slack as a bot the team can mention in any channel or DM. No browser tabs, no separate accounts, no context loss between sessions. The team interacts with AI the way they already interact with each other — in the conversation where the relevant work is happening.

  • Mention the bot in any channel, DM, or thread
  • No separate login or tool switching required
  • Replies appear inline with the conversation they belong to
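
The receive-and-reply path can be sketched as a minimal handler. The event shape below mirrors Slack's `app_mention` payload, but `handle_mention` and the exact field handling are illustrative assumptions, not the actual OpenClaw integration API:

```python
# Minimal sketch of the inline-reply flow for a bot mention.
# The event dict mirrors Slack's app_mention payload shape;
# handle_mention is a hypothetical name, not OpenClaw's API.

def handle_mention(event: dict) -> dict:
    """Strip the bot mention and reply in the same channel and thread."""
    text = event["text"]
    # Drop the leading "<@BOTID>" mention token, keep the prompt.
    prompt = text.split(">", 1)[1].strip() if text.startswith("<@") else text
    return {
        "channel": event["channel"],
        # Reply in the thread the message came from, so the answer
        # stays inline with the conversation it belongs to.
        "thread_ts": event.get("thread_ts") or event["ts"],
        "text": f"(model reply to: {prompt})",  # placeholder for the model call
    }

reply = handle_mention({
    "text": "<@U123> summarise this thread",
    "channel": "C42",
    "ts": "1700000000.000100",
})
print(reply["thread_ts"])  # replies attach to the originating message
```

Replying with `thread_ts` set to the triggering message is what keeps answers inline rather than scattered at the bottom of the channel.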

Slide 02

Model switching that works mid-conversation

A single slash command or inline prefix changes the active model for the session. The conversation history is preserved and passed to the new model so context does not reset. Teams can compare responses from different providers on the same prompt without re-entering context.

  • /model claude or /model gpt4 switches immediately
  • Session history transferred to the new model provider
  • Model attribution visible in every response
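
The switch mechanics above can be sketched in a few lines. The key design point is that the conversation history lives outside any one provider, so switching only changes where the next request is dispatched. Class and method names here are illustrative, not OpenClaw's real interfaces:

```python
# Sketch of mid-conversation model switching with preserved history.
# The Session class is an assumption for illustration.

class Session:
    def __init__(self, default_model: str = "claude"):
        self.model = default_model
        self.history: list[dict] = []  # provider-agnostic message log

    def handle(self, text: str) -> str:
        # "/model gpt4" switches the active model; history is untouched,
        # so the new provider receives the full conversation so far.
        if text.startswith("/model "):
            self.model = text.split(maxsplit=1)[1].strip()
            return f"Switched to {self.model} (history preserved)"
        self.history.append({"role": "user", "content": text})
        # A real implementation would call the provider here with
        # self.history; this returns an attributed placeholder instead.
        reply = f"[{self.model}] reply with {len(self.history)} turns of context"
        self.history.append({"role": "assistant", "content": reply})
        return reply

s = Session()
s.handle("draft a release note")
print(s.handle("/model gpt4"))  # Switched to gpt4 (history preserved)
print(len(s.history))           # 2 -- prior turns carried over
```

Because the log is provider-agnostic, comparing two models on the same prompt is just two dispatches of the same history.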

Slide 03

Team-level memory and configuration

The agent maintains per-user and per-channel session memory. Team admins can configure default models, set system prompts for specific channels, and review conversation logs. Individual users can manage their own session preferences without affecting the team defaults.

  • Per-user session memory across conversations
  • Channel-level system prompt configuration
  • Admin controls for defaults and access policy
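
One way the layered defaults could resolve is a simple precedence chain: a user's own session preference wins over the channel default, which wins over the team-wide default. The precedence order, storage shape, and all names below are assumptions for illustration:

```python
# Sketch of layered configuration resolution (assumed precedence:
# user preference > channel default > team default).

TEAM_DEFAULT = "claude"
CHANNEL_DEFAULTS = {"C_support": "gpt4"}          # admin-configured
CHANNEL_SYSTEM_PROMPTS = {"C_support": "Be concise and customer-friendly."}
USER_PREFS = {"U_alice": "gemini"}                # personal, not team-wide

def resolve_model(user: str, channel: str) -> str:
    # A user's own preference never changes the team defaults; it only
    # shadows them for that user's sessions.
    return USER_PREFS.get(user) or CHANNEL_DEFAULTS.get(channel) or TEAM_DEFAULT

print(resolve_model("U_alice", "C_support"))  # gemini (user pref wins)
print(resolve_model("U_bob", "C_support"))    # gpt4 (channel default)
print(resolve_model("U_bob", "C_general"))    # claude (team default)
```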

Workflow anatomy

Each stage is small enough to inspect, yet together they form a coherent system.

01

01 · Receive

Slack message triggers the agent

The bot receives the incoming message via Slack's Events API. OpenClaw identifies the user, loads their session context and preferences, and prepares the request for model dispatch.

02

02 · Route

Model selected for this request

The request is routed to the appropriate model provider based on, in order of precedence, any inline override in the message, the user's current session model, or the channel default. The session history is included in the request context.

03

03 · Process

Model generates the response

The selected model processes the full conversation context and generates a response. Token usage, latency, and model attribution are recorded for the session log.

04

04 · Reply

Response posted back to Slack

The response is formatted and posted as a Slack message reply, preserving thread context. Session memory is updated. If a model switch was requested, the new model is confirmed in the reply.
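
The four stages above can be sketched end to end with in-memory stand-ins for the session store and the provider call. Every name here is an illustration under those assumptions, not OpenClaw's implementation:

```python
# End-to-end sketch of receive -> route -> process -> reply,
# with the provider call stubbed out.

import time

SESSIONS: dict[str, list[dict]] = {}   # per-user history (stage 01 loads it)

def process_message(user: str, channel: str, text: str, model: str) -> dict:
    # 01 Receive: load the user's session context.
    history = SESSIONS.setdefault(user, [])
    history.append({"role": "user", "content": text})

    # 02 Route + 03 Process: dispatch the full history to the selected
    # provider (stubbed) and record usage metadata for the session log.
    start = time.monotonic()
    response = f"[{model}] answered with {len(history)} turns of context"
    latency_ms = (time.monotonic() - start) * 1000

    # 04 Reply: update session memory and return an attributed reply.
    history.append({"role": "assistant", "content": response})
    return {"channel": channel, "text": response,
            "meta": {"model": model, "latency_ms": latency_ms,
                     "turns": len(history)}}

out = process_message("U1", "C1", "compare these two drafts", "claude")
print(out["meta"]["model"])   # claude
print(out["meta"]["turns"])   # 2
```

The `meta` dict corresponds to the attribution and usage recording described in stage 03; in practice it would be written to the session log rather than returned inline.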

Business impact

What changed for operations

  • Team AI usage consolidates into a single interface instead of scattered across individual accounts and tools.
  • Context preservation across model switches means complex tasks no longer require restarting conversations from scratch.
  • Team leads can configure AI behaviour for specific channels — customer-facing channels get different defaults than internal engineering threads.

Architecture note

Routing logic in plain English

  • Slack message → OpenClaw Slack integration → session load → model router → provider API call → response format → Slack reply + session update
  • Model switching is stateless from the provider's perspective; session continuity is maintained by OpenClaw, not by the model.
  • The skill system allows extending the agent with custom commands without touching the core message routing logic.
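
The skill-system point can be illustrated with a small registry: custom commands plug in without touching the core routing function. The decorator pattern below is an assumption about how such a system could look, not OpenClaw's actual skill API:

```python
# Sketch of a skill registry: new slash commands register themselves,
# and the core dispatcher never needs editing. Names are illustrative.

SKILLS = {}  # command name -> handler function

def skill(name: str):
    def register(fn):
        SKILLS[name] = fn
        return fn
    return register

@skill("status")
def status_skill(args: str) -> str:
    return "agent is up"

def dispatch(text: str):
    # The core router only consults the registry; adding a skill
    # never requires changing this function.
    if text.startswith("/"):
        name, _, args = text[1:].partition(" ")
        if name in SKILLS:
            return SKILLS[name](args)
    return None  # fall through to normal model routing

print(dispatch("/status"))   # agent is up
print(dispatch("hello"))     # None -> routed to the model instead
```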

Stack in play

Use what the business already has, then make it behave like a coherent system instead of a collection of tabs.

OpenClaw gateway

The core agent runtime: handles session management, model routing, memory, and the skill system that powers slash commands.

Slack Bot / Events API

The team-facing interface. Receives messages via webhooks, manages bot presence in channels, and posts formatted responses.

Claude (Anthropic)

Primary model for reasoning-heavy tasks, long documents, and nuanced instruction following.

GPT-4 (OpenAI)

Alternative model available via model switch. Useful for tasks where GPT-4 characteristics are preferred.

Session memory store

Per-user conversation history persisted across sessions. Loaded into context on every request so conversations can resume where they left off.
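
A persistent per-user store of this kind can be sketched with SQLite standing in for whatever backend OpenClaw actually uses; the schema and class names are assumptions for illustration:

```python
# Sketch of a persistent per-user session store backed by SQLite.
# SessionStore and its schema are hypothetical, not OpenClaw's.

import json
import sqlite3

class SessionStore:
    def __init__(self, path: str = ":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS sessions (user TEXT PRIMARY KEY, history TEXT)"
        )

    def load(self, user: str) -> list[dict]:
        # Loaded into context on every request, so a conversation
        # resumes where it left off.
        row = self.db.execute(
            "SELECT history FROM sessions WHERE user = ?", (user,)
        ).fetchone()
        return json.loads(row[0]) if row else []

    def save(self, user: str, history: list[dict]) -> None:
        self.db.execute(
            "INSERT OR REPLACE INTO sessions VALUES (?, ?)",
            (user, json.dumps(history)),
        )
        self.db.commit()

store = SessionStore()
store.save("U1", [{"role": "user", "content": "hi"}])
print(len(store.load("U1")))  # 1 -- history survives between requests
```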

Reusable pattern

Powered by OpenClaw

This agent runs on OpenClaw, a self-hosted AI agent orchestration platform. The Slack integration and multi-model routing are features of the platform, not custom code — the same agent can be pointed at Teams, Telegram, or a web chat widget by changing the integration layer.