Your agents forget.
Threadline remembers.
Give every user a living memory profile that compounds across sessions, channels, and models.
Inject 200 tokens of signal, not raw chat logs.
import { Threadline } from "threadline-sdk"
const tl = new Threadline({ apiKey: "tl_live_..." })
const { injectedPrompt } = await tl.inject(userId, basePrompt)
await tl.update({ userId, userMessage, agentResponse })
Threadline keeps your agent fast while it remembers.
Compatible with everything you already use
38ms
p50 context retrieval
99.99%
uptime target
7
structured memory scopes
Why agents forget
Starts from zero
Agents forget user context between sessions and ask users the same questions over and over.
Threadline persists structured memory across every interaction.
Prompt bloat
Teams stuff entire histories into context windows, increasing cost and latency.
Inject only high-signal memory fields with low token overhead.
No user trust controls
Users cannot inspect, edit, or delete what an AI system remembers about them.
Built-in trust dashboard with scoped grants and deletion controls.
Capture interactions
Threadline extracts durable user facts from each turn.
next step
Store structured memory
Memory is persisted by scope and linked to the correct user identity.
next step
Inject before response
Relevant context is injected into system prompts in milliseconds.
How it works
Add two calls to your existing agent flow. Inject memory before generation and update memory after each response.
// npm install threadline-sdk
import { Threadline } from "threadline-sdk"
const tl = new Threadline({ apiKey: process.env.THREADLINE_KEY! })
// Before your AI call — inject user context into the prompt
const { injectedPrompt, cacheHint } = await tl.inject(userId, basePrompt)
const response = await openai.chat.completions.create({
model: "gpt-4o",
messages: [{ role: "system", content: injectedPrompt }],
...(cacheHint?.openaiParam ? { extra_body: cacheHint.openaiParam } : {}),
})
// After your AI call — update the user's context
await tl.update({ userId, userMessage, agentResponse })
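If the memory service is slow or unreachable, the agent should still answer. A minimal sketch of that pattern, assuming a hypothetical wrapper (`injectWithFallback` is not part of threadline-sdk, and the 150ms deadline is an illustrative choice):

```typescript
// Hypothetical helper (not part of threadline-sdk): fall back to the
// base prompt if memory retrieval fails or misses a deadline, so the
// agent still responds even when memory is unavailable.
async function injectWithFallback(
  inject: (prompt: string) => Promise<string>,
  basePrompt: string,
  timeoutMs = 150,
): Promise<string> {
  // Resolves with the plain prompt if the deadline passes first.
  const deadline = new Promise<string>((resolve) =>
    setTimeout(() => resolve(basePrompt), timeoutMs),
  )
  try {
    return await Promise.race([inject(basePrompt), deadline])
  } catch {
    // Memory service error: degrade gracefully to the plain prompt.
    return basePrompt
  }
}
```

In your flow, `inject` would be `(p) => tl.inject(userId, p).then((r) => r.injectedPrompt)`; the response path never blocks on memory.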
communication_style
Learns how each user prefers to interact and receive responses.
e.g. "prefers concise bullet points"
ongoing_tasks
Tracks projects, deadlines, and current blockers across sessions.
e.g. "ship onboarding flow by Friday"
key_relationships
Remembers stakeholders, teams, and important collaborators.
e.g. "works with PM Sam and designer Lee"
domain_expertise
Understands technical level and domain experience.
e.g. "backend engineer using TypeScript and Postgres"
preferences
Stores actionable settings and consistent personalization signals.
e.g. "likes actionable answers with code examples"
emotional_state
Captures stable sentiment signals for tone adaptation.
e.g. "stressed but optimistic during product launch"
general
Holds durable identity and context that does not fit other scopes.
e.g. "timezone GMT+4, solo founder, pre-launch stage"
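The seven scopes above can be modeled as a typed per-user profile. A minimal sketch: only the scope names come from this page, while the profile shape and the rendering helper are assumptions for illustration:

```typescript
// The seven structured memory scopes, as listed above.
type MemoryScope =
  | "communication_style"
  | "ongoing_tasks"
  | "key_relationships"
  | "domain_expertise"
  | "preferences"
  | "emotional_state"
  | "general"

// Assumed shape: each scope holds a short list of durable facts.
type MemoryProfile = Partial<Record<MemoryScope, string[]>>

// Render a profile as one compact line per scope, keeping the
// injected token overhead small and predictable.
function renderMemory(profile: MemoryProfile): string {
  return Object.entries(profile)
    .map(([scope, facts]) => `${scope}: ${facts.join("; ")}`)
    .join("\n")
}

const profile: MemoryProfile = {
  communication_style: ["prefers concise bullet points"],
  ongoing_tasks: ["ship onboarding flow by Friday"],
}
// renderMemory(profile) produces two lines, one per populated scope.
```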
Scoped grants
Right to delete
Audit visibility
Context Dashboard
Manage what agents can access
communication_style
Granted
ongoing_tasks
Granted
emotional_state
Revoked
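The grants shown in the dashboard above amount to a filter applied before injection: revoked scopes never reach the prompt. A hedged sketch of that enforcement step (the names mirror the dashboard; the function and types are assumptions, not Threadline's actual API):

```typescript
// Hypothetical sketch of scope enforcement: before injecting memory,
// drop any scope the user has revoked in the trust dashboard.
type GrantStatus = "granted" | "revoked"

function allowedScopes(grants: Record<string, GrantStatus>): string[] {
  return Object.entries(grants)
    .filter(([, status]) => status === "granted")
    .map(([scope]) => scope)
}

// Mirrors the dashboard state shown above.
const grants = {
  communication_style: "granted",
  ongoing_tasks: "granted",
  emotional_state: "revoked",
} as const
// allowedScopes(grants) keeps the two granted scopes and
// drops emotional_state.
```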
| Feature | Threadline | Mem0 | Build your own |
|---|---|---|---|
| User-owned context controls | | | |
| Scoped per-agent grants | | | |
| Low-latency inject flow | | | |
| Built-in trust dashboard | | | |
Start free. Scale when you're ready.
Ship memory-aware agents today and upgrade as your usage grows.
See full pricing
OAuth solved identity.
Threadline solves context.
Give every user persistent memory across your AI products without building memory infra from scratch.
Get API Key →