Team Memory Sync: A Complete Analysis

When multiple team members use Claude Code on the same codebase, the "AI memories" they each accumulate are fragmented—if Alice tells Claude to "remember this project uses ESM modules," Bob's Claude has no idea. The Team Memory Sync system (2,167 lines) solves this problem: it uploads project-level memories to the cloud via the Anthropic API, allowing every team member's Claude to share the same "project knowledge."

> **Source code locations**: `src/services/teamMemorySync/index.ts` (1,256 lines), `src/services/teamMemorySync/watcher.ts` (387 lines), `src/services/teamMemorySync/secretScanner.ts` (324 lines), `src/services/teamMemorySync/types.ts` (156 lines)

> 💡 **Plain English**: Think of it as the company wiki—every employee can take notes in their own notebook (local memory), but important project knowledge gets uploaded to the company wiki (cloud sync) so new hires can see it too. Team Memory Sync is the "auto-upload to wiki" mechanism—and it automatically checks for accidentally written passwords before uploading (secret scanning).

### Industry Context

"Multi-user memory sharing" in AI coding tools is a cutting-edge area:

- **GitHub Copilot**: No explicit memory system; learns implicitly through code context
- **Cursor**: Has project-level `.cursorrules` (equivalent to CLAUDE.md), but no dynamic sync
- **Aider**: No memory persistence; starts from scratch every session
- **LangMem / Mem0 / Zep**: Third-party memory frameworks offering multi-user sync, but require independent deployment

Claude Code's unique advantage is **native integration**—memory sync is a built-in capability requiring no additional infrastructure. It is implemented through Anthropic's own API and deeply tied to the OAuth authentication system.

---

## Overview

This chapter unfolds in the following order: Section 1 presents the system architecture; Section 2 dissects the sync protocol (pull + push); Section 3 dives into secret scanning; Section 4 explains file watching and real-time sync; Section 5 analyzes multi-session conflict handling; Section 6 discusses design trade-offs.

---

> **[Chart placeholder 3.20-A]**: Team Memory Sync Architecture — Local memdir ↔ SyncState ↔ Anthropic API ↔ Other team members' instances

> **[Chart placeholder 3.20-B]**: Pull/Push Sync Flow — ETag conditional request → Delta detection → Secret scanning → Batched upload

---

## 1. System Architecture

### 1.1 Component Relationships

```
Local file system (.claude/memory/)
  │
  │ chokidar watch
  ▼
┌────────────────────────────────────────┐
│ watcher.ts — File change watcher       │
│  · 2s debounce                         │
│  · Change events → trigger push        │
└─────────────────┬──────────────────────┘
                  │
                  ▼
┌────────────────────────────────────────┐
│ index.ts — Sync core                   │
│  · pullTeamMemory(): server → local    │
│  · pushTeamMemory(): local → server    │
│  · SHA256 content hash for delta detect│
│  · ETag conditional requests avoid     │
│    redundant transfer                  │
│  · 250KB body limit → batched PUT      │
└─────────────────┬──────────────────────┘
                  │
                  ▼
┌────────────────────────────────────────┐
│ secretScanner.ts — Secret scanning     │
│  · Check content for keys/credentials  │
│    before upload                       │
│  · Detection modes: API Key / Token /  │
│    Certificate                         │
│  · Key detected → block upload + warn  │
│    user                                │
└────────────────────────────────────────┘
                  │
                  ▼
        Anthropic API (Team Memory endpoint)
                  │
                  ▼
        Other team members' Claude Code instances
```

### 1.2 Code Distribution

| File | Lines | Responsibility |
|------|-------|----------------|
| `index.ts` | 1,256 | Sync core: pull/push/hash/batch |
| `watcher.ts` | 387 | File change watching + debounce |
| `secretScanner.ts` | 324 | Secret leak detection |
| `types.ts` | 156 | TypeScript type definitions |
| `teamMemSecretGuard.ts` | 44 | Secret guard executor |
| **Total** | **2,167** | |

---

## 2. Sync Protocol

### 2.1 Repo Identifier

Team memory uses the **SHA256 hash of the git remote URL** as its identifier—all clones of the same repo (different directories, different machines) share the same team memory:

```typescript
// Repository identifier calculation
import { createHash } from 'node:crypto'

const repoId = createHash('sha256').update(gitRemoteUrl).digest('hex')

// Effect:
// /Users/alice/projects/my-app → SHA256("git@github.com:org/my-app.git")
// /Users/bob/work/my-app       → same SHA256 → shares the same team memory
```

> 💡 **Plain English**: Like a "project number" for a company project—no matter whether Alice's code is on her Desktop or in her Documents folder, as long as it's the same Git repository, it maps to the same project number and shares the same project knowledge base.

### 2.2 Pull (Server → Local)

```typescript
// pullTeamMemory() — Server-authoritative mode
async function pullTeamMemory(syncState: SyncState) {
  // 1. GET /team-memory/{repoId}
  //    With If-None-Match: <last ETag>
  
  // 2. If 304 Not Modified → no changes, skip
  
  // 3. If 200 OK:
  //    Server returns all key-value pairs
  //    Local files are overwritten (server wins)
  //    Update local ETag cache
}
```

**Key design: server authoritative**—pull does not merge; it directly overwrites local data with server data. This simplifies conflict handling, but means if two people modify the same memory simultaneously, the later submission overwrites the earlier one.
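Under the assumption of a fetch-style HTTP client, the conditional-GET flow above can be sketched as follows. The endpoint path comes from the comments above; the `FetchLike` interface, function signature, and response shape are illustrative assumptions, not the real API:

```typescript
// Hedged sketch of the ETag-conditional pull. The FetchLike interface and
// response shape are assumptions for illustration, not the actual client.
type FetchLike = (url: string, init?: { headers?: Record<string, string> }) => Promise<{
  status: number
  headers: { get(name: string): string | null }
  json(): Promise<Record<string, string>>
}>

async function pullTeamMemory(
  state: { etag?: string },
  repoId: string,
  fetchFn: FetchLike
): Promise<Record<string, string> | null> {
  const headers: Record<string, string> = {}
  if (state.etag) headers['If-None-Match'] = state.etag   // conditional request
  const res = await fetchFn(`/team-memory/${repoId}`, { headers })
  if (res.status === 304) return null                     // Not Modified → skip
  const entries = await res.json()                        // 200 OK → full snapshot
  state.etag = res.headers.get('ETag') ?? undefined       // cache the new ETag
  return entries                                          // caller overwrites local files
}
```

Injecting the HTTP client as a parameter is a sketch convenience; the point is that an unchanged server state costs one cheap 304 round-trip instead of a full download.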

### 2.3 Push (Local → Server)

```typescript
// pushTeamMemory() — Delta push
async function pushTeamMemory(syncState: SyncState) {
  // 1. Read all local memory files
  
  // 2. Compute SHA256 content hash for each file
  
  // 3. Compare against last known server state:
  //    - Same hash → skip (no change)
  //    - Different hash → mark as "needs upload"
  //    - New local file → mark as "needs upload"
  //    - Local deletion → mark as "needs delete"
  
  // 4. Secret scanning (see Section 3)
  
  // 5. Batched PUT (see 2.4)
}
```
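Step 3, the hash-based delta detection, can be sketched as a pure function. The type and function names here are assumptions for illustration, not the actual identifiers:

```typescript
// Hedged sketch of the delta computation: compare local content hashes
// against the last-known server hashes. Names are illustrative assumptions.
import { createHash } from 'node:crypto'

type DeltaAction = { key: string; action: 'upload' | 'delete' }

function computeDelta(
  local: Map<string, string>,          // key → current file content
  serverHashes: Map<string, string>    // key → last-known SHA256 on the server
): DeltaAction[] {
  const actions: DeltaAction[] = []
  for (const [key, content] of local) {
    const hash = createHash('sha256').update(content).digest('hex')
    // New local file or changed content → needs upload
    if (serverHashes.get(key) !== hash) actions.push({ key, action: 'upload' })
  }
  for (const key of serverHashes.keys()) {
    // Present on server but deleted locally → needs delete
    if (!local.has(key)) actions.push({ key, action: 'delete' })
  }
  return actions
}
```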

### 2.4 Batched Upload

The Anthropic API gateway has a body size limit (approx. 250KB). When there are many memory files, batching is required:

```typescript
// Batching logic
const GATEWAY_BODY_LIMIT = 250 * 1024  // ~250KB

async function batchPush(entries: MemoryEntry[]) {
  let batch: MemoryEntry[] = []
  let batchSize = 0

  for (const entry of entries) {
    if (batch.length > 0 && batchSize + entry.size > GATEWAY_BODY_LIMIT) {
      // Current batch is full, send it before adding this entry
      await putBatch(batch)
      batch = []
      batchSize = 0
    }
    batch.push(entry)
    batchSize += entry.size
  }

  // Send final batch
  if (batch.length > 0) await putBatch(batch)
}
```

### 2.5 Capacity Limit Handling

```typescript
// When server returns 413 Payload Too Large:
// Response body contains structured info: { max_entries: 100 }
// Client learns this limit and locally truncates excess entries
// Next push automatically respects the limit
```
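The client-side handling described above might look like the following sketch. Only the `max_entries` field name comes from the text; the helper names and sync-state shape are assumptions:

```typescript
// Hedged sketch of 413 handling: record the server-advertised limit in local
// sync state, then truncate excess entries before the next push.
interface SyncLimits { maxEntries?: number }

function on413(body: { max_entries?: number }, limits: SyncLimits): void {
  // Learn the limit from the structured 413 response body
  if (typeof body.max_entries === 'number') limits.maxEntries = body.max_entries
}

function applyEntryLimit<T>(entries: T[], limits: SyncLimits): T[] {
  if (limits.maxEntries === undefined) return entries
  return entries.slice(0, limits.maxEntries)   // drop excess entries locally
}
```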

---

## 3. Secret Scanning (secretScanner.ts)

Memory data uploaded to the cloud will be shared among team members—if a memory accidentally contains an API key or database password, that's a security incident.

### 3.1 Scan Timing

```
Memory file changes
  → trigger push
  → run secret scan before push
  → key found → block upload + warn user
  → no key found → continue upload
```

### 3.2 Detection Patterns

```typescript
// secretScanner.ts — Detection pattern list

// API Key pattern
/(?:sk|pk|api)[-_](?:live|test|prod)?[-_]?[a-zA-Z0-9]{20,}/

// JWT Token
/eyJ[a-zA-Z0-9_-]{10,}\.[a-zA-Z0-9_-]{10,}\.[a-zA-Z0-9_-]{10,}/

// Generic secret pattern
/(?:password|secret|token|key|credential|auth)[\s]*[=:]\s*['"][^'"]{8,}['"]/i

// Private key
/-----BEGIN (?:RSA |EC |OPENSSH )?PRIVATE KEY-----/

// AWS key
/(?:AKIA|ASIA)[A-Z0-9]{16}/

// ... more patterns
```
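Putting a subset of these patterns behind a single check might look like this minimal sketch. The pattern set is abbreviated to the ones listed above, and the function name and return shape are assumptions:

```typescript
// Minimal scanner sketch over a subset of the patterns listed above.
// A non-empty result would block the upload and warn the user.
const SECRET_PATTERNS: Array<{ name: string; re: RegExp }> = [
  { name: 'api-key',     re: /(?:sk|pk|api)[-_](?:live|test|prod)?[-_]?[a-zA-Z0-9]{20,}/ },
  { name: 'jwt',         re: /eyJ[a-zA-Z0-9_-]{10,}\.[a-zA-Z0-9_-]{10,}\.[a-zA-Z0-9_-]{10,}/ },
  { name: 'private-key', re: /-----BEGIN (?:RSA |EC |OPENSSH )?PRIVATE KEY-----/ },
  { name: 'aws-key',     re: /(?:AKIA|ASIA)[A-Z0-9]{16}/ },
]

function scanForSecrets(content: string): string[] {
  // Return the names of every matching pattern
  return SECRET_PATTERNS.filter((p) => p.re.test(content)).map((p) => p.name)
}
```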

### 3.3 Permanent Failure Suppression

Some failures are "permanent"—they should not be retried indefinitely:

```typescript
// Permanent failure types:
// - 403 Forbidden: user not authorized for team memory
// - 404 Not Found: team memory feature not enabled
// - no_oauth: user has not completed OAuth authorization

// On permanent failure → suppress file watching → retry only after restart
// Prevents infinite retry loops wasting resources
```

---

## 4. File Watching and Real-Time Sync

### 4.1 Chokidar Watch

```typescript
// watcher.ts — Watch the .claude/memory/ directory
import chokidar from 'chokidar'

function watchTeamMemory(memoryDir: string) {
  const watcher = chokidar.watch(memoryDir, {
    ignoreInitial: true,       // Don't process files already present at startup
    awaitWriteFinish: {
      stabilityThreshold: 300  // Wait until the file has been stable for 300ms
    }
  })

  // Share one debounced trigger so a burst of change/add/unlink events
  // collapses into a single push
  const push = debounce(triggerPush, 2000)
  watcher.on('change', push)
  watcher.on('add', push)
  watcher.on('unlink', push)
}
```

### 4.2 Debounce Strategy

The 2-second debounce ensures that batch file modifications (e.g., AI updating multiple memories at once) trigger only a single push:

```
t=0ms:    memory/user_role.md changed
t=500ms:  memory/project_info.md changed
t=1200ms: memory/feedback_style.md changed
t=3200ms: 2000ms since last change → trigger push (all 3 files uploaded together)
```

---

## 5. Multi-Session Conflict Handling

### 5.1 Optimistic Concurrency

Team memory sync uses **ETag-based optimistic concurrency control**:

```
Session A: pull (ETag: v1) → modify memory → push (If-Match: v1) → success (ETag: v2)
Session B: pull (ETag: v1) → modify memory → push (If-Match: v1) → 409 Conflict
                                      → re-pull (ETag: v2) → merge → push (If-Match: v2)
```
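Session B's recovery path can be sketched as follows. The HTTP layer is injected as `putFn` for clarity, and all names are assumptions; the Last-Write-Wins per-key merge reflects Section 5.2:

```typescript
// Hedged sketch of the If-Match push with one conflict round-trip.
type PutFn = (ifMatch: string, entries: Record<string, string>) =>
  Promise<{ status: number; etag?: string }>

async function pushWithConflictRetry(
  state: { etag: string; entries: Record<string, string> },
  putFn: PutFn,
  repull: () => Promise<{ etag: string; entries: Record<string, string> }>
): Promise<void> {
  const res = await putFn(state.etag, state.entries)      // If-Match: current ETag
  if (res.status === 409) {
    const latest = await repull()                         // 409 Conflict → re-pull
    // Last-Write-Wins per key: our local entries overwrite the re-pulled ones
    state.entries = { ...latest.entries, ...state.entries }
    state.etag = latest.etag
    const retry = await putFn(state.etag, state.entries)  // push against new ETag
    if (retry.etag) state.etag = retry.etag
    return
  }
  if (res.etag) state.etag = res.etag
}
```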

### 5.2 Last-Write-Wins

For conflicts on the same key, the current implementation adopts a **Last-Write-Wins** strategy—no content merging; the later write overwrites the earlier one. This is acceptable in practice because:

- Team memory entries are typically independent (each person owns knowledge in different domains)
- The probability of simultaneous edits to the same memory is low
- Merge semantics for natural language text are complex (unlike code, which has line-level diff)

### 5.3 OAuth Token Refresh

Sync requires a valid OAuth token. When the token expires:

```typescript
// Auto-refresh flow:
// 1. API returns 401 Unauthorized
// 2. Use refresh_token to obtain new access_token
// 3. Retry original request
// If refresh also fails → mark as permanent failure → stop syncing
```
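The refresh-and-retry flow above can be sketched as a generic wrapper. All names are assumptions; `refresh()` stands in for the OAuth refresh_token exchange:

```typescript
// Hedged sketch of the 401 → refresh → retry flow.
async function withAuthRetry<T>(
  request: (accessToken: string) => Promise<{ status: number; value?: T }>,
  tokens: { access: string },
  refresh: () => Promise<string | null>
): Promise<T> {
  let res = await request(tokens.access)
  if (res.status === 401) {
    const fresh = await refresh()           // exchange refresh_token for a new access_token
    if (fresh === null) {
      // Refresh also failed → permanent failure, caller stops syncing
      throw new Error('token refresh failed: permanent failure')
    }
    tokens.access = fresh
    res = await request(tokens.access)      // retry the original request once
  }
  if (res.value === undefined) throw new Error(`request failed: ${res.status}`)
  return res.value
}
```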

---

## 6. Design Trade-Offs

### 6.1 Server Authoritative vs. Local Authoritative

| Aspect | Server Authoritative (Current) | Local Authoritative |
|--------|-------------------------------|---------------------|
| Conflict handling | Simple (server wins) | Complex (requires merge algorithm) |
| Offline edits | May be overwritten after coming online | Offline edits preserved |
| Consistency | Strong consistency (everyone sees same data) | Eventual consistency |
| Implementation complexity | Low | High (requires CRDT or OT) |

The current choice is reasonable: team memory is "auxiliary information" rather than "critical data," so occasional overwrites are not catastrophic.

### 6.2 Secret Scanning False Positives

Regex matching inevitably produces false positives—some legitimate memory content (e.g., discussing API key formats) may be flagged as a secret leak. There is currently no "allowlist" mechanism for users to skip secret scanning, which could cause frustration.

### 6.3 No Version History

Team memory has no version history—once overwritten, the old content is gone. For important project knowledge, this may be a risk. Future improvements could include:

- Retaining the last N versions
- Auto-backing up to `.claude/memory/_history/` before overwrite
- Git integration, so memory changes are also under version control

---

## 7. Critique and Reflection

### 7.1 Blurred Privacy Boundary

Team memory sync means things your Claude "remembers" can be seen by colleagues. If a user habitually stores personal preferences in memory (e.g., "I don't like a certain colleague's coding style"), syncing could create awkward situations. There is currently no clear boundary between "private memory" and "team memory."

### 7.2 Strong Dependency on Anthropic Backend

Sync must go through Anthropic's API—there is no self-hosted option. For enterprises concerned about data sovereignty (e.g., finance, healthcare), uploading project knowledge to a third-party cloud may not be permitted.

### 7.3 Overlapping Positioning with CLAUDE.md

The `CLAUDE.md` file can also store project-level instructions and is naturally synced via git. Team Memory Sync and CLAUDE.md have partially overlapping positioning, which may confuse users about "what goes in CLAUDE.md vs. what goes in team memory." The dividing principle:

- **CLAUDE.md**: Static, repo-level instructions (everyone sees it, synced via git)
- **Team Memory**: Dynamic, runtime-discovered knowledge (things the AI learns, auto-synced)

> 🔑 **Deep insight**: Although the Team Memory Sync system is only 2,167 lines, it touches one of the most frontier problems in AI collaboration tools—**should AI knowledge belong to the individual or the team?** When you tell Claude to "remember" a project's technical decision, does that knowledge belong to you or to the project? When multiple people's AI memories conflict, who decides? These questions are currently solved crudely with "server authoritative + Last-Write-Wins," but as AI memory systems mature, they will become product-philosophy questions that demand serious answers.
