# Cross-Cutting Concerns: The 12 Hidden Patterns Running Through the Entire System

Feature flags, error recovery, internationalization, configuration layering—these "hidden patterns" don't belong to any single module, yet they run through the entire system. This chapter examines 12 cross-cutting concerns, revealing how they maintain consistency across different modules and the trade-offs of this design.

> **🌍 Industry Context**: Cross-cutting concerns are a classic concept in software engineering—logging, authentication, error handling, and configuration management don't belong to any single business module, yet are needed in all of them. The Java ecosystem solves this with AOP (Aspect-Oriented Programming), Go uses middleware chains, and React uses the Context/Provider pattern. Among Claude Code's 12 cross-cutting concerns, most are industry-standard practices (OAuth PKCE, exponential backoff retry, feature flags), but a few are innovations unique to AI agents: **Dream Mode (sleep memory consolidation)** and **Auto-Memory (real-time conversation extraction)** have no equivalents in other AI coding tools—neither Cursor nor Windsurf has a cross-session automatic memory system, and Aider only has a manual `.aider` configuration file.

---

## Prelude: The Invisible Infrastructure

Every city has visible buildings—office towers, shopping malls, residential complexes. But what truly makes a city function lies underground or hidden inside walls: the water supply networks, power lines, drainage systems, and communication fiber you depend on daily but never notice.

Claude Code is the same. The previous 12 chapters covered the "buildings"—startup, `queryLoop`, tool runtime, security architecture. This chapter covers the "piping"—the cross-cutting concerns that don't belong to any single module, yet **run through the entire system**. They don't have their own chapters because they are everywhere.

---

## 1. Feature Flag System: Compile-Time Gating

### What It Is

Claude Code uses a **two-layer** feature flag system. The first layer is the compile-time `feature()` function imported from `bun:bundle`; the second layer is runtime GrowthBook remote configuration (see Section 12). We'll start with the compile-time layer.

### How It Works

`feature()` is imported from Bun's bundler module `bun:bundle`, taking an uppercase string constant as an argument:

```typescript
// src/services/api/withRetry.ts
import { feature } from 'bun:bundle'

...(feature('BASH_CLASSIFIER') ? (['bash_classifier'] as const) : []),
```

Its key characteristic is **compile-time evaluation**—when Bun bundles, `feature('BASH_CLASSIFIER')` is replaced with the literal `true` or `false`. When the result is `false`, the entire branch is removed by Dead Code Elimination (DCE) and won't appear in the final artifact.
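
A hedged sketch of what the bundler effectively emits for the snippet above, once the flag has already been folded to a literal (the array contents are illustrative):

```typescript
// Sketch: what remains after Bun folds feature('BASH_CLASSIFIER') to a literal.
// With the flag compiled to `true`, the branch survives:
const toolsEnabled = [...(true ? (['bash_classifier'] as const) : [])]

// With the flag compiled to `false`, the ternary folds to [] and DCE can
// delete the dead branch entirely from the artifact:
const toolsDisabled = [...(false ? (['bash_classifier'] as const) : [])]
```

In the real artifact the `false` case leaves no trace at all; the arrays here only demonstrate the fold.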

Through source code search, we can confirm the existence of at least the following compile-time flags:

| Flag Name | Purpose | Locations |
|-----------|---------|-----------|
| `TEAMMEM` | Team memory functionality | `watcher.ts`, `extractMemories.ts`, `teamMemSecretGuard.ts` |
| `VOICE_MODE` | Voice mode | `AppState.tsx`, `defaultBindings.ts` |
| `KAIROS` | Assistant mode | `metadata.ts`, `defaultBindings.ts` |
| `BASH_CLASSIFIER` | Bash command classifier | `withRetry.ts` |
| `CACHED_MICROCOMPACT` | Cached micro-compaction | `claude.ts` |
| `CONNECTOR_TEXT` | Connector text blocks | `claude.ts` |
| `TRANSCRIPT_CLASSIFIER` | Conversation classifier | `claude.ts` |
| `UNATTENDED_RETRY` | Unattended retry | `withRetry.ts` |
| `COMMIT_ATTRIBUTION` | Commit attribution | `setup.ts` |
| `UDS_INBOX` | Unix domain socket messages | `setup.ts` |
| `QUICK_SEARCH` | Quick search | `defaultBindings.ts` |
| `TERMINAL_PANEL` | Terminal panel | `defaultBindings.ts` |
| `MESSAGE_ACTIONS` | Message actions | `defaultBindings.ts` |
| `BREAK_CACHE_COMMAND` | Cache break command | `context.ts` |

The core value of these flags is **zero runtime overhead**. Internal Anthropic builds (`USER_TYPE=ant`) enable all flags, while external releases only enable stable features. Importing from `bun:bundle` means these decisions happen at bundle time rather than in runtime conditional branches—disabled features simply don't exist in the binary delivered to users.

A typical usage pattern is conditional `require`:

```typescript
// src/services/extractMemories/extractMemories.ts
const teamMemPaths = feature('TEAMMEM')
  ? (require('../../memdir/teamMemPaths.js') as typeof import('../../memdir/teamMemPaths.js'))
  : null
```

When `TEAMMEM` is `false`, the entire `teamMemPaths.js` module won't be bundled in at all.

> **💡 Plain English**: Feature flags are like a restaurant's secret menu—the dishes have already been developed and the chef knows how to make them, but they're only available to staff and VIP guests. Once a dish has been thoroughly tested, it gets added to the public menu for all customers. Compile-time flags go even further: for dishes not on the menu, the ingredients aren't even brought into the kitchen.

> 📚 **Course Connection**: Compile-time feature flags are essentially **conditional compilation**—C/C++ uses `#ifdef`, Rust uses `#[cfg(feature)]`, and Bun's `feature()` is the JavaScript ecosystem's equivalent. In compiler theory courses, this belongs to the classic optimization combination of **constant folding** + **dead code elimination**.

---

## 2. Error Handling Patterns: Layered Retry and Degradation

### Error Type System

Claude Code's error handling is built around the Anthropic SDK's `APIError` system, while also defining its own error types. The core file `src/services/api/withRetry.ts` defines two key error classes:

```typescript
// src/services/api/withRetry.ts
export class CannotRetryError extends Error {
  constructor(
    public readonly originalError: unknown,
    public readonly retryContext: RetryContext,
  ) { ... }
}

export class FallbackTriggeredError extends Error {
  constructor(
    public readonly originalModel: string,
    public readonly fallbackModel: string,
  ) { ... }
}
```

`CannotRetryError` indicates exhausted retries or scenarios where retrying is explicitly inappropriate. `FallbackTriggeredError` indicates that the primary model has failed consecutively and has triggered a fallback to an alternative model.

### Retry Mechanism

`withRetry()` is an `AsyncGenerator` with the following core logic:

1. **Maximum retry count**: 10 by default (`DEFAULT_MAX_RETRIES`), overridable via the `CLAUDE_CODE_MAX_RETRIES` environment variable
2. **Exponential backoff**: `BASE_DELAY_MS = 500`, doubling each time, capped at 32 seconds. Backoff formula: `min(500 * 2^(attempt-1), 32000) + jitter`
3. **Retry-After header**: If the API returns a `retry-after` header, the client uses the server-specified wait time directly
4. **529 overload protection**: Triggers degradation after 3 consecutive 529s (`MAX_529_RETRIES = 3`)

```typescript
const BASE_DELAY_MS = 500  // defined earlier in withRetry.ts

export function getRetryDelay(attempt: number, retryAfterHeader?: string | null, maxDelayMs = 32000): number {
  if (retryAfterHeader) {
    const seconds = parseInt(retryAfterHeader, 10)
    if (!isNaN(seconds)) return seconds * 1000
  }
  const baseDelay = Math.min(BASE_DELAY_MS * Math.pow(2, attempt - 1), maxDelayMs)
  const jitter = Math.random() * 0.25 * baseDelay
  return baseDelay + jitter
}
```

> 📚 **Course Connection**: Exponential backoff + jitter is a foundational algorithm in distributed systems courses—AWS's official documentation lists it as "must implement" client behavior. The formula `min(base * 2^attempt, maxDelay) + random_jitter` appears almost verbatim in Claude Code, the AWS SDK, and gRPC clients. Jitter prevents the **thundering herd** effect—if all clients retry at exactly the same time, the server becomes overloaded again.

### Retry Strategy Determined by Query Source

A key design decision is that **background queries don't retry 529s**. The `FOREGROUND_529_RETRY_SOURCES` set defines which query sources are worth retrying: only operations where the user is directly waiting on results (such as `repl_main_thread`, `compact`, `sdk`) are retried; background operations (summaries, title generation, classifiers) fail immediately, avoiding amplified request volume during cascading capacity failures.
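
A minimal sketch of this gating, assuming a simple set-membership check (the set members are the ones named in the text; the real set and function shape may differ):

```typescript
// Hedged sketch: per-source 529 retry gating. Set members are the ones the
// text names; the real set contains more entries.
const FOREGROUND_529_RETRY_SOURCES = new Set(['repl_main_thread', 'compact', 'sdk'])
const MAX_529_RETRIES = 3

function shouldRetry529(querySource: string, consecutive529s: number): boolean {
  // Background sources fail immediately: retrying them would amplify load
  // during a capacity incident.
  if (!FOREGROUND_529_RETRY_SOURCES.has(querySource)) return false
  // Foreground sources retry up to MAX_529_RETRIES before degradation kicks in.
  return consecutive529s < MAX_529_RETRIES
}
```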

### Persistent Retry Mode

When the `CLAUDE_CODE_UNATTENDED_RETRY` environment variable is enabled (used internally at Anthropic), 429/529 errors retry infinitely, the backoff cap rises to 5 minutes, and the total wait time cap is 6 hours. During this period, a heartbeat yield is sent every 30 seconds to prevent the host environment from considering the session idle.

> **💡 Plain English**: It's like a package delivery retry strategy—the first time you're not home, the courier leaves a note and comes back in 30 minutes; the second time still no one, they wait an hour; the third time still no luck, they return to sender (`CannotRetryError`), or hand it off to another station (`FallbackTriggeredError`). But if it's important official documents (foreground queries), they'll try a few more times; for ordinary advertising flyers (background queries), they simply won't attempt redelivery.

---

## 3. Analytics/Telemetry Pipeline: From Event Generation to Delivery

### Architecture Overview

The analytics pipeline is coordinated by 8 files under `src/services/analytics/`. The design follows one core principle: **zero-dependency entry point + lazy binding**.

`index.ts` is the public entry point for the entire system, and it has **no import dependencies whatsoever** (the comments explicitly state "This module has NO dependencies to avoid import cycles"). Events are queued in memory before the sink is connected:

```typescript
// src/services/analytics/index.ts
const eventQueue: QueuedEvent[] = []
let sink: AnalyticsSink | null = null

export function logEvent(eventName: string, metadata: LogEventMetadata): void {
  if (sink === null) {
    eventQueue.push({ eventName, metadata, async: false })
    return
  }
  sink.logEvent(eventName, metadata)
}
```

### Event Flow Path

```
logEvent() → [Queue] → attachAnalyticsSink() → sink.logEvent()
                                                    ↓
                                              ┌─────┴─────┐
                                              ↓           ↓
                                          Datadog     1P Logger
                                     (stripProtoFields)  (Full payload)
```

1. **Generation**: Code anywhere calls `logEvent('tengu_xxx', {...})`
2. **Queuing**: If the sink isn't connected yet (early startup), events enter `eventQueue`
3. **Binding**: `initializeAnalyticsSink()` is called during startup, and the queue is drained asynchronously via `queueMicrotask`
4. **Routing**: `sink.ts`'s `logEventImpl` performs dual-channel dispatch
5. **Sampling**: `firstPartyEventLogger.ts`'s `shouldSampleEvent()` decides whether to drop events based on dynamic GrowthBook configuration
6. **PII protection**: `stripProtoFields()` removes `_PROTO_*`-prefixed fields before sending to Datadog (these fields carry PII and are only visible to authorized 1P backends)
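
The queue-then-drain pattern from steps 2–3 can be sketched as follows (types simplified; `attachSink` is an illustrative stand-in for the real `initializeAnalyticsSink` binding):

```typescript
// Sketch of the zero-dependency queue-then-drain pattern (types simplified).
type QueuedEvent = { eventName: string; metadata: Record<string, unknown> }
type AnalyticsSink = { logEvent: (e: QueuedEvent) => void }

const eventQueue: QueuedEvent[] = []
let sink: AnalyticsSink | null = null

function logEvent(eventName: string, metadata: Record<string, unknown>): void {
  const event = { eventName, metadata }
  if (sink === null) {
    eventQueue.push(event)  // early startup: buffer in memory
    return
  }
  sink.logEvent(event)
}

function attachSink(s: AnalyticsSink): void {
  sink = s
  // Drain asynchronously so attaching never blocks the caller.
  queueMicrotask(() => {
    while (eventQueue.length > 0) s.logEvent(eventQueue.shift()!)
  })
}
```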

### Safety Type Tag

A unique design is the `AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS` type: a `never`-based type alias that forces developers to explicitly `as`-cast strings before putting them into metadata, reminding them to confirm the string contains no code snippets or file paths.

### Killswitch

`sinkKillswitch.ts` provides the ability to remotely shut down individual sinks. The GrowthBook configuration `tengu_frond_boric` is a JSON object; setting `{ datadog: true }` immediately stops Datadog data transmission without requiring a new release.

> **💡 Plain English**: It's like a dashcam—silently recording everything (`logEvent`), storing footage on an SD card buffer (`eventQueue`), and automatically uploading to two places once WiFi is connected: one for the insurance company (Datadog, de-identified version), and one to your private cloud (1P, full version). You only review the footage after a major incident; normally, nobody looks at it.

> 🌍 **Competitor Comparison**: Almost every commercial developer tool has a telemetry system—VS Code uses Application Insights, while Cursor and Windsurf both collect anonymous usage data. Claude Code's unique aspects are the **dual-channel dispatch** (Datadog de-identified version + 1P full version) and the `_PROTO_*` field PII isolation mechanism—this is stricter than most tools' telemetry implementations. The ultra-long type name `AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS` is itself a "code as documentation" security practice.

---

## 4. OAuth Authentication Flow: PKCE + Token Refresh

### Authentication Architecture

`src/services/oauth/` implements the complete OAuth 2.0 Authorization Code Flow with PKCE (Proof Key for Code Exchange). The core class is `OAuthService` (`index.ts`).

### Login Flow

```
User executes /login
    ↓
OAuthService.startOAuthFlow()
    ↓
Generate codeVerifier + codeChallenge (SHA-256)
    ↓
Build authUrl (contains client_id, scope, code_challenge)
    ↓
├─ Automatic flow: openBrowser(automaticFlowUrl) → localhost:PORT/callback
└─ Manual flow: Display URL for user to copy-paste authorization code
    ↓
exchangeCodeForTokens() → POST /oauth/token
    ↓
fetchProfileInfo() → Get subscription type, Rate Limit Tier
    ↓
Return OAuthTokens { accessToken, refreshToken, expiresAt, scopes, subscriptionType }
```
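
The PKCE step at the top of the flow can be sketched with Node's crypto module. This is a hedged sketch of RFC 7636 parameter generation, not the literal `OAuthService` code:

```typescript
import { createHash, randomBytes } from 'node:crypto'

// Hedged sketch of PKCE parameter generation per RFC 7636; the function
// name is illustrative, not taken from the source.
function generatePkcePair(): { codeVerifier: string; codeChallenge: string } {
  // code_verifier: high-entropy random string, base64url-encoded
  const codeVerifier = randomBytes(32).toString('base64url')
  // code_challenge = BASE64URL(SHA-256(code_verifier)), the "S256" method
  const codeChallenge = createHash('sha256').update(codeVerifier).digest('base64url')
  return { codeVerifier, codeChallenge }
}
```

Only the challenge travels in the initial redirect; the verifier is revealed only at token exchange, so an intercepted authorization code is useless on its own.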

### Token Refresh

`refreshOAuthToken()` in `client.ts` implements intelligent refresh. A key optimization: when the full profile info is already cached locally (`billingType`, `accountCreatedAt`, `subscriptionCreatedAt`, `subscriptionType`, `rateLimitTier`), the refresh **skips the extra `/api/oauth/profile` network request**. Comments note this optimization saves approximately 7 million requests per day globally.

```typescript
// src/services/oauth/client.ts
export function isOAuthTokenExpired(expiresAt: number | null): boolean {
  if (expiresAt === null) return false
  const bufferTime = 5 * 60 * 1000  // 5-minute buffer
  return Date.now() + bufferTime >= expiresAt
}
```

Token expiration checks include a 5-minute buffer—it doesn't wait until the token actually expires, but proactively refreshes 5 minutes early to avoid mid-request token invalidation.

> 📚 **Course Connection**: PKCE (Proof Key for Code Exchange) is a mandatory requirement in the OAuth 2.1 standard—it solves the authorization code interception attack for public clients (CLI tools have no client_secret). In network security courses, this is a classic **interception-attack defense** based on proof of possession: the client generates a random code_verifier → hashes it with SHA-256 as code_challenge sent to the authorization server → presents the original code_verifier when exchanging the token → the server verifies the hashes match.

### Dual-Track Authentication

`shouldUseClaudeAIAuth()` checks whether the scope contains `CLAUDE_AI_INFERENCE_SCOPE`, distinguishing between the Console API Key authentication path and the Claude.ai OAuth authentication path.

> **💡 Plain English**: It's like a bank card swipe process—insert card (open browser to log in) → enter PIN (PKCE verification) → bank authorization (Token Exchange) → get temporary authorization code (accessToken). The authorization code has an expiration date, and the bank automatically renews it when it's about to expire (refreshToken), so you don't have to reinsert your card. Your membership tier (subscriptionType) determines your credit limit.

---

## 5. Rate Limiting: Detection, Backoff, and User Communication

### Detection Layer

Rate limiting detection is scattered across multiple locations. The core judgment in `withRetry.ts`:

- **429**: Standard rate limit. Not retried for Claude AI subscribers (non-Enterprise), because the limit window is typically several hours long
- **529**: Overload error. Triggers model degradation after 3 consecutive occurrences

```typescript
function shouldRetry(error: APIError): boolean {
  // 429 is not retried for subscribers (except Enterprise, who use PAYG)
  if (error.status === 429) {
    return !isClaudeAISubscriber() || isEnterpriseSubscriber()
  }
  // ...
}
```

### Fast Mode Degradation

When a user has Fast Mode enabled and encounters rate limiting:
- **Short delay (< 20 seconds)**: Stay in Fast Mode, wait and retry (preserving prompt cache)
- **Long delay (>= 20 seconds)**: Trigger cooldown, switch to standard speed model, minimum cooldown 10 minutes
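
The two-branch decision above can be sketched as follows (thresholds come from the text; type and identifier names are illustrative):

```typescript
// Thresholds from the text; names are illustrative, not the real identifiers.
const FAST_MODE_WAIT_THRESHOLD_MS = 20_000   // below this: wait in place
const MIN_COOLDOWN_MS = 10 * 60 * 1000       // at/above: cool down >= 10 min

type FastModeAction =
  | { kind: 'wait'; delayMs: number }        // stay fast, keep prompt cache
  | { kind: 'cooldown'; cooldownMs: number } // fall back to standard model

function onFastModeRateLimit(retryDelayMs: number): FastModeAction {
  if (retryDelayMs < FAST_MODE_WAIT_THRESHOLD_MS) {
    return { kind: 'wait', delayMs: retryDelayMs }
  }
  return { kind: 'cooldown', cooldownMs: Math.max(MIN_COOLDOWN_MS, retryDelayMs) }
}
```

The short-delay branch exists to preserve the prompt cache: switching models would invalidate it, so a brief wait is cheaper than a model change.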

### User Interface Communication

`RateLimitMessage.tsx` displays different action recommendations based on the user's subscription type:

- **Max 20x users**: Prompt `/extra-usage` or switch to an API-billed account
- **Team/Enterprise users**: Prompt `/extra-usage` to request increased quota from admin
- **Regular users**: Prompt `/upgrade` or `/extra-usage`

The rate limit reset header `anthropic-ratelimit-unified-reset` provides the server's exact reset time (Unix timestamp), which is used directly in persistent retry mode rather than estimating.

> **💡 Plain English**: It's like a highway toll booth traffic limit—when there are too many cars (429), the entrance ramp shows a red light to control entry speed. If the entire road is jammed (529), traffic police guide you to an alternate route (model degradation). The VIP lane (Enterprise) has a separate entrance unaffected by normal traffic limits. The toll booth electronic display (`RateLimitMessage`) shows real-time estimated wait times and alternatives.

---

## 6. Dream Mode: Memory Consolidation During Sleep

### What Is Dream

Dream Mode is Claude Code's **background memory organization mechanism**. After a user has been active for a while, the system automatically runs a "dreaming" sub-agent during gaps between sessions to review recent conversation logs and organize scattered information into structured long-term memories.

### Trigger Conditions

`src/services/autoDream/autoDream.ts` defines a strict gating chain (cheapest check first):

1. **Feature toggle**: `isAutoDreamEnabled()` returns true (via GrowthBook configuration `tengu_onyx_plover`)
2. **Not in special modes**: Not in KAIROS mode, not in remote mode
3. **Time gate**: >= 24 hours since last consolidation (default `minHours: 24`)
4. **Session gate**: >= 5 new sessions since last consolidation (default `minSessions: 5`)
5. **Lock gate**: No other process is currently performing consolidation (preventing concurrent conflicts)

```typescript
const DEFAULTS: AutoDreamConfig = {
  minHours: 24,
  minSessions: 5,
}
```
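
The five gates can be combined into a single cheapest-check-first predicate; the field names below are hypothetical stand-ins for the real checks:

```typescript
// Hedged sketch of the cheapest-check-first gating chain; field names are
// hypothetical stand-ins for the real checks.
type DreamGateInput = {
  enabled: boolean           // 1. GrowthBook tengu_onyx_plover
  inSpecialMode: boolean     // 2. KAIROS or remote mode
  hoursSinceLast: number     // 3. time gate
  sessionsSinceLast: number  // 4. session gate
  lockHeld: boolean          // 5. another process is consolidating
}

function shouldStartDream(
  i: DreamGateInput,
  cfg = { minHours: 24, minSessions: 5 },  // the DEFAULTS above
): boolean {
  if (!i.enabled) return false
  if (i.inSpecialMode) return false
  if (i.hoursSinceLast < cfg.minHours) return false
  if (i.sessionsSinceLast < cfg.minSessions) return false
  return !i.lockHeld
}
```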

### Execution Process

Dream uses `runForkedAgent()` to create a **perfect fork** sub-agent, sharing the parent session's prompt cache. The consolidation prompt (`consolidationPrompt.ts`) has 4 stages:

1. **Orient**: `ls` the memory directory, read the index file
2. **Gather**: Scan recent conversation logs, `grep` key information
3. **Consolidate**: Update or create memory files, merge duplicates, correct outdated information
4. **Prune**: Keep the index file under 200 lines

### UI Presentation

`src/tasks/DreamTask/DreamTask.ts` registers Dream as a visible background task, displayed as a pill in the bottom status bar. Users can press Shift+Down to view details, or manually abort (killing it rolls back the consolidation lock's mtime).

> **💡 Plain English**: It's like your phone's overnight automatic optimization—while you sleep (between sessions), the phone silently cleans cache, organizes photos, and backs up data. Claude Code's "dreaming" does the same: during idle time, it quietly reviews conversations from the past few days and organizes what it learned into notes. When you wake up (next conversation), it knows you better.

> 🌍 **Competitor Comparison**: Dream Mode (sleep memory consolidation) is an original design among AI coding tools. **Cursor**'s `.cursorrules` is just a static configuration file that doesn't auto-update; **Windsurf** has "Cascade Memory" but it's only effective within a single session; **Aider**'s `.aider.conf.yml` is completely manually maintained. Claude Code's Dream + Auto-Memory dual system achieves adaptive "the more you use it, the more it knows you" behavior—this is closer to the personal knowledge base concepts of Notion AI or Rewind, rather than the configuration files of traditional IDE plugins.

---

## 7. Auto-Memory Extraction: Automatic Notes from Conversation

### Difference from Dream

Dream is **inter-session** batch organization (24-hour intervals), while Auto-Memory is **in-conversation** real-time extraction—after each model response, it checks in the background whether there's information worth remembering.

### Mechanism

`src/services/extractMemories/extractMemories.ts` uses a closure to encapsulate all mutable state:

```typescript
export function initExtractMemories(): void {
  let lastMemoryMessageUuid: string | undefined   // Cursor: last processed message
  let inProgress = false                            // Prevent overlapping execution
  let turnsSinceLastExtraction = 0                  // Throttle counter
  let pendingContext: { ... } | undefined           // Delayed execution context
  // ...
}
```

Key flow:
1. After each model response, `handleStopHooks` triggers `executeExtractMemories()`
2. Check whether GrowthBook gate `tengu_passport_quail` is enabled
3. Check whether to skip (remote mode, sub-agent, auto-memory not enabled, etc.)
4. If the main agent has already written memory files directly (`hasMemoryWritesSince`), skip (mutual exclusion design)
5. Execute only every N turns (`tengu_bramble_lintel` controls frequency, default every turn)
6. Run a `runForkedAgent`, max 5 turns, only allowed to use limited tools

### Tool Permission Control

The extraction agent's permissions are strictly restricted—the `createAutoMemCanUseTool()` function only allows:
- Read operations: `FileRead`, `Grep`, `Glob` (unrestricted)
- Bash: read-only commands only (`ls`, `find`, `cat`, etc.)
- Write operations: `FileEdit`, `FileWrite` restricted to memory directory paths only

### Overlap Prevention

If a new extraction request is triggered while one is already executing, the system won't start a second one—instead, it stashes the new request's context (`pendingContext`), and performs a "trailing extraction" once the current one completes. This ensures no conversation content is lost in between.
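
The stash-and-trail pattern can be sketched with a single pending slot (simplified from the real closure state; `run` stands in for the forked extraction agent):

```typescript
// Sketch of the stash-and-trail pattern, simplified to one pending slot.
type ExtractionCtx = { messageUuid: string }

let inProgress = false
let pendingContext: ExtractionCtx | undefined

async function executeExtraction(
  ctx: ExtractionCtx,
  run: (c: ExtractionCtx) => Promise<void>,
): Promise<void> {
  if (inProgress) {
    pendingContext = ctx  // stash; the latest context wins
    return
  }
  inProgress = true
  try {
    await run(ctx)
  } finally {
    inProgress = false
    const next = pendingContext
    pendingContext = undefined
    if (next) await executeExtraction(next, run)  // trailing extraction
  }
}
```

Because the slot holds only the latest stashed context, intermediate requests coalesce into one trailing run rather than piling up.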

> **💡 Plain English**: It's like automatic meeting minutes generation—you don't need to take notes during the meeting; an AI secretary listens quietly on the side, and every so often automatically distills key points into a memo. When you explicitly say "remember this," it records immediately; when you don't, it also judges which information is worth keeping.

---

## 8. Notification System: Priority Queue Notifications

### Architecture Design

`src/context/notifications.tsx` implements a **priority queue** notification system. Notifications have four priority levels:

```typescript
type Priority = 'low' | 'medium' | 'high' | 'immediate'
```

### Core Data Structure

Each notification contains:
- `key`: Unique identifier, used for deduplication and merging
- `priority`: Priority level
- `invalidates`: Which old notifications this notification invalidates
- `fold`: A merge function when notifications with the same key appear repeatedly (similar to `Array.reduce`)
- `timeoutMs`: Display duration, default 8000ms

Notifications can be plain text (`TextNotification`) or JSX components (`JSXNotification`).

### Processing Flow

1. `addNotification()` is called
2. If `immediate` priority: immediately clear the current notification and display directly
3. Otherwise: enqueue, `processQueue()` pulls the next one by priority
4. After display completes (timeout expires), automatically pop the next one
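
A minimal sketch of the queue semantics described above, covering `invalidates`, `fold`, and priority-ordered dequeue (real notifications also carry `timeoutMs` and JSX payloads):

```typescript
// Simplified sketch of the key/priority/fold/invalidates semantics.
type Priority = 'low' | 'medium' | 'high' | 'immediate'
const PRIORITY_ORDER: Priority[] = ['immediate', 'high', 'medium', 'low']

type Notification = {
  key: string
  priority: Priority
  text: string
  invalidates?: string[]  // keys of old notifications this one revokes
  fold?: (prev: Notification, next: Notification) => Notification
}

const notificationQueue: Notification[] = []

function addNotification(n: Notification): void {
  // New notifications revoke the outdated ones they invalidate.
  for (const key of n.invalidates ?? []) {
    const i = notificationQueue.findIndex((q) => q.key === key)
    if (i >= 0) notificationQueue.splice(i, 1)
  }
  // Repeated notifications with the same key merge via fold.
  const i = notificationQueue.findIndex((q) => q.key === n.key)
  if (i >= 0 && n.fold) {
    notificationQueue[i] = n.fold(notificationQueue[i], n)
    return
  }
  notificationQueue.push(n)
}

function nextNotification(): Notification | undefined {
  for (const p of PRIORITY_ORDER) {
    const i = notificationQueue.findIndex((q) => q.priority === p)
    if (i >= 0) return notificationQueue.splice(i, 1)[0]
  }
  return undefined
}
```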

### Actual Usage Scenarios

Take `useFastModeNotification.tsx` as an example—Fast Mode state changes are communicated to the user through the notification system:
- Organization enabled Fast Mode: `"Fast mode is now available · /fast to turn on"`
- Organization disabled Fast Mode: `"Fast mode has been disabled by your organization"`
- Cooldown ended after rate limiting: automatic recovery notification

> **💡 Plain English**: It's like your phone's notification center—messages from different apps are managed centrally, important messages (`immediate`) pop up directly as push notifications, while normal messages (`low`) wait quietly in the notification shade. Similar messages are merged (`fold`), and new messages automatically revoke outdated old ones (`invalidates`).

---

## 9. State Management Architecture: Minimalist Reactive Store

### Store Core

`src/state/store.ts` is the core of the entire state management system—just 34 lines of code, implementing a minimalist reactive Store:

```typescript
export function createStore<T>(initialState: T, onChange?: OnChange<T>): Store<T> {
  let state = initialState
  const listeners = new Set<Listener>()

  return {
    getState: () => state,
    setState: (updater: (prev: T) => T) => {
      const prev = state
      const next = updater(prev)
      if (Object.is(next, prev)) return   // Skip if reference-equal
      state = next
      onChange?.({ newState: next, oldState: prev })
      for (const listener of listeners) listener()
    },
    subscribe: (listener: Listener) => {
      listeners.add(listener)
      return () => listeners.delete(listener)
    },
  }
}
```

Key design: `Object.is(next, prev)` reference equality check—if the updater returns the same object reference, no listeners are triggered. This requires all state updates to return new objects (immutable updates).

> 📚 **Course Connection**: These 34 lines are a textbook implementation of the **Pub-Sub pattern**. Compared to Redux (2,500+ lines), MobX (10,000+ lines), and Zustand (~600 lines), Claude Code chose the simplest option—no middleware, no devtools, no time-travel debugging. This choice reflects an engineering judgment: CLI tool state management doesn't need Web application-level complexity.

### AppState Structure

`src/state/AppStateStore.ts` defines the massive `AppState` type, wrapped with `DeepImmutable<>` to ensure deep immutability. Key fields include:

| Field | Type | Purpose |
|------|------|---------|
| `settings` | `SettingsJson` | User settings |
| `mainLoopModel` | `ModelSetting` | Current model |
| `toolPermissionContext` | `ToolPermissionContext` | Tool permission context |
| `notifications` | `{queue, current}` | Notification queue |
| `kairosEnabled` | `boolean` | Assistant mode toggle |
| `speculation` | `SpeculationState` | Speculative execution state |
| `tasks` | `TaskState[]` | Background task list |

### React Integration

`AppState.tsx` injects the Store into the component tree via React Context. It uses the React Compiler (`react/compiler-runtime`) for automatic memoization—creating compile-time cached arrays via the `_c()` function, avoiding manual `useMemo`/`useCallback`.

`useAppState(selector)` is combined with `useSyncExternalStore` to achieve selective subscription: only components whose selector return value changes will re-render.
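
Outside React, the same selective-subscription behavior can be sketched on top of a `createStore`-style store; this mirrors what `useAppState` plus `useSyncExternalStore` achieve, minus the rendering (`subscribeSelector` is an illustrative name, not from the source):

```typescript
// A store matching the createStore shape above, plus a selector-based
// subscription that fires only when the selected slice changes.
type Listener = () => void
type Store<T> = {
  getState: () => T
  setState: (updater: (prev: T) => T) => void
  subscribe: (listener: Listener) => () => void
}

function createStore<T>(initial: T): Store<T> {
  let state = initial
  const listeners = new Set<Listener>()
  return {
    getState: () => state,
    setState: (updater) => {
      const next = updater(state)
      if (Object.is(next, state)) return  // reference-equal: skip
      state = next
      for (const l of listeners) l()
    },
    subscribe: (l) => {
      listeners.add(l)
      return () => listeners.delete(l)
    },
  }
}

function subscribeSelector<T, S>(
  store: Store<T>,
  selector: (state: T) => S,
  onSliceChange: (selected: S) => void,
): () => void {
  let last = selector(store.getState())
  return store.subscribe(() => {
    const next = selector(store.getState())
    if (Object.is(next, last)) return  // slice unchanged: no "re-render"
    last = next
    onSliceChange(next)
  })
}
```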

> **💡 Plain English**: It's like a shopping mall's central control room—status from all floors (foot traffic, temperature, security, power) converges onto a central panel (`AppState`). Any change notifies the relevant areas (`listener`), but only floors that actually changed need to respond (`Object.is` check). The panel itself doesn't make decisions; it only ensures everyone sees the same real-time state.

---

## 10. LSP Integration: Language Server Protocol Bridging

### Architecture Layers

`src/services/lsp/` implements a complete LSP client management system, divided into three layers:

1. **LSPClient** (`LSPClient.ts`): Low-level communication with a single LSP server. Uses `vscode-jsonrpc` to interact with the server process via stdio
2. **LSPServerInstance** (`LSPServerInstance.ts`): Wraps `LSPClient` with start/stop/file synchronization logic
3. **LSPServerManager** (`LSPServerManager.ts`): Manages multiple server instances, routing requests by file extension
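
Extension-based routing in the Manager layer can be sketched as follows (a simplified illustration; the real Manager also tracks server lifecycle and the `openedFiles` mapping):

```typescript
// Illustrative sketch of extension-based routing in the Manager layer.
type ServerEntry = { name: string; extensions: string[] }

function routeByExtension(servers: ServerEntry[], filePath: string): string | undefined {
  const dot = filePath.lastIndexOf('.')
  if (dot < 0) return undefined
  const ext = filePath.slice(dot)  // e.g. '.ts'
  // Iterate in reverse so later-registered servers win on name conflicts,
  // mirroring the plugin override order in config.ts.
  for (const server of [...servers].reverse()) {
    if (server.extensions.includes(ext)) return server.name
  }
  return undefined
}
```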

### Server Discovery and Configuration

`config.ts` shows that LSP servers are **loaded only through plugins**—there is no user/project-level manual configuration:

```typescript
export async function getAllLspServers(): Promise<{
  servers: Record<string, ScopedLspServerConfig>
}> {
  const { enabled: plugins } = await loadAllPluginsCacheOnly()
  // Each plugin loads its LSP server configuration in parallel
  // Later-loaded plugins override earlier-loaded ones on name conflicts
}
```

### Diagnostic Push

`LSPDiagnosticRegistry.ts` receives server-pushed diagnostic information (type errors, lint warnings, etc.), applies throttling (max 10 diagnostics per file, 30 globally), and LRU deduplication (tracking up to 500 files), then injects the results as Attachments into the next conversation round. This lets Claude "see" IDE error hints.

### File Synchronization

The Manager maintains an `openedFiles` mapping (URI → server name). When the user edits files through Claude Code, it automatically sends `didOpen`, `didChange`, `didSave`, and `didClose` notifications to the corresponding LSP server, keeping the language server aware of file state.

> **💡 Plain English**: It's like hiring expert consultants—Claude itself isn't a TypeScript compiler or Python type checker, but it can send code to expert consultants (LSP servers) and have them point out "there's a type error on line 42." Multiple experts divide work by specialty (TypeScript server handles `.ts` files, Python server handles `.py` files), with a dedicated secretary (Manager) responsible for distribution.

---

## 11. Team Memory Synchronization: Cross-Device Team Knowledge Base

### Sync Protocol

`src/services/teamMemorySync/index.ts` implements a bidirectional synchronization system based on HTTP API:

- **Pull**: `GET /api/claude_code/team_memory?repo={owner/repo}` fetches server-side content, **server overwrites local** (server wins)
- **Push**: `PUT` only uploads entries whose content hash has changed (delta upload); server uses upsert semantics
- **Deletes don't propagate**: Deleting a local file won't delete server-side data; it will be restored on the next pull
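
The hash-based delta push can be sketched as follows (the entry shape and hash choice are illustrative, not taken from the source):

```typescript
import { createHash } from 'node:crypto'

// Sketch of hash-based delta upload: only entries whose content hash differs
// from the last pushed hash are included in the PUT body.
type ChangedEntry = { path: string; content: string; hash: string }

function selectChangedEntries(
  local: Record<string, string>,             // path -> file content
  lastPushedHashes: Record<string, string>,  // path -> hash at last push
): ChangedEntry[] {
  const changed: ChangedEntry[] = []
  for (const [path, content] of Object.entries(local)) {
    const hash = createHash('sha256').update(content).digest('hex')
    if (lastPushedHashes[path] !== hash) changed.push({ path, content, hash })
  }
  // Paths present only in lastPushedHashes (local deletions) are ignored:
  // deletes don't propagate, matching the protocol above.
  return changed
}
```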

### Security Guard

`secretScanner.ts` performs client-side secret scanning before upload. It embeds a curated set of gitleaks rules (from an MIT-licensed public configuration), selecting only high-confidence rules with unique prefixes:

```typescript
const SECRET_RULES: SecretRule[] = [
  { id: 'aws-access-token', source: '\\b((?:A3T[A-Z0-9]|AKIA|ASIA|ABIA|ACCA)[A-Z2-7]{16})\\b' },
  { id: 'gcp-api-key', source: '\\b(AIza[\\w-]{35})...' },
  { id: 'anthropic-api-key', source: `\\b(${ANT_KEY_PFX}03-...)` },
  // GitHub PAT, Slack tokens, Stripe keys, etc.
]
```

Note that `ANT_KEY_PFX` is built with `['sk', 'ant', 'api'].join('-')` rather than written as a literal, preventing Anthropic's own API key prefix from appearing verbatim in the shipped code.

### Watcher Mechanism

`watcher.ts` uses `fs.watch()` to monitor the team memory directory, triggering push after a 2-second debounce. It includes permanent failure detection—if push fails for unrecoverable reasons (no OAuth, 404, etc.), the watcher suppresses subsequent pushes to prevent infinite retries (a real case involved an OAuth-less device emitting 167K push events over 2.5 days).

### Capacity Limits

- Single file max 250KB (`MAX_FILE_SIZE_BYTES`)
- PUT request body max 200KB (`MAX_PUT_BODY_BYTES`), sent in batches if exceeded
- Server-side entry count limit is dynamically configured per organization by GrowthBook

> **💡 Plain English**: It's like a company internal knowledge base—team members (collaborators on the same Git repository) share a set of documents. You write an experience note and upload it to the server; when a colleague opens Claude Code, it automatically syncs down. Before uploading, a security officer (`secretScanner`) flips through every page to make sure no passwords or keys got mixed in.

---

## 12. GrowthBook Integration: Remote Configuration and A/B Testing

### Positioning

If `feature()` is the compile-time switch (Section 1), GrowthBook is the **runtime remote control**. It allows Anthropic to dynamically control feature toggles, adjust parameter configurations, and run A/B experiments without shipping a new release.

### User Attributes

`src/services/analytics/growthbook.ts` defines the user attributes sent to GrowthBook:

```typescript
export type GrowthBookUserAttributes = {
  id: string                    // User ID
  sessionId: string             // Session ID
  deviceID: string              // Device ID
  platform: 'win32' | 'darwin' | 'linux'
  organizationUUID?: string     // Organization ID
  subscriptionType?: string     // Subscription type
  rateLimitTier?: string        // Rate limit tier
  appVersion?: string           // App version
  // ...
}
```

These attributes are used for targeting—for example, enabling a feature only for users with `subscriptionType: 'max'`, or splitting A/B groups by `platform`.

### Five-Layer Value Retrieval Priority

When retrieving a feature's value, priority from highest to lowest is:

1. **Environment variable override**: `CLAUDE_INTERNAL_FC_OVERRIDES` (Ant users only, for evaluation frameworks)
2. **Config override**: Local overrides set in the `/config` Gates tab (Ant users only)
3. **Remote value**: `remoteEvalFeatureValues` Map (returned by GrowthBook server)
4. **Disk cache**: `cachedGrowthBookFeatures` (value from the previous session)
5. **Default value**: Fallback specified by the caller

### Name Obfuscation

The source code uses heavily obfuscated feature key names, prefixed with `tengu_` (tengu is the internal codename for Claude Code):

| Key | Actual Function |
|-----|-----------------|
| `tengu_passport_quail` | Auto-Memory extraction toggle |
| `tengu_onyx_plover` | Dream mode configuration |
| `tengu_bramble_lintel` | Memory extraction frequency control |
| `tengu_frond_boric` | Analytics sink killswitch |
| `tengu_event_sampling_config` | Event sampling configuration |
| `tengu_log_datadog_events` | Datadog event logging toggle |
| `tengu_moth_copse` | Memory index skip toggle |
| `tengu_disable_keepalive_on_econnreset` | Disable keep-alive on connection reset |

### Experiment Exposure Logging

When a user is assigned to an experiment, `logExposureForFeature()` records the experiment exposure event via 1P (First Party) logs, including `experimentId`, `variationId`, and other data. Each feature is logged only once per session (`loggedExposures` Set deduplication).
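The once-per-session deduplication is a classic Set guard. A minimal sketch, with an assumed `emit` callback standing in for the 1P log sink (the payload field names are illustrative):

```typescript
// Session-scoped dedup set, as described in the text.
const loggedExposures = new Set<string>();

// Illustrative sketch of logExposureForFeature: each feature key is
// logged at most once per session, no matter how often it is evaluated.
function logExposureForFeature(
  featureKey: string,
  experimentId: string,
  variationId: string,
  emit: (event: Record<string, string>) => void,
): void {
  if (loggedExposures.has(featureKey)) return; // already logged this session
  loggedExposures.add(featureKey);
  emit({ type: "experiment_exposure", featureKey, experimentId, variationId });
}
```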

### Refresh Listeners

`onGrowthBookRefresh()` allows other systems to register callbacks—when GrowthBook values are updated, long-lived objects that depend on these values are rebuilt. For example, the 1P Event Logger reads batch configuration at initialization, and after a GrowthBook refresh needs to rebuild the `LoggerProvider` with the new configuration.
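The listener registry can be sketched as a plain callback list with an unsubscribe handle. This is an assumed shape (the return-an-unsubscriber convention and `notifyRefresh` name are illustrative), but it shows the pattern: long-lived consumers like the 1P Event Logger register a rebuild callback and re-read configuration when it fires.

```typescript
type RefreshListener = () => void;

// Module-level registry of refresh callbacks (illustrative sketch).
const listeners: RefreshListener[] = [];

// Register a callback to run after each GrowthBook refresh; returns an
// unsubscribe function so consumers can clean up on teardown.
function onGrowthBookRefresh(listener: RefreshListener): () => void {
  listeners.push(listener);
  return () => {
    const index = listeners.indexOf(listener);
    if (index >= 0) listeners.splice(index, 1);
  };
}

// Invoked by the GrowthBook client after new values arrive.
function notifyRefresh(): void {
  for (const listener of listeners) listener();
}
```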

> **💡 Plain English**: It's like a restaurant chain headquarters remotely controlling franchise menus through a central system—headquarters can decide at any time "Beijing locations get the new item" or "50% of Shanghai users see the new interface." Franchise locations (Claude Code clients) periodically sync the latest instructions, while caching the previous ones locally in case of network disconnection. Each location can also override headquarters settings locally (limited to internal test locations only).

---

## Summary: Quick Reference Table of the 12 Cross-Cutting Concerns

| # | Pattern | Core Files | One-Sentence Description |
|---|---------|-----------|--------------------------|
| 1 | Feature Flag | `bun:bundle` → `feature()` | Compile-time feature gating; disabled features disappear entirely from the binary |
| 2 | Error Handling | `src/services/api/withRetry.ts` | Exponential backoff + 529 degradation + persistent retry mode |
| 3 | Analytics | `src/services/analytics/` | Zero-dependency entry point + lazy binding + dual-channel dispatch (Datadog + 1P) |
| 4 | OAuth | `src/services/oauth/` | PKCE authorization code flow + smart token refresh (saves ~7M requests/day) |
| 5 | Rate Limiting | `withRetry.ts` + `RateLimitMessage.tsx` | Differentiated retry by subscription type + Fast Mode cooldown degradation |
| 6 | Dream Mode | `src/services/autoDream/` + `src/tasks/DreamTask/` | 24h timer + 5-session threshold → forked agent organizes memories |
| 7 | Auto-Memory | `src/services/extractMemories/` | Forked agent extracts after each conversation round, overlap prevention + mutual exclusion with main agent writes |
| 8 | Notification | `src/context/notifications.tsx` | Priority queue + fold merging + invalidates revocation mechanism |
| 9 | State Mgmt | `src/state/store.ts` (34 lines) | Minimalist pub-sub + DeepImmutable + React Compiler integration |
| 10 | LSP | `src/services/lsp/` | Three-layer architecture (Client → Instance → Manager), plugin-driven |
| 11 | Team Memory | `src/services/teamMemorySync/` | Server-wins sync + client-side secret scanning + batched PUT |
| 12 | GrowthBook | `src/services/analytics/growthbook.ts` | Runtime remote control with five-layer value priority: env override > config override > remote value > disk cache > default |

These 12 patterns constitute Claude Code's "hidden infrastructure." They have no flashy UI, no dedicated command entry points, but behind every API call, every notification display, and every memory save, these cross-cutting concerns are working silently. Understanding them is essential to understanding why Claude Code is not a simple "LLM chat client," but an **engineered client-side system**—it isn't distributed (it's a single process), but its internal complexity (multiple subsystems coordinating through events and shared state, background tasks executing in parallel, interacting with multiple external services) has reached the level of a medium-sized backend service.
