# Complete Analysis of the Permission System

The permission system is the core hub of Claude Code's security model — every tool call (a "tool call" means the AI requests to execute a specific action, such as reading a file, running a command, or editing code) must pass through its adjudication. It determines what the AI can do, what it cannot do, and what requires human confirmation. This chapter walks you through the complete three-stage permission judgment chain (deny-first → mode evaluation → default ask), explains the design intent behind the six permission modes, and shows how the AI classifier in Auto mode substitutes for human decision-making. We will also analyze in depth a design of significant industry value: the remote permission kill switch.

> **Source code location**: `src/utils/permissions/` (24 files), `src/hooks/toolPermission/`, `src/tools/tools.ts`

> 💡 **Plain English**: The permission system is like a residential access-control system — someone with a master key (bypassPermissions) can come and go freely, ordinary visitors (default mode) must register and get confirmation at the guard desk, and delivery drivers (auto mode) are vetted by an AI guard. Certain areas (bypass-immune paths, such as electrical rooms and security offices) cannot be entered even with a master key. And the property management headquarters (Anthropic's servers) retains the power to remotely lock the doors — if a security risk is discovered, they can remotely revoke all master keys.

> 🌍 **Industry Context**: Permission control for AI coding tools is a rapidly evolving field, and different products embody distinctly different security philosophies.
>
> | Tool | Permission Model | Security Philosophy | Key Difference |
> |------|-----------------|---------------------|----------------|
> | **Claude Code** | Multi-step judgment chain + AI classifier + five-level rule hierarchy | Soft judgment + fine-grained permissions | Standalone CLI; must build all security layers itself; AI classifier makes dynamic judgments |
> | **Cursor** | Trusted workspace + tool-level approval + yolo allowlist/blocklist | IDE sandbox as backstop | Relies on VS Code extension sandbox as the underlying security guarantee |
> | **Codex (OpenAI)** | Three-tier mode (suggest/auto-edit/full-auto) + OS-level network egress restrictions | Hard isolation + broad permissions | OS-level egress rules replaced early environment-variable controls; underlying Rust rewrite brings memory safety |
> | **Aider** | `--auto-commits` + Architect mode (reasoning/edit separation) | Trust the user | AST-level Repo Map reduces context risk, but provides no AI classifier |
> | **GitHub Copilot** | Agent Mode fully GA + enterprise-grade MCP registry | Compliance-efficiency balance | Built-in Explore/Plan/Task dedicated agents; deeply integrated with enterprise security firewall policies |
>
> The most noteworthy comparison is the divergence in security philosophy between **Claude Code and Codex**: Codex's full-auto mode relies on OS-level network isolation as a hard backstop — even if the AI makes a wrong decision, the system boundary can contain it; Claude Code's auto mode relies on an AI classifier for soft judgment, with no hard isolation layer, but in exchange it gets finer-grained permission control and lower environmental requirements. This is the classic **hard isolation + broad permissions vs. soft judgment + fine-grained permissions** trade-off.

---

## System Overview

The permission system (`src/utils/permissions/`) decides whether each tool call is permitted. It is the core of Claude Code's security model, with the main logic in `permissions.ts` running to roughly 1,500 lines.

**Core entry point**: `hasPermissionsToUseTool(tool, input, context)` — this function must be called before every tool call.

**Result types** (every permission evaluation yields exactly one of the following three results; the comments explain their meaning):
```typescript
type PermissionResult =
  | { type: 'allow' }                 // Allowed, no confirmation dialog shown
  | { type: 'ask'; reason: string }   // Requires user confirmation
  | { type: 'deny'; reason: string }  // Denied, error shown
```
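A caller that receives one of these results has to branch on the `type` tag. A minimal sketch — the union is re-declared with explicit `reason` types so the snippet compiles standalone, and `describe` is an illustrative helper, not a function from the source:

```typescript
// Discriminated union matching the chapter's PermissionResult shape.
type PermissionResult =
  | { type: 'allow' }                  // run the tool immediately
  | { type: 'ask'; reason: string }    // surface a confirmation dialog
  | { type: 'deny'; reason: string };  // abort with an error message

// Hypothetical dispatcher: TypeScript narrows `result` inside each case.
function describe(result: PermissionResult): string {
  switch (result.type) {
    case 'allow':
      return 'running tool';
    case 'ask':
      return `confirm: ${result.reason}`;
    case 'deny':
      return `blocked: ${result.reason}`;
  }
}
```

The discriminated-union shape means the compiler forces every caller to handle all three outcomes — there is no way to forget the `deny` branch.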

---

## The Six Permission Modes

```typescript
// PermissionMode.ts
type PermissionMode =
  | 'default'            // Standard mode: prompt when confirmation needed
  | 'plan'               // Plan mode: read-only operations, no writes allowed
  | 'acceptEdits'        // Auto-accept file edits; other ops still require confirmation
  | 'bypassPermissions'  // Bypass all permissions (except bypass-immune ones)
  | 'dontAsk'            // Do not ask, but still respect always-allow/always-deny rules
  | 'auto'               // Auto mode: AI classifier replaces user confirmation
```

Among these, `plan`, `acceptEdits`, `bypassPermissions`, and `auto` are the four modes directly selectable by the user; `dontAsk` is used by internal sub-agents (a "sub-agent" is an AI-spawned clone used for parallel sub-task processing), and `default` is the fallback when no mode is specified.

**How to enter each mode**:
- `plan`: via the `/plan` command or the `--plan` CLI flag
- `acceptEdits`: `acceptEditsContext` configuration, or IDE integration mode
- `bypassPermissions`: `--dangerously-skip-permissions` flag (requires non-production environment), **but subject to remote killswitch control** — even if this flag is passed, if Anthropic activates the `tengu_disable_bypass_permissions_mode` switch, bypass mode will be downgraded to default mode (see the "Remote Permission Kill Switch" section below)
- `dontAsk`: used by internal sub-agents (e.g., hook agent) — no pop-ups but rules are still enforced
- `auto`: `--auto` flag, or Yolo mode

---

## Permission Judgment Chain (Three Stages)

`hasPermissionsToUseToolInner()` implements a multi-step judgment chain. The source code numbers the steps 1a-1g, 2a-2b, and 3, for a total of **10 sub-steps**, grouped into three logical stages. Multi-step permission check chains are standard practice in enterprise software — just as entering a large hospital requires passing through temperature checks, registration, and triage at multiple stations, each responsible for a specific safety judgment. Claude Code's real innovation lies in **embedding an AI classifier into the permission chain** (letting the AI itself judge whether an operation is safe) and **remote killswitch control** (Anthropic headquarters can remotely disable certain permissions).

> 💡 **Plain English**: The core logic of this judgment chain is "check the blacklist first, then the whitelist, and if neither matches, ask a human" — just like the security check when you enter a building: first see if you're on the banned list (deny), then see if you have a pre-registered pass (allow), and if neither, ask security for confirmation.

### Stage One: Denial and Safety Checks (Steps 1a - 1g)

**1a. Global deny rules**: Check `deny` rules from all sources; a hit means immediate denial.

**1b. Global ask rules**: Check `ask` rules — even auto mode will force a popup. This is the administrator's means of "mandatory human approval" for specific tools.

**1c. Tool self-check (checkPermissions)**: Invoke the tool's own `checkPermissions()` method. Each tool can perform additional checks based on input parameters:
- FileEditTool: checks if the path is in a sensitive directory like `.git/`
- BashTool: checks command patterns (bashPermissions.ts)
- WebFetchTool: checks if the URL is within allowed ranges

**1d. Tool self-check returns deny**: If the tool itself decides to deny, immediate denial.

**1e. Requires user interaction**: Checks if the tool is marked `requiresUserInteraction` (e.g., the permission dialog itself), forcing an ask if true.

**1f. Content-specific ask rules**: User-configured content-level ask rules (e.g., `Bash(npm publish:*)`) remain effective even in bypass mode. This is a critical security design: even if the user enables `--dangerously-skip-permissions`, their own ask rules will still prompt.

**1g. Bypass-Immune Safety Check (Critical!)**: Certain paths and operations must be asked about in **any mode**:
- `.git/config`, `.git/hooks/` — modifying these can inject malicious code into git operations
- `.claude/settings.json`, `.claude/settings.local.json` — modifying these can tamper with Claude Code's own permission rules
- System-sensitive directories

The common trait of these paths is: **modifying them can affect Claude Code's own security behavior** (meta-configuration files). This layer is hard-coded and not controlled by any configuration — it guards the permission system itself from being bypassed.
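A hard-coded check of this kind reduces to matching a path against a fixed pattern list. A minimal sketch, assuming the paths named above — the function name and exact patterns are illustrative, not copied from the source:

```typescript
// Hard-coded bypass-immune patterns: meta-configuration files whose
// modification could alter Claude Code's own security behavior.
const BYPASS_IMMUNE_PATTERNS: RegExp[] = [
  /(^|\/)\.git\/config$/,
  /(^|\/)\.git\/hooks\//,
  /(^|\/)\.claude\/settings(\.local)?\.json$/,
];

function isBypassImmune(filePath: string): boolean {
  // Normalize Windows separators so both path styles match.
  const normalized = filePath.replace(/\\/g, '/');
  return BYPASS_IMMUNE_PATTERNS.some((p) => p.test(normalized));
}
```

Because the list is a constant rather than configuration, no settings file — including the ones the list protects — can remove an entry from it.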

### Stage Two: Mode Evaluation (Steps 2a - 2b)

**2a. bypassPermissions mode**: If the current mode is `bypassPermissions`, allow directly (provided all safety checks in Stage One have passed).

**2b. Global allow rules**: Check `allow` rules in `alwaysAllowRules`. Rules are sorted by source priority, and **the first match wins** (first-match-wins).

### Stage Three: Default Behavior (Step 3)

If none of the above yields a definitive result, return `{ type: 'ask' }` — in default mode this pops up a dialog to ask the user; in auto mode it hands off to the AI classifier.
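The ordering constraints of the three stages can be condensed into a skeleton like this — every field of `Checks` is a stand-in for the corresponding sub-step, and only the ordering mirrors the source:

```typescript
// Condensed skeleton of the three-stage chain. All inputs are pre-computed
// stand-ins for the real checks; the real chain interleaves far more context.
type Verdict = 'allow' | 'ask' | 'deny';

interface Checks {
  matchesDenyRule: boolean;   // 1a
  matchesAskRule: boolean;    // 1b / 1f
  toolSelfCheck: Verdict;     // 1c / 1d
  isBypassImmune: boolean;    // 1g
  mode: 'default' | 'bypassPermissions';
  matchesAllowRule: boolean;  // 2b
}

function evaluate(c: Checks): Verdict {
  // Stage One: denial and safety checks run before any mode can allow.
  if (c.matchesDenyRule) return 'deny';
  if (c.matchesAskRule) return 'ask';
  if (c.toolSelfCheck === 'deny') return 'deny';
  if (c.isBypassImmune) return 'ask';
  // Stage Two: mode evaluation, then allow rules.
  if (c.mode === 'bypassPermissions') return 'allow';
  if (c.matchesAllowRule) return 'allow';
  // Stage Three: default to asking.
  return 'ask';
}
```

The skeleton makes the order-sensitivity concrete: because Stage One runs first, a bypass-immune path still asks even in `bypassPermissions` mode, and a deny rule wins even when an allow rule also matches.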

---

## Permission Rule Priority and Conflict Resolution

When multiple sources have matching rules with contradictory outcomes (e.g., project-level allow vs. user-level deny), Claude Code adopts a **first-source-wins** strategy — it traverses a fixed source-priority order, and the first matching rule is the final result.

The traversal order of `PERMISSION_RULE_SOURCES` in the source code is:

```
userSettings → projectSettings → localSettings → flagSettings → policySettings → cliArg → command → session
(user settings → project settings → local settings → remote flags → enterprise policy → CLI args → command-level → current session)
```

> 💡 For example: if your personal settings (userSettings) allow Claude to read all files, but project settings (projectSettings) prohibit reading `.env` files, the system searches left-to-right — it sees your personal "allow" first and adopts it immediately, without looking further.

However, deny rules and allow rules are **checked separately**, and deny checks come before allow checks (Step 1a in Stage One precedes Step 2b in Stage Two). This means:

- **Deny absolutely trumps allow**: Any deny rule from any source takes effect before allow rules are even checked
- **Within the same rule type**: The first matching source in `PERMISSION_RULE_SOURCES` wins
- **Session-level rules** are last in the traversal order, but because a user's temporary decision usually represents their most current intent, the system adds rules to the session level via the "Just this time" option in the permission popup

> 💡 **Plain English**: This is like a company's approval process — "you may not do this" (deny) always outranks "you may do this" (allow). Check the blacklist first, then the whitelist. If your name is on the blacklist, nothing written on the whitelist will help.

For `policy` (enterprise policy) level rules, although not first in the traversal order, they have a special property: **they cannot be overridden by the user**. Deny rules set by enterprise administrators via `managed-settings.json` or a remote API cannot be lifted through any local configuration. This reflects Claude Code's product positioning serving both individual developers and enterprise customers — individuals enjoy high freedom, while enterprise administrators retain ironclad authority.
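The first-source-wins strategy itself is a simple loop over the fixed source order. A sketch, assuming rules are grouped by source — the real rule objects carry more metadata than plain strings:

```typescript
// Fixed traversal order from the chapter; first matching source wins.
const PERMISSION_RULE_SOURCES = [
  'userSettings', 'projectSettings', 'localSettings', 'flagSettings',
  'policySettings', 'cliArg', 'command', 'session',
] as const;

type Source = (typeof PERMISSION_RULE_SOURCES)[number];

// Returns the source whose rule decided the outcome, or undefined if no
// source has a matching rule for this target.
function firstMatch(
  rulesBySource: Partial<Record<Source, string[]>>,
  target: string,
): Source | undefined {
  for (const source of PERMISSION_RULE_SOURCES) {
    if (rulesBySource[source]?.includes(target)) return source;
  }
  return undefined;
}
```

Note that this loop runs separately for deny rules and allow rules — the deny pass completes (Step 1a) before the allow pass even begins (Step 2b), which is what gives deny its absolute precedence.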

---

## Remote Permission Kill Switch (bypassPermissionsKillswitch)

This is the most industry-valuable design in the entire chapter, deserving in-depth analysis.

### Mechanism Analysis

`bypassPermissionsKillswitch.ts` implements a remote control switch: even if the user launches Claude Code with `--dangerously-skip-permissions`, if the Anthropic server activates the killswitch, bypass mode will be **forcibly downgraded to default mode**.

```typescript
// bypassPermissionsKillswitch.ts — core logic
export async function checkAndDisableBypassPermissionsIfNeeded(
  toolPermissionContext, setAppState
): Promise<void> {
  // Execute only once before the first query
  if (bypassPermissionsCheckRan) return;
  bypassPermissionsCheckRan = true;

  const shouldDisable = await shouldDisableBypassPermissions();
  if (!shouldDisable) return;

  // Forced downgrade: mode → default, isBypassPermissionsModeAvailable → false
  setAppState(prev => ({
    ...prev,
    toolPermissionContext: createDisabledBypassPermissionsContext(
      prev.toolPermissionContext,
    ),
  }));
}
```

Technical implementation details:

1. **Remote configuration service**: Controlled by the Statsig/GrowthBook feature gate `tengu_disable_bypass_permissions_mode`. This is not a simple HTTP request; it leverages the GrowthBook SDK's **cache + async refresh** mechanism — when a local cache exists, it uses the cache (millisecond-level); when there is no cache, it asynchronously requests remote configuration.
2. **Execution timing**: Checked once before the user sends their first query (the `bypassPermissionsCheckRan` flag ensures it runs only once). After a login state change (the `/login` command), the flag is reset and re-checked — because different organizations may have different killswitch states.
3. **Downgrade behavior**: It does not directly disable tools; instead, it downgrades `bypassPermissions` mode to `default` mode and sets `isBypassPermissionsModeAvailable` to `false`, preventing the user from re-entering bypass mode through mode switching.
4. **Offline behavior**: When GrowthBook is unavailable (no network) and there is no local cache, `checkSecurityRestrictionGate` returns `false` — meaning **bypass mode is NOT disabled**. This is a fail-open design: when offline, it would rather let the user retain permissions than block their work.
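Point 4's fail-open behavior reduces to a few lines. A synchronous sketch — the real check is asynchronous and goes through the GrowthBook SDK's cache, and `readGate` is a hypothetical stand-in for that gate lookup:

```typescript
// Fail-open sketch: if the remote gate cannot be read (offline, no cache),
// the killswitch does NOT trip and bypass mode stays available.
// `readGate` is a hypothetical stand-in for the feature-gate lookup.
function shouldDisableBypassPermissions(readGate: () => boolean): boolean {
  try {
    return readGate(); // true only when the killswitch has been activated
  } catch {
    return false;      // gate unreachable: fail open, keep user permissions
  }
}
```

A fail-closed variant would return `true` in the catch branch — one line of difference that encodes the entire security-vs-autonomy trade-off discussed below.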

### Security Philosophy: How Much "Phone Home" Capability Should a CLI Tool Have?

This design touches a deep engineering-philosophy question. A CLI tool installed locally on a user's machine can control the local highest-permission mode via a remote switch — meaning Anthropic retains the ability to **remotely lock all users' bypass modes** in an emergency (e.g., discovery of a 0-day vulnerability or model jailbreak exploitation).

This is not an Anthropic original. There are multiple precedents in the industry:

| Mechanism | Product | Behavior |
|-----------|---------|----------|
| SafeBrowsing | Chrome | Remotely pushes malicious-site blocklists; browser intercepts locally |
| Extension Kill Switch | VS Code | Microsoft can remotely disable extensions with security issues |
| App Revocation | iOS/Android | Apple/Google can remotely revoke signatures of installed apps |
| **bypassPermissionsKillswitch** | **Claude Code** | **Anthropic can remotely disable local bypass mode** |

But Claude Code's implementation has a key difference: it is **fail-open** (does not disable when offline), whereas most of the above security mechanisms are **fail-closed** (tend to deny when uncertain). This choice reflects Anthropic's trade-off between **security vs. user autonomy** — for a CLI tool aimed at professional developers, the cost of blocking user work is higher than the security risk.

> 💡 **Plain English**: This is like a car manufacturer installing a remote speed limiter in your vehicle — if they discover a serious safety defect in your model (e.g., brakes might fail), they can remotely restrict your top speed. You can still drive (default mode), but you can no longer race (bypass mode). But if you're driving in the wilderness with no cell signal (offline), the restriction won't take effect — because stranding you in the wilderness is more dangerous than letting you drive fast.

---

## Auto Mode in Detail

When `mode === 'auto'`, the default ask in Stage Three does not pop up a dialog; it is handed to the AI classifier instead.

### Three Fast Paths (Executed Before the Classifier)

**Fast Path 1: acceptEdits check**
If the tool is a file-edit tool (FileEditTool, etc.), allow directly (no AI classifier invoked).

**Fast Path 2: Tool whitelist**
Read-only tools (Read, Glob, Grep, etc.) are allowed directly.

**Fast Path 3: classifyYoloAction**
Invoke the AI classifier, using a small model (Haiku) for rapid judgment.
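Taken together, the three fast paths form a small dispatch that runs before the classifier is ever invoked. A sketch with illustrative tool-name sets and a classifier stub — neither the sets nor the function name are from the source:

```typescript
// Hypothetical tool-name sets; the real lists live in the tool registry.
const EDIT_TOOLS = new Set(['FileEditTool', 'FileWriteTool']);
const READ_ONLY_TOOLS = new Set(['Read', 'Glob', 'Grep']);

// `classify` stands in for classifyYoloAction (the Haiku-backed classifier);
// it is only invoked when neither fast path matches.
function autoModeDecision(
  toolName: string,
  classify: (tool: string) => 'allow' | 'deny',
): 'allow' | 'deny' {
  if (EDIT_TOOLS.has(toolName)) return 'allow';      // Fast Path 1: acceptEdits
  if (READ_ONLY_TOOLS.has(toolName)) return 'allow'; // Fast Path 2: read-only whitelist
  return classify(toolName);                         // Fast Path 3: AI classifier
}
```

The two set lookups are the reason auto mode feels instant for the common case — the model call (with its latency and cost) is reserved for genuinely ambiguous operations like arbitrary Bash commands.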

### Iron Gate Rule

> 📚 **Course Connection**: The Iron Gate design is a classic application of the **Circuit Breaker pattern** from distributed systems courses — when the downstream service (here, the AI classifier) fails consecutively up to a threshold, the system automatically switches to the fallback path (revert to human confirmation), preventing cascading failures. It is also analogous to **TCP congestion control** in computer networking: after consecutive packet losses, throttle back and retransmit.

```typescript
// denialTracking.ts
const DENIAL_LIMITS = {
  maxConsecutive: 3,   // 3 consecutive denials
  maxTotal: 20,        // 20 cumulative denials
}
```

If the classifier denies tool calls 3 times in a row or 20 times cumulatively, the system automatically falls back to `default` mode, regardless of whether the user has auto mode enabled. This prevents the AI from being indefinitely locked in auto mode (for example, due to a system-prompt injection attack causing the classifier to reject everything).
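The Iron Gate bookkeeping amounts to two counters. A sketch under the limits above — `DenialTracker` is an illustrative name; the source keeps this state in `denialTracking.ts`:

```typescript
// Limits copied from the chapter's denialTracking.ts excerpt.
const DENIAL_LIMITS = { maxConsecutive: 3, maxTotal: 20 };

class DenialTracker {
  private consecutive = 0;
  private total = 0;

  recordDecision(denied: boolean): void {
    if (denied) {
      this.consecutive += 1;
      this.total += 1;
    } else {
      this.consecutive = 0; // an allow resets the consecutive streak only
    }
  }

  /** True once either limit is hit: fall back to default mode. */
  shouldFallBack(): boolean {
    return (
      this.consecutive >= DENIAL_LIMITS.maxConsecutive ||
      this.total >= DENIAL_LIMITS.maxTotal
    );
  }
}
```

Note the asymmetry: an allow resets the consecutive counter but not the cumulative one, so a classifier that keeps denying intermittently still trips the 20-denial limit eventually.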

---

## Permission Rule Format

Allow rules support wildcard matching:

```
Bash(git *)         # Allow all bash commands starting with git
Read(src/*)         # Allow reading all files under src/
Edit                # Allow all file edits
Bash                # Allow all bash commands (dangerous!)
```
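The prefix-wildcard semantics shown above can be sketched as follows — `ruleMatches` is an illustrative helper; the real parsing lives in `permissionRuleParser.ts`:

```typescript
// Matches rules of the form `Tool`, `Tool(exact)`, or `Tool(prefix *)`.
// A sketch of the semantics only; error handling and nesting are omitted.
function ruleMatches(rule: string, toolName: string, input: string): boolean {
  const m = /^(\w+)(?:\((.*)\))?$/.exec(rule);
  if (!m) return false;
  const ruleTool: string = m[1];
  const pattern: string | undefined = m[2];
  if (ruleTool !== toolName) return false;
  if (pattern === undefined) return true;          // bare `Bash` matches any input
  if (pattern.endsWith('*')) {
    return input.startsWith(pattern.slice(0, -1)); // prefix wildcard
  }
  return input === pattern;                        // otherwise exact match
}
```

The sketch also makes the section's later critique tangible: with prefix-only matching there is no way to write "allow `git push` but not `git push --force`" — the `git push --force` input satisfies the same `git *` prefix.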

Rule sources (each maps to a different storage location and lifetime):
1. `session`: Dynamically added in the current session (disappears after restart)
2. `local`: `.claude/settings.local.json` (not committed to git)
3. `project`: `.claude/settings.json` (can be committed to git)
4. `user`: `~/.claude/settings.json` (global user-level)
5. `policy`: Enterprise policy (non-overridable)

---

## Permission Update (PermissionUpdate)

After the user makes a decision in the permission popup, the result is persisted via `applyPermissionUpdate()` and `persistPermissionUpdate()`:

- **"Just this time"**: Added to session-level rules (not written to disk)
- **"This project"**: Written to `.claude/settings.json`
- **"This user"**: Written to `~/.claude/settings.json`
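The three popup options map to three persistence destinations. An illustrative routing sketch — the scope names and the helper are hypothetical; the file paths are the ones listed in this section:

```typescript
// Hypothetical scope names mirroring the three popup options.
type UpdateScope = 'session' | 'project' | 'user';

// Maps a user's decision scope to where the resulting rule is stored.
function persistenceTarget(scope: UpdateScope): string {
  switch (scope) {
    case 'session':
      return 'in-memory only';        // "Just this time": gone after restart
    case 'project':
      return '.claude/settings.json'; // "This project": committable to git
    case 'user':
      return '~/.claude/settings.json'; // "This user": global for this machine
  }
}
```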

---

## Permission System vs. MCP Tools

Tools provided by MCP servers go through the same permission system. The tool name format `mcp__server__toolname` can be used as a target for permission rules:

```json
{
  "allow": ["mcp__my-server__read_file"]
}
```
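Matching such a rule requires splitting the `mcp__server__toolname` format back into its parts. A sketch — the helper name is hypothetical; it splits on the first double underscore after the prefix, so server names containing single underscores or hyphens still parse correctly:

```typescript
// Splits `mcp__server__toolname` into its components, or returns null
// for non-MCP tool names. Helper name is illustrative.
function parseMcpToolName(
  name: string,
): { server: string; tool: string } | null {
  if (!name.startsWith('mcp__')) return null;
  const rest = name.slice('mcp__'.length);
  const sep = rest.indexOf('__'); // first double underscore ends the server name
  if (sep === -1) return null;
  return { server: rest.slice(0, sep), tool: rest.slice(sep + 2) };
}
```

Because MCP tools flow through the same judgment chain, a parsed name can be targeted by any rule type — an enterprise deny rule on `mcp__my-server__read_file` is just as non-overridable as one on `Bash`.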

---

## Key Files

| File | Content |
|------|---------|
| `src/utils/permissions/permissions.ts` | Core logic, ~1,500 lines |
| `src/utils/permissions/PermissionMode.ts` | Type definitions for the 6 permission modes |
| `src/utils/permissions/denialTracking.ts` | Iron Gate denial counting logic |
| `src/utils/permissions/PermissionResult.ts` | PermissionResult type |
| `src/utils/permissions/PermissionUpdate.ts` | Applying and persisting permission updates |
| `src/utils/permissions/permissionRuleParser.ts` | Rule string parsing (Bash(git *), etc.) |
| `src/utils/permissions/permissionSetup.ts` | Permission context initialization |
| `src/utils/permissions/bypassPermissionsKillswitch.ts` | Remote kill switch for bypassPermissions mode |
| `src/hooks/toolPermission/` | Tool permission hook integration |
| `src/tools/tools.ts` | Tool registration and permission declarations |

---

## Code Landing Spots

- `permissions.ts`, line 1158: `hasPermissionsToUseToolInner()` — complete implementation of the three-stage permission judgment chain
- `permissions.ts`, line 473: `hasPermissionsToUseTool()` — outer function handling dontAsk/auto/headless
- `permissions.ts`, ~line 850: `classifyYoloAction()` — AI classifier for auto mode
- `denialTracking.ts`, line 1: `DENIAL_LIMITS` constant definitions
- `PermissionMode.ts`, line 1: `PermissionMode` type definition

---

## Design Trade-offs and Critical Analysis

### Complexity Cost of the Judgment Chain

The permission judgment chain involves 10 sub-steps (three logical stages), imposing a high learning curve on new contributors trying to understand the full chain. Any accidental reordering of steps could introduce security vulnerabilities — for example, if the bypass-immune check (Step 1g) were mistakenly moved after the bypassPermissions evaluation (Step 2a), paths like `.git/hooks/` would lose their protection. This "order-sensitive" design is a double-edged sword for maintainability. But it should also be recognized that multi-step permission check chains are standard in enterprise security systems — Spring Security's FilterChain has 15+ filters. Claude Code's true complexity does not lie in the number of steps, but in embedding AI judgment into a traditionally purely rule-driven permission chain.

### Auto Mode's AI Classifier: Innovation or Risk?

`classifyYoloAction` uses a small model (Haiku) to make permission decisions — using a different model to review another model's behavior touches the "orthogonality" principle in AI safety: the independence of reviewer and reviewed. The advantage of a small model is low latency and low cost; the cost is an upper bound on accuracy. The Iron Gate thresholds of 3/20 denials are hard-coded and cannot be adjusted based on project risk levels — high-security scenarios might demand more conservative thresholds.

By comparison: Codex's full-auto mode needs no AI classifier at all, because it has OS-level network isolation as a hard backstop; Cursor's `.mdc` conditional rules engine takes another path — triggering different policies via globs matching specific file types, combined with VM-level isolation from Background Agents. The three approaches represent three different security philosophies: hard isolation (Codex), soft judgment (Claude Code), and conditional rules + VM isolation (Cursor).

### Permission Rule Wildcard Matching Is Overly Simple

Prefix matching like `Bash(git *)` cannot express negative conditions such as "allow git push but deny git push --force." The lack of regex or glob negation syntax limits rule granularity, forcing users to choose between "too permissive" and "too many confirmation popups." This may be deliberate simplification rather than engineering debt — if negation rules were introduced (e.g., `Bash(!git push --force)`), conflict resolution between rules would become significantly more complex, and the risk of user misconfiguration would increase.

### Trust Issues with the Remote Killswitch

`bypassPermissionsKillswitch` gives Anthropic the ability to remotely control local permissions, a classic tension between "security vs. user autonomy." Currently, users cannot opt out of this mechanism (except by using it offline), and there is no public documentation acknowledging its existence. By contrast, Chrome's SafeBrowsing allows users to turn it off, and VS Code's extension kill switch has a public event-notification mechanism. In the future, Anthropic may need to do more on the transparency front to maintain the trust of the professional developer community.

### Permission Fatigue

The history of permission popups on Android and iOS has proven that overly frequent permission confirmations lead to users mindlessly tapping "Allow." Claude Code's auto mode is essentially a technical response to permission fatigue — replacing human decisions with AI ones. But this introduces a new risk: excessive user trust in AI decisions. Cursor's yolo mode and Codex's full-auto are addressing the same problem in different ways, and no solution has yet been proven optimal.
