Architecture

A deep dive into how the Agent, Backend, and Web IDE communicate and maintain consistent cluster state.

Octokube is composed of three layers — Agent, Backend, and Web IDE — each with a single, well-defined responsibility. This page explains how they communicate, how state is maintained, and how consistency is guaranteed across your entire team.


The three layers

┌─────────────────────────────────────────────────────┐
│                    Your Cluster                     │
│                                                     │
│   kube-apiserver ──► Agent                          │
│                      │ in-memory state              │
│                      │ global version counter       │
└──────────────────────┼──────────────────────────────┘
                       │ single outbound connection

              ┌────────────────┐
              │    Backend     │
              │                │
              │  Virtual RBAC  │
              │  Multiplexing  │
              └───────┬────────┘
                      │ per-user filtered stream
          ┌───────────┼───────────┐
          ▼           ▼           ▼
      [ IDE ]     [ IDE ]     [ IDE ]
      User A      User B      User C

Agent

The Agent runs inside your cluster and is the only component with direct access to the kube-apiserver. Everything it does is designed around two guarantees: order and consistency.

Watch and hot state

On startup, the Agent opens a watch connection to the kube-apiserver for each resource type it tracks. As events arrive, it maintains an in-memory materialized view of your cluster — called hot state — structured as a map of resource type to resource key to a lightweight resource object.

Resource objects stored in hot state are reduced projections of the full Kubernetes object. Only the fields relevant to the Web IDE are kept. Fields like managedFields, verbose annotations, and extended status are discarded.
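As an illustration of this projection, the sketch below reduces a full Kubernetes object to a lightweight hot-state entry. The specific fields kept here (name, namespace, resourceVersion, phase) are assumptions for the example, not Octokube's actual schema.

```python
# Illustrative sketch: reduce a full Kubernetes object to the lightweight
# projection stored in hot state. The field choices are assumptions, not
# Octokube's actual schema.

def project(full_obj: dict) -> dict:
    meta = full_obj.get("metadata", {})
    return {
        "name": meta.get("name"),
        "namespace": meta.get("namespace"),
        "resourceVersion": meta.get("resourceVersion"),
        # Heavy fields such as managedFields, verbose annotations, and
        # extended status are simply never copied over.
        "phase": full_obj.get("status", {}).get("phase"),
    }

pod = {
    "metadata": {
        "name": "api-7d9f",
        "namespace": "prod",
        "resourceVersion": "482913",
        "managedFields": [{"manager": "kubelet"}],  # discarded
    },
    "status": {"phase": "Running", "conditions": []},  # conditions discarded
}

print(project(pod))
```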

Single execution queue

All operations in the Agent pass through a single synchronous queue. There are two operation types:

  • ApplyEvent — processes an incoming resource event and updates hot state
  • GenerateSnapshot — serializes the current hot state for a given resource type

The queue guarantees that no read happens during a write, and no two writes happen concurrently. Every operation is processed sequentially and in order.
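A minimal sketch of this queue, with both operation types, might look like the following. Class and method names are illustrative, not Octokube's actual API, and for simplicity each ApplyEvent here is treated as its own batch, so the version counter advances by one per event.

```python
from collections import deque

class Agent:
    def __init__(self):
        self.hot_state = {}    # resource type -> resource key -> object
        self.version = 0       # global, monotonically increasing counter
        self._queue = deque()  # the single synchronous queue

    def apply_event(self, rtype, key, obj):
        def op():  # ApplyEvent: write to hot state, advance the version
            self.hot_state.setdefault(rtype, {})[key] = obj
            self.version += 1
            return ("delta", self.version, rtype, key)
        self._queue.append(op)

    def generate_snapshot(self, rtype):
        def op():  # GenerateSnapshot: read-only serialization of hot state
            return ("snapshot", self.version, dict(self.hot_state.get(rtype, {})))
        self._queue.append(op)

    def drain(self):
        # The executor: one operation at a time, strictly in enqueue order,
        # so no read overlaps a write and no two writes overlap.
        return [self._queue.popleft()() for _ in range(len(self._queue))]
```

Because both operation types share one queue and one executor, sequencing falls out of the data structure itself rather than from locks.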

Global version counter

Every time the Agent processes a batch of events, it increments a global version counter — a monotonically increasing integer scoped to the cluster. This counter is the foundation of the consistency model.

The Agent is the only component that generates versions. The Backend does not assign versions. The Web IDE does not assign versions.

Write path

  1. The kube-apiserver emits an event via the watch connection
  2. The Agent converts it into an ApplyEvent operation and enqueues it
  3. The executor updates hot state, increments the version counter, and emits a structured delta to the Backend

Read path

  1. The Backend requests a full state snapshot for a given resource type
  2. The Agent enqueues a GenerateSnapshot operation
  3. The executor serializes the current hot state and returns it alongside the current version

Because snapshots pass through the same queue as writes, a snapshot always reflects exactly the state after all preceding events have been applied.
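This ordering guarantee can be demonstrated with a tiny standalone simulation: a snapshot enqueued between two writes sees the first write and not the second. The operation names mirror the text; everything else is invented for the demonstration.

```python
from collections import deque

state, version = {}, 0
ops = deque()

def apply_event(key, value):
    def op():
        global version
        state[key] = value
        version += 1
    ops.append(op)

def generate_snapshot():
    def op():
        return (dict(state), version)
    ops.append(op)

apply_event("pod/a", "Running")
generate_snapshot()               # enqueued between the two writes
apply_event("pod/b", "Pending")

results = [op() for op in ops]    # the executor drains in FIFO order
snapshot = results[1]
print(snapshot)  # ({'pod/a': 'Running'}, 1) -- pod/b is not yet visible
```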

Agent restart

When the Agent restarts, the version counter resets to zero and hot state is rebuilt via a full list and watch cycle. Connected clients will detect the version gap and request a full resync automatically.


Backend

The Backend sits between the Agent and the Web IDE. It does not generate state and does not assign versions. Its responsibilities are access control and distribution.

Delta delivery

When the Backend receives a delta from the Agent, it fans it out to every connected client for that cluster. Before delivery, it applies each user's Virtual RBAC rules and constructs a filtered payload. Even if the filtered payload is empty for a given user, the version number is still delivered — so the client can advance its local version counter without missing a beat.
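The fan-out step can be sketched as below. The Virtual RBAC rule shape (a set of allowed namespaces per user) and the client outboxes are invented here purely for illustration.

```python
# Sketch of the Backend's delta fan-out with per-user filtering.

def fan_out(delta, clients):
    """delta: {"version": int, "items": [...]}; clients: (user, allowed, outbox)."""
    for user, allowed_namespaces, outbox in clients:
        items = [i for i in delta["items"] if i["namespace"] in allowed_namespaces]
        # The version is delivered even when the filtered payload is empty,
        # so every client can still advance its local version counter.
        outbox.append({"version": delta["version"], "items": items})

alice_box, bob_box = [], []
clients = [("alice", {"prod"}, alice_box), ("bob", {"staging"}, bob_box)]
fan_out({"version": 7, "items": [{"namespace": "prod", "name": "api"}]}, clients)
```

Note that Bob still receives version 7, just with an empty item list, which is what keeps his local counter contiguous.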

Snapshot deduplication

When multiple clients request a full state snapshot at the same time — for example, after an Agent restart — the Backend deduplicates those requests. A single snapshot operation is issued to the Agent, and all waiting clients receive the result. This prevents snapshot storms from overloading the Agent after a reconnection event.

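A deliberately simplified, batch-based sketch of this deduplication is shown below: requests that arrive together are grouped by resource type and one snapshot operation is issued per group. A production implementation would coalesce on in-flight requests (single-flight style) rather than on batches; all names here are assumptions.

```python
class SnapshotDeduper:
    def __init__(self, fetch_snapshot):
        self._fetch = fetch_snapshot  # issues GenerateSnapshot to the Agent
        self.agent_calls = 0          # instrumentation for the example

    def serve_batch(self, requests):
        """requests: list of (client_id, resource_type) pairs arriving together."""
        waiters = {}
        for client, rtype in requests:
            waiters.setdefault(rtype, []).append(client)
        replies = {}
        for rtype, clients in waiters.items():
            self.agent_calls += 1
            snapshot = self._fetch(rtype)  # one Agent operation, many waiters
            for client in clients:
                replies[client] = snapshot
        return replies

# Three clients reconnect and ask for the same snapshot at once:
dedup = SnapshotDeduper(lambda rtype: {"type": rtype, "version": 42})
replies = dedup.serve_batch([("a", "pods"), ("b", "pods"), ("c", "pods")])
```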


Web IDE

The Web IDE maintains a local copy of cluster state for each resource type it is displaying. It applies incoming deltas incrementally and detects consistency gaps automatically.

Version tracking

For each cluster and resource type, the Web IDE tracks a localVersion. When a delta arrives:

  • If incomingVersion == localVersion + 1 → apply the delta and advance localVersion
  • If incomingVersion != localVersion + 1 → a gap has been detected; request a full snapshot

When a snapshot arrives, it replaces the local state entirely and sets localVersion to the snapshot's version. The snapshot is always treated as authoritative.
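A client-side sketch of these rules follows. The delta and snapshot shapes are assumptions; only the version-comparison logic mirrors the text.

```python
class ResourceView:
    def __init__(self):
        self.local_version = 0
        self.state = {}
        self.resync_requested = False

    def on_delta(self, incoming_version, changes):
        if incoming_version == self.local_version + 1:
            self.state.update(changes)            # apply and advance
            self.local_version = incoming_version
        else:
            # Gap detected (missed delta, Agent restart, reconnect):
            # ask for an authoritative snapshot rather than guessing.
            self.resync_requested = True

    def on_snapshot(self, version, full_state):
        # A snapshot replaces local state entirely and is authoritative.
        self.state = dict(full_state)
        self.local_version = version
        self.resync_requested = False

view = ResourceView()
view.on_delta(1, {"prod/api": "Running"})   # 1 == 0 + 1: applied
view.on_delta(3, {"prod/web": "Pending"})   # 3 != 1 + 1: gap, not applied
gap_seen = view.resync_requested
view.on_snapshot(3, {"prod/api": "Running", "prod/web": "Pending"})
```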

This model means the Web IDE is self-healing by design. Any connectivity issue, Agent restart, or missed update resolves itself through a resync without any action required from the user.


Consistency guarantees

Property             Guarantee
Event order          Total order per cluster via single queue
Determinism          Same initial state + same event sequence = same final state
Recovery             Any version gap triggers an authoritative resync
Version authority    Agent only; Backend and Web IDE never assign versions
Snapshot integrity   Always reflects all events processed before it in the queue

What this means in practice

  • One Agent per cluster. The Agent is a single point of version authority. High availability for the Agent is not supported in the current version — a restart triggers a full team resync, which completes automatically.
  • No distributed consensus required. The single queue model eliminates the need for distributed locking or consensus protocols. The system is simple, deterministic, and operationally predictable.
  • Flat load on the apiserver. The Agent maintains one watch connection per resource type regardless of how many engineers are connected. Your apiserver sees no additional load as your team grows.

Next steps

  • Go to Quickstart to deploy the Agent into your cluster
  • Read Virtual RBAC to understand how access rules are defined and enforced
