Building Agentic Systems from First Principles Inspired by Unix and Kubernetes

Engineering
April 21, 2025 · April 22, 2025

Yedendra Srinivasan, Chief Architect
Ravi Kokku, Co-Founder & CTO

Introduction

The way we design and orchestrate computational workflows is evolving once again. From the modular elegance of Unix pipes to the scalable orchestration of Kubernetes, each leap in system architecture has redefined how software systems are composed and operated.

Now, we stand at the cusp of another transformation—this time driven by intelligent, autonomous agents. These agents are designed for composability and modularity, capable of creating, coordinating, and evolving other agents, tools, and workflows with minimal human intervention.

Unlike Unix or Kubernetes, which rely on human-authored scripts or YAML configurations, agentic systems dynamically instantiate new agents based on task structure. They also introduce governance mechanisms tailored for distributed intelligence—fine-grained permissions, role-based constraints, and memory scopes that ensure safe, auditable collaboration.

This post explores how agentic systems inherit foundational ideas from Unix and Kubernetes while introducing nine defining tenets: Agent Composability, Agentic Capabilities, Task I/O, Local and Global Memory, Execution Contexts, Skills as Domain-Specific Microservices, Asynchronous Execution, Dynamic Agent Instantiation, and Agent Governance.

Together, these principles form a blueprint for the next era of intelligent, autonomous systems.

1. Agent Composability

Definition: Agent composability refers to the ability to chain different agents together in workflows, where the output of one agent becomes the input for another. This resembles the modular approach used in Unix pipes.

Unix Pipes Analogy: Unix pipes (|) are known for their simplicity in chaining commands. For instance, cat file.txt | grep "search_term" | sort demonstrates how the output of one command becomes the input for the next. Similarly, in an agents framework, one agent’s output can seamlessly become another’s input, creating a continuous and efficient workflow.

Benefits: This modular composability enables scalable and flexible workflow creation, which can be easily modified and extended without altering the entire system. This approach fosters reusability and simplicity.
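The pipe analogy above can be sketched directly in code. The snippet below is a minimal illustration, not a real framework API: an "agent" is modeled as any callable from text to text, and a `pipe` helper chains them the way `|` chains Unix commands.

```python
from typing import Callable

# An "agent" here is simply a callable from input text to output text.
Agent = Callable[[str], str]

def pipe(*agents: Agent) -> Agent:
    """Compose agents left to right, like Unix pipes: a | b | c."""
    def pipeline(task_input: str) -> str:
        for agent in agents:
            task_input = agent(task_input)
        return task_input
    return pipeline

# Toy stand-ins for real agents (hypothetical):
extract = lambda text: text.lower()
dedupe = lambda text: " ".join(dict.fromkeys(text.split()))

workflow = pipe(extract, dedupe)
print(workflow("Alpha alpha BETA beta"))  # -> alpha beta
```

Because `pipe` returns another agent, pipelines themselves compose: a whole workflow can be dropped into a larger one as a single stage.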

2. Agentic Capabilities

Definition & Importance: Agentic capabilities refer to an agent’s ability to reason, self-improve, and orchestrate other agents or services. This makes agents capable of autonomous decision-making and complex task management.

Comparison with Kubernetes Orchestration: Kubernetes orchestrates containers to manage applications. It automatically handles tasks like deployment, scaling, and healing. Similarly, agents with agentic capabilities can autonomously manage tasks, make decisions, and orchestrate workflows dynamically.

Autonomous Operation: Agents can increasingly handle complex tasks independently, making decisions about how to delegate or modify workflows. Over time, this reduces the need for constant human oversight and allows systems to adapt and learn in real time.

3. Task I/O

Definition: Task I/O refers to how agents receive task input (prompts) and independently process it to produce task responses. Agents utilize a TaskMessage interface to handle task requests and return responses, including status updates and outputs. Each agent receives a TaskMessage that contains:

  • A natural language instruction that describes the task to be performed.
  • Task Input data, which may be the output of a previous agent.

After processing the task per the instruction, the agent produces a Task Output, which is either passed to the next agent or stored, depending on the task's place in the pipeline.

Unix Files Analogy: In Unix, processes communicate by reading and writing to files, with file metadata providing important details about the file's purpose and content. Similarly, in a multi-agent system, agents use TaskMessages to communicate. Each TaskMessage contains a natural language instruction (acting like file metadata) that describes the task to be performed, and Task Input data (analogous to file content) that the agent processes. After processing, agents generate Task Output (similar to writing back to a file), ensuring a clear, structured flow of data between agents.

Workflow Management: This model allows for streamlined task management where each task is treated as a unit of work, ensuring tasks can be queued, executed, and managed efficiently.
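One plausible shape for the TaskMessage described above is a small dataclass. This is an illustrative schema only; the field names and the status values here are assumptions, not the platform's actual interface.

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class TaskMessage:
    """Unit of work exchanged between agents (illustrative schema)."""
    instruction: str        # natural-language description of the task
    task_input: Any = None  # data to process (may be a prior agent's output)
    task_output: Any = None # filled in by the processing agent
    status: str = "pending" # pending -> running -> done / failed

def run_agent(msg: TaskMessage) -> TaskMessage:
    # A trivial agent that uppercases its input per the instruction.
    msg.status = "running"
    msg.task_output = str(msg.task_input).upper()
    msg.status = "done"
    return msg

msg = TaskMessage(instruction="Uppercase the input", task_input="hello")
result = run_agent(msg)
print(result.status, result.task_output)  # done HELLO
```

Because the message carries both the instruction (the "metadata") and the payload, it can be queued, logged, or handed to the next agent as a single unit.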

4. Local and Global Memory

Definition: Agents use both local memory (specific to a single agent) and global memory (shared across all agents) to manage state and data.

Unix Process Memory Comparison: In Unix, each process has its own memory space but can access shared memory for inter-process communication. Similarly, agents maintain their own state but can also access a shared state for coordination and data sharing.

State Management: This dual-memory approach enhances the ability to manage state efficiently and allows agents to share important information without compromising their autonomy.
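The local/global split might look like the following sketch, where each agent holds a private dict while all agents share one store. This is a toy in-process model; a real system would likely back global memory with a database or vector store.

```python
class GlobalMemory:
    """Shared key-value store visible to all agents (illustrative)."""
    def __init__(self):
        self._store = {}
    def put(self, key, value):
        self._store[key] = value
    def get(self, key, default=None):
        return self._store.get(key, default)

class Agent:
    def __init__(self, name, shared):
        self.name = name
        self.local = {}       # private to this agent
        self.shared = shared  # visible to every agent

shared = GlobalMemory()
a, b = Agent("a", shared), Agent("b", shared)
a.local["scratch"] = "private note"
a.shared.put("result", 42)
print(b.shared.get("result"))    # 42 -- b sees a's shared write
print("scratch" in b.local)      # False -- local memory stays private
```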

5. Execution Contexts: Runtime Substrates for Agents

Definition:
An execution context is the logical environment where an agent operates—defined by the resources, APIs, and rules it exposes. Think of it as a lightweight runtime substrate, akin to a container but for task-oriented agents.

Analogy to Unix and Containers:
In Unix, processes interact with hardware via the kernel; containers abstract this further, providing isolated environments atop shared systems. Similarly, agents run within execution contexts that define what they can access, observe, and control—be it a UI, database, or API.

Types of Contexts:
Execution contexts vary but follow a unified interface. Examples include:

  1. UI Contexts: Control over DOM and user events
  2. Data Contexts: Query access to databases or warehouses
  3. Model Contexts: Interactions with ML models
  4. Workflow Contexts: Execution in serverless or CI/CD pipelines

Execution Context vs. Control Plane:
Like cloud systems with control and data planes, agent platforms may have a central coordinator. But actual agent work—reasoning and action—happens within execution contexts, each with its own scoped capabilities and responsibilities.
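A unified context interface, as described above, could be sketched with an abstract base class: every context type advertises its capabilities and refuses actions outside them. The class and method names are hypothetical, chosen only to make the idea concrete.

```python
from abc import ABC, abstractmethod

class ExecutionContext(ABC):
    """Uniform interface every context type implements (illustrative)."""
    @abstractmethod
    def capabilities(self) -> list:
        ...
    @abstractmethod
    def execute(self, action: str, **kwargs):
        ...

class DataContext(ExecutionContext):
    """Scoped query access to a data store."""
    def __init__(self, tables):
        self._tables = tables
    def capabilities(self):
        return ["query"]
    def execute(self, action, **kwargs):
        # The context enforces its own scope: queries only, no writes.
        if action not in self.capabilities():
            raise PermissionError(f"'{action}' not allowed in this context")
        return self._tables.get(kwargs["table"], [])

ctx = DataContext({"orders": [{"id": 1}]})
print(ctx.execute("query", table="orders"))  # [{'id': 1}]
```

A UI context or model context would implement the same two methods, so the control plane can route work to any context without knowing its internals.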

6. Skills as Domain-Specific Microservices for Agents

Definition:
Skills are configurable, high-level actions that agents can invoke, such as querying a database, summarizing a document, or sending a notification, all within a bounded execution context. They operate similarly to microservices or RPC/REST API endpoints in distributed systems, serving as modular, reusable interfaces to external or internal capabilities.

Microservices Analogy:
Just like microservices expose discrete units of business logic over a network, skills expose encapsulated functions that an agent can call to perform a task. These actions might span IO-bound tasks, data transformations, or third-party integrations, all governed by clear contracts and input/output schemas.

Abstraction and Composability:
Skills are intentionally higher-level than system primitives. Instead of exposing raw, low-level operations (as system calls do), skills encapsulate meaningful domain operations. This abstraction allows agents to compose complex workflows from declarative steps, much like orchestrators build flows from service calls.

Wrapping LLM Tools as Skills:
A practical example of this abstraction is wrapping LLM tools (e.g. OpenAI function calls, plugins, or LangChain tools) as internal skills.

  • A tool like getWeather(location) can be exposed to the LLM as a function schema, but behind the scenes, it’s implemented as a skill that fetches weather data from a microservice or third-party API.
  • From the agent’s perspective, it doesn't matter if the action is invoked via an LLM tool or triggered directly—the skill interface stays consistent, testable, and orchestratable.

This design decouples how a tool is invoked (e.g. via function calling) from how it's implemented (e.g. HTTP API, internal RPC, etc.), making the system more modular and easier to extend.
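A minimal version of this decoupling is a skill registry: the `getWeather` example above is registered once, and the same entry point serves both an LLM function call and a direct invocation. The registry and the stubbed weather data here are hypothetical, not a real API.

```python
# Hypothetical skill registry: the LLM sees only a function schema, while
# the implementation behind it (HTTP API, RPC, stub) can vary freely.
SKILL_REGISTRY = {}

def skill(name):
    """Decorator that registers a function under a skill name."""
    def register(fn):
        SKILL_REGISTRY[name] = fn
        return fn
    return register

@skill("getWeather")
def get_weather(location: str) -> dict:
    # Stubbed implementation; in practice this might call a weather API.
    return {"location": location, "forecast": "sunny"}

def invoke_skill(name: str, **kwargs):
    """Single entry point, whether called via LLM tool use or directly."""
    return SKILL_REGISTRY[name](**kwargs)

print(invoke_skill("getWeather", location="Paris"))
```

Swapping the stub for a real HTTP client changes nothing for callers, which is exactly the versioning and testability benefit listed below.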

Operational Benefits:
Treating skills as microservices brings operational advantages:

  • Versioning and Lifecycle Management
  • Structured Logging and Monitoring
  • Access Control and Auditing
  • Retry and Failure Handling Policies

7. Asynchronous Execution

Definition: Asynchronous execution allows agents to schedule, queue, and process tasks independently, similar to background or daemon processes in Unix.

Unix Daemon Processes Analogy: In Unix, daemon processes run in the background and perform tasks without user intervention. Agents, through asynchronous execution, can independently manage tasks, enhancing system scalability and flexibility.

Scalability and Flexibility: Asynchronous execution enables agents to handle large volumes of tasks concurrently, improving overall efficiency and responsiveness of the system.
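The daemon analogy maps naturally onto an async task queue: workers run in the background, pull tasks as they arrive, and shut down on a sentinel. The sketch below uses Python's `asyncio` to illustrate the pattern; names and task strings are illustrative.

```python
import asyncio

async def worker(name, queue, results):
    """Daemon-like agent: pulls tasks from a queue until told to stop."""
    while True:
        task = await queue.get()
        if task is None:  # sentinel: shut down this worker
            queue.task_done()
            return
        results.append(f"{name} handled {task}")
        queue.task_done()

async def main():
    queue, results = asyncio.Queue(), []
    # Two concurrent "agents" servicing one shared queue.
    workers = [asyncio.create_task(worker(f"agent-{i}", queue, results))
               for i in range(2)]
    for task in ["summarize", "classify", "notify"]:
        await queue.put(task)
    for _ in workers:
        await queue.put(None)  # one sentinel per worker
    await queue.join()         # wait until every item is processed
    return results

print(asyncio.run(main()))
```

Adding capacity is just spawning more workers on the same queue, which is the scalability property described above.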

8. Dynamic Agent Instantiation

Definition: Dynamic agent instantiation refers to the system's ability to automatically create and configure new agents based on the needs of a task or workflow. Rather than requiring pre-defined agents, the platform can spawn new agents on the fly, assigning them roles, objectives, and context as needed.

Novelty vs. Unix & Kubernetes:

  • In Unix, every process must be explicitly launched by a script or command.
  • In Kubernetes, containers are spun up based on declarative configuration (YAML).
  • In an agentic platform, agents are generated at runtime, often through reflection or recursion—e.g., an agent tasked with a project might spawn sub-agents for subtasks, each with tailored capabilities and lifespans.

Benefits:

  • Elasticity of intelligence: Workflows adapt dynamically to complexity without human intervention.
  • Resource efficiency: Agents are ephemeral and task-bound.
  • Recursive delegation: Agents can recursively spawn other agents with localized goals, enabling deep task decomposition.
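Runtime spawning can be sketched as a factory function: an orchestrator creates ephemeral sub-agents on demand, one per subtask, with no predefined roster. The roles and objectives here are illustrative placeholders.

```python
def spawn_agent(role: str, objective: str):
    """Factory: configures a new agent for a role at runtime (illustrative)."""
    def agent(task: str) -> str:
        return f"[{role}] {objective}: {task}"
    return agent

def orchestrator(project: str, subtasks: list) -> list:
    # Spawn one ephemeral sub-agent per subtask; agents are task-bound
    # and discarded when the loop ends.
    results = []
    for sub in subtasks:
        worker = spawn_agent("worker", f"complete part of {project}")
        results.append(worker(sub))
    return results

print(orchestrator("launch report", ["gather data", "draft summary"]))
```

In a real platform the factory would also assign capabilities, memory scope, and a lifespan; a spawned agent could itself call `spawn_agent`, giving the recursive delegation described above.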

9. Agent Governance

Definition: Agent governance defines the rules that constrain what agents are allowed to do, where they can act, and how they can interact with each other or share resources. This mirrors Access Control Lists (ACLs) in Unix, but applied to agent capabilities and communication channels.

Drawing from Unix ACLs:

  • Unix ACLs define which users or groups have read/write/execute permissions on files.
  • Similarly, agents can be given capability-based restrictions:
    • Read-only access to a shared global memory.
    • Permission to act within specific execution contexts (e.g., CRM systems but not finance databases).
    • Messaging rights only to certain agents (e.g., compliance or audit agents).

Governance Structures:

  • Role-based access: Agents receive roles (e.g., reader, executor, orchestrator) that determine their power.
  • Policy engines: Runtime checks enforce who can spawn agents, invoke skills, or mutate global memory.
  • Auditability: Logs of agent actions allow retrospective analysis and compliance assurance.
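A role-based policy engine in the spirit of Unix ACLs can be sketched in a few lines: roles map to capability sets, and a guard checks the set before any action runs. The role and capability names below are hypothetical examples, not a fixed taxonomy.

```python
# Hypothetical capability-based policy table, in the spirit of Unix ACLs.
POLICIES = {
    "reader":       {"memory:read"},
    "executor":     {"memory:read", "skill:invoke"},
    "orchestrator": {"memory:read", "memory:write", "skill:invoke", "agent:spawn"},
}

def authorize(role: str, capability: str) -> bool:
    return capability in POLICIES.get(role, set())

def guarded_action(role: str, capability: str, action):
    """Runtime policy check before executing any agent action."""
    if not authorize(role, capability):
        raise PermissionError(f"role '{role}' lacks '{capability}'")
    return action()

print(guarded_action("executor", "skill:invoke", lambda: "ok"))  # ok
try:
    guarded_action("reader", "agent:spawn", lambda: "never runs")
except PermissionError as e:
    print(e)
```

Logging each `guarded_action` call (role, capability, outcome) would provide the audit trail noted above.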

Why it matters:

  • Prevents runaway agents or unintended side effects.
  • Enables enterprise-grade security, compliance, and trust.
  • Supports safe scaling by bounding what autonomous agents are allowed to do.

Conclusion

Agents represent more than a new type of automation—they represent a fundamental shift in how we build intelligent systems.

If Unix gave us pipelines of deterministic programs, and Kubernetes gave us clusters of scalable containers, the agentic paradigm gives us self-evolving networks of autonomous workers—capable of recursive task decomposition, adaptive orchestration, and collaborative intelligence.

By fusing classic computing abstractions with modern AI capabilities, agentic systems redefine what's possible in enterprise automation, software design, and digital operations. They unlock autonomy and adaptability at the system level, not just the component level.

We’re no longer writing scripts or configuring containers—we’re defining behaviors, permissions, and environments for agents that think, learn, and act.

The implications are profound—and we are only beginning to explore what this new era makes possible.
