The way we design and orchestrate computational workflows is evolving once again. From the modular elegance of Unix pipes to the scalable orchestration of Kubernetes, each leap in system architecture has redefined how software systems are composed and operated.
Now, we stand at the cusp of another transformation—this time driven by intelligent, autonomous agents. These agents are designed for composability and modularity, capable of creating, coordinating, and evolving other agents, tools and workflows with minimal human intervention.
Unlike Unix or Kubernetes, which rely on human-authored scripts or YAML configurations, agentic systems dynamically instantiate new agents based on task structure. They also introduce governance mechanisms tailored for distributed intelligence—fine-grained permissions, role-based constraints, and memory scopes that ensure safe, auditable collaboration.
This post explores how agentic systems inherit foundational ideas from Unix and Kubernetes while introducing nine defining tenets: Agent Composability, Agentic Capabilities, Task I/O, Local and Global Memory, Execution Contexts, Skills as System Call Equivalents, Asynchronous Execution, Dynamic Agent Instantiation, and Agent Governance.
Together, these principles form a blueprint for the next era of intelligent, autonomous systems.
Definition: Agent composability refers to the ability to chain different agents together in workflows, where the output of one agent becomes the input for another. This resembles the modular approach used in Unix pipes.
Unix Pipes Analogy: Unix pipes (|) are known for their simplicity in chaining commands. For instance, cat file.txt | grep "search_term" | sort demonstrates how the output of one command becomes the input for the next. Similarly, in an agents framework, one agent’s output can seamlessly become another’s input, creating a continuous and efficient workflow.
Benefits: This modular composability enables scalable and flexible workflow creation, which can be easily modified and extended without altering the entire system. This approach fosters reusability and simplicity.
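To make the analogy concrete, here is a minimal Python sketch of agent composability. The Agent type alias and pipeline helper are illustrative assumptions, not a real framework API:

```python
from pathlib import Path
from typing import Callable

# An "agent" here is any callable mapping a task input to a task output;
# pipeline() composes them so each one's output feeds the next, like
# `cmd1 | cmd2` in a Unix shell.
Agent = Callable[[str], str]

def pipeline(*agents: Agent) -> Agent:
    def run(task_input: str) -> str:
        for agent in agents:
            task_input = agent(task_input)
        return task_input
    return run

# Analogous to: cat file.txt | grep "search_term" | sort
read_file = lambda path: Path(path).read_text()
grep = lambda text: "\n".join(l for l in text.splitlines() if "search_term" in l)
sort = lambda text: "\n".join(sorted(text.splitlines()))

search_and_sort = pipeline(read_file, grep, sort)
# print(search_and_sort("file.txt"))
```

Because each agent is a self-contained stage, stages can be swapped, reordered, or extended without touching the rest of the pipeline.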
Definition & Importance: Agentic capabilities refer to an agent’s ability to reason, self-improve, and orchestrate other agents or services. This makes agents capable of autonomous decision-making and complex task management.
Comparison with Kubernetes Orchestration: Kubernetes orchestrates containers to manage applications. It automatically handles tasks like deployment, scaling, and healing. Similarly, agents with agentic capabilities can autonomously manage tasks, make decisions, and orchestrate workflows dynamically.
Autonomous Operation: Agents can increasingly handle complex tasks independently, making decisions about how to delegate or modify workflows. Over time, this reduces the need for constant human oversight and allows systems to adapt and learn in real time.
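As a toy illustration of this kind of orchestration, the sketch below routes a task to a specialist agent or handles it directly. The Orchestrator and Specialist names and the keyword-based routing are assumptions standing in for the reasoning an LLM-based planner would perform:

```python
# An orchestrating agent inspects a task and decides whether to handle it
# itself or delegate it to a specialist agent.
class Specialist:
    def __init__(self, skill_name: str):
        self.skill_name = skill_name

    def handle(self, task: str) -> str:
        return f"[{self.skill_name}] completed: {task}"

class Orchestrator:
    def __init__(self):
        self.specialists = {"summarize": Specialist("summarizer"),
                            "translate": Specialist("translator")}

    def handle(self, task: str) -> str:
        # Route to a specialist when one matches; otherwise do the work directly.
        for keyword, specialist in self.specialists.items():
            if keyword in task.lower():
                return specialist.handle(task)
        return f"[orchestrator] completed: {task}"

print(Orchestrator().handle("Summarize the Q3 report"))
```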
Definition: Task I/O refers to the manner in which agents receive task input (prompts) and independently process it to produce task responses. Agents use a TaskMessage interface to handle task requests and return responses, including status updates and outputs. Each agent receives a TaskMessage containing a natural-language instruction describing the task and the input data to be processed.
Unix Files Analogy: In Unix, processes communicate by reading and writing to files, with file metadata providing important details about the file's purpose and content. Similarly, in a multi-agent system, agents use TaskMessages to communicate. Each TaskMessage contains a natural language instruction (acting like file metadata) that describes the task to be performed, and Task Input data (analogous to file content) that the agent processes. After processing, agents generate Task Output (similar to writing back to a file), ensuring a clear, structured flow of data between agents.
Workflow Management: This model allows for streamlined task management where each task is treated as a unit of work, ensuring tasks can be queued, executed, and managed efficiently.
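A minimal sketch of what such a TaskMessage might look like; the field names are assumptions based on the description above, not a prescribed schema:

```python
from dataclasses import dataclass
from typing import Any, Optional

# Mirrors the Unix-file analogy: instruction is the "metadata", task_input
# the "file content", and task_output what the agent "writes back".
@dataclass
class TaskMessage:
    instruction: str                  # natural-language description of the task
    task_input: Any                   # data the agent processes
    status: str = "pending"           # e.g. pending -> running -> done
    task_output: Optional[Any] = None

def run_agent(msg: TaskMessage) -> TaskMessage:
    msg.status = "running"
    # ... agent-specific processing of msg.task_input happens here ...
    msg.task_output = f"processed: {msg.task_input}"
    msg.status = "done"
    return msg

result = run_agent(TaskMessage("Sort these names", ["carol", "alice", "bob"]))
```

Because every task is a self-describing message, tasks can be queued, retried, and audited as uniform units of work.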
Definition: Agents use both local memory (specific to a single agent) and global memory (shared across all agents) to manage state and data.
Unix Process Memory Comparison: In Unix, each process has its own memory space but can access shared memory for inter-process communication. Similarly, agents maintain their own state but can also access a shared state for coordination and data sharing.
State Management: This dual-memory approach enhances the ability to manage state efficiently and allows agents to share important information without compromising their autonomy.
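A minimal sketch of the dual-memory model, using a plain dict as the shared global store (in practice this would be a database or key-value service; the names are illustrative):

```python
# Each agent keeps private local memory; a shared global store enables
# coordination without exposing an agent's internal state.
GLOBAL_MEMORY: dict = {}

class Agent:
    def __init__(self, name: str):
        self.name = name
        self.local_memory: dict = {}   # visible only to this agent

    def remember(self, key: str, value):
        self.local_memory[key] = value

    def publish(self, key: str, value):
        # Share state other agents may need, namespaced by agent name.
        GLOBAL_MEMORY[f"{self.name}/{key}"] = value

planner, executor = Agent("planner"), Agent("executor")
planner.remember("draft", "private plan v1")   # stays local to the planner
planner.publish("plan", "step1 -> step2")      # visible to the executor
print(GLOBAL_MEMORY["planner/plan"])
```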
Definition:
An execution context is the logical environment where an agent operates—defined by the resources, APIs, and rules it exposes. Think of it as a lightweight runtime substrate, akin to a container but for task-oriented agents.
Analogy to Unix and Containers:
In Unix, processes interact with hardware via the kernel; containers abstract this further, providing isolated environments atop shared systems. Similarly, agents run within execution contexts that define what they can access, observe, and control—be it a UI, database, or API.
Types of Contexts:
Execution contexts vary but follow a unified interface. Examples include browser or UI contexts for interacting with applications, database contexts for querying and updating records, and API contexts for calling external services, as the sketch below illustrates.
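A sketch of what that unified interface might look like, with two hypothetical contexts; the method names and example contexts are assumptions for illustration:

```python
from abc import ABC, abstractmethod

# Every context exposes the same contract while scoping what an agent
# can observe and do inside it.
class ExecutionContext(ABC):
    @abstractmethod
    def observe(self) -> str:
        """Return the agent-visible state of this environment."""

    @abstractmethod
    def act(self, action: str) -> str:
        """Perform an action permitted inside this context."""

class BrowserContext(ExecutionContext):
    def observe(self) -> str:
        return "<current page DOM snapshot>"

    def act(self, action: str) -> str:
        return f"clicked/typed: {action}"

class DatabaseContext(ExecutionContext):
    def observe(self) -> str:
        return "schema: users(id, name), orders(id, total)"

    def act(self, action: str) -> str:
        return f"executed query: {action}"
```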
Execution Context vs. Control Plane:
Like cloud systems with control and data planes, agent platforms may have a central coordinator. But actual agent work—reasoning and action—happens within execution contexts, each with its own scoped capabilities and responsibilities.
Definition:
Skills are configurable, high-level actions that agents can invoke—such as querying a database, summarizing a document, or sending a notification within a bounded execution context. They operate similarly to microservices or RPC/REST API endpoints in distributed systems, serving as modular, reusable interfaces to external or internal capabilities.
Microservices Analogy:
Just like microservices expose discrete units of business logic over a network, skills expose encapsulated functions that an agent can call to perform a task. These actions might span IO-bound tasks, data transformations, or third-party integrations, all governed by clear contracts and input/output schemas.
Abstraction and Composability:
Skills are intentionally higher-level than system primitives. Instead of exposing raw, low-level operations (as system calls do), skills encapsulate meaningful domain operations. This abstraction allows agents to compose complex workflows from declarative steps, much like orchestrators build flows from service calls.
Wrapping LLM Tools as Skills:
A practical example of this abstraction is wrapping LLM tools (e.g. OpenAI function calls, plugins, or LangChain tools) as internal skills. A tool like getWeather(location) can be exposed to the LLM as a function schema, but behind the scenes it is implemented as a skill that fetches weather data from a microservice or third-party API. This design decouples how a tool is invoked (e.g. via function calling) from how it's implemented (e.g. HTTP API, internal RPC, etc.), making the system more modular and easier to extend.
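A sketch of that decoupling: the LLM sees only the function schema, while the skill behind it calls a weather service. The endpoint URL and schema shape are placeholders, not a real API:

```python
import json
import urllib.request

# The schema is what the LLM sees; it says nothing about the implementation.
GET_WEATHER_SCHEMA = {
    "name": "getWeather",
    "description": "Get the current weather for a location.",
    "parameters": {
        "type": "object",
        "properties": {"location": {"type": "string"}},
        "required": ["location"],
    },
}

def get_weather_skill(location: str) -> dict:
    """Implementation detail hidden from the LLM: an HTTP call to a service."""
    url = f"https://weather.example.com/v1/current?location={location}"  # placeholder
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# The dispatch table maps the invoked tool name to its backing skill, so the
# implementation can later switch to internal RPC without changing the schema.
SKILLS = {"getWeather": get_weather_skill}
```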
Operational Benefits:
Treating skills as microservices brings operational advantages: skills can be versioned, deployed, scaled, and monitored independently of the agents that invoke them, and a skill's implementation can change without breaking the agents that depend on its contract.
Definition: Asynchronous execution allows agents to schedule, queue, and process tasks independently, similar to background or daemon processes in Unix.
Unix Daemon Processes Analogy: In Unix, daemon processes run in the background and perform tasks without user intervention. Agents, through asynchronous execution, can independently manage tasks, enhancing system scalability and flexibility.
Scalability and Flexibility: Asynchronous execution enables agents to handle large volumes of tasks concurrently, improving overall efficiency and responsiveness of the system.
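A minimal asyncio sketch of this pattern, with two agents draining a shared task queue concurrently, much like background daemons handling jobs:

```python
import asyncio

async def agent_worker(name: str, queue: asyncio.Queue) -> None:
    while True:
        task = await queue.get()
        await asyncio.sleep(0.1)            # stand-in for real task processing
        print(f"{name} finished: {task}")
        queue.task_done()

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    for i in range(5):
        queue.put_nowait(f"task-{i}")
    # Two agents drain the queue concurrently; neither blocks the other.
    workers = [asyncio.create_task(agent_worker(f"agent-{n}", queue)) for n in (1, 2)]
    await queue.join()                      # wait until every task is processed
    for w in workers:
        w.cancel()

asyncio.run(main())
```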
Definition: Dynamic agent instantiation refers to the system's ability to automatically create and configure new agents based on the needs of a task or workflow. Rather than requiring pre-defined agents, the platform can spawn new agents on the fly, assigning them roles, objectives, and context as needed.
Novelty vs. Unix & Kubernetes:
In Unix, new processes are forked by human-written programs and scripts; in Kubernetes, new pods come from human-authored manifests. Here, the system itself decides when a new agent is needed and provisions it with a role, objective, and context, so the topology of the workflow is no longer fixed in advance.
Benefits:
Workflows can scale and specialize on demand: agents are created only when a task calls for them, configured for that task, and retired afterward, reducing manual setup and letting the system adapt its structure to the work at hand, as the sketch below shows.
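A toy sketch of spawning an agent from a task's requirements; the role-inference rule here is a stand-in for the reasoning an LLM-based planner would perform:

```python
# The system spawns and configures a new agent from a task description
# instead of relying on a pre-defined one.
class Agent:
    def __init__(self, role: str, objective: str, context: dict):
        self.role, self.objective, self.context = role, objective, context

    def run(self) -> str:
        return f"[{self.role}] working on: {self.objective}"

def spawn_agent_for(task: str, context: dict) -> Agent:
    # Toy heuristic: a planner or LLM would choose the role in practice.
    role = "researcher" if "find" in task.lower() else "writer"
    return Agent(role=role, objective=task, context=context)

agent = spawn_agent_for("Find recent papers on multi-agent systems", {"team": "R&D"})
print(agent.run())
```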
Definition: Agent governance defines the rules that constrain what agents are allowed to do, where they can act, and how they can interact with each other or share resources. This mirrors Access Control Lists (ACLs) in Unix, but applied to agent capabilities and communication channels.
Drawing from Unix ACLs:
Just as Unix ACLs specify which users and groups may read, write, or execute a file, agent governance specifies which agents may invoke which skills, act in which execution contexts, and communicate over which channels.
Governance Structures:
Typical structures include fine-grained permissions on skills and resources, role-based constraints on what each agent may do, and scoped memory that limits what an agent can read or write.
Why it matters:
Without governance, agents that can spawn other agents and invoke skills autonomously would be difficult to audit or contain. With it, collaboration remains safe, auditable, and aligned with organizational policy, as in the policy-check sketch below.
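A minimal sketch of an ACL-style policy check performed before an agent invokes a skill; the roles, skills, and scopes are illustrative, not a prescribed schema:

```python
# Policy table keyed by role, in the spirit of Unix ACLs: each role lists the
# skills it may invoke and the memory scope it may touch.
POLICY = {
    "researcher": {"skills": {"search_web", "read_docs"}, "memory_scope": "team/r&d"},
    "writer":     {"skills": {"read_docs", "draft_text"}, "memory_scope": "team/r&d"},
}

def authorize(role: str, skill: str) -> None:
    allowed = POLICY.get(role, {}).get("skills", set())
    if skill not in allowed:
        raise PermissionError(f"role '{role}' may not invoke '{skill}'")

authorize("researcher", "search_web")   # permitted
# authorize("writer", "search_web")     # would raise PermissionError
```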
Conclusion:
Agents represent more than a new type of automation—they represent a fundamental shift in how we build intelligent systems.
If Unix gave us pipelines of deterministic programs, and Kubernetes gave us clusters of scalable containers, the agentic paradigm gives us self-evolving networks of autonomous workers—capable of recursive task decomposition, adaptive orchestration, and collaborative intelligence.
By fusing classic computing abstractions with modern AI capabilities, agentic systems redefine what’s possible in enterprise automation, software design, and digital operations. They unlock autonomy and adaptability at the “system” level, not just at the component level.
We’re no longer writing scripts or configuring containers—we’re defining behaviors, permissions, and environments for agents that think, learn, and act.
The implications are profound—and we are only beginning to explore what this new era makes possible.