Dorothy

Orchestrate multiple AI CLI agents with automations and MCP servers

Category: development · Source: Charlie85270/Dorothy

What Is This?

Overview

Dorothy is an open-source orchestration framework designed to coordinate multiple AI command-line interface agents, automations, and Model Context Protocol (MCP) servers within a unified workflow. Built by Charlie85270, Dorothy provides developers and AI engineers with a structured way to define, connect, and run complex multi-agent pipelines directly from the terminal. Rather than managing each agent independently, Dorothy acts as a central conductor that routes tasks, shares context, and sequences operations across your entire AI toolchain.

The framework draws its design philosophy from the idea that modern AI workflows rarely involve a single model or tool. Production-grade applications often require chaining language models, retrieval systems, code executors, and external APIs together in a reliable and repeatable way. Dorothy addresses this by providing a configuration-driven approach where agents, triggers, and MCP server connections are declared in structured files and executed through a consistent CLI interface.

Dorothy integrates natively with the Model Context Protocol, an emerging standard for connecting AI models to external tools and data sources. This means you can register MCP servers alongside traditional CLI agents and have Dorothy manage communication, context passing, and error handling across all of them in a single orchestration layer.

Who Should Use This

  • AI engineers building multi-step pipelines that involve more than one language model or tool
  • Backend developers who want to automate complex workflows combining AI agents with system commands
  • DevOps practitioners integrating AI capabilities into CI/CD pipelines and scheduled jobs
  • Researchers experimenting with multi-agent architectures who need a lightweight local orchestration layer
  • Platform engineers responsible for standardizing how AI tools are deployed and coordinated across teams
  • Developers already working with MCP servers who need a reliable way to manage multiple server connections

Why Use It?

Problems It Solves

  • Managing multiple AI agents manually leads to fragmented scripts, inconsistent context passing, and difficult debugging. Dorothy centralizes this coordination.
  • MCP server connections are often handled ad hoc, with no standard way to register, start, or route requests between them. Dorothy provides a structured registry for this.
  • Automation triggers for AI workflows are typically hardcoded into application logic, making them brittle and hard to modify. Dorothy externalizes these triggers into configuration.
  • Running multi-agent workflows locally requires significant boilerplate. Dorothy reduces setup time by providing a ready-to-use orchestration runtime.
  • Debugging failures in chained AI pipelines is difficult without visibility into each step. Dorothy provides structured logging across all agents and servers.

Core Highlights

  • Configuration-driven agent and MCP server registration
  • Support for sequential and parallel agent execution
  • Native Model Context Protocol integration
  • Automation triggers based on events, schedules, or manual invocation
  • Shared context passing between agents in a pipeline
  • CLI-first design with no required GUI or cloud dependency
  • Extensible architecture for adding custom agents and connectors
  • Structured logging for observability across all orchestrated components
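To make these highlights concrete, a dorothy.config.json fragment combining parallel execution with a schedule trigger might look like the sketch below. The `parallel` and `triggers` keys are illustrative assumptions about the schema, not confirmed syntax; check the project's documentation for the exact field names.

```json
{
  "pipelines": [
    {
      "name": "nightly-report",
      "steps": [
        { "parallel": ["summarizer", "classifier"] },
        "formatter"
      ]
    }
  ],
  "triggers": [
    { "pipeline": "nightly-report", "schedule": "0 2 * * *" }
  ]
}
```

The intent is that the two agents in the `parallel` step run concurrently, with their outputs merged into the shared context before the sequential `formatter` step, and the trigger fires the pipeline daily at 02:00 using a standard cron expression.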

How to Use It?

Basic Usage

Install Dorothy and initialize a project:

npm install -g dorothy-cli
dorothy init my-project
cd my-project

Define your agents and MCP servers in the dorothy.config.json file:

{
  "agents": [
    { "name": "summarizer", "command": "llm summarize --input {{input}}" },
    { "name": "formatter", "command": "llm format --style markdown" }
  ],
  "mcpServers": [
    { "name": "file-reader", "url": "http://localhost:3100" }
  ],
  "pipelines": [
    { "name": "process-doc", "steps": ["file-reader", "summarizer", "formatter"] }
  ]
}

Run a pipeline:

dorothy run process-doc --input ./document.txt

Specific Scenarios

Scenario 1: Scheduled document processing. Configure Dorothy to trigger a pipeline on a cron schedule, pulling files from a directory, processing them through an AI summarizer, and writing results to an output folder.
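A trigger for this scenario might be declared roughly as follows; the `schedule`, `watchDir`, and `outputDir` keys are hypothetical names chosen for illustration and may differ in the actual schema.

```json
{
  "triggers": [
    {
      "pipeline": "process-doc",
      "schedule": "*/30 * * * *",
      "watchDir": "./inbox",
      "outputDir": "./processed"
    }
  ]
}
```

Here the cron expression runs the pipeline every 30 minutes over any new files found in the watched directory.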

Scenario 2: Event-driven agent chaining. Set up Dorothy to listen for webhook events and automatically route incoming payloads through a sequence of agents that classify, enrich, and store the data.
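Assuming Dorothy supports an event-style trigger type, this scenario could be sketched as the config below. The `type`, `path`, and agent names (`classifier`, `enricher`, `store`) are all assumptions for the sake of the example.

```json
{
  "triggers": [
    {
      "pipeline": "classify-enrich-store",
      "type": "webhook",
      "path": "/hooks/ingest"
    }
  ],
  "pipelines": [
    {
      "name": "classify-enrich-store",
      "steps": ["classifier", "enricher", "store"]
    }
  ]
}
```

In this sketch, an incoming POST to the webhook path would become the initial pipeline input, then flow through each agent in sequence via shared context.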

Real-World Examples

  • A development team uses Dorothy to run code review agents and documentation generators as part of their pull request automation workflow.
  • A data team orchestrates multiple retrieval-augmented generation agents through Dorothy to process nightly data dumps and generate structured reports.

Important Notes

Requirements

  • Node.js 18 or higher installed on the host machine
  • At least one configured AI CLI agent or MCP server to orchestrate
  • Basic familiarity with JSON configuration files and terminal usage