A web-based automation tool for orchestrating and scheduling sequences of LLM interactions through the Goose CLI tool, with support for MCP server extensions.
Maistro allows you to:
- Define multiple prompts (saved as individual .md files)
- Execute them in order, waiting for each response
- Schedule these "runs" ahead of time (via crontab)
- Configure and use MCP servers for extended functionality
Each configuration allows you to define:
- A name for the configuration
- Any number of prompts in sequence
- MCP server extensions for each prompt
- A schedule (daily, weekly, or monthly)
To run Maistro, you'll need:
- Node.js and npm
- Goose CLI tool (confirmed to work with v1.0.7+)
- Access to the system crontab (for scheduling)
- Docker and Docker Compose (for containerized deployment)
To run locally:

- Clone this repository:

  ```bash
  git clone https://github.com/forayconsulting/maistro.git
  cd maistro
  ```

- Install dependencies:

  ```bash
  npm install
  ```

- Start the application:

  ```bash
  npm start
  ```

- Open your browser to http://localhost:3000
To run with Docker:

- Clone this repository:

  ```bash
  git clone https://github.com/forayconsulting/maistro.git
  cd maistro
  ```

- Build and run using Docker Compose:

  ```bash
  # Build the Docker image for your current platform
  ./scripts/build-local.sh

  # Start the container (with interactive configuration)
  ./scripts/run-local.sh

  # Or run in detached mode
  ./scripts/run-local.sh --detached
  ```

- Open your browser to http://localhost:3000
When you run Maistro for the first time using run-local.sh, you'll be guided through a configuration process:
```mermaid
flowchart TD
    A[Start run-local.sh] --> B{Config exists?}
    B -->|Yes| C[Read config]
    B -->|No| D[Interactive setup]
    D --> E[Create config file]
    E --> C
    C --> F{Volume type?}
    F -->|Docker volume| G[Use docker volume]
    F -->|Host directory| H[Mount host directory]
    G --> I[Start container]
    H --> I
```
You can choose between two options for data persistence:

- Docker-managed volume (default):
  - Easier to manage
  - Data is managed by Docker
  - Ideal for most users

- Custom host directory:
  - Mount a specific directory from your host system
  - Direct access to configuration files
  - Useful for advanced users or for sharing configurations

Your choice is saved in a .maistro-docker-config file for future runs.
```bash
# Skip configuration prompts (use existing or default)
./scripts/run-local.sh --skip-config

# Reset configuration and prompt again
./scripts/run-local.sh --reset-config

# Show help and all available options
./scripts/run-local.sh --help
```

The Docker build supports both ARM64 and AMD64 architectures:
```bash
# Build the Docker image for your current platform
./scripts/build-local.sh

# Build with verbose output
./scripts/build-local.sh --verbose

# Build for a specific platform
./scripts/build-local.sh --platform="linux/amd64"

# Build for multiple architectures (requires Docker registry setup)
./scripts/build-local.sh --multi-arch
```

By default, the build script will detect your current platform and build only for that architecture. Use the --multi-arch flag to build for both ARM64 and AMD64 architectures simultaneously.
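Multi-arch Docker builds are typically done with Docker Buildx; a hand-rolled equivalent of what the script automates might look like this (illustrative only: the builder name is arbitrary, `<registry>` is a placeholder, and the script's exact invocation may differ):

```
# One-time: create and select a buildx builder
docker buildx create --name maistro-builder --use

# Build for both architectures and push (pushing requires a registry)
docker buildx build --platform linux/amd64,linux/arm64 \
  -t <registry>/maistro:latest --push .
```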
To create a configuration:

- Click "+ New Configuration"
- Enter a name
- Add prompts using the "Add Prompt" button
- For each prompt, optionally select a specific model, or use the default
- Optionally configure a schedule
- Click "Save"
To run a configuration:

- Select a configuration from the list
- Click "Run Now" in the execution panel
- View real-time output in the terminal window
To schedule a configuration:

- Select a configuration
- Enable scheduling
- Choose a frequency (daily, weekly, or monthly)
- Set the time and day(s) as needed
- Save the configuration
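Behind the scenes, each schedule maps to a standard crontab entry. The examples below show the cron expressions for each frequency; the actual command Maistro installs is elided as a placeholder:

```
# Daily at 09:00
0 9 * * * <maistro run command>

# Weekly on Monday at 09:00
0 9 * * 1 <maistro run command>

# Monthly on the 1st at 09:00
0 9 1 * * <maistro run command>
```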
Maistro allows you to specify which LLM model should process each prompt:
- Navigate to the "Models" tab
- Enter your OpenRouter API key
- Set a default model
- Add or remove models as needed
- When creating prompts, you can select a specific model for each prompt or use the default
Out of the box, the included models are:
- anthropic/claude-3.7-sonnet:thinking
- anthropic/claude-3.7-sonnet
- openai/o3-mini-high
- openai/gpt-4o-2024-11-20
You can add any model supported by OpenRouter.
Maistro uses a sophisticated approach to model switching that preserves session context:

- Initial Setup: Configure your OpenRouter API key once through the Models tab.

- Per-prompt Model Selection: Each prompt in a configuration can use either:
  - The default model (configured in the Models tab)
  - A specific model chosen from the dropdown in the prompt editor

- Dynamic Model Switching: When executing a configuration with multiple prompts:
  - Maistro automatically updates the Goose configuration file before each prompt
  - For the first prompt, Maistro starts a new session with the appropriate model
  - For subsequent prompts, Maistro uses the --resume flag to maintain context
  - This allows switching models mid-conversation while preserving context

- Technical Implementation:
  - Maistro directly updates the GOOSE_MODEL parameter in the Goose YAML config
  - This approach is more reliable than interactive configuration
  - All API keys and models are stored securely in the Maistro data directory

This implementation allows for sophisticated workflows where different prompts can leverage the strengths of different models while maintaining a coherent conversation throughout the execution.
Model Context Protocol (MCP) servers enable LLMs to interact with external systems and APIs. Maistro allows you to:

- Configure MCP Servers:
  - Navigate to the "MCP Servers" tab
  - Click "+ New MCP Server"
  - Enter the server details:
    - Name: A descriptive name for the server
    - Command: The executable (e.g., node)
    - Arguments: The script path (e.g., /path/to/server.js)
    - Environment Variables: Any required API keys or configuration
  - Optionally set "Enable by default for new prompts"
  - Save the MCP server configuration

- Assign MCP Servers to Prompts:
  - In the configuration editor, each prompt has an "MCP Servers" button
  - Click this button to open the server selection dialog
  - Check the servers you want to enable for this prompt
  - Click "Apply" to save your selection
When the prompt runs, Maistro will automatically include the selected MCP servers as --with-extension parameters to the Goose CLI, giving your prompts access to the tools and resources provided by those servers.
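A prompt with two servers enabled might produce an invocation shaped roughly like this (illustrative: the file paths, the -i flag, and the exact flag layout are assumptions, and Goose's CLI flags can vary by version):

```
goose run \
  --with-extension "node /path/to/server.js" \
  --with-extension "node /path/to/other-server.js" \
  -i prompts/config1_prompt_0.md
```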
Maistro provides a comprehensive REST API that allows you to programmatically interact with the application. The API endpoints include:
- Configurations: Create, read, update, and delete configurations
- Folders: Manage folder structure for organizing configurations
- Execution: Run configurations programmatically
- MCP Servers: Manage MCP server definitions
- Models: Configure LLM models and API keys
Maistro includes interactive API documentation powered by Swagger UI. You can access the documentation at:
http://localhost:3000/api-docs
The documentation provides:
- Complete list of all available endpoints
- Request and response schemas
- "Try it out" functionality to test API calls directly from the browser
- Code examples in various languages
This makes it easy to integrate Maistro with other tools and systems or build custom interfaces.
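For example, from the command line (the routes below are assumptions for illustration; consult the Swagger UI at /api-docs for the actual paths and schemas):

```
# List configurations (assumed route)
curl http://localhost:3000/api/configurations

# Trigger a run for one configuration (assumed route)
curl -X POST http://localhost:3000/api/configurations/<id>/run
```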
Maistro uses:
- Node.js with Express for the backend
- WebSockets for real-time execution feedback
- File-based storage for configurations, prompts, and MCP server definitions
- Integration with Goose CLI's extension system for MCP servers
- System crontab for scheduling
- Swagger UI for API documentation
Maistro stores all its data in the /app/data directory inside the container, including:

```
/app/data/
├── configs.json        # All saved configurations
├── mcp-servers.json    # MCP server definitions
├── models.json         # Model settings and API keys
└── prompts/            # Individual prompt files
    ├── config1_prompt_0.md
    ├── config1_prompt_1.md
    └── ...
```
When running with Docker, this data is persisted in one of two ways:
```mermaid
flowchart TD
    A[Maistro Container] -->|/app/data| B{Volume Type}
    B -->|Docker Volume| C[Docker-managed Volume]
    B -->|Host Directory| D[Custom Host Directory]
    C -->|Managed by Docker| E[maistro-data]
    D -->|Direct Access| F[User-specified Path]
```
- Docker-managed Volume (default):
  - Data is stored in a Docker volume named maistro-data
  - Managed automatically by Docker
  - Persists across container restarts and rebuilds
  - Can be backed up using Docker volume commands

- Custom Host Directory:
  - Data is stored in a directory on your host system
  - Provides direct access to configuration files
  - Easier to back up or version control
  - Can be shared between different installations
The volume configuration is managed by the run-local.sh script, which creates a Docker Compose override file based on your preferences.
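For reference, an override for the host-directory option could look like the fragment below (the service name and host path are assumptions; the generated file on your machine will reflect your own choices):

```yaml
# docker-compose.override.yml (illustrative)
services:
  maistro:
    volumes:
      - /path/on/host/maistro-data:/app/data
```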
MIT