Core developer framework of SpoonOS: Agentic OS for the sentient economy. Next-Generation AI Agent Framework | Powerful Interactive CLI | Optimized Web3 Infrastructure Support
This README is your guide to getting started with the SpoonOS Core Developer Framework (SCDF). It walks you through everything you need, from understanding core capabilities to actually running your own agents.
Here's how to navigate it:
- ✨ Features: Start here to understand what SpoonOS can do. This section gives you a high-level overview of its agentic, composable, and interoperable architecture.
- 🔧 Installation: As of June 2025, SpoonOS currently supports Python only. This section tells you which Python version to use and how to set up a virtual environment.
- 🔑 Environment & API Key Config: Learn how to configure the API keys for various LLMs (e.g., OpenAI, Claude, DeepSeek). We also provide configuration methods for Web3 infrastructure such as chains, RPC endpoints, databases, and blockchain explorers.
- 🚀 Quick Start: Once your environment is ready, start calling our MCP server, which bundles a wide range of tools. Other servers are also available.
- 🛠️ CLI Tools: This section shows how to use the CLI to run LLM-powered tasks with ease.
- 🧩 Agent Framework: Learn how to create your own agents, register custom tools, and extend SpoonOS with minimal setup.
- 🔌 API Integration: Plug in external APIs to enhance your agent workflows.
- 🤝 Contributing: Want to get involved? Check here for contribution guidelines.
- 📄 License: Standard license information.
By the end of this README, you'll not only understand what SCDF is, but you'll be ready to build and run your own AI agents, and you'll come away with ideas for the scenarios SCDF could empower. Have fun!
SpoonOS is a living, evolving agentic operating system. Its SCDF is purpose-built to meet the growing demands of Web3 developers, offering a complete toolkit for building sentient, composable, and interoperable AI agents.
- 🧠 ReAct Intelligent Agent - Advanced agent architecture combining reasoning and action
- 🔧 Custom Tool Ecosystem - Modular tool system for easily extending agent capabilities
- 💬 Multi-Model Support - Compatible with major large language models including OpenAI, Anthropic, and DeepSeek, as well as Web3 fine-tuned LLMs
- 🏗️ Unified LLM Architecture - Extensible provider system with automatic fallback, load balancing, and comprehensive monitoring
- ⚡ Prompt Caching - Intelligent caching for Anthropic models to reduce token costs and improve response times
- 🌐 Web3-Native Interoperability - Enables AI agents to communicate and coordinate across ecosystems via DID and ZKML-powered interoperability protocols
- 🔌 MCP (Model Context Protocol) - Dynamic, protocol-driven tool invocation system. Agents can discover and execute tools at runtime over stdio, HTTP, or WebSocket transports, without hardcoding or restarts
- 📡 Scalable Data Access - Combined with MCP, agents gain seamless access to structured and unstructured data, including databases, Web3 RPCs, external APIs, and more
- 💻 Interactive CLI - Feature-rich command-line interface
- 🔄 State Management - Comprehensive session history and state persistence
- 🧩 Composable Agent Logic - Create agents that can sense, reason, plan, and execute modularly, enabling use cases across DeFi, the creator economy, and more
- 🎯 Easy to Use - Well-designed API for rapid development and integration
- Python 3.10+
- pip package manager (or uv as a faster alternative)
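A quick check that your toolchain meets these prerequisites:

```bash
python --version   # should report Python 3.10 or newer
pip --version
```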
```bash
# Clone the repo
$ git clone https://github.com/XSpoonAi/spoon-core.git
$ cd spoon-core

# Create a virtual environment
$ python -m venv spoon-env
$ source spoon-env/bin/activate  # For macOS/Linux

# Install dependencies
$ pip install -r requirements.txt
```

Prefer a faster install? See docs/installation.md for the uv-based setup.
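For orientation, the uv route looks roughly like the following sketch (assuming uv is already installed; docs/installation.md is the authoritative reference):

```bash
# Create and activate a virtual environment with uv, then install dependencies
uv venv spoon-env
source spoon-env/bin/activate
uv pip install -r requirements.txt
```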
Create a .env file in the root directory:
```bash
cp .env.example .env
```

Fill in your keys:

```env
OPENAI_API_KEY=sk-your-openai-key
ANTHROPIC_API_KEY=sk-your-claude-key
DEEPSEEK_API_KEY=your-deepseek-key
GEMINI_API_KEY=your-gemini-api-key
PRIVATE_KEY=your-wallet-private-key
RPC_URL=https://mainnet.rpc
CHAIN_ID=12345
```

Then in your Python entry file:
```python
from dotenv import load_dotenv

load_dotenv(override=True)
```

For advanced config methods (CLI setup, config.json, PowerShell), see docs/configuration.md.
SpoonOS uses a hybrid configuration system that combines a .env file for initial setup with a dynamic config.json for runtime settings. This provides flexibility for both static environment setup and on-the-fly adjustments via the CLI.
The configuration is loaded with the following priority:
- config.json (highest priority): This file is the primary source of configuration at runtime. If it exists, its values are used directly, overriding any corresponding environment variables set in .env. You can modify this file using the config command in the CLI.
- Environment variables (.env, lowest priority): This file is used for initial setup. On the first run, if config.json is not found, the system reads the variables from your .env file to generate a new config.json. Any changes to .env after config.json has been created will not be reflected unless you delete config.json and restart the application.
This model ensures that sensitive keys and environment-specific settings are kept in .env (which should not be committed to version control), while config.json handles user-level customizations and runtime state.
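One practical consequence: to have later edits to .env picked up again, delete the generated config.json and restart, for example:

```bash
# config.json wins once it exists; remove it so the next run regenerates it from .env
rm config.json
python main.py
```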
The config.json file manages agent and API settings. Below are the supported parameters:
| Parameter | Type | Description | Default |
|---|---|---|---|
| `api_keys` | object | A dictionary containing API keys for different LLM providers (e.g., `openai`, `anthropic`, `deepseek`). | `{}` |
| `base_url` | string | The base URL for the API endpoint, particularly useful for custom or proxy servers like OpenRouter. | `""` |
| `default_agent` | string | The default agent to use for tasks. | `"default"` |
| `llm_provider` | string | The name of the LLM provider to use (e.g., `openai`, `anthropic`). Overrides provider detection from the model name. | `"openai"` |
| `model_name` | string | The specific model to use for the selected provider (e.g., `gpt-4.1`, `claude-sonnet-4-20250514`). | `null` |
Here is an example config.json where a user wants to use OpenAI. You only need to provide the key for the service you intend to use.
```json
{
  "api_keys": {
    "openai": "sk-proj-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
  },
  "base_url": "https://api.openai.com/v1",
  "default_agent": "default",
  "llm_provider": "openai",
  "model_name": "gpt-4.1"
}
```

SpoonOS features a unified LLM infrastructure that provides seamless integration with multiple providers, automatic fallback mechanisms, and comprehensive monitoring.
- Provider Agnostic: Switch between OpenAI, Anthropic, Gemini, and custom providers without code changes
- Automatic Fallback: Built-in fallback chains ensure high availability
- Load Balancing: Distribute requests across multiple provider instances
- Comprehensive Monitoring: Request logging, performance metrics, and error tracking
- Easy Extension: Add new providers with minimal code
Basic usage:

```python
from spoon_ai.llm import LLMManager, ConfigurationManager

# Initialize the LLM manager
config_manager = ConfigurationManager()
llm_manager = LLMManager(config_manager)

# Simple chat request (uses the default provider)
# Note: the await calls below assume an async context
response = await llm_manager.chat([
    {"role": "user", "content": "Hello, world!"}
])
print(response.content)

# Use a specific provider
response = await llm_manager.chat(
    messages=[{"role": "user", "content": "Hello!"}],
    provider="anthropic"
)

# Chat with tools
tools = [{"name": "get_weather", "description": "Get weather info"}]
response = await llm_manager.chat_with_tools(
    messages=[{"role": "user", "content": "What's the weather?"}],
    tools=tools,
    provider="openai"
)
```

Configure providers in your config.json:
```json
{
  "llm_providers": {
    "openai": {
      "api_key": "sk-your-openai-key",
      "model": "gpt-4.1",
      "max_tokens": 4096,
      "temperature": 0.3
    },
    "anthropic": {
      "api_key": "sk-ant-your-key",
      "model": "claude-sonnet-4-20250514",
      "max_tokens": 4096,
      "temperature": 0.3
    },
    "gemini": {
      "api_key": "your-gemini-key",
      "model": "gemini-2.5-pro",
      "max_tokens": 4096
    }
  },
  "llm_settings": {
    "default_provider": "openai",
    "fallback_chain": ["openai", "anthropic", "gemini"],
    "enable_monitoring": true,
    "enable_caching": true
  }
}
```

```python
# Set up a fallback chain
llm_manager.set_fallback_chain(["openai", "anthropic", "gemini"])

# The manager will automatically try providers in order if one fails
response = await llm_manager.chat([
    {"role": "user", "content": "Hello!"}
])
# If OpenAI fails, it will try Anthropic, then Gemini
```

Add a custom provider:

```python
from spoon_ai.llm import LLMProviderInterface, LLMResponse, register_provider
@register_provider("custom", capabilities=["chat", "completion"])
class CustomProvider(LLMProviderInterface):
    async def initialize(self, config):
        self.api_key = config["api_key"]
        # Initialize your provider

    async def chat(self, messages, **kwargs):
        # Implement chat functionality
        return LLMResponse(
            content="Custom response",
            provider="custom",
            model="custom-model",
            finish_reason="stop"
        )

    # Implement other required methods...
```

Monitor provider performance:

```python
from spoon_ai.llm import get_debug_logger, get_metrics_collector
# Get monitoring instances
debug_logger = get_debug_logger()
metrics = get_metrics_collector()

# View provider statistics
stats = metrics.get_provider_stats("openai")
print(f"Success rate: {stats['success_rate']:.1f}%")
print(f"Average response time: {stats['avg_response_time']:.2f}s")

# Get recent logs
logs = debug_logger.get_recent_logs(limit=10)
for log in logs:
    print(f"{log.timestamp}: {log.provider} - {log.method}")
```

Create agents with specific providers:

```python
from spoon_ai.chat import ChatBot
from spoon_ai.agents import SpoonReactAI

# Using OpenAI's GPT-4.1
openai_agent = SpoonReactAI(
    llm=ChatBot(model_name="gpt-4.1", llm_provider="openai")
)

# Using Anthropic's Claude
claude_agent = SpoonReactAI(
    llm=ChatBot(model_name="claude-sonnet-4-20250514", llm_provider="anthropic")
)

# Using OpenRouter (OpenAI-compatible API)
# Uses the OPENAI_API_KEY environment variable with your OpenRouter API key
openrouter_agent = SpoonReactAI(
    llm=ChatBot(
        model_name="anthropic/claude-sonnet-4",  # Model name from OpenRouter
        llm_provider="openai",                   # MUST be "openai"
        base_url="https://openrouter.ai/api/v1"  # OpenRouter API endpoint
    )
)
```

```bash
# Start the MCP server with all available tools
python -m spoon_ai.tools.mcp_tools_collection

# The server will start and display:
# MCP Server running on stdio transport
# Available tools: [list of tools]
```

Run the interactive CLI:

```bash
python main.py
```

Try chatting with your agent:

```
> action chat
> Hello, Spoon!
```

Define a custom tool by subclassing BaseTool:

```python
from spoon_ai.tools.base import BaseTool
class MyCustomTool(BaseTool):
    name: str = "my_tool"
    description: str = "Description of what this tool does"
    parameters: dict = {
        "type": "object",
        "properties": {
            "param1": {"type": "string", "description": "Parameter description"}
        },
        "required": ["param1"]
    }

    async def execute(self, param1: str) -> str:
        # Tool implementation
        return f"Result: {param1}"
```

Wire it into an agent:

```python
from spoon_ai.agents import ToolCallAgent
from spoon_ai.tools import ToolManager
from pydantic import Field

class MyAgent(ToolCallAgent):
    name: str = "my_agent"
    description: str = "Agent description"
    system_prompt: str = "You are a helpful assistant..."
    max_steps: int = 5
    available_tools: ToolManager = Field(
        default_factory=lambda: ToolManager([MyCustomTool()])
    )
```

Run it:

```python
import asyncio
from spoon_ai.chat import ChatBot

async def main():
    agent = MyAgent(llm=ChatBot())
    result = await agent.run("Say hello to Scarlett")
    print("Result:", result)

if __name__ == "__main__":
    asyncio.run(main())
```

Register your own tools, override run(), or extend with MCP integrations. See docs/agent.md or docs/mcp_mode_usage.md.
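For example, a subclass can override run() to add logging or pre/post-processing around each request. A minimal sketch, assuming run() accepts the request string and returns the result as in the example above:

```python
class LoggingAgent(MyAgent):
    """Hypothetical subclass that wraps run() with simple logging."""

    async def run(self, request: str) -> str:
        print(f"[LoggingAgent] request: {request}")
        result = await super().run(request)  # delegate to the normal agent loop
        print(f"[LoggingAgent] result: {result}")
        return result
```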
📖 Full guide
📄 Example agent
SpoonOS supports runtime-pluggable agents using the MCP (Model Context Protocol), allowing your agent to connect to a live tool server (via SSE/WebSocket/HTTP) and call tools like get_contract_events or get_wallet_activity with no extra code.
Two ways to build MCP-powered agents:
- Built-in Agent Mode: Build and run your own MCP server (e.g., mcp_thirdweb_collection.py) and connect to it using an MCPClientMixin agent.
- Community Agent Mode: Use mcp-proxy to connect to open-source agents hosted on GitHub.
📖 Full guide
📄 MCP example
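To make Built-in Agent Mode concrete, here is a rough sketch only. MCPClientMixin and SpoonReactAI are named in this README, but the import path, connection parameter, and method shapes below are assumptions; docs/mcp_mode_usage.md documents the real interface.

```python
# Hypothetical sketch - import path and constructor argument are assumptions
import asyncio
from spoon_ai.agents import SpoonReactAI
from spoon_ai.agents.mcp_client import MCPClientMixin  # assumed location

class OnchainAgent(MCPClientMixin, SpoonReactAI):
    """Agent that discovers tools (e.g., get_wallet_activity) from a live MCP server."""
    name: str = "onchain_agent"

async def main():
    # Assumed parameter: URL of a running MCP tool server (SSE transport)
    agent = OnchainAgent(mcp_server_url="http://localhost:8765/sse")
    print(await agent.run("Show recent wallet activity for 0x..."))

asyncio.run(main())
```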
SpoonOS supports prompt caching for Anthropic models to reduce costs and improve performance. Enable/disable globally:
```python
from spoon_ai.chat import ChatBot

# Enable prompt caching (default: True); pass enable_prompt_cache=False to disable
chatbot = ChatBot(
    llm_provider="anthropic",
    enable_prompt_cache=True
)
```

Project layout:

- README.md
- .env.example
- requirements.txt
- main.py
- examples/
- spoon_ai/ (core agent framework)
- docs/