A Rust libp2p application that implements the Wetware Protocol (`/ww/0.1.0`) for peer-to-peer RPC communication with capability-based security. The node connects to the IPFS DHT network and provides secure remote execution capabilities through Cap'n Proto RPC.
- IPFS DHT Bootstrap: Automatically discovers and connects to IPFS peers from local Kubo node
- Protocol Compatibility: Uses standard IPFS protocols (`/ipfs/kad/1.0.0`, `/ipfs/id/1.0.0`) for full network compatibility
- RSA Key Support: Includes RSA support for connecting to legacy IPFS peers
- libp2p Host Creation: Generates an Ed25519 identity and listens on TCP with IPFS-compatible protocols
- DHT Operations: Participates in IPFS DHT operations (provide/query) after bootstrap
- Structured Logging: Comprehensive logging with configurable levels and performance metrics
- Kubo (IPFS) daemon running locally:

  ```bash
  kubo daemon
  ```

- Cap'n Proto compiler (version 0.5.2 or higher):

  ```bash
  # Ubuntu/Debian
  sudo apt-get install capnproto

  # macOS
  brew install capnp

  # Or build from source
  # See https://capnproto.org/install.html
  ```

- Rust toolchain (nightly required):

  ```bash
  rustup install nightly
  rustup default nightly
  ```

  Note: This project requires Rust nightly due to dependencies that use `edition2024` features. The nightly toolchain provides access to these experimental features.
The application uses a subcommand structure. The main command is `ww`, with a `run` subcommand for starting a wetware node.

```
ww <COMMAND>

Commands:
  run   Run a wetware node
  help  Print this message or the help of the given subcommand(s)
```
- Start the Kubo daemon (in a separate terminal):

  ```bash
  kubo daemon
  ```

- Run the application using the `run` subcommand:

  ```bash
  # Use defaults (http://localhost:5001, info log level)
  cargo run -- run

  # Custom IPFS endpoint
  cargo run -- run --ipfs http://127.0.0.1:5001
  cargo run -- run --ipfs http://192.168.1.100:5001

  # Custom log level
  cargo run -- run --loglvl debug
  cargo run -- run --loglvl trace

  # Combine both
  cargo run -- run --ipfs http://192.168.1.100:5001 --loglvl debug

  # Or use environment variables
  export WW_IPFS=http://192.168.1.100:5001
  export WW_LOGLVL=debug
  cargo run -- run
  ```
The `run` subcommand supports the following options:

- `--ipfs <IPFS>`: IPFS node HTTP API endpoint (e.g., http://127.0.0.1:5001)
- `--loglvl <LEVEL>`: Log level (trace, debug, info, warn, error)
- `--preset <PRESET>`: Use a preset configuration (minimal, development, production)
- `--env-config`: Use configuration from environment variables
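For illustration, such a CLI could be declared with `clap`'s derive API. This sketch covers only the `--ipfs` and `--loglvl` flags; the struct and defaults are hypothetical rather than the project's actual definitions, and the `env` attribute assumes clap's `env` feature:

```rust
use clap::{Parser, Subcommand};

#[derive(Parser)]
#[command(name = "ww")]
struct Cli {
    #[command(subcommand)]
    command: Command,
}

#[derive(Subcommand)]
enum Command {
    /// Run a wetware node
    Run {
        /// IPFS node HTTP API endpoint
        #[arg(long, env = "WW_IPFS", default_value = "http://localhost:5001")]
        ipfs: String,
        /// Log level (trace, debug, info, warn, error)
        #[arg(long, env = "WW_LOGLVL", default_value = "info")]
        loglvl: String,
    },
}

fn main() {
    let cli = Cli::parse();
    match cli.command {
        Command::Run { ipfs, loglvl } => {
            // Start the node with the resolved configuration.
            println!("running with ipfs={ipfs} loglvl={loglvl}");
        }
    }
}
```

Declaring `env = "WW_IPFS"` on the flag gives the documented precedence for free: command line first, then environment variable, then the default.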
The project includes a multi-stage Docker build for containerized deployment and distribution.
```bash
# Build the container image
make podman-build
# or
podman build -t wetware:latest .

# Run the container
make podman-run
# or
podman run --rm -it wetware:latest

# Clean up container images
make podman-clean
```
- Multi-stage build: Optimizes image size by separating build and runtime stages
- Security: Runs as a non-root user (`wetware`)
- Efficient caching: Leverages container layer caching for faster builds
- Minimal runtime: Based on Debian Bookworm slim for a smaller footprint
Note: When running the container, you'll need to use the `run` subcommand:

```bash
# Run the container with the run subcommand
podman run --rm -it wetware:latest run

# With custom options
podman run --rm -it wetware:latest run --ipfs http://host.docker.internal:5001 --loglvl debug
```
Create a `docker-compose.yml` for easy development (works with both Docker and Podman):

```yaml
version: '3.8'
services:
  wetware:
    build: .
    ports:
      - "8080:8080"
    environment:
      - WW_IPFS=http://host.docker.internal:5001
      - WW_LOGLVL=info
    volumes:
      - ./config:/app/config
    command: ["run"]  # Use the run subcommand
```
The project includes GitHub Actions workflows for automated testing, building, and publishing.
- Automated Testing: Runs on every push and pull request
- Code Quality: Includes formatting checks and clippy linting
- Release Automation: Automatically builds and publishes artifacts on releases
- Docker Integration: Builds and pushes Docker images to registry
- Artifact Publishing: Creates distributable binaries and archives
- Create a GitHub release with a semantic version tag (e.g., `v1.0.0`)
- The workflow automatically:
  - Builds the Rust application
  - Creates release artifacts (binary + tarball)
  - Builds and pushes Docker images
  - Uploads artifacts to GitHub releases
For Docker publishing, set these repository secrets:

- `DOCKER_USERNAME`: Your Docker Hub username
- `DOCKER_PASSWORD`: Your Docker Hub access token
```bash
# Test only
gh workflow run rust.yml --ref main

# Build Docker image (on main branch)
gh workflow run rust.yml --ref main
```
The application uses structured logging with the `tracing` crate. You can configure log levels using environment variables:
- `WW_IPFS`: IPFS node HTTP API endpoint (defaults to http://localhost:5001)

  ```bash
  # Use default localhost endpoint
  export WW_IPFS=http://localhost:5001

  # Use custom IPFS node
  export WW_IPFS=http://192.168.1.100:5001

  # Use remote IPFS node
  export WW_IPFS=https://ipfs.example.com:5001
  ```

- `WW_LOGLVL`: Controls the log level (trace, debug, info, warn, error)

  ```bash
  # Set log level for all components
  export WW_LOGLVL=info

  # More verbose logging
  export WW_LOGLVL=debug
  export WW_LOGLVL=trace

  # Only show warnings and errors
  export WW_LOGLVL=warn
  export WW_LOGLVL=error
  ```
- `error`: Errors that need immediate attention
- `warn`: Warnings about potential issues
- `info`: General information about application flow
- `debug`: Detailed debugging information
- `trace`: Very detailed tracing (very verbose)
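A minimal sketch of initializing such logging from `WW_LOGLVL`, assuming `tracing-subscriber` with its `env-filter` feature (the helper name is hypothetical):

```rust
use tracing_subscriber::EnvFilter;

/// Hypothetical helper: initialize structured logging from WW_LOGLVL,
/// falling back to "info" when the variable is unset.
fn init_logging() {
    let level = std::env::var("WW_LOGLVL").unwrap_or_else(|_| "info".to_string());
    tracing_subscriber::fmt()
        .with_env_filter(EnvFilter::new(level))
        .init();
}
```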
The application logs performance metrics for key operations:
- Kubo peer discovery duration
- Host setup time
- DHT bootstrap duration
- Provider announcement time
- Provider query time
- Total application runtime
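For example, a metric like `duration_ms=76` in the output below can be produced with a pattern along these lines (a sketch, not the project's actual code):

```rust
use std::time::Instant;
use tracing::info;

fn timed_discovery() {
    let start = Instant::now();
    // ... query the Kubo node for peers ...
    info!(
        duration_ms = start.elapsed().as_millis() as u64,
        "Kubo peer discovery completed"
    );
}
```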
```text
2024-01-15T10:30:00.123Z INFO ww::main{thread_id=1 thread_name="tokio-runtime-worker"}: Starting basic-p2p application
2024-01-15T10:30:00.124Z INFO ww::main{thread_id=1 thread_name="tokio-runtime-worker"}: Bootstrap Kubo node kubo_url=http://127.0.0.1:5001
2024-01-15T10:30:00.125Z INFO ww::get_kubo_peers{thread_id=1 thread_name="tokio-runtime-worker"}: Querying Kubo node for peers url=http://127.0.0.1:5001/api/v0/swarm/peers
2024-01-15T10:30:00.200Z INFO ww::get_kubo_peers{thread_id=1 thread_name="tokio-runtime-worker"}: Found peer addresses from Kubo node peer_count=5 parse_errors=0
2024-01-15T10:30:00.201Z INFO ww::main{thread_id=1 thread_name="tokio-runtime-worker"}: Kubo peer discovery completed duration_ms=76
```
```text
Starting basic-p2p application...
Bootstrap Kubo node kubo_url=http://127.0.0.1:5001
Querying Kubo node for peers url=http://127.0.0.1:5001/api/v0/swarm/peers
Found peer addresses from Kubo node peer_count=86 parse_errors=0
Found peers from Kubo node peer_count=86
Kubo peer discovery completed duration_ms=18
Generated Ed25519 keypair peer_id=12D3KooWDmTwwTyjY7kwFvY3qPPJMLaZYrs62a4xqPRByu8rczoX
Created Kademlia configuration
Set Kademlia to client mode
Built libp2p swarm
Started listening on address listen_addr=/ip4/0.0.0.0/tcp/0
Local PeerId peer_id=12D3KooWDmTwwTyjY7kwFvY3qPPJMLaZYrs62a4xqPRByu8rczoX
Adding 86 IPFS peers to Kademlia routing table
Bootstrapping DHT with IPFS peers
DHT bootstrap completed
Provider announcement completed
Provider query completed
Starting DHT event loop
Application ready! Successfully joined the IPFS DHT network
```
- Configuration: Determines IPFS endpoint from command line, environment variable, or default
- Peer Discovery: Queries the configured IPFS node's HTTP API to discover connected peers (see the sketch after this list)
- Host Creation: Generates Ed25519 keypair and creates libp2p swarm with IPFS-compatible protocols
- DHT Bootstrap: Adds discovered peers to Kademlia routing table and establishes connections
- Network Integration: Joins the IPFS DHT network and participates in DHT operations
- DHT Operations: Can provide content and query for providers in the IPFS network
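As a sketch of the peer-discovery step, the Kubo HTTP API could be queried roughly like this; the response structs are trimmed to the fields of interest and the helper name is hypothetical (Kubo's RPC API only accepts POST requests):

```rust
use serde::Deserialize;

// Trimmed response shapes for Kubo's /api/v0/swarm/peers endpoint;
// unknown fields in the JSON are ignored during deserialization.
#[derive(Deserialize)]
struct SwarmPeers {
    #[serde(rename = "Peers", default)]
    peers: Vec<PeerInfo>,
}

#[derive(Deserialize)]
struct PeerInfo {
    #[serde(rename = "Addr")]
    addr: String,
    #[serde(rename = "Peer")]
    peer: String,
}

/// Hypothetical helper: return (peer ID, multiaddress) pairs known to Kubo.
async fn get_kubo_peers(endpoint: &str) -> anyhow::Result<Vec<(String, String)>> {
    let url = format!("{endpoint}/api/v0/swarm/peers");
    let resp: SwarmPeers = reqwest::Client::new()
        .post(&url) // Kubo's RPC API requires POST
        .send()
        .await?
        .json()
        .await?;
    Ok(resp.peers.into_iter().map(|p| (p.peer, p.addr)).collect())
}
```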
The application implements a sophisticated DHT bootstrap process:

- Peer Discovery: Queries the local Kubo node's `/api/v0/swarm/peers` endpoint to discover connected peers
- Routing Table Population: Adds discovered peers to the Kademlia routing table before establishing connections (see the sketch after this list)
- Connection Establishment: Dials discovered peers to establish TCP connections
- Protocol Handshake: Performs identify and Kademlia protocol handshakes using standard IPFS protocols
- Bootstrap Trigger: Triggers the Kademlia bootstrap process to populate the routing table
- Network Participation: Begins participating in DHT operations (provide/query)
This approach ensures rapid integration into the IPFS network by leveraging the local Kubo node's peer knowledge.
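A minimal sketch of the routing-table population, dialing, and bootstrap steps, assuming a recent `libp2p` with `kad::Behaviour` (the helper and its inputs are illustrative, and exact APIs vary by libp2p version):

```rust
use libp2p::{kad, Multiaddr, PeerId, Swarm};

/// Hypothetical helper: seed the DHT from peers discovered via Kubo.
fn bootstrap_dht(
    swarm: &mut Swarm<kad::Behaviour<kad::store::MemoryStore>>,
    peers: &[(PeerId, Multiaddr)],
) -> anyhow::Result<()> {
    for (peer_id, addr) in peers {
        // Populate the routing table before connecting.
        swarm.behaviour_mut().add_address(peer_id, addr.clone());
        // Dial to establish a TCP connection; the identify and Kademlia
        // handshakes run automatically once the connection is up.
        swarm.dial(addr.clone())?;
    }
    // Trigger the Kademlia bootstrap to fill out the routing table.
    swarm.behaviour_mut().bootstrap()?;

    // Afterwards the node can participate in provide/query operations
    // (the key here is a placeholder).
    let key = kad::RecordKey::new(&b"example-key");
    swarm.behaviour_mut().start_providing(key.clone())?;
    swarm.behaviour_mut().get_providers(key);
    Ok(())
}
```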
The application is designed for full IPFS network compatibility (see the host-construction sketch below):

- Kademlia DHT: Uses the `/ipfs/kad/1.0.0` protocol for DHT operations
- Identify: Uses the `/ipfs/id/1.0.0` protocol for peer identification
- Transport: Supports TCP with Noise encryption and Yamux multiplexing
- Key Types: Supports both Ed25519 (modern) and RSA (legacy) key types
- Multiaddr: Handles standard IPFS multiaddresses with peer IDs
This ensures the application can communicate with any IPFS node in the network, regardless of their specific configuration.
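A minimal sketch of constructing such a host, assuming a recent `libp2p` (≈0.54) with the `tokio`, `tcp`, `noise`, `yamux`, `kad`, `identify`, and `macros` features; builder details change across releases, so treat this as illustrative rather than the project's actual setup:

```rust
use libp2p::{identify, identity, kad, noise, swarm::NetworkBehaviour, tcp, yamux};

#[derive(NetworkBehaviour)]
struct Behaviour {
    kademlia: kad::Behaviour<kad::store::MemoryStore>,
    identify: identify::Behaviour,
}

fn build_host() -> anyhow::Result<()> {
    // Ed25519 identity, as in the log output above.
    let keypair = identity::Keypair::generate_ed25519();

    let mut swarm = libp2p::SwarmBuilder::with_existing_identity(keypair)
        .with_tokio()
        // TCP transport with Noise encryption and Yamux multiplexing.
        .with_tcp(tcp::Config::default(), noise::Config::new, yamux::Config::default)?
        .with_behaviour(|key| {
            let peer_id = key.public().to_peer_id();
            // kad::PROTOCOL_NAME is the standard "/ipfs/kad/1.0.0".
            let kademlia = kad::Behaviour::with_config(
                peer_id,
                kad::store::MemoryStore::new(peer_id),
                kad::Config::new(kad::PROTOCOL_NAME),
            );
            // "/ipfs/id/1.0.0" is the standard identify protocol string.
            let identify = identify::Behaviour::new(identify::Config::new(
                "/ipfs/id/1.0.0".to_string(),
                key.public(),
            ));
            Behaviour { kademlia, identify }
        })?
        .build();

    // Client mode: query the DHT without serving records to other peers.
    swarm.behaviour_mut().kademlia.set_mode(Some(kad::Mode::Client));
    swarm.listen_on("/ip4/0.0.0.0/tcp/0".parse()?)?;
    Ok(())
}
```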
- "IPFS API file not found": Make sure Kubo is running (
kubo daemon
) - Connection errors: Check if Kubo is listening on the expected port and endpoint
- DHT bootstrap failures: Ensure Kubo has peers and the API endpoint is correct
- Protocol compatibility: The application uses standard IPFS protocols for full compatibility
- RSA connection errors: RSA support is included for legacy IPFS peers
- Configuration issues: Check
WW_IPFS
environment variable for correct IPFS endpoint - Logging issues: Check
WW_LOGLVL
environment variable and ensure tracing is properly initialized
- `libp2p`: P2P networking stack with IPFS protocol support
- `libp2p-kad`: Kademlia DHT implementation for IPFS compatibility
- `libp2p-identify`: Peer identification protocol for IPFS compatibility
- `reqwest`: HTTP client for Kubo API integration
- `tokio`: Async runtime for concurrent operations
- `anyhow`: Error handling and propagation
- `serde`: JSON serialization/deserialization for API responses
- `tracing`: Structured logging framework with performance metrics
- `tracing-subscriber`: Logging subscriber with environment-based configuration