πŸš€ WASP


A powerful development environment for creating UOMI agents using WebAssembly and Rust πŸ¦€

πŸ“– Table of Contents

  • πŸ“– Overview
  • 🌟 Features
  • πŸ›  Prerequisites
  • πŸš€ Getting Started
  • πŸ”§ Configuration
  • πŸ’‘ Usage Examples
  • Development
  • πŸ“Š Performance Monitoring
  • πŸ” Security
  • πŸ› Debugging
  • πŸ“š API Reference
  • 🀝 Contributing

πŸ“– Overview

This development environment allows you to create, test, and debug UOMI agents using WebAssembly (WASM) and Rust. The environment provides seamless integration with both UOMI and third-party LLM services, supporting multiple model configurations and API formats.

🌟 Features

  • πŸ”„ Hot-reloading development environment
  • πŸ“ Interactive console for testing
  • πŸ› Built-in debugging capabilities
  • πŸ” Response analysis tools
  • πŸ’Ύ Conversation history management
  • πŸ”Œ Support for multiple LLM providers
  • πŸ”‘ Secure API key management
  • πŸ“Š Performance metrics tracking

πŸ›  Prerequisites

Before you begin, ensure you have the following installed:

  • Rust (latest stable version)
  • Node.js (v14 or higher)
  • WebAssembly target: rustup target add wasm32-unknown-unknown

πŸš€ Getting Started

Option 1: Quick Start with NPX

# Create a new UOMI agent project
npx wasp create

Option 2: Manual Setup

git clone https://github.com/Uomi-network/uomi-chat-agent-template.git
cd uomi-chat-agent-template/agent
npm install
chmod +x ./bin/build_and_run_host.sh
npm start

πŸ”§ Configuration

Model Configuration

The environment supports multiple model configurations through uomi.config.json:

{
  "local_file_path": "path/to/input.txt",
  "api": {
    "timeout_ms": 30000,
    "retry_attempts": 3,
    "headers": {
      "Content-Type": "application/json",
      "Accept": "application/json",
      "User-Agent": "UOMI-Client/1.0"
    }
  },
  "models": {
    "1": {
      "name": "Qwen/Qwen2.5-32B-Instruct-GPTQ-Int4"
    },
    "2": {
      "name": "gpt-3.5-turbo",
      "url": "https://api.openai.com/v1/chat/completions",
      "api_key": "your-api-key-here"
    }
  },
  "ipfs": {
    "gateway": "https://ipfs.io/ipfs",
    "timeout_ms": 10000
  }
}

You can run the node-ai service by following the instructions in the node-ai repository.

If you do so, you don't need to specify a url or api_key in the model configuration, and you will be running the production version of the node-ai service.

If you don't have enough resources to run the node-ai service, you can use a third-party provider such as OpenAI; in that case you need to specify the url and api_key in the models configuration.
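
The rule described above is simply: a model entry that provides both a url and an api_key is routed to that third-party endpoint, while an entry with only a name is served by the local node-ai service. The Rust sketch below only illustrates that routing rule; the real resolution happens in the Node.js host, and ModelConfig and Endpoint are hypothetical names, not part of the template.

// Illustrative sketch of the routing rule; these types are hypothetical.
struct ModelConfig {
    name: String,
    url: Option<String>,
    api_key: Option<String>,
}

enum Endpoint {
    LocalNodeAi,                // no url/api_key: use the local node-ai service
    ThirdParty { url: String }, // e.g. the "gpt-3.5-turbo" entry ("2") above
}

fn resolve_endpoint(model: &ModelConfig) -> Endpoint {
    match (&model.url, &model.api_key) {
        (Some(url), Some(_key)) => Endpoint::ThirdParty { url: url.clone() },
        _ => Endpoint::LocalNodeAi,
    }
}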

Response Formats

The environment automatically handles different response formats:

UOMI Format

{
  "response": "Hello, how can I help?",
  "time_taken": 1.23,
  "tokens_per_second": 45,
  "total_tokens_generated": 54
}

OpenAI Format

{
  "choices": [{
    "message": {
      "content": "Hello, how can I help?"
    }
  }],
  "usage": {
    "total_tokens": 150,
    "prompt_tokens": 50,
    "completion_tokens": 100
  }
}
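
Because the reply text lives under response in the UOMI format and under choices[0].message.content in the OpenAI format, the two payloads have to be normalized before the agent can use them. The Rust sketch below shows one way to do that with serde, assuming serde and serde_json are available in the agent crate; the type and function names are hypothetical, not the template's actual API.

use serde::Deserialize;

// Hypothetical mirror of the two formats shown above.
#[derive(Deserialize)]
struct UomiResponse {
    response: String,
    time_taken: f64,
    tokens_per_second: f64,
    total_tokens_generated: u32,
}

#[derive(Deserialize)]
struct OpenAiMessage { content: String }

#[derive(Deserialize)]
struct OpenAiChoice { message: OpenAiMessage }

#[derive(Deserialize)]
struct OpenAiUsage { total_tokens: u32, prompt_tokens: u32, completion_tokens: u32 }

#[derive(Deserialize)]
struct OpenAiResponse { choices: Vec<OpenAiChoice>, usage: OpenAiUsage }

#[derive(Deserialize)]
#[serde(untagged)]
enum LlmResponse { Uomi(UomiResponse), OpenAi(OpenAiResponse) }

// Extracts the assistant text regardless of which provider answered.
fn extract_text(raw: &[u8]) -> Option<String> {
    match serde_json::from_slice::<LlmResponse>(raw).ok()? {
        LlmResponse::Uomi(r) => Some(r.response),
        LlmResponse::OpenAi(r) => r.choices.into_iter().next().map(|c| c.message.content),
    }
}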

πŸ’‘ Usage Examples

Interactive Mode

$ npm start
UOMI Development Environment
Type your messages. Use these commands:
/clear - Clear conversation history
/history - Show conversation history
/exit - Exit the program

You: Hello, how are you?
Assistant: Hello! I'm doing well, thank you for asking...

Performance Metrics:
- Time taken: 1.20s
- Tokens/second: 45
- Total tokens: 54

Development

Custom Model Integration

Add a new model in uomi.config.json:

{
  "models": {
    "3": {
      "name": "custom-model",
      "url": "https://api.custom-provider.com/v1/chat",
      "api_key": "your-api-key"
    }
  }
}

πŸ“Š Performance Monitoring

The environment provides detailed performance metrics:

  • Response time tracking
  • Token usage statistics
  • Rate limiting information
  • Error tracking and retry statistics

πŸ” Security

  • API keys are stored securely in configuration files
  • Support for environment variable substitution
  • Automatic header management for authentication
  • Secure HTTPS communication

πŸ› Debugging

Built-in debugging features:

  • Detailed WASM logging
  • Request/response inspection
  • Performance profiling
  • Error tracing with retry information

πŸ“š API Reference

Host Functions

  • get_input(): Read input data
  • set_output(): Set output data
  • call_service_api(): Make API calls with retry support
  • get_file_from_cid(): Fetch IPFS content
  • log(): Debug logging
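
These functions are provided by the host at runtime and imported on the WASM side. The snippet below is only a sketch of how such imports and a minimal agent entry point might look in Rust, built for the wasm32-unknown-unknown target listed in the prerequisites; the actual module name, signatures, and entry point are defined by the agent template's bindings and may differ.

// Hypothetical bindings: module name and signatures are assumptions.
#[link(wasm_import_module = "env")]
extern "C" {
    fn get_input(ptr: *mut u8, max_len: u32) -> u32; // returns bytes written
    fn set_output(ptr: *const u8, len: u32);
    fn log(ptr: *const u8, len: u32);
}

#[no_mangle]
pub extern "C" fn run() {
    // Read the request the host prepared for this invocation.
    let mut buf = vec![0u8; 64 * 1024];
    let n = unsafe { get_input(buf.as_mut_ptr(), buf.len() as u32) } as usize;
    let input = String::from_utf8_lossy(&buf[..n]);

    // log() feeds the "Detailed WASM logging" mentioned in the Debugging section.
    let msg = format!("agent received {} bytes of input", n);
    unsafe { log(msg.as_ptr(), msg.len() as u32) };

    // Echo the input back; a real agent would call call_service_api() here.
    let reply = format!("echo: {}", input);
    unsafe { set_output(reply.as_ptr(), reply.len() as u32) };
}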

Compiled WASM

After a test run, the compiled WASM file is located at host/src/agent_template.wasm.

🀝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.


Made with ❀️ by the UOMI team
