Command-line interface for LLM Meter: track and monitor your LLM usage, costs, and performance across multiple providers (OpenAI, Anthropic, Ollama, and more).
- View usage statistics and costs
- List individual API requests with token details
- Interactive REPL mode for exploring data
- Server management commands
- Model pricing configuration
```bash
npm install -g llm-meter-cli
```

Or install and run with npx:

```bash
npx llm-meter-cli
```

```bash
# Show usage statistics
llm-meter stats
# List recent API requests
llm-meter requests
# View details of a specific request
llm-meter request <request-id>
# View model pricing
llm-meter pricing
# Start the LLM Meter server
llm-meter serve
```

Options:

- `--help`: Show help information for any command
- `-p, --project <name>`: Filter by project
- `--page <number>`: Page number for requests (default: 1)
- `--limit <number>`: Requests per page (default: 20)
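These flags can be combined with the commands above. For example, to list the second page of requests for a single project at 50 per page (`my-app` is a placeholder project name):

```bash
llm-meter requests --project my-app --page 2 --limit 50
```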
Run llm-meter without any arguments to enter the interactive REPL mode:
```bash
llm-meter
```

This launches a shell where you can run commands like:
- `stats` - Show usage statistics
- `requests` - List recent requests
- `pricing` - View model pricing
- `help` - Show available commands
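A short interactive session might look like the following sketch; the `>` prompt is illustrative, and only the commands listed above are used:

```
llm-meter
> stats
> requests
> pricing
```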
The CLI connects to the LLM Meter backend server. You can configure the API endpoint using the LLM_METER_API environment variable:
```bash
export LLM_METER_API=http://localhost:3001/api
```

Default endpoint: `http://localhost:3001/api`
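For example, to point the CLI at a server running on another machine (the hostname below is a placeholder) and then pull usage statistics from it:

```bash
export LLM_METER_API=http://metrics-host.example.com:3001/api
llm-meter stats
```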
To contribute:

- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Make your changes
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
Licensed under the MIT License.