Starred repositories
🐍 TOON for Python (Token-Oriented Object Notation) Encoder/Decoder - Reduce LLM token costs by 30-60% with structured data.
🎒 Token-Oriented Object Notation (TOON) – Compact, human-readable, schema-aware JSON for LLM prompts. Spec, benchmarks, TypeScript SDK.
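The token-savings idea behind TOON can be illustrated with a toy encoder. This is a sketch of the general technique (key names emitted once as a header instead of once per object), not the actual TOON syntax; delimiters and escaping here are illustrative only:

```python
import json

def toy_tabular_encode(rows):
    """Illustrative only: flatten a uniform list of dicts into a header
    line plus one comma-separated line per record, so each key name
    appears once rather than being repeated for every object."""
    keys = list(rows[0])
    lines = [",".join(keys)]
    for row in rows:
        lines.append(",".join(str(row[k]) for k in keys))
    return "\n".join(lines)

users = [
    {"id": 1, "name": "ada", "role": "admin"},
    {"id": 2, "name": "bob", "role": "user"},
]
compact = toy_tabular_encode(users)
print(compact)
print(len(compact), "chars vs", len(json.dumps(users)), "chars as JSON")
```

The savings grow with the number of rows, since the per-object key overhead of JSON is paid only once.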
NOFX: Defining the Next-Generation AI Trading Operating System. A multi-exchange AI trading platform (Binance/Hyperliquid/Aster) with multi-AI competition (deepseek/qwen/claude), self-evolution, and re…
Amazon CloudWatch Embedded Metric Format Client Library
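EMF itself is just structured JSON written to CloudWatch Logs; the client library automates building it. A minimal hand-rolled record, with placeholder namespace, dimension, and metric names, looks roughly like:

```python
import json
import time

# Minimal Embedded Metric Format record built by hand; "MyApp",
# "Service", and "Latency" are placeholder names for illustration.
record = {
    "_aws": {
        "Timestamp": int(time.time() * 1000),
        "CloudWatchMetrics": [{
            "Namespace": "MyApp",
            "Dimensions": [["Service"]],
            "Metrics": [{"Name": "Latency", "Unit": "Milliseconds"}],
        }],
    },
    "Service": "checkout",   # dimension value
    "Latency": 123.4,        # metric value
}
print(json.dumps(record))    # emitted as a log line; CloudWatch extracts the metric
```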
The open source developer platform to build AI agents and models with confidence. Enhance your AI applications with end-to-end tracking, observability, and evaluations, all in one integrated platform.
An open-source transformation engine written in a weekend
Inspect: A framework for large language model evaluations
An MCP server to query any Postgres database in natural language.
DSPy: The framework for programming—not prompting—language models
StackRender is the next-gen database schema design and generation tool.
[Beta] Community-maintained Terraform Provider for Langfuse
🪢 Open source LLM engineering platform: LLM Observability, metrics, evals, prompt management, playground, datasets. Integrates with OpenTelemetry, Langchain, OpenAI SDK, LiteLLM, and more. 🍊YC W23
An automation that backs up all your n8n workflows for you!
Capabilities to deploy and operate agents securely and at scale, using any agentic framework and any LLM.
Amazon Bedrock AgentCore accelerates AI agents into production with the scale, reliability, and security critical to real-world deployment.
Python SDK, Proxy Server (LLM Gateway) to call 100+ LLM APIs in OpenAI format - [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, Replicate, Groq]
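The unifying idea behind such a gateway is one OpenAI-style request shape regardless of backend, with the provider inferred from the model name. A minimal sketch of that routing idea follows; the prefixes and provider names are illustrative examples, not LiteLLM's actual routing table:

```python
# Illustrative provider routing behind a single OpenAI-style
# interface; the prefix table below is an example, not LiteLLM's
# real internals.
PROVIDER_PREFIXES = {
    "gpt-": "openai",
    "claude-": "anthropic",
    "command-": "cohere",
}

def route_provider(model: str) -> str:
    """Pick a backend from the model name, defaulting to openai."""
    for prefix, provider in PROVIDER_PREFIXES.items():
        if model.startswith(prefix):
            return provider
    return "openai"

# Every request uses the same OpenAI chat-completions shape:
request = {
    "model": "claude-3-haiku",
    "messages": [{"role": "user", "content": "hello"}],
}
print(route_provider(request["model"]))  # → anthropic
```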
All of the n8n workflows I could find (including those from the site itself).
A clean and modular FastAPI boilerplate designed to kickstart your GenAI project.
The observability platform for Iceberg lakehouses.
A high-throughput and memory-efficient inference and serving engine for LLMs
This Guidance demonstrates how to streamline access to numerous large language models (LLMs) through a unified, industry-standard API gateway based on the OpenAI API standard.
Run existing Model Context Protocol (MCP) stdio-based servers in AWS Lambda functions
✨ Agentic chat experience in your terminal. Build applications using natural language.