Stars
Latacora-recommended Service Control Policies, wrapped up in Terraform and easy to attach to an OU.
New repo collection for NVIDIA Cosmos: https://github.com/nvidia-cosmos
This is the Personality Core for GLaDOS, the first steps towards a real-life implementation of the AI from the Portal series by Valve.
An open-source RAG-based tool for chatting with your documents.
KAG is a logical form-guided reasoning and retrieval framework based on the OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge ba…
Learn how to design, develop, deploy and iterate on production-grade ML applications.
voctoweb – the frontend and backend software behind media.ccc.de
A real world full-stack application using LlamaIndex
Open source distributed and RESTful search engine.
This repository contains various advanced techniques for Retrieval-Augmented Generation (RAG) systems.
Ghostty is a fast, feature-rich, and cross-platform terminal emulator that uses platform-native UI and GPU acceleration.
A curated list of awesome MLOps tools
A high-throughput and memory-efficient inference and serving engine for LLMs
Fully local web research and report writing assistant
The official repo of Qwen (通义千问), the chat & pretrained large language model proposed by Alibaba Cloud.
Machine Learning Engineering Open Book
PhD/MSc course on Machine Learning Security (Univ. Cagliari)
Use OpenAI's realtime API for chatting with your documents.
[IEEE TCSVT] Latest Papers, Codes and Datasets on Vid-LLMs.
The complete stack for AI Engineers: framework, runtime and control plane.
Collection of awesome LLM apps with AI Agents and RAG using OpenAI, Anthropic, Gemini and open-source models.
Course to get into Large Language Models (LLMs) with roadmaps and Colab notebooks.
The world's easiest, most powerful EDGAR library
Build a RAG (Retrieval Augmented Generation) pipeline from scratch and have it all run locally.
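The retrieval step of such a from-scratch, fully local pipeline can be sketched in plain Python. This is a minimal illustration, not that repo's implementation: it uses a toy bag-of-words embedding and cosine similarity, where a real pipeline would substitute a proper embedding model and a local LLM for the generation step. All function names here (`embed`, `cosine`, `retrieve`) are hypothetical.

```python
# Minimal local RAG retrieval sketch: toy bag-of-words "embeddings" plus
# cosine-similarity ranking over document chunks. No external services.
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: term-frequency counts of lowercase word tokens."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, chunks: list[str], k: int = 1) -> list[str]:
    """Return the k chunks most similar to the query; these would be
    pasted into the LLM prompt as context in a full RAG pipeline."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

chunks = [
    "The cat sat on the mat.",
    "Terraform manages cloud infrastructure as code.",
    "RAG pipelines retrieve relevant context before generation.",
]
print(retrieve("how does retrieval of context before generation work", chunks))
```

Swapping `embed` for a real sentence-embedding model (and adding a generation call against a locally served LLM) turns this ranking loop into a complete local RAG pipeline.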
Composable building blocks to build LLM Apps