Stars
Training library for Megatron-based models with bi-directional Hugging Face conversion capability
Developer Asset Hub for NVIDIA Nemotron — A one-stop resource for training recipes, usage cookbooks, and full end-to-end reference examples to build with Nemotron models
Lyra: Generative 3D Scene Reconstruction via Video Diffusion Model Self-Distillation
🎨 NeMo Data Designer: A general library for generating high-quality synthetic data from scratch or based on seed data.
The official Python library for the OpenAI API
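For quick orientation on this client, here is a minimal sketch of a single chat completion using the v1+ openai Python API; the model name and prompt are placeholder assumptions, and the client is assumed to read OPENAI_API_KEY from the environment.

```python
# Minimal sketch: one chat completion with the official openai client (v1+ API).
# Assumes OPENAI_API_KEY is set in the environment; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```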
DC-Gen: Post-Training Diffusion Acceleration with Deeply Compressed Latent Space
A collection of various NVIDIA DGX Cloud code examples
PyTorch Distributed-native training library for LLMs/VLMs with out-of-the-box Hugging Face support
Scalable toolkit for efficient model reinforcement learning
Let's make video diffusion practical!
Lightweight coding agent that runs in your terminal
A Pythonic framework to simplify AI service building
A tool to configure, launch and manage your machine learning experiments.
Open-Sora: Democratizing Efficient Video Production for All
Scalable data preprocessing and curation toolkit for LLMs
Official inference library for Mistral models
A package to generate summaries of long-form text and evaluate the coherence of these summaries. Official package for our ICLR 2024 paper, "BooookScore: A systematic exploration of book-length summ…
Convert PDF to markdown + JSON quickly with high accuracy
This is the Personality Core for GLaDOS, the first steps towards a real-life implementation of the AI from the Portal series by Valve.
The open source codebase powering HuggingChat
A high-throughput and memory-efficient inference and serving engine for LLMs
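As a pointer to how this engine is typically driven from Python, a minimal offline-batching sketch with vLLM's LLM/SamplingParams interface; the checkpoint name, prompt, and sampling settings are illustrative assumptions.

```python
# Minimal sketch: offline batched generation with vLLM.
# The model name and sampling parameters below are illustrative, not prescriptive.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")
params = SamplingParams(temperature=0.7, max_tokens=64)
outputs = llm.generate(["The capital of France is"], params)
print(outputs[0].outputs[0].text)
```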
Felk / dolphin
Forked from dolphin-emu/dolphin
Dolphin is a GameCube / Wii emulator, allowing you to play games for these two platforms on PC with improvements.
An API standard for single-agent reinforcement learning environments, with popular reference environments and related utilities (formerly Gym)
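To illustrate the single-agent reset/step API this standard defines, a minimal random-agent loop with Gymnasium; CartPole-v1 and the step budget are arbitrary choices for the example.

```python
# Minimal sketch: the Gymnasium reset/step loop with a random policy.
# CartPole-v1 and the 200-step budget are arbitrary illustration choices.
import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=42)
for _ in range(200):
    action = env.action_space.sample()                        # random policy
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:                               # episode ended; start a new one
        obs, info = env.reset()
env.close()
```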
The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens.