Stars
🥢 Cooking the way 老乡鸡 (Home Original Chicken) 🐔 does. The main part was completed in 2024; this is not an official 老乡鸡 repository. The text comes from the 《老乡鸡菜品溯源报告》 (dish traceability report) and has been summarized, edited, and organized. CookLikeHOC.
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
Official code repo for the O'Reilly Book - "Hands-On Large Language Models"
Step-by-step optimization of CUDA SGEMM
A collection of handy Bash One-Liners and terminal tricks for data processing and Linux system maintenance.
Fast and memory-efficient exact attention
A Fusion Code Generator for NVIDIA GPUs (commonly known as "nvFuser")
TorchX is a universal job launcher for PyTorch applications, designed for fast iteration during training/research and for end-to-end production ML pipelines when you're ready.
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. (A minimal usage sketch appears after this list.)
Learn how to design large-scale systems. Prep for the system design interview. Includes Anki flashcards.
System design interview for IT companies
This repo is meant to serve as a guide for Machine Learning/AI technical interviews.
Hub for curated insights and resources on software systems and technologies
Explains complex systems with visuals and simple terms, and helps you prepare for system design interviews.
VIP cheatsheets for Stanford's CS 221 Artificial Intelligence
VIP cheatsheets for Stanford's CS 229 Machine Learning
VIP cheatsheets for Stanford's CS 230 Deep Learning
Homebridge Package for Synology DSM 7.
An open-source ML pipeline development platform
Raspberry Pi & NanoPi R2S/R4S & G-Dock & x86 OpenWrt Compile Project. (Based on Github Action / Daily Update)
Ray is an AI compute engine: a core distributed runtime plus a set of AI libraries for accelerating ML workloads. (A minimal usage sketch appears after this list.)
I am trying to describe complex matters in simple doodles!
Matter (formerly Project CHIP) creates more connections between more objects, simplifying development for manufacturers and increasing compatibility for consumers, guided by the Connectivity Standards Alliance.
Google's Engineering Practices documentation
Pretrain, finetune ANY AI model of ANY size on 1 or 10,000+ GPUs with zero code changes. (A minimal usage sketch appears after this list.)
NVIDIA Linux open GPU kernel module source
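For the DeepSpeed entry above, a minimal sketch of what wrapping a model looks like, assuming `deepspeed` and `torch` are installed; the toy model, config values, and tensor shapes are made up for illustration, and real runs are normally launched with the `deepspeed` CLI so the distributed environment is set up for you.

```python
import torch
import deepspeed

# Toy model and config, purely illustrative.
model = torch.nn.Linear(1024, 1024)
ds_config = {
    "train_micro_batch_size_per_gpu": 8,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    "fp16": {"enabled": False},
}

# deepspeed.initialize wraps the model in an engine that manages data
# parallelism, optimizer state, and mixed precision according to ds_config.
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)

x = torch.randn(8, 1024).to(model_engine.device)
loss = model_engine(x).pow(2).mean()
model_engine.backward(loss)  # engine-managed backward pass
model_engine.step()          # engine-managed optimizer step
```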
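For the Ray entry above, a minimal sketch of the core distributed runtime, assuming `ray` is installed; the `square` function and the task count are illustrative. `@ray.remote` turns a function into a task that can run anywhere on the cluster, and `ray.get` collects the results.

```python
import ray

ray.init()  # starts a local cluster; pass an address to join an existing one

@ray.remote
def square(x: int) -> int:
    return x * x

# Each call returns an ObjectRef (a future); tasks run in parallel.
refs = [square.remote(i) for i in range(8)]
print(ray.get(refs))  # [0, 1, 4, 9, 16, 25, 36, 49]

ray.shutdown()
```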
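For the "Pretrain, finetune ANY AI model ... with zero code changes" entry, which matches PyTorch Lightning's description, a minimal sketch of the idea that scaling is a Trainer-argument change rather than a model change, assuming the `lightning` package is installed; `TinyModel`, the random dataset, and the hyperparameters are illustrative.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import lightning as L


class TinyModel(L.LightningModule):
    """Illustrative LightningModule: the training logic lives here and
    never changes when you scale from 1 GPU to many."""

    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.mse_loss(self.layer(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)


if __name__ == "__main__":
    dataset = TensorDataset(torch.randn(256, 32), torch.randn(256, 1))
    loader = DataLoader(dataset, batch_size=32)

    # Scaling out is a Trainer-argument change, e.g.
    # L.Trainer(accelerator="gpu", devices=8, strategy="ddp");
    # the LightningModule above is untouched.
    trainer = L.Trainer(max_epochs=1, accelerator="auto", devices=1)
    trainer.fit(TinyModel(), loader)
```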