Stars
A curated reading list of research in Mixture-of-Experts (MoE).
A one-stop repository for generative AI research updates, interview resources, notebooks, and much more!
From Chain-of-Thought prompting to OpenAI o1 and DeepSeek-R1 🍓
Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek-R1, Qwen3, Gemma 3, and TTS models 2x faster with 70% less VRAM.
[ICRA 2025] PyTorch code for "Local Policies Enable Zero-shot Long-Horizon Manipulation"