- Paris, France
- https://huggingface.co/eliebak
- @eliebakouch
- in/eliebak
Stars
An interface library for RL post-training with environments.
Supporting code for the blog post on modular manifolds.
Best practices for training DeepSeek, Mixtral, Qwen and other MoE models using Megatron Core.
Experimental playground for benchmarking language model (LM) architectures, layers, and tricks on smaller datasets. Designed for flexible experimentation and exploration.
A lightweight, local-first, and free experiment tracking library from Hugging Face 🤗
slime is an LLM post-training framework for RL Scaling.
A PyTorch native platform for training generative AI models
An open-source toolkit for LLM distillation
Open-source implementation of AlphaEvolve
Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax
Open-source framework for the research and development of foundation models.
The simplest, fastest repository for training/finetuning small-sized VLMs.
A TTS model capable of generating ultra-realistic dialogue in one pass.
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal models, for both inference and training.
Accelerating your LLM training to full speed! Made with ❤️ by ServiceNow Research
Pruna is a model optimization framework built for developers, enabling you to deliver faster, more efficient models with minimal overhead.
Experience email the way you want with Mail0, the first open source email app that puts your privacy and safety first. Join the Discord: https://mail0.link/discord
An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere)
Environments for LLM Reinforcement Learning
A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS