Open-source framework for the research and development of foundation models.
Internal Causal Mechanisms Robustly Predict Language Model Out-of-Distribution Behaviors
PyTorch native quantization and sparsity for training and inference
Official code release for Delta Activations: A Representation for Finetuned Large Language Models
💨 Fast, async-ready framework for building APIs, with OpenAPI support based on type hints
Concatenate a directory full of files into a single prompt for use with LLMs
Learning Deep Representations of Data Distributions
Tile primitives for speedy kernels
Why Do Some Inputs Break Low-Bit LLM Quantization?
DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling
Reference PyTorch implementation and models for DINOv3
Machine Learning Containers for NVIDIA Jetson and JetPack-L4T
Render any git repo into a single static HTML page for humans or LLMs
Full stack, modern web application template. Using FastAPI, React, SQLModel, PostgreSQL, Docker, GitHub Actions, automatic HTTPS and more.
[ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning
A Kernel-Based View of Language Model Fine-Tuning https://arxiv.org/abs/2210.05643
Official repository for the paper "ReasonIR: Training Retrievers for Reasoning Tasks".
MTEB: Massive Text Embedding Benchmark
Code for the Fractured Entangled Representation Hypothesis position paper!
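The "concatenate a directory full of files into a single prompt" idea above can be sketched in a few lines of standard-library Python. This is a hedged illustration of the concept only, not the listed tool's actual implementation; the function name, header format, and extension filter are assumptions.

```python
from pathlib import Path


def concat_files(root: str, exts: tuple = (".py", ".md")) -> str:
    """Concatenate matching files under `root` into one prompt string.

    Each file is preceded by a header line marking its path, so an LLM
    can tell the files apart. (A sketch of the idea; the real tool's
    output format may differ.)
    """
    parts = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.suffix in exts:
            parts.append(f"--- {path} ---\n{path.read_text(encoding='utf-8')}")
    return "\n\n".join(parts)
```

Filtering by extension keeps binary files and build artifacts out of the prompt; sorting the paths makes the output deterministic across runs.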