Preferred Networks, Inc. - Japan - nzw0301.github.io
Stars
Kernel sources for https://huggingface.co/kernels-community
A scalable asynchronous reinforcement learning implementation with in-flight weight updates.
verl: Volcano Engine Reinforcement Learning for LLMs
A framework to study AI models in Reasoning, Alignment, and use of Memory (RAM).
Train transformer language models with reinforcement learning.
A benchmark to evaluate search-augmented LLMs
Scalable data pre-processing and curation toolkit for LLMs
Reference PyTorch implementation and models for DINOv3
Tool for generating high-quality synthetic datasets
gpt-oss-120b and gpt-oss-20b are two open-weight language models by OpenAI
Simple and efficient DeepSeek V3 SFT using pipeline parallelism and expert parallelism, with both FP8 and BF16 training
Jan is an open source alternative to ChatGPT that runs 100% offline on your computer.
Implementation for our COLM paper "Off-Policy Corrected Reward Modeling for RLHF"
An action for automatically labelling pull requests
Reasoning-based Evaluation and Ranking of Translations.
[ICML 2025] Roll the dice & look before you leap: Going beyond the creative limits of next-token prediction
A lightweight, local-first, and 🆓 experiment tracking library from Hugging Face 🤗
A simple, performant and scalable Jax LLM!
open-source coding LLM for software engineering tasks
The Optuna MCP Server is a Model Context Protocol (MCP) server to interact with Optuna APIs (a minimal sketch of the underlying Optuna API follows this list).
ICML 2024 - Official Repository for EXO: Towards Efficient Exact Optimization of Language Model Alignment
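
For context on the Optuna MCP Server entry above, the following is a minimal sketch of Optuna's standard Python API, i.e. the kind of study/trial calls such a server interacts with; it assumes only that the optuna package is installed and does not depict the MCP server's own tool interface, which is not described here.

# Minimal Optuna sketch (assumes: pip install optuna).
# Defines an objective over one hyperparameter and minimizes it with a study.
import optuna


def objective(trial: optuna.Trial) -> float:
    # Suggest a single float hyperparameter and return the value to minimize.
    x = trial.suggest_float("x", -10.0, 10.0)
    return (x - 2.0) ** 2


if __name__ == "__main__":
    # Create a study, run a handful of trials, and report the best result.
    study = optuna.create_study(direction="minimize")
    study.optimize(objective, n_trials=30)
    print("Best value:", study.best_value)
    print("Best params:", study.best_params)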