University of Cambridge · Ryan0v0.github.io · @Renee42581826
Starred repositories
Schedule-Free Optimization in PyTorch (usage sketch after this list)
Intelligent Router for Mixture-of-Models
The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery 🧑‍🔬
Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers"
TruthfulQA: Measuring How Models Imitate Human Falsehoods
Code for Model-Free Opponent Shaping (ICML 2022)
Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision
[NeurIPS 2025] Scaling Language-centric Omnimodal Representation Learning
[NeurIPS 2025 D&B (Spotlight🌟)] TIME: A Multi-level Benchmark for Temporal Reasoning of LLMs in Real-World Scenarios
A scalable asynchronous reinforcement learning implementation with in-flight weight updates.
Neural Networks: Zero to Hero
An open-source Python library for scalable Bayesian optimisation.
🥢 Cook like LaoXiangJi 🐔. The main part was completed in 2024; not an official LaoXiangJi repository. The text comes from the "LaoXiangJi Dish Traceability Report", then summarized, edited, and organized. CookLikeHOC.
Intelligent automation and multi-agent orchestration for Claude Code
When it comes to optimizers, it's always better to be safe than sorry
A complete computer science study plan to become a software engineer.
Practice the CodeSignal pre-screen for the Industry Coding Framework.
Supporting code for https://arxiv.org/abs/2010.00753.
Official code for the paper Improving Language Plasticity via Pretraining with Active Forgetting, NeurIPS 2023
Unsupervised text tokenizer for Neural Network-based text generation (see the tokenizer sketch after this list).
[NeurIPS 2024] CLUES🔍: Collaborative Private-domain High-quality Data Selection for LLMs via Training Dynamics
Chameleon: A Flexible Data-mixing Framework for Language Model Pretraining and Finetuning, ICML 2025
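For the Schedule-Free Optimization entry above: a minimal sketch of how the schedulefree optimizer is typically dropped into a PyTorch training loop. The toy model, data, and learning rate are illustrative assumptions, not taken from the repository; the one real API detail shown is that the optimizer must be switched with optimizer.train() / optimizer.eval(), since its evaluation weights differ from its training weights.

```python
# Sketch: schedule-free AdamW in a plain PyTorch loop (assumes `pip install schedulefree`).
import torch
import schedulefree

model = torch.nn.Linear(10, 1)          # toy model, illustrative only
optimizer = schedulefree.AdamWScheduleFree(model.parameters(), lr=1e-3)

x = torch.randn(64, 10)                 # dummy regression data
y = torch.randn(64, 1)

for step in range(100):
    optimizer.train()                   # put optimizer weights in training mode
    model.train()
    loss = torch.nn.functional.mse_loss(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

optimizer.eval()                        # switch to averaged weights for evaluation
model.eval()
with torch.no_grad():
    eval_loss = torch.nn.functional.mse_loss(model(x), y)
print(f"final eval loss: {eval_loss.item():.4f}")
```

No learning-rate scheduler appears anywhere in the loop; that is the point of the method.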
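For the unsupervised text tokenizer entry (SentencePiece): a short sketch of the train-then-encode round trip through the library's Python bindings. The corpus path and vocabulary size are illustrative assumptions.

```python
# Sketch: train a small SentencePiece model and round-trip a sentence
# (assumes `pip install sentencepiece` and a plain-text corpus.txt, one sentence per line).
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="corpus.txt",        # illustrative corpus path
    model_prefix="toy",        # writes toy.model and toy.vocab
    vocab_size=1000,           # illustrative size; real vocabularies are larger
)

sp = spm.SentencePieceProcessor(model_file="toy.model")

pieces = sp.encode("Neural text generation needs subwords.", out_type=str)
ids = sp.encode("Neural text generation needs subwords.", out_type=int)
print(pieces)                  # subword pieces, e.g. ['▁Neural', '▁text', ...]
print(sp.decode(ids))          # lossless round trip back to the input string
```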