- University of Illinois Urbana-Champaign
- jiawei-site.github.io
- @JiaweiLiu_
- https://huggingface.co/ganler
Stars
Generating, validating and running exploitable verifiable coding problems
MCPMark is a comprehensive, stress-testing MCP benchmark designed to evaluate model and agent capabilities in real-world MCP use.
Access large language models from the command-line
We all know Rust's trait system is Turing complete, so tell me, why aren't we exploiting this??? (see the sketch after this list)
🔮Reasoning for Safer Code Generation; 🥇Winner Solution of Amazon Nova AI Challenge 2025
gpt-oss-120b and gpt-oss-20b are two open-weight language models by OpenAI
siiRL: Shanghai Innovation Institute RL Framework for Advanced LLMs and Multi-Agent Systems
A powerful AI coding agent. Built for the terminal.
We track and analyze the activity and performance of autonomous code agents in the wild
LLM quantization (compression) toolkit with hardware acceleration support for Nvidia CUDA, AMD ROCm, Intel XPU, and Intel/AMD/Apple CPUs via HF, vLLM, and SGLang.
A curated reading list for machine learning reliability research and practice
A community-curated list of software and resources that explicitly avoid integrating artificial intelligence.
A Framework for Automated Validation of Deep Learning Training Tasks
Turn any browser into your terminal & command your agents on the go.
A benchmark for LLMs on complicated tasks in the terminal
slime is an LLM post-training framework for RL Scaling.
TradingAgents: Multi-Agents LLM Financial Trading Framework
[COLM 2025] Code for Paper: Learning Adaptive Parallel Reasoning with Language Models
Simple high-throughput inference library
A simple, performant and scalable Jax LLM!
An open-source, extensible AI agent that goes beyond code suggestions: install, execute, edit, and test with any LLM
✨ A synthetic dataset generation framework that produces diverse coding questions and verifiable solutions - all in one framework
A Datacenter Scale Distributed Inference Serving Framework
A fast and scalable general purpose sandbox code execution engine.
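One of the entries above quips that Rust's trait system is Turing complete. As a minimal, self-contained sketch of what that means (plain Rust, not code from any repository listed here), the trait solver can evaluate Peano-numeral addition entirely at compile time:

```rust
use std::marker::PhantomData;

// Type-level Peano numerals: Zero and Succ<N> carry no runtime data.
struct Zero;
struct Succ<N>(PhantomData<N>);

// Type-level addition, computed by the trait solver rather than at runtime.
trait Add<Rhs> {
    type Output;
}

// 0 + m = m
impl<Rhs> Add<Rhs> for Zero {
    type Output = Rhs;
}

// (n + 1) + m = (n + m) + 1
impl<N, Rhs> Add<Rhs> for Succ<N>
where
    N: Add<Rhs>,
{
    type Output = Succ<N::Output>;
}

type One = Succ<Zero>;
type Two = Succ<One>;
type Three = Succ<Two>;

fn main() {
    // The annotation forces the compiler to reduce One + Two; it unifies with Three.
    let sum: <One as Add<Two>>::Output = Succ(PhantomData);
    let _check: Three = sum;
    println!("1 + 2 = 3, verified by the trait solver at compile time");
}
```

Because the arithmetic happens inside the type checker, a wrong annotation (for example, declaring the sum to be `Succ<One>`) is rejected before the program ever runs.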