The Standard Algorithms in C++.
🔥 One-click optimization of Linux network performance and system stability (sysctl + IRQ + Offload + self-check and repair). Safe, general-purpose, and persistent.
Apache Doris is an easy-to-use, high-performance, unified analytics database.
A transformer-based LLM, written entirely in Rust.
📚LeetCUDA: Modern CUDA Learn Notes with PyTorch for Beginners🐑, 200+ CUDA Kernels, Tensor Cores, HGEMM, FA-2 MMA.🎉
Tablestore for Agent Memory
A static, suckless, single-batch, CUDA-only qwen3-0.6B mini inference engine.
Supercharge Your LLM with the Fastest KV Cache Layer
MongoDB-compatible database engine for cloud-native and open-source workloads. Built for scalability, performance, and developer productivity.
A list of fast libraries, primarily x86/64 C++ and Node.js C++ extensions
Open-source CLI toolkit for low-RAM finetuning, quantization, and deployment of LLMs
eBPF Developer Tutorial: Learning eBPF Step by Step with Examples
Gluster Filesystem : Build your distributed storage in minutes
NFS-Ganesha is an NFSv3, v4, and v4.1 file server that runs in user mode on most UNIX/Linux systems.
JuiceFS is a distributed POSIX file system built on top of Redis and S3.
An open-source AI agent that brings the power of Gemini directly into your terminal.
Highly customizable Wayland bar for Sway and Wlroots based compositors. ✌️ 🎉
Hyprland is an independent, highly customizable, dynamic tiling Wayland compositor that doesn't sacrifice on its looks.
SGLang is a fast serving framework for large language models and vision language models.
A Datacenter Scale Distributed Inference Serving Framework
A high-throughput and memory-efficient inference and serving engine for LLMs
Pluggable in-process caching engine to build and scale high performance services
A collection of modern C++ libraries, include coro_http, coro_rpc, compile-time reflection, struct_pack, struct_json, struct_xml, struct_pb, easylog, async_simple etc.
Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI.
Rime Wanxiang pinyin input schema: available in standard and enhanced editions. Its dictionary is kept lean and efficient through AI-assisted and corpus-assisted filtering, and pairs with a brand-new grammar model so you no longer have to second-guess your input. The PRO edition supports 10 shuangpin layouts and 6 auxiliary-code schemes, and is extensible. Supports mixed-code input, with built-in extensions including super annotations, tone-marked full-pinyin code display, quick symbols, manual candidate reordering, tips, and first-candidate paired-symbol wrapping, greatly improving the experience. QQ group: 11033572