- Darmstadt, Germany
- https://www.hikarushindo.com/
Stars
A lightweight, powerful framework for multi-agent workflows
Muon is an optimizer for hidden layers in neural networks
Benchmarking the Spectrum of Agent Capabilities
"AutoAgent: Fully-Automated and Zero-Code LLM Agent Framework"
A collection of model counting (#SAT) benchmarks.
[NeurIPS 2023] Tree of Thoughts: Deliberate Problem Solving with Large Language Models
Fully open reproduction of DeepSeek-R1
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
✨✨ Latest Advances on Neuro-Symbolic Learning in the Era of Large Language Models
SatLM: SATisfiability-Aided Language Models using Declarative Prompting (NeurIPS 2023)
A minimalistic and high-performance SAT solver
Get started with building Fullstack Agents using Gemini 2.5 and LangGraph
Prover9 is an automated theorem prover for first-order and equational logic, and Mace4 searches for finite models and counterexamples.
Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek-R1, Qwen3, Gemma 3, TTS 2x faster with 70% less VRAM.
Implementing DeepSeek R1's GRPO algorithm from scratch
A very simple GRPO implementation for reproducing R1-like LLM thinking.
[NeurIPS 2025] Thinkless: LLM Learns When to Think
LogicBench is a natural language question-answering dataset covering 25 reasoning patterns that span propositional, first-order, and non-monotonic logics.
(ACL 2025 Main) Code for MultiAgentBench: Evaluating the Collaboration and Competition of LLM Agents https://www.arxiv.org/pdf/2503.01935
Framework and Language for Neurosymbolic Programming.
One repository is all that is necessary for Multi-agent Reinforcement Learning (MARL)
An open collection of methodologies to help with successful training of large language models.