Hangzhou Dianzi University, China
Stars
Dynamic 3D Foundation Model using Causal Transformer
mindmap: Spatial Memory in Deep Feature Maps for 3D Action Policies
A Vision-Language-Model for Detecting and Reasoning Over Failures in Robotic Manipulation
"DeepCode: Open Agentic Coding (Paper2Code & Text2Web & Text2Backend)"
Mixture-of-Recursions: Learning Dynamic Recursive Depths for Adaptive Token-Level Computation (NeurIPS 2025)
🤗 LeRobot: Making AI for Robotics more accessible with end-to-end learning
Dexbotic: Open-Source Vision-Language-Action Toolbox
A lightweight, rootless CLI for Mihomo proxy management in restricted environments.
MichalZawalski / embodied-CoT (forked from openvla/openvla) — Embodied Chain of Thought: a robotic policy that reasons to solve the task.
Spec-driven development for AI coding assistants.
The only AI app builder that knows backend
OpenHelix: An Open-source Dual-System VLA Model for Robotic Manipulation
Implementation of ICCV 2025 paper "Growing a Twig to Accelerate Large Vision-Language Models".
GigaBrain-0: A World Model-Powered Vision-Language-Action Model
[ACM Multimedia 2025] "Multi-Agent System for Comprehensive Soccer Understanding"
StarVLA: A Lego-like Codebase for Vision-Language-Action Model Developing
🚀🚀 [LLM] Train a 26M-parameter GPT completely from scratch in just 2 hours!
A configuration framework that enhances Claude Code with specialized commands, cognitive personas, and development methodologies.
💫 Toolkit to help you get started with Spec-Driven Development
📖 This is a repository for organizing papers, codes, and other resources related to Latent Reasoning.
RoboChallenge Inference example code
A comprehensive list of papers about dual-system VLA models, including papers, codes, and related websites.
🔥 SpatialVLA: a spatial-enhanced vision-language-action model that is trained on 1.1 Million real robot episodes. Accepted at RSS 2025.