- Shanghai Jiao Tong University
- Shanghai
- https://sjtuyinjie.github.io/
- https://scholar.google.com/citations?user=Y8LVRYIAAAAJ&hl=en
Starred repositories
ViTacFormer: Learning Cross-Modal Representation for Visuo-Tactile Dexterous Manipulation
1st-place solution to the 2025 BEHAVIOR Challenge
A terrain-robustness benchmark for legged locomotion
Mastering Diverse Domains through World Models
Reproduction code for the paper "World Model-based Perception for Visual Legged Locomotion"
Code for Visual Dexterity: In-Hand Reorientation of Novel and Complex Object Shapes (Science Robotics)
HiF-VLA: An efficient Vision-Language-Action model with bidirectional spatiotemporal expansion
A paper list of multimodal VLAs
VLA-Adapter: An Effective Paradigm for Tiny-Scale Vision-Language-Action Model
A website collecting state-of-the-art (SOTA) VLA results.
Learning-based locomotion control from OpenRobotLab, including Hybrid Internal Model & H-Infinity Locomotion Control
[CVPR 2025] WildGS-SLAM: Monocular Gaussian Splatting SLAM in Dynamic Environments
Official implementation for SIGGRAPH 2023 paper "Learning Physically Simulated Tennis Skills from Broadcast Videos"
End-to-end pipeline converting generative videos (Veo, Sora) to humanoid robot motions
[CoRL25] GraspVLA: a Grasping Foundation Model Pre-trained on Billion-scale Synthetic Action Data
Official PyTorch Implementation of Paper -- "MoRE: Mixture of Residual Experts for Humanoid Lifelike Gaits Learning on Complex Terrains"
Official Implementation of "KungfuBot: Physics-Based Humanoid Whole-Body Control for Learning Highly-Dynamic Skills"
Twisting Lids Off with Two Hands [CoRL 2024]
Official code of Motus: A Unified Latent Action World Model
[arXiv 2025] TWIST2: Scalable, Portable, and Holistic Humanoid Data Collection System
The code for the voraus-AD dataset paper.