- Autonomous Vision Group, University of Tübingen
- takerum.github.io
- @takeru_miyato
Stars
Public code for XFactor: Introduces the first geometry-free model to achieve true self-supervised / pose-free Novel View Synthesis (NVS) by learning transferable latent camera pose representations.
Teaching Pretrained Language Models to Think Deeper with Retrofitted Recurrence
Devkit and documentation for the NVIDIA Physical AI Autonomous Vehicles Dataset
Code for "Kuramoto Orientation Diffusion"
A Mimetic Procedural Benchmark Generator for the Abstraction and Reasoning Corpus
MDPO: Overcoming the Training-Inference Divide of Masked Diffusion Language Models
Official repository for the paper "Flow Equivariant Recurrent Neural Networks"
Don't just regulate gradients like in Muon, regulate the weights too
[CoRL 2025] CaRL: Learning Scalable Planning Policies with Simple Rewards
Two self-contained notebooks to perform "weight transfer" from a pretrained Transformer model to a neuron-astrocyte network.
Continuous Thought Machines, because thought takes time and reasoning is a process.
[ICCV'25 oral] Official Code for "LoftUp: Learning a Coordinate-Based Feature Upsampler for Vision Foundation Models"
[CVPR 2025] Volumetric Surfaces: Representing Fuzzy Geometries with Layered Meshes
Induce brain-like topographic structure in your neural networks
J-Moshi: A Japanese Full-duplex Spoken Dialogue System
A simple way to keep track of an Exponential Moving Average (EMA) version of your PyTorch model
Pre-trained models, data, code & materials from the paper "ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness" (ICLR 2019 Oral)
Processed / Cleaned Data for Paper Copilot
The simplest, fastest repository for training/finetuning medium-sized GPTs.
Diffusion model derived evolutionary algorithm
PyTorch optimiser for training ANNs with exponentiated gradient descent
Official implementation of "Traveling Waves Encode the Recent Past and Enhance Sequence Learning" (ICLR 2024)
🚀 Efficient implementations of state-of-the-art linear attention models