Stars
Exploring on Point Clouds: A Lightweight LiDAR-Based UAV Exploration Framework for Large-Scale Scenarios
A Robust Approach for LiDAR-Inertial Odometry Without Sensor-Specific Modelling
This repository contains the official implementation of "FastVLM: Efficient Vision Encoding for Vision Language Models" - CVPR 2025
Paper Survey for Transformer-based SLAM
Code & Models for 3DETR - an End-to-end transformer model for 3D object detection
This repository primarily organizes papers, code, and other relevant materials related to Active SLAM and Robotic Exploration.
Real-time webcam demo with SmolVLM and llama.cpp server
Lichtblick is an integrated visualization and diagnosis tool for robotics, available in your browser or as a desktop app on Linux, Windows, and macOS.
ymichael / open-codex
Forked from openai/codex. Lightweight coding agent that runs in your terminal
3D LIDAR Localization using NDT/GICP and pointcloud map in ROS 2 (Not SLAM)
Notebook-based book "Introduction to Robotics and Perception" by Frank Dellaert and Seth Hutchinson
A library for differentiable nonlinear optimization
Python tool for converting files and office documents to Markdown.
Get your documents ready for gen AI
A Tightly-Coupled System for LiDAR-Inertial Odometry and Multi-Object Tracking.
An in-depth step-by-step tutorial for implementing sensor fusion with robot_localization! 🛰
The fuse stack provides a general architecture for performing sensor fusion live on a robot. Some possible applications include state estimation, localization, mapping, and calibration.
Release repo for our SLAM Handbook
An educational, from-scratch, single-file, Python-only pose-graph optimization implementation
Minimal, robust, accurate and real-time LiDAR odometry
Pointcept: Perceive the world with sparse points, a codebase for point cloud perception research. Latest works: Concerto (NeurIPS'25), Sonata (CVPR'25 Highlight), PTv3 (CVPR'24 Oral)
We write your reusable computer vision tools. 💜
A one-stop repository for generative AI research updates, interview resources, notebooks, and much more!
Robot Utility Models are trained on a diverse set of environments and objects, and then can be deployed in novel environments with novel objects without any further data or training.
Target-free Extrinsic Calibration of a 3D Lidar and an IMU
YOLOv10: Real-Time End-to-End Object Detection [NeurIPS 2024]