- Pister Labs
- Madison, WI
- pister.dev
- https://orcid.org/0009-0004-3144-8235
- @kaiserpister
Stars
Agent Reinforcement Trainer: train multi-step agents for real-world tasks using GRPO. Give your agents on-the-job training. Reinforcement learning for Qwen2.5, Qwen3, Llama, and more!
Lumina-mGPT 2.0: Stand-Alone AutoRegressive Image Modeling
Framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
Natural language search for complex JSON arrays, with AI Quickstart.
[WIP] Layer Diffusion for WebUI (via Forge)
Generative Models by Stability AI
RoMa: A lightweight library to deal with 3D rotations in PyTorch.
Karras et al. (2022) diffusion models for PyTorch
[EMNLP'23, ACL'24] To speed up LLM inference and enhance LLMs' perception of key information, compress the prompt and KV cache, achieving up to 20x compression with minimal performance loss.
Course to get into Large Language Models (LLMs) with roadmaps and Colab notebooks.
Convert PDF to markdown + JSON quickly with high accuracy
Supercharge Your LLM Application Evaluations 🚀
[ICLR 2024] Official implementation of DreamCraft3D: Hierarchical 3D Generation with Bootstrapped Diffusion Prior
A unified framework for 3D content generation.
Tensors and Dynamic neural networks in Python with strong GPU acceleration
The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens.
General technology for enabling AI capabilities w/ LLMs and MLLMs
WhisperX: Automatic Speech Recognition with Word-level Timestamps (& Diarization)
Marsha is a functional, higher-level, English-based programming language that gets compiled into tested Python software by an LLM
Must read research papers and links to tools and datasets that are related to using machine learning for compilers and systems optimisation
An open-source visual programming environment for battle-testing prompts to LLMs.
Reference implementation of the Transformer architecture optimized for Apple Neural Engine (ANE)
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
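The core idea behind LoRA, as referenced in the last entry, is to freeze the pretrained weight matrix and learn only a low-rank update. A minimal sketch in NumPy (the function name `lora_forward` and the plain-array interface are illustrative assumptions, not the `loralib` API, which wraps PyTorch layers):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=1.0):
    """Sketch of a LoRA forward pass: y = x W^T + (alpha/r) * x A^T B^T.

    W is the frozen pretrained weight (out x in); only the low-rank
    factors A (r x in) and B (out x r) would be trained. B is typically
    zero-initialized so the update starts as a no-op.
    """
    r = A.shape[0]
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

# With B = 0 the output matches the frozen base layer exactly.
rng = np.random.default_rng(0)
x = rng.standard_normal((2, 8))
W = rng.standard_normal((4, 8))
A = rng.standard_normal((2, 8)) * 0.01
B = np.zeros((4, 2))
y = lora_forward(x, W, A, B)
```

Because only `A` and `B` carry gradients, the number of trainable parameters drops from `out * in` to `r * (out + in)`, which is what makes fine-tuning large models cheap.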