
NVIDIA Corporation

Pinned

  1. cuopt Public

    GPU accelerated decision optimization

    CUDA · 711 stars · 127 forks

  2. cuopt-examples Public

    NVIDIA cuOpt examples for decision optimization

    Jupyter Notebook · 408 stars · 64 forks

  3. open-gpu-kernel-modules Public

    NVIDIA Linux open GPU kernel module source

    C · 16.7k stars · 1.6k forks

  4. aistore Public

    AIStore: scalable storage for AI applications

    Go · 1.8k stars · 234 forks

  5. nvidia-container-toolkit Public

    Build and run containers leveraging NVIDIA GPUs

    Go · 4.1k stars · 477 forks

  6. GenerativeAIExamples Public

    Generative AI reference workflows optimized for accelerated infrastructure and microservice architecture.

    Jupyter Notebook · 3.8k stars · 978 forks

Repositories

Showing 10 of 672 repositories
  • holodeck Public

    Holodeck is a project to create test environments optimised for GPU projects.

    Go · 26 stars · Apache-2.0 · 13 forks · 2 open issues · 1 open PR · Updated Feb 15, 2026
  • k8s-test-infra Public

    Kubernetes test infrastructure.

    Go · 12 stars · Apache-2.0 · 11 forks · 0 open issues · 3 open PRs · Updated Feb 15, 2026
  • KAI-Scheduler Public

    KAI Scheduler is an open-source, Kubernetes-native scheduler for AI workloads at large scale.

    Go · 1,133 stars · Apache-2.0 · 152 forks · 30 open issues (1 needs help) · 75 open PRs · Updated Feb 15, 2026
  • cuda-quantum Public

    C++ and Python support for the CUDA Quantum programming model, targeting heterogeneous quantum-classical workflows (a minimal Python sketch follows this list).

    C++ · 930 stars · 340 forks · 427 open issues (16 need help) · 112 open PRs · Updated Feb 15, 2026
  • mig-parted Public

    MIG Partition Editor for NVIDIA GPUs

    Go · 240 stars · Apache-2.0 · 56 forks · 22 open issues · 22 open PRs · Updated Feb 15, 2026
  • TensorRT-LLM Public

    TensorRT LLM provides an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizations for efficient inference on NVIDIA GPUs. It also contains components to create Python and C++ runtimes that orchestrate inference execution in a performant way (a minimal Python sketch follows this list).

    Python · 12,883 stars · 2,100 forks · 533 open issues · 519 open PRs · Updated Feb 15, 2026
  • bare-metal-manager-core Public

    NVIDIA Bare Metal Manager: hardware lifecycle management and multi-tenant networking.

    Rust · 38 stars · Apache-2.0 · 27 forks · 45 open issues (3 need help) · 23 open PRs · Updated Feb 15, 2026
  • Model-Optimizer Public

    A unified library of state-of-the-art model optimization techniques such as quantization, pruning, distillation, and speculative decoding. It compresses deep learning models for downstream deployment frameworks such as TensorRT-LLM, TensorRT, and vLLM to optimize inference speed (a minimal Python sketch follows this list).

    Python · 1,988 stars · Apache-2.0 · 273 forks · 66 open issues · 87 open PRs · Updated Feb 14, 2026
  • ACCV-Lab Public

    Accelerated Computer Vision Lab (ACCV-Lab) is a collection of packages that facilitate efficient end-to-end training in the ADAS domain, with each package offering tools and best practices for a specific task in that domain.

    Python · 45 stars · Apache-2.0 · 8 forks · 1 open issue · 0 open PRs · Updated Feb 15, 2026
  • k8s-operator-libs Public

    A collection of useful Go libraries to ease the development of NVIDIA Operators for GPU/NIC management.

    Go · 29 stars · Apache-2.0 · 22 forks · 2 open issues · 5 open PRs · Updated Feb 15, 2026
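
Usage sketches

For cuda-quantum: a minimal sketch of a CUDA-Q Python kernel, assuming the cudaq Python package is installed. The Bell-state kernel and the "nvidia" simulator target mentioned in the comment are illustrations, not code taken from the repository.

    import cudaq

    # Optionally select the GPU-accelerated simulator (requires an NVIDIA GPU):
    # cudaq.set_target("nvidia")

    @cudaq.kernel
    def bell():
        # Allocate two qubits, entangle them, and measure both.
        qubits = cudaq.qvector(2)
        h(qubits[0])
        x.ctrl(qubits[0], qubits[1])
        mz(qubits)

    # Sample the kernel and print the measurement counts.
    result = cudaq.sample(bell)
    print(result)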
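
For TensorRT-LLM: a minimal sketch of the high-level Python API described above, assuming the tensorrt_llm package is installed and a supported GPU is available; the model name and the SamplingParams arguments are assumptions chosen for illustration.

    from tensorrt_llm import LLM, SamplingParams

    def main():
        # Load (or build) an engine for the model and run batched generation on the GPU.
        llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")  # illustrative model id
        params = SamplingParams(temperature=0.8, max_tokens=64)
        outputs = llm.generate(["What does GPU-accelerated inference mean?"], params)
        for out in outputs:
            print(out.outputs[0].text)

    if __name__ == "__main__":
        main()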
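
For Model-Optimizer: a minimal sketch of post-training INT8 quantization with the nvidia-modelopt package; the module path, the quantize call, and the INT8_DEFAULT_CFG name follow its documented PyTorch entry point as understood here and should be treated as assumptions.

    import torch
    import modelopt.torch.quantization as mtq  # assumed module path

    # A toy model standing in for a real network.
    model = torch.nn.Sequential(
        torch.nn.Linear(128, 256), torch.nn.ReLU(), torch.nn.Linear(256, 10)
    )

    def forward_loop(m):
        # Calibration pass: feed representative data so the quantizer can
        # collect activation statistics.
        for _ in range(8):
            m(torch.randn(4, 128))

    # Quantize with a default INT8 config (assumed name); the result can then be
    # exported to deployment frameworks such as TensorRT or TensorRT-LLM.
    model = mtq.quantize(model, mtq.INT8_DEFAULT_CFG, forward_loop=forward_loop)
    print(model)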