Stars
Docker-based tool designed to automate building Emacs.
Integrating SSE with NVIDIA Triton Inference Server using a Python backend and Zephyr model. There is very little documentation on how to use NVIDIA Triton in streaming use cases (hard to find in their…
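Server-Sent Events is the transport commonly used to stream tokens from an inference server to a client. A minimal sketch of parsing an SSE stream in pure Python, assuming nothing about Triton itself (the payloads below are illustrative, not the server's actual format):

```python
def parse_sse(stream):
    """Yield the data payload of each SSE event in an iterable of text lines.

    An SSE event is one or more "data:" lines terminated by a blank line.
    """
    data_lines = []
    for line in stream:
        line = line.rstrip("\n")
        if line.startswith("data:"):
            # Strip the field name and optional leading space
            data_lines.append(line[len("data:"):].lstrip())
        elif line == "" and data_lines:
            # Blank line ends the event; join multi-line data with newlines
            yield "\n".join(data_lines)
            data_lines = []
    if data_lines:  # flush a trailing event with no final blank line
        yield "\n".join(data_lines)

raw = [
    "data: {\"token\": \"Hello\"}\n",
    "\n",
    "data: {\"token\": \" world\"}\n",
    "\n",
    "data: [DONE]\n",
]
events = list(parse_sse(raw))
```

In a real streaming setup the lines would come from an HTTP response body rather than a list, but the framing rules are the same.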
Triton CLI is an open source command line interface that enables users to create, deploy, and profile models served by the Triton Inference Server.
OpenAI compatible API for TensorRT LLM triton backend
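An OpenAI-compatible layer means clients send the familiar chat-completions request shape. A hedged sketch of that request body, assuming the usual `/v1/chat/completions` conventions (the model name here is a made-up placeholder, not taken from the repo):

```python
import json

def chat_request(model, prompt, stream=True, max_tokens=64):
    """Build the JSON body for an OpenAI-style chat completion request."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": stream,        # ask for SSE token chunks instead of one reply
        "max_tokens": max_tokens,
    }

body = json.dumps(chat_request("example-llm", "Hello!"))
```

With `stream=True`, such servers typically answer with an SSE stream of partial-completion chunks rather than a single JSON object.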
The ad-insertion reference pipeline shows how to integrate various media building blocks, with analytics powered by the OpenVINO™ Toolkit, for intelligent server-side ad insertion.
TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. Tensor…
The Triton TensorRT-LLM Backend
Repository for open inference protocol specification
Triton backend that enables pre-processing, post-processing, and other logic to be implemented in Python.
Repository to store INT8 quantized models derived from open model zoo
Gstreamer command-line cheat sheet
DL Streamer is now part of Open Edge Platform, for latest updates and releases please visit new repo: https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer
The DL Streamer Pipeline Zoo is a catalog of optimized media and media analytics pipelines. It includes tools for downloading pipelines and their dependencies and tools for measuring their performance.
libupnp: Build UPnP-compliant control points, devices, and bridges on several operating systems.
This repository contains a collection of FFmpeg* patches and samples to enable CNN model based video analytics capabilities (such as object detection, classification, recognition) in FFmpeg* framew…
The smart city reference pipeline shows how to integrate various media building blocks, with analytics powered by the OpenVINO™ Toolkit, for traffic or stadium sensing, analytics and management tasks.