Stars
Achieve state of the art inference performance with modern accelerators on Kubernetes
An RL framework for multi-LLM agent systems
Pretrain, fine-tune, and serve LLMs on Intel platforms with Ray
Confidential Computing Zoo provides confidential computing solutions based on Intel SGX, TDX, HEXL, and other technologies.
Official Python client library for Kubernetes
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal models, for both inference and training.
An End-to-End Distributed and Scalable Cloud KMS (Key Management System) built on top of Intel SGX enclave-based HSM (Hardware Security Module), aka eHSM.
Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, DeepSeek, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discr…
A multi-party collaborative machine learning framework
Occlum is a memory-safe, multi-process library OS for Intel SGX
mbedtls-SGX: an SGX-friendly TLS stack (ported from mbedtls)