A reading list for large model safety, security, and privacy (including Awesome LLM Security, Safety, etc.).
Universal and Transferable Attacks on Aligned Language Models
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
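A minimal evasion sketch with ART, assuming a PyTorch classifier on 28x28 grayscale inputs; the toy model, data shapes, and hyperparameters below are illustrative, not part of the repo:

```python
# Sketch: wrap a PyTorch model with ART, then craft FGSM adversarial examples.
import numpy as np
import torch.nn as nn
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy stand-in classifier
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

x = np.random.rand(8, 1, 28, 28).astype(np.float32)   # stand-in test batch
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x)                           # adversarial examples
print(classifier.predict(x_adv).argmax(axis=1))        # predictions on x_adv
```

The same wrapped classifier can be handed to ART's poisoning, extraction, and inference attack classes, which is the main point of the estimator abstraction.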
Source code of IPA, https://escholarship.org/uc/item/2p0805dq
gSlice: Slicing GPUs to Serve Heterogeneous Inference Requests
MMdnn is a set of tools to help users interoperate among different deep learning frameworks, e.g. model conversion and visualization. Convert models between Caffe, Keras, MXNet, Tensorflow, CNTK, …
Pretrained ConvNets for pytorch: NASNet, ResNeXt, ResNet, InceptionV4, InceptionResnetV2, Xception, DPN, etc.
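A short usage sketch for these pretrained backbones, assuming the `pretrainedmodels` package from that repo; the chosen architecture and dummy input are illustrative:

```python
# Sketch: load a pretrained ImageNet backbone from pretrained-models.pytorch.
import torch
import pretrainedmodels

print(pretrainedmodels.model_names[:5])    # some of the available architectures
model = pretrainedmodels.__dict__['resnext101_32x4d'](
    num_classes=1000, pretrained='imagenet')
model.eval()

# The package attaches the expected preprocessing metadata to the model object.
print(model.input_size, model.mean, model.std)

with torch.no_grad():
    x = torch.randn(1, *model.input_size)  # dummy image batch
    logits = model(x)
print(logits.shape)                         # torch.Size([1, 1000])
```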
Advbox is a toolbox to generate adversarial examples that fool neural networks in PaddlePaddle, PyTorch, Caffe2, MxNet, Keras, and TensorFlow, and it can benchmark the robustness of machine learning models…
TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP https://textattack.readthedocs.io/en/master/
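A hedged sketch of TextAttack's Python API; the checkpoint name, dataset, and recipe are illustrative, and the `Attacker`/`AttackArgs` classes assume a recent TextAttack release:

```python
# Sketch: run the TextFooler recipe against a HuggingFace sentiment classifier.
import transformers
from textattack import Attacker, AttackArgs
from textattack.attack_recipes import TextFoolerJin2019
from textattack.datasets import HuggingFaceDataset
from textattack.models.wrappers import HuggingFaceModelWrapper

name = "textattack/bert-base-uncased-rotten-tomatoes"  # illustrative checkpoint
model = transformers.AutoModelForSequenceClassification.from_pretrained(name)
tokenizer = transformers.AutoTokenizer.from_pretrained(name)
wrapper = HuggingFaceModelWrapper(model, tokenizer)

attack = TextFoolerJin2019.build(wrapper)
dataset = HuggingFaceDataset("rotten_tomatoes", split="test")
attacker = Attacker(attack, dataset, AttackArgs(num_examples=10))
attacker.attack_dataset()  # prints per-example perturbations and a summary table
```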
🛡 A curated list of adversarial attacks in PyTorch, with a focus on transferable black-box attacks.
PyTorch implementation of adversarial attacks [torchattacks]
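A minimal sketch of the torchattacks API; the toy model and random batch stand in for a trained classifier and real data normalized to [0, 1]:

```python
# Sketch: craft PGD adversarial examples with torchattacks.
import torch
import torch.nn as nn
import torchattacks

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).eval()  # toy stand-in
images = torch.rand(4, 3, 32, 32)           # batch of inputs in [0, 1]
labels = torch.randint(0, 10, (4,))         # ground-truth labels

atk = torchattacks.PGD(model, eps=8 / 255, alpha=2 / 255, steps=10)
adv_images = atk(images, labels)            # same shape as images, perturbed
print((adv_images - images).abs().max())    # perturbation stays within the eps ball
```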
CanarySEFI is a framework for evaluating the robustness of deep learning-based image recognition models. It can evaluate model robustness and attack/defense algorithm effectiveness, encompassing 26…
Experimental code for the paper "Practical Over-Threshold Multi-Party Private Set Intersection"
Private set intersection using garbled Bloom filters in the semi-honest setting
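As a rough illustration of the data structure behind that protocol, here is a toy garbled Bloom filter in Python; the hash choices, sizes, and the omitted oblivious-transfer step are all simplifications, and this is not the repo's code:

```python
# Toy garbled Bloom filter (GBF): each inserted element is XOR-shared across its
# hash positions, so XOR-ing those positions back together recovers the element.
# GBF-based PSI additionally runs oblivious transfers over the slots, omitted here.
import hashlib
import secrets

M, K, SLOT = 256, 4, 16  # slots, hash functions, bytes per slot (toy sizes)

def positions(item: bytes):
    # K salted hashes, deduplicated so insertion and lookup use the same slots.
    pos = []
    for i in range(K):
        p = int.from_bytes(hashlib.sha256(bytes([i]) + item).digest(), "big") % M
        if p not in pos:
            pos.append(p)
    return pos

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def build_gbf(items):
    gbf = [None] * M
    for item in items:
        share = item.ljust(SLOT, b"\x00")
        free = None
        for p in positions(item):
            if gbf[p] is None and free is None:
                free = p                          # reserve one empty slot
            else:
                if gbf[p] is None:
                    gbf[p] = secrets.token_bytes(SLOT)
                share = xor(share, gbf[p])
        if free is None:
            raise RuntimeError("no free slot; use a larger M in practice")
        gbf[free] = share                         # slots now XOR back to the item
    # fill untouched slots with random bytes so the filter leaks nothing extra
    return [s if s is not None else secrets.token_bytes(SLOT) for s in gbf]

def probably_contains(gbf, item: bytes) -> bool:
    acc = b"\x00" * SLOT
    for p in positions(item):
        acc = xor(acc, gbf[p])
    return acc == item.ljust(SLOT, b"\x00")

gbf = build_gbf([b"alice", b"bob"])
print(probably_contains(gbf, b"alice"), probably_contains(gbf, b"carol"))  # True False
```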
Multi-party Private Set Intersections & Threshold Set Intersections
Curated collection of papers on MoE model inference