chawins (Organizations: @wagner-group)

Showing results
dInfer: An Efficient Inference Framework for Diffusion Language Models

Python · 287 stars · 26 forks · Updated Nov 7, 2025

vLLM support for D2F

Python · 39 stars · 6 forks · Updated Nov 5, 2025
Python · 8 stars · Updated Oct 27, 2025

Gemma open-weight LLM library, from Google DeepMind

Python · 3,804 stars · 575 forks · Updated Nov 5, 2025

The best ChatGPT that $100 can buy.

Python · 36,193 stars · 4,240 forks · Updated Nov 5, 2025

[EMNLP 2025 Oral] IPIGuard: A Novel Tool Dependency Graph-Based Defense Against Indirect Prompt Injection in LLM Agents

Python · 15 stars · 1 fork · Updated Sep 16, 2025

Curated resources, research, and tools for securing AI systems

172 stars · 31 forks · Updated Nov 6, 2025

gpt-oss-120b and gpt-oss-20b are two open-weight language models by OpenAI

Python · 19,137 stars · 1,917 forks · Updated Nov 1, 2025

Renderer for the harmony response format to be used with gpt-oss

Rust · 3,988 stars · 223 forks · Updated Nov 5, 2025

Repo for the paper "Meta SecAlign: A Secure Foundation LLM Against Prompt Injection Attacks".

Python · 34 stars · 8 forks · Updated Oct 30, 2025

Code for the paper "Defeating Prompt Injections by Design"

Jupyter Notebook · 145 stars · 24 forks · Updated Jun 20, 2025

[ICLR 2025] Dissecting adversarial robustness of multimodal language model agents

Python · 113 stars · 6 forks · Updated Feb 19, 2025

Open-source implementation of AlphaEvolve

Python · 4,479 stars · 664 forks · Updated Nov 1, 2025

Official PyTorch implementation for "Large Language Diffusion Models"

Python · 3,188 stars · 215 forks · Updated Nov 8, 2025

OO for LLMs

Python · 868 stars · 67 forks · Updated Nov 7, 2025

Dataset and code for "JailbreaksOverTime: Detecting Jailbreak Attacks Under Distribution Shift"

Jupyter Notebook · 7 stars · Updated Apr 24, 2025

Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs. Empirical tricks for LLM Jailbreaking. (NeurIPS 2024)

Python · 153 stars · 12 forks · Updated Nov 30, 2024

Repo for the research paper "SecAlign: Defending Against Prompt Injection with Preference Optimization"

Python · 74 stars · 5 forks · Updated Jul 24, 2025
Python · 10 stars · Updated Mar 22, 2025

A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents.

Python · 343 stars · 88 forks · Updated Oct 29, 2025

A reading list for large models safety, security, and privacy (including Awesome LLM Security, Safety, etc.).

1,728 stars · 116 forks · Updated Nov 2, 2025

A Survey on Jailbreak Attacks and Defenses against Multimodal Generative Models

255 stars · 11 forks · Updated Nov 4, 2025

A data augmentations library for audio, image, text, and video.

Python · 5,057 stars · 309 forks · Updated Oct 31, 2025

Fast near-duplicate matching: quickly finds near-duplicate spans in a document using the Rabin-Karp algorithm.

Rust · 2 stars · Updated Sep 22, 2024
Python · 1 star · Updated Jun 7, 2024
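The entry above names its technique (Rabin-Karp hashing to find near-duplicate spans) but the repository's code isn't shown here. As a minimal sketch of the idea only, here is a rolling-hash approach to the same problem; the function names (`rolling_hashes`, `near_duplicate_spans`) and parameters are illustrative choices, not the repo's actual API:

```python
def rolling_hashes(tokens, n, base=257, mod=(1 << 61) - 1):
    """Rabin-Karp: hash every n-gram of `tokens` in O(len(tokens)) total.

    Each window's hash is updated from the previous one in O(1) by
    removing the outgoing token and appending the incoming token.
    """
    if len(tokens) < n:
        return {}
    ids = [hash(t) % mod for t in tokens]
    top = pow(base, n - 1, mod)  # weight of the token leaving the window
    h = 0
    for x in ids[:n]:
        h = (h * base + x) % mod
    spans = {h: [0]}
    for i in range(1, len(tokens) - n + 1):
        h = ((h - ids[i - 1] * top) * base + ids[i + n - 1]) % mod
        spans.setdefault(h, []).append(i)
    return spans


def near_duplicate_spans(doc_a, doc_b, n=5):
    """Return (start_a, start_b) pairs where the two docs share an n-gram."""
    a, b = doc_a.split(), doc_b.split()
    ha, hb = rolling_hashes(a, n), rolling_hashes(b, n)
    matches = []
    for h, starts_a in ha.items():
        for i in starts_a:
            for j in hb.get(h, []):
                if a[i:i + n] == b[j:j + n]:  # verify: hashes can collide
                    matches.append((i, j))
    return matches
```

A production version would work at the character or byte level and deduplicate overlapping matches, but the rolling-hash update is the core of the Rabin-Karp speedup.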

The Security Toolkit for LLM Interactions

Python · 2,228 stars · 301 forks · Updated Nov 3, 2025

LLM Prompt Injection Detector

TypeScript · 1,370 stars · 118 forks · Updated Aug 7, 2024

Every practical and proposed defense against prompt injection.

574 stars · 38 forks · Updated Feb 22, 2025

Official code for "Measuring Non-Adversarial Reproduction of Training Data in Large Language Models" (https://arxiv.org/abs/2411.10242)

Jupyter Notebook · 8 stars · 1 fork · Updated Nov 18, 2024

Documenting large text datasets 🖼️ 📚

Python · 14 stars · 3 forks · Updated Dec 17, 2024