🌱 I’m working on LLM safety.
🔭 I’m currently a research intern at Shanghai AI Lab.
The official implementation of RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval (Python · 1.5k stars · 196 forks)
AISafetyLab: a comprehensive framework covering safety attacks, defenses, evaluation, and a curated paper list (Python · 212 stars · 14 forks)
Python · 111 stars · 10 forks
[ACL 2025] Data and code for the paper VLSBench: Unveiling Visual Leakage in Multimodal Safety (Python · 52 stars · 1 fork)
[EMNLP 2025 Main] Layer-Aware Representation Filtering: Purifying Finetuning Data to Preserve LLM Safety Alignment (Python · 8 stars)