ZJUNESA/SANDP

Agenda 2025

Please upload your slides or an introduction of your presentation (in Chinese or English) in advance, including the conference, title, and abstract; it can be written in Markdown. Please add your title to the agenda.

AI Security Group Meeting

Location: Cao Guangbiao High-tech Building 201

Time: Friday 18:00

| # | Date | Speaker | Title | Publication |
|---|------|---------|-------|-------------|
| 1 | 2025.01.03 | 曾睿 | BAIT: Large Language Model Backdoor Scanning by Inverting Attack Target | IEEE S&P 2025 |
| 2 | 2025.01.10 | 赵芷茗 | Emulated Disalignment: Safety Alignment for Large Language Models May Backfire! | ACL 2024 |
| 3 | 2025.01.17 | | | |
| 4 | 2025.01.24 | | | |
| 5 | 2025.01.31 | | | |
| 6 | 2025.02.07 | | | |
| 7 | 2025.02.14 | | | |
| 8 | 2025.02.21 | 冯周 | Towards Backdoor Stealthiness in Model Parameter Space | Preprint 2025 |
| 9 | 2025.02.28 | 甘雨由 | Systematic review of the development of open-source multimodal large language models | - |
| 10 | 2025.03.07 | 王异鸣 | Rethinking the Invisible Protection against Unauthorized Image Usage in Stable Diffusion | USENIX Security 2024 |
| 11 | 2025.03.14 | 李欣迪 | Stealthy Backdoor Attack in Self-Supervised Learning Vision Encoders for Large Vision Language Models | CVPR 2025 |
| 12 | 2025.03.21 | 陈曦 | Deliberative Alignment: Reasoning Enables Safer Language Models | OpenAI |
| 13 | 2025.03.28 | 贺兴 | DIAGNOSIS: Detecting Unauthorized Data Usages in Text-to-Image Diffusion Models | ICLR 2024 |
| 14 | 2025.04.04 | 李俊豪 | Air Gap: Protecting Privacy-Conscious Conversational Agents | CCS 2024 |
| 15 | 2025.04.11 | 陈佳豪 | On the Security and Privacy Risks of Model Context Protocol | |
| 16 | 2025.04.18 | 张铃沛 | CS-LSTMs: Context and Seasonal LSTMs for Time Series Anomaly Detection | |
| 17 | 2025.04.25 | 张童 | Explanation as a Watermark: Towards Harmless and Multi-bit Model Ownership Verification via Watermarking Feature Attribution | NDSS 2024 |
| 18 | 2025.05.02 | 刘家宁 | AutoDAN-Turbo: A Lifelong Agent for Strategy Self-Exploration to Jailbreak LLMs | ICLR 2025 |
| 19 | 2025.05.09 | 曾睿 | DataSentinel: A Game-Theoretic Detection of Prompt Injection Attacks | IEEE S&P 2025 |
| 20 | 2025.05.16 | 周豪杰 | SELFDEFEND: LLMs Can Defend Themselves against Jailbreaking in a Practical Manner | USENIX Security 2025 |
| 21 | 2025.05.23 | 冯周 | Whispering Under the Eaves: Protecting User Privacy Against Commercial and LLM-powered Automatic Speech Recognition Systems | USENIX Security 2025 |
| 22 | 2025.05.30 | 赵芷茗 | Safety Alignment Should Be Made More Than Just A Few Tokens Deep | ICLR 2025 |
| 23 | 2025.06.06 | 杨勇 | Alleviating the Fear of Losing Alignment in LLM Fine-Tuning | IEEE S&P 2025 |
| 24 | 2025.06.13 | 王异鸣 | DORMANT: Defending against Pose-driven Human Image Animation | USENIX Security 2025 |
| 25 | 2025.06.20 | 麻瓯勃 | Loss of Plasticity in Deep Reinforcement Learning | |
| 26 | 2025.06.27 | 李俊豪 | Doxing via the Lens: Revealing Location-related Privacy Leakage on Multi-modal Large Reasoning Models | arXiv |
| 27 | 2025.07.04 | 李欣迪 | Mirage in the Eyes: Hallucination Attack on Multi-modal Large Language Models with Only Attention Sink | USENIX Security 2025 |
| 28 | 2025.07.11 | 贺兴 | Fuzz-Testing Meets LLM-Based Agents: An Automated and Efficient Framework for Jailbreaking Text-To-Image Generation Models | IEEE S&P 2025 |
| 29 | 2025.07.18 | 陈佳豪 | Delving into the Privacy Risks of Generative Models | |
| 30 | 2025.07.25 | 陈曦 | BadRobot: Jailbreaking Embodied LLMs in the Physical World | ICLR 2025 |
| 31 | 2025.08.01 | 林瑞潇 | Industrial Frameworks of LLM-based Multi-Agent Systems | |
| 32 | 2025.08.08 | 张童 | Towards Label-Only Membership Inference Attack against Pre-trained Large Language Models | USENIX Security 2025 |
| 33 | 2025.08.15 | 周豪杰 | Safety Layers in Aligned Large Language Models: The Key to LLM Security | ICLR 2025 |
| 34 | 2025.08.22 | 甘雨由 | SafeNeuron: Detecting Jailbreaking in Large Vision Language Model via Locating Critical Neurons | AAAI 2025 |
| 35 | 2025.08.29 | 冯周 | SafeSpeech: Robust and Universal Voice Protection Against Malicious Speech Synthesis | USENIX Security 2025 |
| 36 | 2025.09.05 | 曾睿 | Cloak, Honey, Trap: Proactive Defenses Against LLM Agents | USENIX Security 2025 |
| 37 | 2025.09.12 | 王异鸣 | Exposing the Guardrails: Reverse-Engineering and Jailbreaking Safety Filters in DALL·E Text-to-Image Pipelines | USENIX Security 2025 |
| 38 | 2025.09.19 | 姜毅 | Cascading Adversarial Bias from Injection to Distillation in Language Models | CCS 2025 |
| 39 | 2025.09.26 | 刘家宁 | We Have a Package for You! A Comprehensive Analysis of Package Hallucinations by Code Generating LLMs | USENIX Security 2025 |
| 40 | 2025.10.03 | 赵芷茗 | Test-Time Poisoning Attacks Against Test-Time Adaptation Models | IEEE S&P 2024 |
| 41 | 2025.10.10 | 李欣迪 | | |
| 42 | 2025.10.17 | 陈佳豪 | | |
| 43 | 2025.10.24 | 张铃沛 | | |
| 44 | 2025.10.31 | 陈曦 | | |
| 45 | 2025.11.07 | 李俊豪 | | |
| 46 | 2025.11.14 | 贺兴 | | |
| 47 | 2025.11.21 | 职巳杰 | | |
| 48 | 2025.11.28 | 吴柏祺 | | |
| 49 | 2025.12.05 | 朱富康 | | |
| 50 | 2025.12.12 | 周豪杰 | | |
| 51 | 2025.12.19 | 王露怡 | | |
| 52 | 2025.12.26 | 张童 | | |

System Security Group Meeting

Location: Cao Guangbiao High-tech Building 201

Time: Sunday 18:00

| # | Date | Speaker | Title | Publication |
|---|------|---------|-------|-------------|
| 1 | 2025.01.05 | 李秉政 | SymBisect: Accurate Bisection for Fuzzer-Exposed Vulnerabilities | USENIX Security 2024 |
| 2 | 2025.01.12 | 黄钢 | Can LLMs Obfuscate Code? A Systematic Analysis of Large Language Models into Assembly Code Obfuscation | AAAI 2025 |
| 3 | 2025.01.19 | 刘昕鹏 | Unveiling IoT Security in Reality: A Firmware-Centric Journey | USENIX Security 2024 |
| 4 | 2025.01.26 | | | |
| 5 | 2025.02.02 | | | |
| 6 | 2025.02.09 | | | |
| 7 | 2025.02.16 | | | |
| 8 | 2025.02.23 | 江世昊 | GhostType: The Limits of Using Contactless Electromagnetic Interference to Inject Phantom Keys into Analog Circuits of Keyboards | NDSS 2024 |
| 9 | 2025.03.02 | 张凌铭 | CarpetFuzz: Automatic Program Option Constraint Extraction from Documentation for Fuzzing | USENIX Security 2023 |
| 10 | 2025.03.09 | 祝遥 | Incorporating Gradients to Rules: Towards Lightweight, Adaptive Provenance-based Intrusion Detection | NDSS 2025 |
| 11 | 2025.03.16 | 常博宇 | SpecRover: Code Intent Extraction via LLMs | ICSE 2025 |
| 12 | 2025.03.23 | 杨禹 | AdvSQLi: Generating Adversarial SQL Injections Against Real-World WAF-as-a-Service | TIFS 2024 |
| 13 | 2025.03.30 | 林型双 | PropertyGPT: LLM-driven Formal Verification of Smart Contracts through Retrieval-Augmented Property Generation | NDSS 2025 |
| 14 | 2025.04.06 | 武旗龙 | Large Language Models for Code Analysis: Do LLMs Really Do Their Job? | USENIX Security 2024 |
| 15 | 2025.04.13 | 李秉政 | ARTEMIS: Toward Accurate Detection of Server-Side Request Forgeries through LLM-Assisted Inter-procedural Path-Sensitive Taint Analysis | OOPSLA 2025 |
| 16 | 2025.04.20 | 黄钢 | kAPR: LLM-assisted Automated Program Repair on Linux Kernel | Personal Progress Report |
| 17 | 2025.04.27 | 刘昕鹏 | Static Analysis for (RTOS-Based) Firmware | Personal Progress Report |
| 18 | 2025.05.04 | 江世昊 | Inside Your Robot Dog Friend: Architecture and Security Challenges of Embodied AI Intelligent Unmanned Systems | Personal Progress Report |
| 19 | 2025.05.11 | 张凌铭 | The Case for Learned Provenance-based System Behavior Baseline | ICML 2025 |
| 20 | 2025.05.18 | 祝遥 | Fuzzing across JavaScript and WebAssembly Language Boundary | Personal Progress Report |
| 21 | 2025.05.25 | 杨禹 | An Empirical Study on EDR Systems’ Robustness against Attack Mutations by LLMs | Personal Progress Report |
| 22 | 2025.06.01 | 常博宇 | Towards Patch Correctness Assessment | Personal Progress Report |
| 23 | 2025.06.08 | 林型双 | CompliGuard: Detecting Reusable Components Usage Logical Noncompliance in Smart Contracts | Personal Progress Report |
| 24 | 2025.06.15 | 黄钢 | HAFE: A Hybrid and Automated PHP WebShell Obfuscation Technique with Branch-Oriented Control and Variable Functions for Detection Evasion | Personal Progress Report |
| 25 | 2025.06.22 | 武旗龙 | FLLMBackdoor: Stealthy Injection and Triggering in Malicious LLM Deployment Frameworks | Personal Progress Report |
| 26 | 2025.06.29 | 祝遥 | What We Talk About When We Talk About Logs: Understanding the Effects of Dataset Quality on Endpoint Threat Detection Research | IEEE S&P 2025 |
| 27 | 2025.07.06 | 张凌铭 | RepairAgent: An Autonomous, LLM-Based Agent for Program Repair | ICSE 2025 |
| 28 | 2025.07.13 | 林型双 | Copy-and-Paste? Identifying EVM-Inequivalent Code Smells in Multi-chain Reuse Contracts | ISSTA 2025 |
| 29 | 2025.07.20 | 江世昊 | Demystifying RCE Vulnerabilities in LLM-Integrated Apps | CCS 2024 |
| 30 | 2025.07.27 | 刘昕鹏 | Stealthy and Persistent Attacks Leveraging AI-IDE | Personal Progress Report |
| 31 | 2025.08.03 | 杨禹 | Generating API Parameter Security Rules with LLM for API Misuse Detection | NDSS 2025 |
| 32 | 2025.08.10 | 常博宇 | COMMITSHIELD: Tracking Vulnerability Introduction and Fix in Version Control Systems | ICSE 2025 |
| 33 | 2025.08.17 | 黄钢 | An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities against Strong Detection | USENIX Security 2024 |
| 34 | 2025.08.24 | 江世昊 | BadRobot: Manipulating Embodied LLMs in the Physical World | ICLR 2025 |
| 35 | 2025.08.31 | 张凌铭 | Top Score on the Wrong Exam: On Benchmarking in Machine Learning for Vulnerability Detection | ISSTA 2025 |
| 36 | 2025.09.07 | 刘昕鹏 | UntrustIDE: Exploiting Weaknesses in VS Code Extensions | NDSS 2024 |
| 37 | 2025.09.14 | 祝遥 | AutoLabel: Automated Fine-Grained Log Labeling for Cyber Attack Dataset Generation | USENIX Security 2025 |
| 38 | 2025.09.21 | 林型双 | Forge: An LLM-driven Framework for Large-Scale Smart Contract Vulnerability Dataset Construction | ICSE 2026 |
| 39 | 2025.09.28 | 武旗龙 | The Philosopher's Stone: Trojaning Plugins of Large Language Models | NDSS 2025 |
| 40 | 2025.10.05 | 常博宇 | PATCHAGENT: A Practical Program Repair Agent Mimicking Human Expertise | USENIX Security 2025 |
| 41 | 2025.10.12 | 杨禹 | | |
| 42 | 2025.10.19 | 徐博 | | |
| 43 | 2025.10.26 | 江世昊 | | |
| 44 | 2025.11.02 | 王晋文 | | |
| 45 | 2025.11.09 | 张宁瑞 | | |
| 46 | 2025.11.16 | 林型双 | | |
| 47 | 2025.11.23 | 祝遥 | | |
| 48 | 2025.11.30 | 武旗龙 | | |
| 49 | 2025.12.07 | 黄钢 | | |
| 50 | 2025.12.14 | 张凌铭 | | |
| 51 | 2025.12.21 | 刘昕鹏 | | |
| 52 | 2025.12.28 | 常博宇 | | |

