List of papers introduced in today's episode
Coercing LLMs to do and reveal (almost) anything
http://arxiv.org/abs/2402.14020v1
Corrective Machine Unlearning
http://arxiv.org/abs/2402.14015v1
FedADMM-InSa: An Inexact and Self-Adaptive ADMM for Federated Learning
http://arxiv.org/abs/2402.13989v1
Cybersecurity as a Service
http://arxiv.org/abs/2402.13965v1
AttackGNN: Red-Teaming GNNs in Hardware Security Using Reinforcement Learning
http://arxiv.org/abs/2402.13946v1
Explain to Question not to Justify
http://arxiv.org/abs/2402.13914v1
Grover's oracle for the Shortest Vector Problem and its application in hybrid classical-quantum solvers
http://arxiv.org/abs/2402.13895v1
An Explainable Transformer-based Model for Phishing Email Detection: A Large Language Model Approach
http://arxiv.org/abs/2402.13871v1
Large Language Models are Advanced Anonymizers
http://arxiv.org/abs/2402.13846v1
An Empirical Study on Oculus Virtual Reality Applications: Security and Privacy Perspectives
http://arxiv.org/abs/2402.13815v1
Spatial-Domain Wireless Jamming with Reconfigurable Intelligent Surfaces
http://arxiv.org/abs/2402.13773v1
A Unified Knowledge Graph to Permit Interoperability of Heterogeneous Digital Evidence
http://arxiv.org/abs/2402.13746v1
On the Conflict of Robustness and Learning in Collaborative Machine Learning
http://arxiv.org/abs/2402.13700v1
Finding Incompatible Blocks for Reliable JPEG Steganalysis
http://arxiv.org/abs/2402.13660v1
Privacy-Preserving Instructions for Aligning Large Language Models
http://arxiv.org/abs/2402.13659v1
Generative AI for Secure Physical Layer Communications: A Survey
http://arxiv.org/abs/2402.13553v1
Private Gradient Descent for Linear Regression: Tighter Error Bounds and Instance-Specific Uncertainty Estimation
http://arxiv.org/abs/2402.13531v1
Towards Efficient Verification of Constant-Time Cryptographic Implementations
http://arxiv.org/abs/2402.13506v1
GradSafe: Detecting Unsafe Prompts for LLMs via Safety-Critical Gradient Analysis
http://arxiv.org/abs/2402.13494v1
Stealthy Adversarial Attacks on Stochastic Multi-Armed Bandits
http://arxiv.org/abs/2402.13487v1
Learning to Poison Large Language Models During Instruction Tuning
http://arxiv.org/abs/2402.13459v1
LLM Jailbreak Attack versus Defense Techniques -- A Comprehensive Study
http://arxiv.org/abs/2402.13457v1
Note: the content presented in this podcast consists of Japanese-language explanations of each paper's abstract; the copyright of each abstract belongs to the paper's authors.