Showing episodes and shows of
Dr. Florian Tramèr
Shows
ThinkstScapes
ThinkstScapes Research Roundup - Q2 - 2024
AI/ML in security
Injecting into LLM-adjacent components, by Johann Rehberger [Blog 1] [Blog 2]
Teams of LLM Agents can Exploit Zero-Day Vulnerabilities, by Richard Fang, Rohan Bindu, Akul Gupta, Qiusi Zhan, and Daniel Kang [Paper]
Project Naptime: Evaluating Offensive Security Capabilities of Large Language Models, by Sergei Glazunov and Mark Brand [Blog]
LLMs Cannot Reliably Identify and Reason About Security Vulnerabilities (Yet?): A Comprehensive Evaluation, Framework, and Benchmarks, by Saad Ullah, Mingji Han, Saurabh Pujar, Hammond Pearce, Ayse Kivilcim Coskun, and Gianluca Str...
2024-07-29
31 min
Zero Knowledge
Enhancing On-Chain Intelligence with Ritual
This week, Anna and Tarun chat with Niraj Pant and Anish Agnihotri from Ritual. They kick off by revisiting the AI x Crypto intersection before diving into the Ritual product and its goals around developing open-access AI infrastructure. They explore the opportunities that open up when you bring ML to smart contracts. Here are some additional links for this episode:
Episode 216: A Dip into the Mempool & MEV with Project Blanc
Episode 246: Adversarial Machine Learning Research with Florian Tramèr
Episode 314: Succinct's Platform, Prover Network and SP1
FrenRug Website
Mistral 7B by Jiang, Sablayrolles, Mensch, Bamford, Chaplot, De La...
2024-03-27
1h 08
The MLSecOps Podcast
AI/ML Security in Retrospect: Insights from Season 1 of The MLSecOps Podcast (Part 1)
This episode is also available in video format on YouTube. Welcome to the final episode of the first season of The MLSecOps Podcast, brought to you by the team at Protect AI. In this two-part episode, we'll be taking a look back at some favorite highlights from the season, where we dove deep into machine learning security operations. In this first part, we'll be revisiting clips related to things like adversarial machine learning; how malicious actors can use AI to fool machine lear...
2023-09-19
37 min
Zero Knowledge
The State of ZK with Anna and Kobi
In this week's episode, host Anna Rose and Kobi Gurkan check in on the state of ZK today. They discuss recent ZK applications and tooling as well as developments from the last 6 months. They review new use cases such as ZK for off-chain computations and dive into research breakthroughs, trends, security and much more. Finally, they introduce the concept of zkpod.ai, which will be covered fully in next week's episode. Additional links mentioned in this episode:
Renegade.fi
Experimenting with Collaborative zk-SNARKs: Zero-Knowledge Proofs for Distributed Secrets
Episode 256: New ZK Use Cases with Dan Boneh
Ep...
2023-05-31
48 min
The MLSecOps Podcast
Just How Practical Are Data Poisoning Attacks? With Guest: Dr. Florian Tramèr
ETH Zürich's Assistant Professor of Computer Science, Dr. Florian Tramèr, joins us to talk about data poisoning attacks and the intersection of Adversarial ML and MLSecOps (machine learning security operations). Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com. Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models
Recon: Automated Red Teaming for GenAI
Protect AI's ML Security-Focused Open Source Tools
LL...
2023-03-29
47 min
Zero Knowledge
Where ZK and ML intersect with Yi Sun and Daniel Kang
This week, Anna Rose and Tarun Chitra dive back into the topic of ZK ML with guests Yi Sun, co-founder of Axiom, and Daniel Kang, Assistant Professor of Computer Science at UIUC. They discuss Yi and Daniel's previous academic work and what led them to get interested in ZK topics, and specifically ZK ML. They then dive into a discussion about two recent papers which examine the use of ZK within machine learning architectures. Here are some additional links for this episode:
Episode 246: Adversarial Machine Learning Research with Florian Tramèr
Trustless Verification of Machine Learning
Efficient Ver...
2023-02-22
56 min
Zero Knowledge
ZK in 2023 with Kobi, Guillermo, and Tarun
In this week's episode, Anna and guest co-hosts Guillermo, Tarun, and Kobi share their thoughts about the state of Zero Knowledge tech today and what it might look like going into 2023. The group discusses some exciting ZK experiments and some of the emerging topics, such as ZK ID, ZK Bridges, ZK DeFi, and more. Here are some additional links for this episode:
Epicenter: State of the ZK Ecosystem with Anna Rose & Kobi Gurkan Pt 1
Epicenter: State of the ZK Ecosystem with Anna Rose & Kobi Gurkan Pt 2
Episode 246: Adversarial Machine Learning Research with Florian Tramèr
Epi...
2023-01-18
1h 06
Zero Knowledge
Adversarial Machine Learning Research with Florian Tramèr
This week, Anna and Tarun chat with Florian Tramèr, Assistant Professor at ETH Zurich. They discuss his earlier work on side channel attacks on privacy blockchains, as well as his academic focus on Machine Learning (ML) and adversarial research. They define some key ML terms, tease out some of the nuances of ML training and models, chat zkML and other privacy environments where ML can be trained, and look at why the security around ML will be important as these models become increasingly used in production. Here are some additional links for this episode:
Episode 228: C...
2022-09-21
1h 06
Machine Learning Street Talk (MLST)
#040 - Adversarial Examples (Dr. Nicholas Carlini, Dr. Wieland Brendel, Florian Tramèr)
Adversarial examples have attracted significant attention in machine learning, but the reasons for their existence and pervasiveness remain unclear. There is good reason to believe neural networks look at very different features than we would have expected. As articulated in the 2019 "features, not bugs" paper, adversarial examples can be directly attributed to the presence of non-robust features: features derived from patterns in the data distribution that are highly predictive, yet brittle and incomprehensible to humans. Adversarial examples don't just affect deep learning models. A cottage industry has sprung up around threat modeling in AI and ML...
2021-01-31
1h 36
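The phenomenon this episode discusses can be seen in miniature without any neural network at all. Below is a toy sketch in the spirit of the fast gradient sign method (FGSM): for a linear classifier, nudging every input coordinate a small step in the sign of the corresponding weight shifts the score enough to flip the prediction. The weights and input here are made-up numbers for illustration, not anything from the episode or the papers it covers.

```python
# Toy adversarial example in the FGSM style: perturb each coordinate
# by eps in the direction that increases the classifier's score.
# The model and numbers are illustrative, chosen for the sketch.
import numpy as np

w = np.array([0.5, -0.3, 0.8, 0.1])   # weights of a toy linear classifier
x = np.array([1.0, 2.0, -0.5, 0.5])   # a benign input

def predict(x):
    return 1 if w @ x >= 0 else 0

# Original score: 0.5 - 0.6 - 0.4 + 0.05 = -0.45, so class 0.
eps = 0.3
x_adv = x + eps * np.sign(w)          # small, coordinate-wise nudge

# New score: -0.45 + eps * sum(|w|) = -0.45 + 0.51 = 0.06, so class 1.
print(predict(x), predict(x_adv))     # prediction flips: 0 -> 1
```

Each coordinate moved by at most 0.3, yet the label changed; for high-dimensional inputs like images, the same trick works with perturbations far too small for a human to notice, which is what makes the "non-robust features" framing compelling.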
Data Skeptic
Stealing Models from the Cloud
Platform as a service is a growing trend in data science where services like fraud analysis and face detection can be provided via APIs. Such services turn the actual model into a black box to the consumer. But can the model be reverse engineered? Florian Tramèr shares his work in this episode showing that it can. The paper Stealing Machine Learning Models via Prediction APIs is definitely worth your time to read if you enjoy this episode. Related source code can be found in https://github.com/ftramer/Steal-ML.
2016-10-28
37 min
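The extraction idea discussed in this episode can be sketched in a few lines: query a black-box prediction API with inputs of your choosing, then fit a local substitute model on its answers. The sketch below stands in a local scikit-learn model for the cloud API; the names and setup are illustrative assumptions, not the actual code from the paper (see the linked ftramer/Steal-ML repository for that).

```python
# Minimal sketch of model stealing via a prediction API: the attacker
# never sees the victim's weights, only its predictions, yet can train
# a close substitute. A local model stands in for the cloud service.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# "Cloud" model the attacker cannot inspect.
X_secret = rng.normal(size=(500, 2))
y_secret = (X_secret[:, 0] + X_secret[:, 1] > 0).astype(int)
victim = LogisticRegression().fit(X_secret, y_secret)

def prediction_api(X):
    # All the attacker gets: labels for chosen queries.
    return victim.predict(X)

# Attacker: label random query points via the API, fit a substitute.
X_query = rng.normal(size=(200, 2))
substitute = LogisticRegression().fit(X_query, prediction_api(X_query))

# Agreement on fresh inputs measures how faithful the stolen copy is.
X_test = rng.normal(size=(1000, 2))
agreement = (substitute.predict(X_test) == victim.predict(X_test)).mean()
print(f"substitute agrees with victim on {agreement:.0%} of test inputs")
```

With a few hundred queries the substitute tracks the victim's decisions almost everywhere, which is the core economic point of the paper: pay-per-query APIs leak the model they are trying to monetize.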