# Summary
Today on the show I am talking to Manuel Reinsperger, Cybersecurity Expert and Penetration Tester. Manuel will provide us an introduction into the topic of Machine Learning Security with an emphasis on Chatbot and Large Language Model security.
We are going to discuss topics like AI Red Teaming that focuses on identifying and testing AI systems within an holistic approach for system security. Another major theme of the episode are different Attack Scenarios against Chatbots and Agent systems.
Manuel will explain to use, what Jailsbreak are and methods to exfiltrate information and cause harm through direct and indirect prompt injection.
Machine Learning security is a topic I am specially interested in and I hope you are going to enjoy this episode and find it useful.
## AAIP Community
Join our discord server and ask guest directly or discuss related topics with the community.
https://discord.gg/5Pj446VKNU
## TOC
00:00:00 Beginning
00:02:05 Guest Introduction
00:05:16 What is ML Security and how does it differ from Cybersecurity?
00:25:56 Attacking chatbot systems
00:41:12 Attacking RAGs with Indirect prompt injection
00:54:43 Outlook on LLM security
## Sponsors
- Quantics: Supply Chain Planning for the new normal - the never normal - https://quantics.io/
- Belichberg GmbH: Software that Saves the Planet: The Future of Energy Begins Here - https://belichberg.com/
## References
Manuel Reinsperger - https://manuel.reinsperger.org/
Test your prompt hacking skills: https://gandalf.lakera.ai/
Hacking Bing Chat: https://betterprogramming.pub/the-dark-side-of-llms-we-need-to-rethinInjectGPT: k-large-language-models-now-6212aca0581a
AI-Attack Surface: https://danielmiessler.com/blog/the-ai-attack-surface-map-v1-0/
https://blog.luitjes.it/posts/injectgpt-most-polite-exploit-ever/
https://github.com/jiep/offensive-ai-compilation
AI Security Reference List: https://github.com/DeepSpaceHarbor/Awesome-AI-Security
Prompt Injection into GPT: https://kai-greshake.de/posts/puzzle-22745/