Showing episodes and shows of StackAware
Deploy Securely
AI Action Plan, "tool-squatting" attacks, jobless college grads, and insurance for AI
Federal AI action plan: https://www.ai.gov/action-plan
Tool-squatting attack paper: https://arxiv.org/pdf/2504.19951
Burning Glass Institute report: https://static1.squarespace.com/static/6197797102be715f55c0e0a1/t/6889055d25352c5b3f28c202/1753810269213/No+Country+for+Young+Grads+V_Final7.29.25+%281%29.pdf
AIUC: https://aiuc.com
2025-08-06
37 min
Ethical Machines
How Do You Control Unpredictable AI?
LLMs behave in unpredictable ways. That’s a gift and a curse: it both enables their “creativity” and makes them hard to control (a bit like a real artist, actually). In this episode, we focus on the cyber risks of AI with Walter Haydock, a former national security policy advisor and the Founder of StackAware.
Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy
2025-07-10
51 min
Deploy Securely
Big Beautiful AI Moratorium fails, ISO 42005, and automating yourself out of a job
Walter kicks off a recurring series with Steve Dufour, talking about:
- Trump's "Big Beautiful Bill" moving through the Senate and how a key AI-related provision was just removed
- Some key court decisions related to generative AI training on copyrighted material
- ISO/IEC 42005:2025, which gives guidance on AI impact assessments
- Ways to (avoid) automating yourself out of a job
2025-07-02
50 min
Risk Management Show
AI Risk Layers EXPLAINED: Models, Applications, Agents with Walter Haydock
In this episode of GRC Chats, we explore "AI Risk Layers EXPLAINED: Models, Applications, Agents" with Walter Haydock, founder of StackAware and a leader in AI risk management and cybersecurity. Walter shares his expert insights on the three critical layers of AI risk—models, applications, and agents—and discusses how organizations can navigate these complexities. From the importance of data provenance at the model level to potential chain reactions in AI agents, this conversation is packed with actionable strategies for effective risk mitigation and governance. We discussed how businesses can implement AI policies, maintain a robu...
2025-06-16
12 min
Smarter Online Safety with Jocelyn King
Inside the Mind of an AI Hacker: How Safe Is Your Data?
In this must-watch episode, host Jocelyn King sits down with renowned cybersecurity and AI expert Walter Haydock—the founder & CEO of StackAware, Harvard Business School graduate, and former Marine Corps intelligence officer. Walter has protected everything from Fortune 100 companies to national security assets, and today he’s sharing his insider knowledge with you!
🔥 What You'll Discover:
* The 3 key ways AI is changing cybersecurity—for better and for worse!
* Real-world stories of AI hacking, “unintended training” nightmares, and how even giant companies like Amazon have fallen victim!
* The truth about nonhuman identities (NHIs): Why your next cus...
2025-05-27
37 min
Blak Cyber
The AI Governance Jedi That's Empowering AI Companies To Innovate While Managing Risks
ABOUT THIS EPISODE: The Blak Cyber podcast presents "The AI Governance Mentors Series," Episode 1, featuring Walter Haydock, owner of StackAware, an AI Governance consulting company. Please subscribe and share to support this podcast. Be sure to tap the "SUBSCRIBE" button!
Walter's LinkedIn: https://linkedin.com/in/walter-haydock
Walter's Website: https://stackaware.com/
Walter's YouTube: https://www.youtube.com/@StackAware
2025-04-26
08 min
Security & GRC Decoded
Navigating DeepSeek’s AI Risks: Insights for Security & Compliance Teams
In this episode of Security & GRC Decoded, Raj Krishnamurthy, CEO of ComplianceCow, sits down with Walter Haydock, CEO of StackAware, to discuss the evolving landscape of AI security, governance, risk, and compliance (GRC). Walter shares insights on emerging AI threats, the importance of ISO 42001 certification, and the challenges organizations face when integrating AI into their security and compliance programs. Key topics include:
- DeepSeek and AI Privacy Risks
- Regulatory Challenges in AI Security & Compliance
- The Intersection of AI Governance and GRC
- Building a Business Case for AI Security Programs
- How Security & GRC Teams Can Adapt to Rapid...
2025-02-06
40 min
re:invent security
Dr. Nikki Robinson (IBM) on Effective Vulnerability Management: Beyond Tools, Towards People
In this episode of Reinvent Security, we dive deep into the world of vulnerability management with Dr. Nikki Robinson, a distinguished cybersecurity expert, author, and educator. With years of experience in IT operations and cybersecurity, Dr. Robinson brings a unique perspective to managing vulnerabilities in today’s ever-evolving threat landscape. During the episode, Dr. Robinson shares her journey from IT operations to earning a doctorate in cybersecurity, highlighting the pivotal moments that shaped her approach to vulnerability management. She emphasizes the importance of looking beyond patching to address the broader aspects of risk reduction, including human factors, automation, and AI...
2024-11-28
52 min
Resilient Cyber
Resilient Cyber w/ Walter Haydock - Implementing AI Governance
In this episode, we sit down with StackAware Founder and AI Governance Expert Walter Haydock. Walter specializes in helping companies navigate AI governance and security certifications, frameworks, and risks. We will dive into key frameworks, risks, lessons learned from working directly with organizations on AI Governance, and more. We discussed Walter’s pivot with his company StackAware from AppSec and Supply Chain to a focus on AI Governance and from a product-based approach to a services-oriented offering and what that entails. Walter has been actively helping organizations with AI Governance, including helping them meet emerging and newly formed st...
2024-11-22
28 min
Deploy Securely
Getting patients to better doctors, faster with generative AI
The basics of healthcare can often be a nightmare:
- Finding the right doctor
- Setting up an appointment
- Getting simple questions answered
While these things might seem like an inconvenience, on the grand scale they cost a lot: money and, unfortunately, lives. That’s why the Embold Virtual Assistant (EVA) is such a breakthrough. A generative AI-powered chatbot with access to up-to-date doctor listings and performance ratings, it’s literally a lifesaver. StackAware was honored to conduct a pre-deployment AI risk assessment and pene...
2024-11-15
38 min
Brand Stories Podcasts
Leveraging AI for Effective Healthcare Solutions | A Brand Story Conversation From HITRUST Collaborate 2024 | A HITRUST Story with Walter Haydock and Steve Dufour
The Emergence of Innovative Partnerships: As AI becomes increasingly integral across industries, healthcare is at the forefront of adopting these technologies to improve patient outcomes and streamline services. Sean Martin emphasizes the collaboration between StackAware and Embold Health, setting the stage for a discussion on how they leverage HITRUST to enhance healthcare solutions.
A Look into StackAware and Embold Health: Walter Haydock, founder and CEO of StackAware, shares the company's mission to support AI-driven enterprises in measuring and managing cybersecurity, compliance, and privacy risks. Meanwhile, Steve Dufour, Chief Security and Privacy Officer of Embold Health, describes their...
2024-10-17
25 min
Deploy Securely
Tackling AI governance with federal data
On this episode of the Deploy Securely podcast, I spoke with Kenny Scott, Founder and CEO of Paramify. Paramify gets companies ready for the U.S. government's Federal Risk and Authorization Management Program (FedRAMP). And in this conversation, we talked about:
- Paramify "walking the walk" by getting FedRAMP High authorized
- How AI is impacting FedRAMP authorizations
- The future of AI regulation
2024-09-26
36 min
Deploy Securely
The state of AI assurance in 2024
I was thrilled to have a leading voice on AI governance and assurance on the Deploy Securely podcast: Patrick Sullivan. Patrick is the Vice President of Strategy and Innovation at A-LIGN, a cybersecurity assurance firm. He’s an expert on the intersection of AI and compliance, regularly sharing expert insights about ISO 42001, the EU AI Act, and their interplay with existing regulations and best practices. We chatted about what he's seen from his customer base when it comes to AI-related:
- Cybersecurity
- Compliance
- Privacy
Check out the fu...
2024-09-12
35 min
Deploy Securely
Securely harnessing AI in financial services
I spoke with Matt Adams, Head of Security Enablement at Citi, about:
- The EU AI Act and other laws and regulations impacting AI governance and security
- What financial services organizations can do to secure their AI deployments
- Some of the biggest myths and misconceptions when it comes to AI governance
2024-09-05
40 min
Deploy Securely
How Conveyor deploys AI securely (for security)
While using AI securely is a key concern (especially for companies like StackAware), on the flipside, AI has been supercharging security and compliance teams. Especially when tackling mundane tasks like security questionnaires, AI can accelerate sales and build trust. I chatted with Chas Ballew, CEO of Conveyor, about:
- How AI can help with customer security reviews
- What sort of controls Conveyor has in place
- What Chas thinks the future will look like
- The regulatory landscape for AI
Here are some resources Chas mentions in...
2024-07-26
37 min
Deploy Securely
3 AI governance frameworks
Drive sales, improve customer trust, and avoid regulatory penalties with the NIST AI RMF, EU AI Act, and ISO 42001. Check out the full post on the Deploy Securely blog: https://blog.stackaware.com/p/eu-ai-act-nist-rmf-iso-42001-picking-frameworks
2024-07-12
04 min
Deploy Securely
Accelerating AI governance at Embold Health
No sector is more in need of effective, well-governed AI than healthcare. The United States spends vastly more per person than any other nation, yet is in the middle of the pack when it comes to life expectancy. That’s why I was so excited to work with Embold Health to measure and manage their AI-related cybersecurity, compliance, and privacy risk. Recently I had the pleasure of speaking with their Chief Security and Privacy Officer, Steve Dufour, and Vice President of Engineering, Mark Blackham, on the Deploy Securely podcast. We went in...
2024-07-08
39 min
Deploy Securely
The top 3 AI security concerns in healthcare
2024-07-02
03 min
Deploy Securely
Who should get ISO 42001 certified?
1) Early-stage AI startups often grapple with customer security reviews, making certifications like SOC 2 or ISO 27001 essential. However, ISO 42001 might be more suitable for AI-focused companies due to its comprehensive coverage.
2) Larger corporations using AI to manage sensitive data face scrutiny and criticism. These companies can validate their AI practices through ISO 42001, offering a certified risk management system that reassures stakeholders.
3) In heavily-regulated sectors like healthcare and finance, adopting and certifying AI technologies is complex. ISO 42001 helps these enterprises manage risks and maintain credibility by adhering to industry standards.
Check out the full post...
2024-07-02
03 min
PromptCast: The Voice of AI and Security
Walter Haydock shares best practices on how to get started with AI Risk Governance
Welcome to the very first episode of PromptCast, the podcast for AI, security, and everything in between, hosted by Itamar Golan. For this inaugural episode we hosted Walter Haydock, Founder & CEO of StackAware. During this episode, Walter and Itamar discussed:
- AI Risk & Cybersecurity Concerns
- Existing Regulations and Frameworks around AI
- How auditing firms are looking at AI compliance
- Best practices for setting up an AI Governance Committee, and who should be a part of it
Learn more about StackAware: https://stackaware.com/
Learn more about Prom...
2024-06-25
43 min
Deploy Securely
Compliance and AI - 3 quick observations
Here are the top 3 things I'm seeing:
1️⃣ Auditors don’t (yet) have strong opinions on how to deploy AI securely
2️⃣ Enforcement is here, just not evenly distributed.
3️⃣ Integrating AI-specific requirements with existing security, privacy, and compliance ones isn’t going to be easy
Want to see a full post? Check out the Deploy Securely blog: https://blog.stackaware.com/p/ai-governance-compliance-auditors-enforcement
2024-04-17
04 min
Secure Ventures with Kyle McNulty
StackAware: Walter Haydock on Understanding Market Appetite
Walter:
- Founder and CEO of StackAware, which started as a vulnerability management tool and is now an AI risk consulting company
- Creator of the popular security blog "Deploy Securely" that started his entrepreneurial journey
- Worked in the National Counterterrorism Center for two years
Check out the episode for our discussion on his pivot away from the initial product to a services model, why that might change in the future, and the role of his security blog Deploy Securely in growing StackAware.
blog.stackaware.com
stackaware.com
2024-01-30
44 min
Deploy Securely
Code Llama: 5-minute risk analysis
Someone asked me what the unintended training and data retention risk with Meta's Code Llama is. My answer: the same as every other model you host and operate on your own. And, all other things being equal, it's lower than that of anything operating -as-a-Service (-aaS) like ChatGPT or Claude. Check out this video for a deeper dive, or read the full post on Deploy Securely: https://blog.stackaware.com/p/code-llama-self-hosted-model-unintended-training
Want more AI security resources? Check out: https://products.stackaware.com/
2023-12-13
04 min
Deploy Securely
4th party AI processing and retention risk
So you have your AI policy in place and are carefully controlling access to new apps as they launch, but then... you realize your already-approved tools are themselves starting to leverage 4th party AI vendors. Welcome to the modern digital economy. Things are complex and getting even more so. That's why you need to incorporate 4th party risk into your security policies, procedures, and overall AI governance program. Check out the full post with the Asana and Databricks examples I mentioned: https://blog.stackaware.com/p/ai-supply-chain-processing-retention-risk
2023-12-04
06 min
Deploy Securely
Sensitive Data Generation
I’m worried about data leakage from LLMs, but probably not why you think. While unintended training is a real risk that can’t be ignored, something else is going to be a much more serious problem: sensitive data generation (SDG). A recent paper (https://arxiv.org/pdf/2310.07298v1.pdf) shows how LLMs can infer huge amounts of personal information from seemingly innocuous comments on Reddit. And this phenomenon will have huge impacts for:
- Material nonpublic information
- Executive moves
- Trade secrets
and the ability to k...
2023-11-27
06 min
Deploy Securely
Artificial Intelligence Risk Scoring System (AIRSS) - Part 2
What does "security" even mean with AI?You'll need to define things like:BUSINESS REQUIREMENTS- What type of output is expected?- What format should it be?- What is the use case?SECURITY REQUIREMENTS- Who is allowed to see which outputs?- Under which conditions?Having these things spelled out is a hard requirement before you can start talking about the risk of a given AI model.Continuing the build-out of the Artificial Intelligence Risk Scoring System (AIRSS), I...
2023-11-13
10 min
Deploy Securely
Artificial Intelligence Risk Scoring System (AIRSS) - Part 1
AI cyber risk management needs a new paradigm. Logging CVEs and using CVSS just does not make sense for AI models, and won't cut it going forward. That's why I launched the Artificial Intelligence Risk Scoring System (AIRSS): a quantitative approach to measuring cybersecurity risk from artificial intelligence systems. I am building it in public to help refine and improve the approach. Check out the first post in a series where I lay out my methodology: https://blog.stackaware.com/p/artificial-intelligence-risk-scoring-system-p1
2023-11-07
14 min
Deploy Securely
How should we track AI vulnerabilities?
The Cybersecurity and Infrastructure Security Agency (CISA) released a post earlier this year saying the AI engineering community should use something like the existing CVE system for tracking vulnerabilities in AI models. Unfortunately, this is a pretty bad recommendation. That's because:
- CVEs already create a lot of noise
- AI systems are non-deterministic
- So things would just get worse
In this episode, I dive into these issues and discuss the way ahead. Check out the full blog post: https://blog.stackaware.com/p/how-should-we-identify-ai-vulnerabilities
2023-10-30
07 min
Deploy Securely
Generative AI and Unintended Training
🔐 Think self-hosting your AI models is more secure? It might be... or not! In this video, we dig into the topic of AI model security and introduce the concept of "unintended training."
▶️ Key Highlights:
- The myth that self-hosting AI models is necessarily better for security
- Decision factors when choosing between SaaS vs. IaaS
- Defining "unintended training" and its implications
Read more about unintended training and AI security: https://blog.stackaware.com/p/unintended-training
And for a deep dive on the security benefits...
2023-10-23
07 min
Deploy Securely
Who should make cyber risk management decisions?
It's a tougher challenge than many security folks talk about. Who should have the final say about whether to accept, mitigate, transfer, or avoid risk?
- Cybersecurity?
- Compliance?
- Legal?
The answer: none of them. Check out this episode of Deploy Securely to learn who should. Or read the original blog post here: https://blog.stackaware.com/p/who-should-make-cyber-risk-management
2023-10-23
14 min
Risk Grustlers
AI with a Pinch of Responsibility
Taking a slight departure from our regular themes of exploring the journeys of Risk Grustlers, we’re here with an on-demand podcast with the one and only, Walter Haydock, Founder and CEO of StackAware, to demystify and dig into the role of responsibility in today’s AI threat landscape. In this episode, Walter gives us a crash course on all things LLM – from listing the differences between using a self-hosted LLM and a third-party LLM to explaining the top five risks to watch out for while using them. Application developers are often overwhelmed with the bundle of...
2023-08-28
42 min
Conversations in Cybersecurity
Compliance and Generative AI
Walter Haydock and I discuss the compliance implications of generative AI. https://maven.com/harness-ai/ai-security https://www.store.stackaware.com/l/ai-security-policy https://stackaware.com/
2023-07-27
24 min
The Paramify Podcast
#2 - Using AI Securely with Walter Haydock
Walter Haydock is a dynamic and multifaceted professional specializing in the intersection of cybersecurity and artificial intelligence. As the founder and CEO of StackAware, Walter leverages industry-standard frameworks, his own extensive experience, and responsible use of AI tools to help businesses manage AI-related cybersecurity, privacy, and compliance risks. Through StackAware, businesses can harness the power of new technologies by building effective and repeatable AI risk management programs. Additionally, in his role as a Cybersecurity Author, Consultant, and Ghostwriter for Deploy Securely, Walter utilizes his expertise to transform cybersecurity CEOs into thought leaders within the industry. His approach...
2023-07-14
43 min
Champions of Security
How to Ensure Secure Governance for Generative AI w/ Walter Haydock, Founder & CEO of StackAware
In Episode 9 of Champions of Security, Jacob Garrison interviews Walter Haydock, Founder & CEO of StackAware. Walter Haydock is the Founder and Chief Executive Officer of StackAware, a cybersecurity risk management and communication platform. He is also the author of the blog Deploy Securely. Previously, he was a Director of Product Management at Privacera - a data governance startup backed by Accel and Insight Partners - as well as PTC - where he helped to secure the company’s industrial IoT product lines. Before entering the private sector, he served as a professional staff member for the Homeland Secu...
2023-06-07
49 min
Scale to Zero - No Security Questions Left Unanswered
Vulnerability management deep dive | Ep 28 with Walter Haydock | Scale to Zero | Cloud Podcast
This week's episode with Walter Haydock delved deep into vulnerability management! Walter, thanks for sharing such insightful information with our viewers.
Walter's Social Media Handles:
LinkedIn: https://www.linkedin.com/in/walter-haydock/
Twitter: https://twitter.com/Walter_Haydock
StackAware: http://stackaware.com/
Check out our LinkedIn: https://www.linkedin.com/company/31460997
And Twitter: https://twitter.com/cloudanix
If you want to come onto the show as an expert or if you are one of those curious minds who wants...
2023-04-14
45 min
The Cybersecurity Defenders Podcast
#14 - Simply Cyber Report for Jan 12. Plus a conversation with Walter Haydock, Founder and CEO of StackAware.
Unknown threat actors have been observed hiding malware execution behind a legitimate Windows support binary. S3 buckets are now encrypted by default. A powerful Android malware has been tuned to target banking applications. And it is the end of life for Windows Server 2008.
We also sit down with Walter Haydock, Founder and CEO of StackAware. We learn about StackAware and their approach to vulnerability management, and also how Walter got his company off of the ground using low-code tooling. A fascinating conversation for anyone looking to start their own cybersecurity company.
The Cybersecurity Defenders Podcast...
2023-01-12
40 min