Showing episodes and shows of StackAware

Shows

Deploy Securely: AI Action Plan, "tool-squatting" attacks, jobless college grads, and insurance for AI (2025-08-06, 37 min)
Federal AI action plan: https://www.ai.gov/action-plan
Tool-squatting attack paper: https://arxiv.org/pdf/2504.19951
Burning Glass Institute report: https://static1.squarespace.com/static/6197797102be715f55c0e0a1/t/6889055d25352c5b3f28c202/1753810269213/No+Country+for+Young+Grads+V_Final7.29.25+%281%29.pdf
AIUC: https://aiuc.com

Ethical Machines: How Do You Control Unpredictable AI? (2025-07-10, 51 min)
LLMs behave in unpredictable ways. That's a gift and a curse: it both allows for their "creativity" and makes them hard to control (a bit like a real artist, actually). In this episode, we focus on the cyber risks of AI with Walter Haydock, a former national security policy advisor and the Founder of StackAware.
Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy

Deploy Securely: Big Beautiful AI Moratorium fails, ISO 42005, and automating yourself out of a job (2025-07-02, 50 min)
Walter kicks off a recurring series with Steve Dufour, talking about:
- Trump's "Big Beautiful Bill" moving through the Senate, and how a key AI-related provision was just removed
- Some key court decisions related to generative AI training on copyrighted material
- ISO/IEC 42005:2025, which gives guidance on AI impact assessments
- Ways to (avoid) automating yourself out of a job

Risk Management Show: AI Risk Layers EXPLAINED: Models, Applications, Agents with Walter Haydock (2025-06-16, 12 min)
In this episode of GRC Chats, we explore "AI Risk Layers EXPLAINED: Models, Applications, Agents" with Walter Haydock, founder of StackAware and a leader in AI risk management and cybersecurity. Walter shares his expert insights on the three critical layers of AI risk (models, applications, and agents) and discusses how organizations can navigate these complexities. From the importance of data provenance at the model level to potential chain reactions in AI agents, this conversation is packed with actionable strategies for effective risk mitigation and governance. We discussed how businesses can implement AI policies, maintain a robust...

Smarter Online Safety with Jocelyn King: Inside the Mind of an AI Hacker: How Safe Is Your Data? (2025-05-27, 37 min)
In this must-watch episode, host Jocelyn King sits down with renowned cybersecurity and AI expert Walter Haydock, the founder and CEO of StackAware, Harvard Business School graduate, and former Marine Corps intelligence officer. Walter has protected everything from Fortune 100 companies to national security assets, and today he's sharing his insider knowledge with you!
🔥 What You'll Discover:
- The 3 key ways AI is changing cybersecurity, for better and for worse!
- Real-world stories of AI hacking, "unintended training" nightmares, and how even giant companies like Amazon have fallen victim!
- The truth about nonhuman identities (NHIs): why your next cus...

Blak Cyber: The AI Governance Jedi That's Empowering AI Companies To Innovate While Managing Risks (2025-04-26, 08 min)
ABOUT THIS EPISODE: The Blak Cyber podcast presents "The AI Governance Mentors Series," Episode 1, featuring Walter Haydock, owner of StackAware, an AI governance consulting company. Please subscribe and share to support this podcast. Be sure to tap the "SUBSCRIBE" button!
Walter's LinkedIn: https://linkedin.com/in/walter-haydock
Walter's Website: https://stackaware.com/
Walter's YouTube: https://www.youtube.com/@StackAware

Security & GRC Decoded: Navigating DeepSeek's AI Risks: Insights for Security & Compliance Teams (2025-02-06, 40 min)
In this episode of Security & GRC Decoded, Raj Krishnamurthy, CEO of ComplianceCow, sits down with Walter Haydock, CEO of StackAware, to discuss the evolving landscape of AI security, governance, risk, and compliance (GRC). Walter shares insights on emerging AI threats, the importance of ISO 42001 certification, and the challenges organizations face when integrating AI into their security and compliance programs. Key topics include:
- DeepSeek and AI Privacy Risks
- Regulatory Challenges in AI Security & Compliance
- The Intersection of AI Governance and GRC
- Building a Business Case for AI Security Programs
- How Security & GRC Teams Can Adapt to Rapid...

re:invent security: Dr. Nikki Robinson (IBM) on Effective Vulnerability Management: Beyond Tools, Towards People (2024-11-28, 52 min)
In this episode of Reinvent Security, we dive deep into the world of vulnerability management with Dr. Nikki Robinson, a distinguished cybersecurity expert, author, and educator. With years of experience in IT operations and cybersecurity, Dr. Robinson brings a unique perspective to managing vulnerabilities in today's ever-evolving threat landscape. During the episode, Dr. Robinson shares her journey from IT operations to earning a doctorate in cybersecurity, highlighting the pivotal moments that shaped her approach to vulnerability management. She emphasizes the importance of looking beyond patching to address the broader aspects of risk reduction, including human factors, automation, and AI...

Resilient Cyber: Resilient Cyber w/ Walter Haydock - Implementing AI Governance (2024-11-22, 28 min)
In this episode, we sit down with StackAware Founder and AI governance expert Walter Haydock. Walter specializes in helping companies navigate AI governance and security certifications, frameworks, and risks. We dive into key frameworks, risks, lessons learned from working directly with organizations on AI governance, and more. We discussed Walter's pivot with his company StackAware from AppSec and supply chain to a focus on AI governance, from a product-based approach to a services-oriented offering, and what that entails. Walter has been actively helping organizations with AI governance, including helping them meet emerging and newly formed st...

Deploy Securely: Getting patients to better doctors, faster with generative AI (2024-11-15, 38 min)
The basics of healthcare can often be a nightmare:
- Finding the right doctor
- Setting up an appointment
- Getting simple questions answered
While these things might seem like mere inconveniences, on the grand scale they cost a lot: money and, unfortunately, lives. That's why the Embold Virtual Assistant (EVA) is such a breakthrough. A generative AI-powered chatbot with access to up-to-date doctor listings and performance ratings, it's literally a lifesaver. StackAware was honored to conduct a pre-deployment AI risk assessment and pene...

Brand Stories Podcasts: Leveraging AI for Effective Healthcare Solutions | A Brand Story Conversation From HITRUST Collaborate 2024 | A HITRUST Story with Walter Haydock and Steve Dufour (2024-10-17, 25 min)
The Emergence of Innovative Partnerships: As AI becomes increasingly integral across industries, healthcare is at the forefront of adopting these technologies to improve patient outcomes and streamline services. Sean Martin emphasizes the collaboration between StackAware and Embold Health, setting the stage for a discussion on how they leverage HITRUST to enhance healthcare solutions.
A Look into StackAware and Embold Health: Walter Haydock, founder and CEO of StackAware, shares the company's mission to support AI-driven enterprises in measuring and managing cybersecurity, compliance, and privacy risks. Meanwhile, Steve Dufour, Chief Security and Privacy Officer of Embold Health, describes their...

ITSPmagazine Podcasts: Leveraging AI for Effective Healthcare Solutions | A Brand Story Conversation From HITRUST Collaborate 2024 | A HITRUST Story with Walter Haydock and Steve Dufour (2024-10-17, 25 min)

On Location With Sean Martin And Marco Ciappelli: Leveraging AI for Effective Healthcare Solutions | A Brand Story Conversation From HITRUST Collaborate 2024 | A HITRUST Story with Walter Haydock and Steve Dufour (2024-10-17, 25 min)

Redefining CyberSecurity: Leveraging AI for Effective Healthcare Solutions | A Brand Story Conversation From HITRUST Collaborate 2024 | A HITRUST Story with Walter Haydock and Steve Dufour (2024-10-17, 25 min)

Deploy Securely: Tackling AI governance with federal data (2024-09-26, 36 min)
On this episode of the Deploy Securely podcast, I spoke with Kenny Scott, Founder and CEO of Paramify. Paramify gets companies ready for the U.S. government's Federal Risk and Authorization Management Program (FedRAMP). In this conversation, we talked about:
- Paramify "walking the walk" by getting FedRAMP High authorized
- How AI is impacting FedRAMP authorizations
- The future of AI regulation

Deploy Securely: The state of AI assurance in 2024 (2024-09-12, 35 min)
I was thrilled to have a leading voice on AI governance and assurance on the Deploy Securely podcast: Patrick Sullivan. Patrick is the Vice President of Strategy and Innovation at A-LIGN, a cybersecurity assurance firm. He's an expert on the intersection of AI and compliance, regularly sharing insights about ISO 42001, the EU AI Act, and their interplay with existing regulations and best practices. We chatted about what he's seen from his customer base when it comes to AI-related:
- Cybersecurity
- Compliance
- Privacy
Check out the fu...

Deploy Securely: Securely harnessing AI in financial services (2024-09-05, 40 min)
I spoke with Matt Adams, Head of Security Enablement at Citi, about:
- The EU AI Act and other laws and regulations impacting AI governance and security
- What financial services organizations can do to secure their AI deployments
- Some of the biggest myths and misconceptions when it comes to AI governance

Deploy Securely: How Conveyor deploys AI securely (for security) (2024-07-26, 37 min)
While using AI securely is a key concern (especially for companies like StackAware), on the flip side AI has been supercharging security and compliance teams. Especially when tackling mundane tasks like security questionnaires, AI can accelerate sales and build trust. I chatted with Chas Ballew, CEO of Conveyor, about:
- How AI can help with customer security reviews
- What sort of controls Conveyor has in place
- What Chas thinks the future will look like
- The regulatory landscape for AI
Here are some resources Chas mentions in...

Deploy Securely: 3 AI governance frameworks (2024-07-12, 04 min)
Drive sales, improve customer trust, and avoid regulatory penalties with the NIST AI RMF, EU AI Act, and ISO 42001. Check out the full post on the Deploy Securely blog: https://blog.stackaware.com/p/eu-ai-act-nist-rmf-iso-42001-picking-frameworks

Deploy Securely: Accelerating AI governance at Embold Health (2024-07-08, 39 min)
No sector is more in need of effective, well-governed AI than healthcare. The United States spends vastly more per person than any other nation, yet is in the middle of the pack when it comes to life expectancy. That's why I was so excited to work with Embold Health to measure and manage their AI-related cybersecurity, compliance, and privacy risk. Recently I had the pleasure of speaking with their Chief Security and Privacy Officer, Steve Dufour, and Vice President of Engineering, Mark Blackham, on the Deploy Securely podcast. We went in...

Deploy Securely: The top 3 AI security concerns in healthcare (2024-07-02, 03 min)

Deploy Securely: Who should get ISO 42001 certified? (2024-07-02, 03 min)
1) Early-stage AI startups often grapple with customer security reviews, making certifications like SOC 2 or ISO 27001 essential. However, ISO 42001 might be more suitable for AI-focused companies due to its comprehensive coverage.
2) Larger corporations using AI to manage sensitive data face scrutiny and criticism. These companies can validate their AI practices through ISO 42001, offering a certified risk management system that reassures stakeholders.
3) In heavily regulated sectors like healthcare and finance, adopting and certifying AI technologies is complex. ISO 42001 helps these enterprises manage risks and maintain credibility by adhering to industry standards.
Check out the full post...

PromptCast: The Voice of AI and Security: Walter Haydock shares best practices on how to get started with AI Risk Governance (2024-06-25, 43 min)
Welcome to the very first episode of PromptCast, the podcast for AI, security, and everything in between, hosted by Itamar Golan. For this inaugural episode we hosted Walter Haydock, Founder & CEO of StackAware. During this episode, Walter and Itamar discussed:
- AI Risk & Cybersecurity Concerns
- Existing Regulations and Frameworks around AI
- How auditing firms are looking at AI compliance
- Best practices for setting up an AI Governance Committee, and who should be a part of it
Learn more about StackAware: https://stackaware.com/
Learn more about Prom...

Deploy Securely: Compliance and AI - 3 quick observations (2024-04-17, 04 min)
Here are the top 3 things I'm seeing:
1️⃣ Auditors don't (yet) have strong opinions on how to deploy AI securely
2️⃣ Enforcement is here, just not evenly distributed
3️⃣ Integrating AI-specific requirements with existing security, privacy, and compliance ones isn't going to be easy
Want the full post? Check out the Deploy Securely blog: https://blog.stackaware.com/p/ai-governance-compliance-auditors-enforcement

Secure Ventures with Kyle McNulty: StackAware: Walter Haydock on Understanding Market Appetite (2024-01-30, 44 min)
Walter:
- Founder and CEO of StackAware, which started as a vulnerability management tool and is now an AI risk consulting company
- Creator of the popular security blog "Deploy Securely" that started his entrepreneurial journey
- Worked in the National Counterterrorism Center for two years
Check out the episode for our discussion on his pivot away from the initial product to a services model, why that might change in the future, and the role of his security blog Deploy Securely in growing StackAware.
blog.stackaware.com
stackaware.com

Deploy Securely: Code Llama: 5-minute risk analysis (2023-12-13, 04 min)
Someone asked me what the unintended training and data retention risk with Meta's Code Llama is. My answer: the same as every other model you host and operate on your own. And, all other things being equal, it's lower than that of anything operating as-a-Service (-aaS) like ChatGPT or Claude. Check out this video for a deeper dive, or read the full post on Deploy Securely: https://blog.stackaware.com/p/code-llama-self-hosted-model-unintended-training
Want more AI security resources? Check out: https://products.stackaware.com/

Deploy Securely: 4th party AI processing and retention risk (2023-12-04, 06 min)
So you have your AI policy in place and are carefully controlling access to new apps as they launch, but then you realize your already-approved tools are themselves starting to leverage 4th party AI vendors. Welcome to the modern digital economy. Things are complex and getting even more so. That's why you need to incorporate 4th party risk into your security policies, procedures, and overall AI governance program. Check out the full post with the Asana and Databricks examples I mentioned: https://blog.stackaware.com/p/ai-supply-chain-processing-retention-risk

Deploy Securely: Sensitive Data Generation (2023-11-27, 06 min)
I'm worried about data leakage from LLMs, but probably not why you think. While unintended training is a real risk that can't be ignored, something else is going to be a much more serious problem: sensitive data generation (SDG). A recent paper (https://arxiv.org/pdf/2310.07298v1.pdf) shows how LLMs can infer huge amounts of personal information from seemingly innocuous comments on Reddit. And this phenomenon will have huge impacts for:
- Material nonpublic information
- Executive moves
- Trade secrets
and the ability to k...

Deploy Securely: Artificial Intelligence Risk Scoring System (AIRSS) - Part 2 (2023-11-13, 10 min)
What does "security" even mean with AI? You'll need to define things like:
BUSINESS REQUIREMENTS
- What type of output is expected?
- What format should it be?
- What is the use case?
SECURITY REQUIREMENTS
- Who is allowed to see which outputs?
- Under which conditions?
Having these things spelled out is a hard requirement before you can start talking about the risk of a given AI model. Continuing the build-out of the Artificial Intelligence Risk Scoring System (AIRSS), I...

Deploy Securely: Artificial Intelligence Risk Scoring System (AIRSS) - Part 1 (2023-11-07, 14 min)
AI cyber risk management needs a new paradigm. Logging CVEs and using CVSS just does not make sense for AI models and won't cut it going forward. That's why I launched the Artificial Intelligence Risk Scoring System (AIRSS): a quantitative approach to measuring cybersecurity risk from artificial intelligence systems, which I am building in public to help refine and improve. Check out the first post in the series where I lay out my methodology: https://blog.stackaware.com/p/artificial-intelligence-risk-scoring-system-p1

Deploy Securely: How should we track AI vulnerabilities? (2023-10-30, 07 min)
The Cybersecurity and Infrastructure Security Agency (CISA) released a post earlier this year saying the AI engineering community should use something like the existing CVE system for tracking vulnerabilities in AI models. Unfortunately, this is a pretty bad recommendation. That's because:
- CVEs already create a lot of noise
- AI systems are non-deterministic
- So things would just get worse
In this episode, I dive into these issues and discuss the way ahead. Check out the full blog post: https://blog.stackaware.com/p/how-should-we-identify-ai-vulnerabilities

Deploy Securely: Generative AI and Unintended Training (2023-10-23, 07 min)
🔐 Think self-hosting your AI models is more secure? It might be... or not! In this video, we dig into AI model security and introduce the concept of "unintended training."
▶️ Key Highlights:
- The myth that self-hosting AI models is necessarily better for security
- Decision factors when choosing between SaaS and IaaS
- Defining "unintended training" and its implications
Read more about unintended training and AI security: https://blog.stackaware.com/p/unintended-training
And for a deep dive on the security benefits...

Deploy Securely: Who should make cyber risk management decisions? (2023-10-23, 14 min)
It's a tougher challenge than many security folks admit. Who should have the final say about whether to accept, mitigate, transfer, or avoid risk?
- Cybersecurity?
- Compliance?
- Legal?
The answer: none of them. Check out this episode of Deploy Securely to learn who should, or read the original blog post here: https://blog.stackaware.com/p/who-should-make-cyber-risk-management

Risk Grustlers: AI with a Pinch of Responsibility (2023-08-28, 42 min)
Taking a slight departure from our regular themes of exploring the journeys of Risk Grustlers, we're here with an on-demand podcast with the one and only Walter Haydock, Founder and CEO of StackAware, to demystify and dig into the role of responsibility in today's AI threat landscape. In this episode, Walter gives us a crash course on all things LLM: from listing the differences between using a self-hosted LLM and a third-party LLM to explaining the top five risks to watch out for while using them. Application developers are often overwhelmed with the bundle of...

Conversations in Cybersecurity: Compliance and Generative AI (2023-07-27, 24 min)
Walter Haydock and I discuss the compliance implications of generative AI.
https://maven.com/harness-ai/ai-security
https://www.store.stackaware.com/l/ai-security-policy
https://stackaware.com/

The Paramify Podcast: #2 - Using AI Securely with Walter Haydock (2023-07-14, 43 min)
Walter Haydock is a dynamic and multifaceted professional specializing in the intersection of cybersecurity and artificial intelligence. As the founder and CEO of StackAware, Walter leverages industry-standard frameworks, his own extensive experience, and responsible use of AI tools to help businesses manage AI-related cybersecurity, privacy, and compliance risks. Through StackAware, businesses can harness the power of new technologies by building effective and repeatable AI risk management programs. Additionally, in his role as a cybersecurity author, consultant, and ghostwriter for Deploy Securely, Walter uses his expertise to transform cybersecurity CEOs into thought leaders within the industry. His approach...

Champions of Security: How to Ensure Secure Governance for Generative AI w/ Walter Haydock, Founder & CEO of StackAware (2023-06-07, 49 min)
In Episode 9 of Champions of Security, Jacob Garrison interviews Walter Haydock, Founder & CEO of StackAware. Walter is the Founder and Chief Executive Officer of StackAware, a cybersecurity risk management and communication platform, and the author of the blog Deploying Securely. Previously, he was a Director of Product Management at Privacera (a data governance startup backed by Accel and Insight Partners) as well as at PTC, where he helped secure the company's industrial IoT product lines. Before entering the private sector, he served as a professional staff member for the Homeland Secu...

Scale to Zero - No Security Questions Left Unanswered: Vulnerability management deep dive | Ep 28 with Walter Haydock | Scale to Zero | Cloud Podcast (2023-04-14, 45 min)
This week's episode with Walter Haydock delved deep into vulnerability management! Walter, thanks for sharing such insightful information with our viewers.
Walter's social media handles:
LinkedIn: https://www.linkedin.com/in/walter-haydock/
Twitter: https://twitter.com/Walter_Haydock
StackAware: http://stackaware.com/
Check out our LinkedIn: https://www.linkedin.com/company/31460997
And Twitter: https://twitter.com/cloudanix
If you want to come onto the show as an expert, or if you are one of those curious minds who wants...

The Cybersecurity Defenders Podcast: #14 - Simply Cyber Report for Jan 12. Plus a conversation with Walter Haydock, Founder and CEO of StackAware. (2023-01-12, 40 min)
Unknown threat actors have been observed hiding malware execution behind a legitimate Windows support binary. S3 buckets are now encrypted by default. A powerful Android malware has been tuned to target banking applications. And it is the end of life for Windows Server 2008. We also sit down with Walter Haydock, Founder and CEO of StackAware. We learn about StackAware and their approach to vulnerability management, and how Walter got his company off the ground using low-code tooling. A fascinating conversation for anyone looking to start their own cybersecurity company. The Cybersecurity Defenders Podcast...