
AI CyberSecurity Podcast (Kaizenteq Team)
Explore all episodes of the AI CyberSecurity Podcast
Date | Title | Duration
---|---|---
16 Nov 2023 | Types of Artificial Intelligence | AI Explained | 00:30:40
To understand what role AI will play in the world of cybersecurity, it is important to understand the technology behind it. Caleb and Ashish are levelling up the playing field and laying the foundations with AI primers for cybersecurity in season 1 of the AI CyberSecurity Podcast. What was discussed: (00:00) Introduction (02:36) Learning about AI/ML (08:00) Acronyms of AI (10:49) AGI - Artificial General Intelligence (11:29) Three states of AGI (13:48) AI/ML in Security Products (17:03) Different kinds of learning (21:51) What's hot in the AI Section!!
18 Mar 2024 | AI's role in Security Operation Automation | 00:51:57
What is the current reality for AI automation in cybersecurity? Caleb and Ashish spoke to Edward Wu, founder and CEO of Dropzone AI, about the current capabilities and limitations of AI technologies, particularly large language models (LLMs), in the cybersecurity domain. From the challenges of achieving true automation to the nuanced process of training AI systems for cyber defense, Edward, Caleb and Ashish shared their insights into the complexities of implementing AI, the importance of precision in AI prompt engineering, the critical role of reference data in AI performance, and how cybersecurity professionals can leverage AI to amplify their defense capabilities without expanding their teams. Questions asked: (00:00) Introduction (05:22) A bit about Edward Wu (08:31) What is an LLM? (11:36) Why have we not seen enterprise-ready automation in cybersecurity? (14:37) Distilling the AI noise in the vendor landscape (18:02) Solving challenges with using AI in enterprise internally (21:35) How to deal with GenAI Hallucinations? (27:03) Protecting customer data from a RAG perspective (29:12) Protecting your own data from being used to train models (34:47) What skillset is required in a team to build your own cybersecurity LLMs? (38:50) Learn how to prompt engineer effectively
20 Mar 2025 | The Future of Digital Identity: Fighting AI Deepfakes & Identity Fraud | 00:57:29
Can you prove you’re actually human? In a world of AI deepfakes, synthetic identities, and evolving cybersecurity threats, digital identity is more critical than ever. With AI-generated voices, fake videos, and evolving fraud tactics, the way we authenticate ourselves online is rapidly changing. So, what’s the future of digital identity? And how can you protect yourself in this new era? In this episode, hosts Caleb Sima and Ashish Rajan are joined by Adrian Ludwig, CISO at Tools For Humanity (World ID project), former Chief Trust Officer at Atlassian, and ex-Google security lead for Android.
Questions asked: (00:00) Introduction (03:55) Digital Identity in 2025 (14:13) How has AI impacted Identity? (29:33) Trust and Transparency with AI (32:18) Authentication and Identity (49:53) What can people do today? (52:05) Where can people learn about World Foundation? (53:49) Adoption of new identity protocols
12 Apr 2024 | How can AI be used in Cybersecurity Operations? | 00:44:35
How can AI change a Security Analyst's workflow? Ashish and Caleb caught up with Ely Kahn, VP of Product at SentinelOne, to discuss the revolutionary impact of generative AI on cybersecurity. Ely spoke about the challenges and solutions in integrating AI into cybersecurity operations, highlighting how it can simplify complex processes and empower junior to mid-tier analysts. Questions asked: (00:00) Introduction (03:27) A bit about Ely Kahn (04:29) Current State of AI in Cybersecurity (06:45) How could AI impact the Cybersecurity User Workflow? (08:37) What are some of the concerns with such a model? (14:22) How does it compare to an analyst not using this model? (21:41) What's stopping models from going into autopilot? (30:14) The reasoning for using multiple LLMs (34:24) ChatGPT vs Anthropic vs Mistral You can discover more about SentinelOne's Purple AI here!
28 Feb 2025 | The Truth Behind AI Agents: Hype vs. Reality | 01:19:06
AI is evolving fast, and AI agents are the latest buzzword. But what exactly are they? Are they truly intelligent, or just automation in disguise? In this episode, Caleb Sima and Ashish Rajan spoke to Daniel Miessler, a cybersecurity veteran who is now deep into AI security research. 🎙️ In this episode, we cover: ✅ What AI agents really are (and what they’re NOT) ✅ How AI is shifting from searching to making decisions ✅ The biggest myths and misconceptions about AI automation ✅ Why most companies calling their tools “AI agents” are misleading you ✅ How AI agents will impact cybersecurity, business, and the future of work ✅ The security risks and opportunities no one is talking about Questions asked: (00:00) Introduction (03:50) What are AI Agents? (06:53) Use case for AI Agents (14:39) Can AI Agents be used for security today? (22:06) AI Agents’ impact on Attackers and Defenders in Cybersecurity (37:05) AI Agents and Non Human Identities (45:22) The big picture with AI Agents (48:28) Transparency and Ethics for AI Agents (58:36) What's exciting about the future of AI Agents? (01:08:00) Would there still be value for foundational knowledge?
22 Nov 2024 | AI Red Teaming in 2024 and Beyond | 00:51:24
Hosts Caleb Sima and Ashish Rajan caught up with experts Daniel Miessler (Unsupervised Learning) and Joseph Thacker (Principal AI Engineer, AppOmni) to talk about the true vulnerabilities of AI applications, how prompt injection is evolving, new attack vectors through images, audio, and video, and predictions for AI-powered hacking and its implications for enterprise security. Whether you're a red teamer, a blue teamer, or simply curious about AI's impact on cybersecurity, this episode is packed with expert insights, practical advice, and future forecasts. Don’t miss out on understanding how attackers leverage AI to exploit vulnerabilities, and how defenders can stay ahead. Questions asked: (00:00) Introduction (02:11) A bit about Daniel Miessler (02:22) A bit about Rez0 (03:02) Intersection of Red Team and AI (07:06) Is red teaming AI different? (09:42) Humans or AI: Better at Prompt Injection? (13:32) What is a security vulnerability for an LLM? (14:55) Jailbreaking vs Prompt Injecting LLMs (24:17) What's new for Red Teaming with AI? (25:58) Prompt injection in Multimodal Models (27:50) How Vulnerable are AI Models? (29:07) Is Prompt Injection the only real threat? (31:01) Predictions on how prompt injection will be stored or used (32:45) What’s changed in the Bug Bounty Toolkit? (35:35) How would internal red teams change? (36:53) What can enterprises do to protect themselves? (41:43) Where to start in this space? (47:53) What are our guests most excited about in AI?
18 Apr 2025 | MCP vs A2A Explained: AI Agent Communication Protocols & Security Risks | 00:54:21
Dive deep into the world of AI agent communication with this episode. Join hosts Caleb Sima and Ashish Rajan as they break down the crucial protocols enabling AI agents to interact and perform tasks: Model Context Protocol (MCP) and Agent-to-Agent (A2A). Discover what MCP and A2A are, why they're essential for unlocking AI's potential beyond simple chatbots, and how they allow AI to gain "hands and feet" to interact with systems like your desktop, browsers, or enterprise tools like Jira. The hosts explore practical use cases, the underlying technical architecture involving clients and servers, and the significant security implications, including remote execution risks, authentication challenges, and the need for robust authorization and privilege management. The discussion also covers Google's entry with the A2A protocol, comparing and contrasting it with Anthropic's MCP, and debating whether they are complementary or competing standards. Learn about the potential "AI-ification" of services, the likely emergence of MCP firewalls, and predictions for the future of AI interaction, such as AI DNS. If you're working with AI, managing cybersecurity in the age of AI, or simply curious about how AI agents communicate and the associated security considerations, this episode provides critical insights and context. Questions asked: (00:00) Introduction: AI Agents & Communication Protocols (02:06) What is MCP (Model Context Protocol)? Defining AI Agent Communication (05:54) MCP & Agentic Workflows: Enabling AI Actions & Use Cases (09:14) Why MCP Matters: Use Cases & The Need for AI Integration (14:27) MCP Security Risks: Remote Execution, Authentication & Vulnerabilities (19:01) Google's A2A vs Anthropic's MCP: Protocol Comparison & Debate (31:37) Future-Proofing Security: MCP & A2A Impact on Security Roadmaps (38:00) MCP vs A2A: Predicting the Dominant AI Protocol (44:36) The Future of AI Communication: MCP Firewalls, AI DNS & Beyond (47:45) Real-World MCP/A2A: Adoption Hurdles & Practical Examples
02 Feb 2024 | Innovating Security Practices with AI | 00:42:26
LLMs, AI Agents & more can be used to innovate cybersecurity practices. In this episode Ashish and Caleb sit down to chat about the nuances of creating custom AI agents, the implications of prompt engineering, and the innovative uses of AI in detecting and preventing security threats, from the complexity of Data Loss Prevention (DLP) in today's world to the realistic timeline for the advent of Artificial General Intelligence (AGI). Questions asked: (00:26) The impact of GenAI on Workforce (04:11) Understanding Artificial General Intelligence (05:57) Using Custom Agents in OpenAI (09:37) Exploring Custom AI Agents: Definition and Uses (12:08) Security Concerns with Custom AI Agents (14:32) AI's Role in Data Protection (18:41) AI’s Role in API Security (20:56) Complexity of Data Protection with AI (25:42) Protecting Against Prompt Injections in AI Systems (27:53) Prompt Engineering and Penetration Testing (31:16) Risks of Prompt Engineering in AI Security (37:03) What's Hot in AI Security and Innovation?
17 Nov 2023 | What are LLMs? | AI Explained | 00:44:08
You can't protect what you don't understand. We are continuing Part 2 of our AI Primer on the AI Cybersecurity Podcast to understand what role AI will play in the world of cybersecurity. In this episode, Caleb and Ashish are levelling up the playing field, talking all things LLMs (Large Language Models) and GenAI, and laying the foundations with AI primers for cybersecurity in season 1 of the AI CyberSecurity Podcast. Questions asked: (00:00) Introduction (02:34) Evolution of LLM and GenAI (09:20) How does an LLM work? (17:15) Differentiating between LLMs (22:05) The cost of running LLMs (23:43) Deploying an LLM (26:10) Big Companies vs Startups (32:21) What's hot in AI this week! If you found this episode valuable, listen to Part 1 of the AI Primer Series! If you have any questions about AI & its security, please drop them as a comment or reach out to us on info@kaizenteq.com
03 Jan 2024 | How are LLMs deployed in enterprise | AI Explained | 00:44:00
How to efficiently secure, scale and deploy LLMs in an Enterprise? Kicking off 2024 with the final instalment of our AI Cybersecurity Primer. In this episode Caleb and Ashish talk about large language models (LLMs), their deployment in enterprise settings, and the nuances of their operation. They explore the challenges and opportunities in ensuring the security of these systems, emphasising the importance of cybersecurity measures in the evolving landscape of AI. Questions asked: (00:00) Introduction (02:23) Deployment of LLM System (07:13) Deployment in an Enterprise (12:01) Threats with LLMs (15:30) Protecting Data (18:17) LLMs and Compliance (19:51) LLM Control Plane (26:36) What's hot in AI! (36:57) Vendor risk assessment If you found this episode valuable, you can catch Part 1 & Part 2 of the AI Primer Series. If you have any questions about AI & its security, please drop them as a comment or reach out to us on info@kaizenteq.com #aicybersecurity #largelanguagemodels #ai
23 Oct 2024 | What is AI Native Security? | 00:27:48
In this episode of the AI Cybersecurity Podcast, Caleb and Ashish sat down with Vijay Bolina, Chief Information Security Officer at Google DeepMind, to explore the evolving world of AI security. Vijay shared his unique perspective on the intersection of machine learning and cybersecurity, explaining how organizations like Google DeepMind are building robust, secure AI systems. We dive into critical topics such as AI native security, the privacy risks posed by foundation models, and the complex challenges of protecting sensitive user data in the era of generative AI. Vijay also sheds light on the importance of embedding trust and safety measures directly into AI models, and how enterprises can safeguard their AI systems. Questions asked: (00:00) Introduction (01:39) A bit about Vijay (03:32) DeepMind and Gemini (04:38) Training data for models (06:27) Who can build an AI Foundation Model? (08:14) What is AI Native Security? (12:09) Does the response time change for AI Security? (17:03) What should enterprise security teams be thinking about? (20:54) Shared fate with Cloud Service Providers for AI (25:53) Final Thoughts and Predictions
07 Feb 2025 | How is AI changing Detection Engineering & SOC Operations? | 00:57:43
AI is revolutionizing many things, but how does it impact detection engineering and SOC teams? In this episode, we sit down with Dylan Williams, a cybersecurity practitioner with nearly a decade of experience in blue team operations and detection engineering. We speak about how AI is reshaping threat detection and response, the future role of detection engineers in an AI-driven world, whether AI can reduce false positives and speed up investigations, the difference between automation and agentic AI in security, and practical AI tools you can use right now in detection & response. Questions asked: (00:00) Introduction (02:01) A bit about Dylan Williams (04:05) Keeping up with AI advancements (06:24) Detection with and without AI (08:11) Would AI reduce the number of false positives? (10:28) Does AI help identify what is a signal? (14:18) The maturity of the current detection landscape (17:01) Agentic AI vs Automation in Detection Engineering (19:35) How prompt engineering is evolving with newer models? (25:52) How AI is impacting Detection Engineering today? (36:23) LLM Models become the detector (42:03) What will be the future of detection? (47:58) What can detection engineers practically do with AI today? (52:57) Favourite AI Tool and Final thoughts on Detection Engineering Resources spoken about during the episode: exa.ai - The search engine for AI; Building Effective Agents (Anthropic's blog on different architecture and design patterns for agents) - https://www.anthropic.com/research/building-effective-agents ; Introducing Ambient Agents (LangChain's blog on Ambient Agents) - https://blog.langchain.dev/introducing-ambient-agents/ ; Jared Atkinson's Blog on Capability Abstraction - https://posts.specterops.io/capability-abstraction-fbeaeeb26384 ; LangGraph Studio - https://studio.langchain.com/ ; n8n - https://n8n.io/ ; Flowise - https://flowiseai.com/ ; CrewAI - https://www.crewai.com/
23 Feb 2024 | Where is the Balance Between AI Innovation and Security? | 00:31:29
There is a complex interplay between innovation and security in the age of GenAI. As the digital landscape evolves at an unprecedented pace, Daniel, Caleb and Ashish share their insights on the challenges and opportunities that come with integrating AI into cybersecurity strategies. Caleb challenges the current trajectory of safety mechanisms in technology, arguing that overregulation may inhibit innovation and the advancement of AI's capabilities. Daniel Miessler, on the other hand, emphasizes the necessity of accepting technological inevitabilities and adapting to live in a world shaped by AI. Together, they explore the potential overreach in AI safety measures and discuss how companies can navigate the fine line between fostering innovation and ensuring security. Questions asked: (00:00) Introduction (03:19) Maintaining Balance of Innovation and Security (06:21) Uncensored LLM Models (09:32) Key Considerations for Internal LLM Models (12:23) Balance between Security and Innovation with GenAI (16:03) Enterprise risk with GenAI (25:53) How to address enterprise risk with GenAI? (28:12) Threat Modelling LLM Models
04 Apr 2024 | The Evolution of Pentesting with AI | 00:53:30
How is AI transforming traditional approaches to offensive security, pentesting, security posture management, security assessment, and even code security? Caleb and Ashish spoke to Rob Ragan, Principal Technology Strategist at Bishop Fox, about how AI is being implemented in the world of offensive security and what the right way is to threat model an LLM. Questions asked: (00:00) Introductions (02:12) A bit about Rob Ragan (03:33) AI in Security Assessment and Pentesting (09:15) How is AI impacting pentesting? (14:50) Where to start with AI implementation in offensive Security? (18:19) AI and Static Code Analysis (21:57) Key components of LLM pentesting (24:37) Testing what's inside a functional model? (29:37) What's the right way to threat model an LLM? (33:52) Current State of Security Frameworks for LLMs (43:04) Is AI changing how Red Teamers operate? (44:46) A bit about Claude 3 (52:23) Where can you connect with Rob Resources spoken about in this episode: https://github.com/AbstractEngine/pentest-muse-cli https://github.com/Azure/PyRIT
04 Nov 2024 | The Current State of AI and the Future for CyberSecurity in 2024 | 01:16:34
In this jam-packed episode, our panel explored the current state and future of AI in the cybersecurity landscape. Hosts Caleb Sima and Ashish Rajan were joined by industry leaders Jason Clinton (CISO, Anthropic), Kristy Hornland (Cybersecurity Director, KPMG) and Vijay Bolina (CISO, Google DeepMind) to dive into the critical questions surrounding AI security. We're at an inflection point where AI isn't just augmenting cybersecurity; it's fundamentally changing the game. From large language models to the use of AI in automating code writing and SOC operations, this episode examines the most significant challenges and opportunities in AI-driven cybersecurity. The experts discuss everything from the risks of AI writing insecure code to the future of multimodal models communicating with each other, raising important questions about trust, safety, and risk management. For anyone building a cybersecurity program in 2024 and beyond, you will find this conversation valuable as our panelists offer key insights into setting up resilient AI strategies, managing third-party risks, and navigating the complexities of deploying AI securely. Whether you're looking to stay ahead of AI's integration into everyday enterprise operations or explore advanced models, this episode provides the expert guidance you need. Questions asked: (00:00) Introduction (02:28) A bit about Kristy Hornland (02:50) A bit about Jason Clinton (03:08) A bit about Vijay Bolina (04:04) What are frontier/foundational models? (06:13) Open vs Closed Model (08:02) Securing Multimodal models and inputs (12:03) Business use cases for AI use (13:34) Blindspots with AI Security (27:19) What is RPA? (27:47) AI’s talking to other AI’s (32:31) Third Party Risk with AI (38:42) Enterprise view of risk with AI (40:30) CISOs want Visibility of AI Usage (45:58) Third Party Risk Management for AI (52:58) Starting point for AI in cybersecurity program (01:02:00) What the panelists have found amazing about AI
11 Jul 2024 | Exploring Top AI Security Frameworks | 00:44:49
Which AI Security Framework is right for you? As AI gains momentum, we are starting to see quite a few frameworks appearing, but the question is: which one should you start with, and can AI help you decide? Caleb and Ashish tackle this challenge head-on, comparing three major AI security frameworks: Databricks, NIST, and OWASP Top 10. They break down the key components of each framework, discuss practical implementation strategies, and provide actionable insights for CISOs and security leaders. They may have had some help along the way. Questions asked: (00:00) Introduction (02:54) Databricks AI Security Framework (DASF) (06:38) Top 3 things from DASF by Claude 3 (07:32) Top 3 things from DASF by ChatGPT (08:46) DASF Use Case Scenario (11:01) Thoughts on DASF (13:18) OWASP Top 10 for LLM Models (20:12) Google's Secure AI Framework (SAIF) (21:31) NIST AI Risk Management Framework (25:18) Claude 3 summarises NIST RMF for a 5 year old (28:00) ChatGPT compares NIST RMF and NIST CSF (28:48) How do the frameworks compare? (36:46) Summary of all the frameworks Resources from this episode: Databricks AI Security Framework (DASF)
17 Jun 2024 | Practical Applications and Future Predictions for AI Security in 2024 | 00:44:43
What is the current state and future potential of AI Security? This special episode was recorded LIVE at BSidesSF (that's why it's a little noisy), as we were amongst all the exciting action. Clint Gibler, Caleb Sima and Ashish Rajan sat down to talk about practical uses of AI today, how AI will transform security operations, whether AI can be trusted to manage permissions, and the importance of understanding AI's limitations and strengths. Questions asked: (00:00) Introduction (02:24) A bit about Clint Gibler (03:10) What's top of mind with AI Security? (04:13) tl;dr of Clint's BSides SF Talk (08:33) AI Summarisation of Technical Content (09:47) Clint's favourite part of the talk - Fuzzing (15:30) Questions Clint got about his talk (17:11) Human oversight and AI (25:04) Perfection getting in the way of good (30:15) AI on the engineering side (36:31) Predictions for AI Security
22 May 2024 | AI Highlights from RSAC 2024 and BSides SF 2024 | 00:43:36
Key AI Security takeaways from RSA Conference 2024, BSides SF 2024 and all the fringe activities that happen in SF during that week. Caleb and Ashish were speakers and panelists at several events during the week, and this episode captures the highlights from the conversations they had and the trends they saw during what they dubbed the "Cybersecurity Fringe Festival" in SF. Questions asked: (00:00) Introduction (02:53) Caleb's Keynote at BSides SF (05:14) Clint Gibler's BSides SF Talk (06:28) What are BSides Conferences? (13:55) Cybersecurity Fringe Festival (17:47) RSAC 2024 was busy (19:05) AI Security at RSAC 2024 (23:03) RSAC Innovation Sandbox (27:41) CSA AI Summit (28:43) Interesting AI Talks at RSAC (30:35) AI conversations at RSAC (32:32) AI Native Security (33:02) Data Leakage in AI Security (30:35) Is AI Security all that different? (39:26) How to filter vendors selling AI Solutions?
02 Aug 2024 | AI Code Generation - Security Risks and Opportunities | 01:10:56
How much can we really trust AI-generated code over human-generated code today? How does AI-generated code compare to human-generated code in 2024? Caleb and Ashish spoke to Guy Podjarny, Founder and CEO at Tessl, about the evolving world of AI-generated code and the current state and future trajectory of AI in software development. They discuss the reliability of AI-generated code compared to human-generated code, the potential security risks, and the necessary precautions organizations must take to safeguard their systems. Guy has also recently launched his own podcast with Simon Maple called The AI Native Dev, which you can check out if you are interested in hearing more about the AI Native development space. Questions asked: (00:00) Introduction (02:36) What is AI Generated Code? (03:45) Should we trust AI Generated Code? (14:34) The current usage of AI in Code Generation (18:27) Securing AI Generated Code (23:44) Reality of Securing AI Generated Code Today (30:22) The evolution of Security Testing (37:36) Where to start with AI Security today? (50:18) Evolution of the broader cybersecurity industry with AI (54:03) The Positives of AI for Cybersecurity (01:00:48) The startup Landscape around AI (01:03:16) The future of AppSec (01:05:53) The future of security with AI
21 Aug 2024 | Our insights from Google's AI Misuse Report | 00:33:46
In this episode of the AI Cybersecurity Podcast, we dive deep into the latest findings from Google DeepMind's report on the misuse of generative AI. Hosts Ashish and Caleb explore over 200 real-world cases of AI misuse across critical sectors like healthcare, education, and public services. They discuss how AI tools are being used to create deepfakes, fake content, and more, often with minimal technical expertise. They analyze these threats from a CISO's perspective but also include an intriguing comparison between human analysis and AI-generated insights using tools like ChatGPT and Anthropic's Claude. From the rise of AI-powered impersonation to the manipulation of public opinion, this episode uncovers the real dangers posed by generative AI in today's world. Questions asked: (00:00) Introduction (03:39) Generative Multimodal Artificial Intelligence (09:16) Introduction to the report (17:07) Enterprise Compromise of GenAI systems (20:23) Gen AI Systems Compromise (27:11) Human vs Machine Resources spoken about during the episode: Generative AI Misuse: A Taxonomy of Tactics and Insights from Real-World Data
09 Oct 2023 | AI CyberSecurity Podcast Launch Trailer | 00:03:26
Ashish Rajan and Caleb Sima, who have been cybersecurity practitioners and CISOs for over a decade, are combining forces to bring to you how cybersecurity can be applied to AI without FUD. Each episode discusses an AI theme and What's Hot in AI. You can expect episodes on your favorite podcast player every two weeks. This is an audio & video podcast, so you can find the video of each episode on the AI CyberSecurity Podcast YouTube channel. If you have any AI & cybersecurity queries or topics you would like us to cover, please reach out to us on info@kaizenteq.com You can also check out our sister podcast - Cloud Security Podcast - for all your cloud and cloud native security topics.
08 Jan 2025 | AI Cybersecurity Predictions 2025: Revolution or Reality? | 00:56:53
In this episode, to kick off 2025, we dive deep into AI and cybersecurity predictions for 2025, exploring the opportunities, challenges, and trends shaping the future of the industry. Our hosts, Ashish Rajan and Caleb Sima, sat down to discuss the evolution of SOC automation and its real-world impact on cybersecurity, the practical use cases for AI-enhanced security tools in organizations, why data security might be the real winner in 2025, the potential of agentic AI and its role in transforming security operations, and predictions for AI-powered startups and their production-ready innovations in 2025. Questions asked: (00:00) Introduction (06:32) Current AI Innovation in Cybersecurity (21:57) AI Security Predictions for 2025 (25:02) Data Security and AI in 2025 (30:56) The rise of Agentic AI (35:40) Planning for AI Skills in the team (42:53) What to ditch from 2024? (48:00) AI Making Security Predictions for 2025
05 Apr 2025 | How to Hack AI Applications: Real-World Bug Bounty Insights | 00:50:29
In this episode, we sit down with Joseph Thacker, a bug bounty hunter and AI security researcher, to uncover the evolving threat landscape of AI-powered applications and agents. Joseph shares battle-tested insights from real-world AI bug bounty programs, breaks down why AI AppSec is different from traditional AppSec, and reveals common vulnerabilities most companies miss, like markdown image exfiltration, XSS from LLM responses, and CSRF in chatbots. He also discusses the rise of AI-driven pentesting agents ("hack bots"), their current limitations, and how augmented human hackers will likely outperform them, at least for now. If you're wondering whether AI can really secure or attack itself, or how AI is quietly reshaping the bug bounty and AppSec landscape, this episode is a must-listen. Questions asked: (00:00) Introduction (02:14) A bit about Joseph (03:57) What is AI AppSec? (05:11) Components of AI AppSec (08:20) Bug Bounty for AI Systems (10:48) Common AI security issues (15:09) How will AI change pentesting? (20:23) How is the attacker landscape changing? (22:33) Where would automation add the most value? (27:03) Is code being deployed less securely? (32:56) AI Red Teaming (39:21) MCP Security (42:13) Evolution of pentest with AI Resources shared during the interview: - How to Hack AI Agents and Applications - Critical Thinking Bug Bounty Podcast - Nuclei
06 Sep 2024 | BlackHat USA 2024 AI Cybersecurity Highlights | 00:46:56
What were the key AI Cybersecurity trends at BlackHat USA? In this episode of the AI Cybersecurity Podcast, hosts Ashish Rajan and Caleb Sima dive into the key insights from Black Hat 2024. From the AI Summit to the CISO Summit, they explore the most critical themes shaping the cybersecurity landscape, including deepfakes, AI in cybersecurity tools, and automation. The episode also features discussions on the rising concerns among CISOs regarding AI platforms and what these mean for security leaders. Questions asked: (00:00) Introduction (02:49) Black Hat, DEF CON and RSA Conference (07:18) Black Hat CISO Summit and CISO Concerns (11:14) Use Cases for AI in Cybersecurity (21:16) Are people tired of AI? (21:40) AI is mostly a side feature (25:06) LLM Firewalls and Access Management (28:16) The data security challenge in AI (29:28) The trend with Deepfakes (35:28) The trend of pentest automation (38:48) The role of an AI Security Engineer
26 Jan 2025 | What does your AI cybersecurity plan look like for 2025? | 00:38:25
Welcome to 2025! In this episode, our hosts Ashish Rajan and Caleb Sima tackle the pressing question: What should your AI cybersecurity game plan look like this year? The rapid evolution of agentic AI, where AI agents can perform tasks autonomously, is set to transform businesses, but it comes with unprecedented security challenges. From the resurgence of Identity and Access Management (IAM) to the urgent need for least privilege strategies, this episode captures actionable insights for CISOs and security leaders.
Questions asked: (00:00) Introduction (01:59) The current state of AI in Enterprise (10:22) Different Levels of Agentic AI (12:05) CISO AI Cybersecurity Game plan for 2025 (15:57) IAM's fire comeback (23:11) Top 3 things for AI Cybersecurity Plan
09 Feb 2024 | Breaking Down AI's Impact on Cybersecurity | 00:46:55
What does AI mean for cybersecurity in 2024? Caleb and Ashish sat down with Daniel Miessler. This episode is a must-listen for CISOs and cybersecurity practitioners exploring AI's potential and pitfalls. From the intricacies of Large Language Models (LLMs) and API security to the nuances of data protection, Ashish, Caleb and Daniel unpack the most pressing threats and opportunities facing the cybersecurity landscape in 2024. Questions asked: (00:00) Introduction (06:06) A bit about Daniel Miessler (06:23) Current State of Artificial General Intelligence (13:57) What's going to change in security with AI? (16:40) AI's role in spear phishing (19:10) AI's role in Recon (21:08) Where to start with AI Security? (26:48) AI focused cybersecurity startups (31:12) Security Challenges with self hosted LLMs (39:34) Are the models becoming too restrictive?