The Secure Developer

Snyk
The Secure Developer
Nieuwste aflevering

Beschikbare afleveringen

5 van 171
  • Vulnerabilities In Enterprise AI Workflows With Nicolas Dupont
    Episode SummaryAs AI systems become increasingly integrated into enterprise workflows, a new security frontier is emerging. In this episode of The Secure Developer, host Danny Allan speaks with Nicolas Dupont about the often-overlooked vulnerabilities hiding in vector databases and how they can be exploited to expose sensitive data.Show NotesAs organizations shift their focus from training massive models to deploying them for inference and ROI, they are increasingly centralizing proprietary data into vector databases to power RAG (Retrieval-Augmented Generation) and agentic workflows. However, these vector stores are frequently deployed with insufficient security measures, often relying on the dangerous misconception that vector embeddings are unintelligible one-way hashes.Nicolas Dupont explains that vector embeddings are simply dense representations of semantic meaning that can be inverted back to their original text or media formats relatively trivially. Because vector databases traditionally require plain text access to perform similarity searches efficiently, they often lack encryption-in-use, making them susceptible to data exfiltration and prompt injection attacks via context loading. This is particularly concerning when autonomous agents are over-provisioned with write access, potentially allowing malicious actors to poison the knowledge base or manipulate system prompts.The discussion highlights the need for a "secure by inception" approach, advocating for granular encryption that protects data even during processing without incurring massive performance penalties. Beyond security, this architectural rigor is essential for meeting privacy regulations like GDPR and HIPAA in regulated industries. The episode concludes with a look at the future of AI security, emphasizing that while AI can accelerate defense, attackers are simultaneously leveraging the same tools to create more sophisticated threats.LinksCyborgOWASP LLM Top 10Snyk - The Developer Security Company Follow UsOur WebsiteOur LinkedIn
    --------  
    34:34
  • Autonomous Identity Governance With Paul Querna
    Episode SummaryCan multi-factor authentication really “solve” security, or are attackers already two steps ahead? In this episode of The Secure Developer, we sit down with Paul Querna, CTO and co-founder at ConductorOne, to unpack the evolving landscape between authentication and authorisation. In our conversation, Paul delves into the difference between authorisation and authentication, why authorisation issues have only been solved for organisations that invest properly, and why that progress has pushed attackers toward session theft and abusing standing privilege.Show NotesIn this episode of The Secure Developer, host Danny Allan sits down with Paul Querna, CTO and co-founder of ConductorOne, to discuss the evolving landscape of identity and access management (IAM). The conversation begins by challenging the traditional assumption that multi-factor authentication (MFA) is a complete solution, with Paul explaining that while authentication is "solved-ish," attackers are now moving to steal sessions and exploit authorization weaknesses. He shares his journey into the identity space, which began with a realization that old security models based on firewalls and network-based trust were fundamentally broken.The discussion delves into the critical concept of least privilege, a core pillar of the zero-trust movement. Paul highlights that standing privilege—where employees accumulate access rights over time—is a significant risk that attackers are increasingly targeting, as evidenced by reports like the Verizon Data Breach Investigations Report. This is even more critical with the rise of AI, where agents could potentially have overly broad access to sensitive data. They explore the idea of just-in-time authorization and dynamic access control, where privileges are granted for a specific use case and then revoked, a more mature approach to security.Paul and Danny then tackle the provocative topic of using AI to control authorization. While they agree that AI-driven decisions are necessary to maintain user experience and business speed, they acknowledge that culturally, we are not yet ready to fully trust AI with such critical governance decisions. They discuss how AI could act as an orchestrator, making recommendations for low-risk entitlements while high-risk ones remain policy-controlled. Paul also touches on the complexity of this new world, with non-human identities, personal productivity agents, and the need for new standards like extensions to OAuth. The episode concludes with Paul sharing his biggest worries and hopes for the future. He is concerned about the speed of AI adoption outpacing security preparedness, but is excited by the potential for AI to automate away human toil, empowering IAM and security teams to focus on strategic, high-impact work that truly secures the organization.LinksConductorOneVerizon Data Breach Investigations ReportAWS CloudWatchSnyk - The Developer Security Company Follow UsOur WebsiteOur LinkedIn
    --------  
    31:24
  • Retrieval-Augmented Generation With Bob Remeika From Ragie
    Episode SummaryBob Remeika, CEO and Co-Founder of Ragie, joins host Danny Allan to demystify Retrieval-Augmented Generation (RAG) and its role in building secure, powerful AI applications. They explore the nuances of RAG, differentiating it from fine-tuning, and discuss how it handles diverse data types while mitigating performance challenges. The conversation also covers the rise of AI agents, security best practices like data segmentation, and the exciting future of AI in amplifying developer productivity.Show NotesIn this episode of The Secure Developer, host Danny Allan is joined by Bob Remeika, co-founder and CEO of Ragie, a company focused on providing a RAG-as-a-Service platform for developers. The conversation dives deep into Retrieval-Augmented Generation (RAG) and its practical applications in the AI world.Bob explains RAG as a method for providing context to large language models (LLMs) that they have not been trained on. This is particularly useful for things like a company's internal data, such as a parental leave policy, that would be unknown to a public model. The discussion differentiates RAG from fine-tuning an LLM, highlighting that RAG doesn't require a training step, making it a simple way to start building an AI application. The conversation also covers the challenges of working with RAG, including the variety of data formats (like text, audio, and video) that need to be processed and the potential for performance slowdowns with large datasets.The episode also explores the most common use cases for RAG-based systems, such as building internal chatbots and creating AI-powered applications for users. Bob addresses critical security concerns, including how to manage authorization and prevent unauthorized access to data using techniques like data segmentation and metadata tagging. The discussion then moves to the concept of "agents," which Bob defines as multi-step, action-oriented AI systems. Bob and Danny discuss how a multi-step approach with agents can help mitigate hallucinations by building in verification steps. Finally, they touch on the future of AI, with Bob expressing excitement about the "super leverage" that AI provides to amplify developer productivity, allowing them to get 10x more done with a smaller team. Bob and Danny both agree that AI isn't going to replace developers, but rather make them more valuable by enabling them to be more productive.LinksRagie - Fully Managed Multimodal RAG-as-a-Service for DevelopersRagie ConnectOpenAIGemini 2.5Claude Sonneto4-miniClaudeClaude OpusCursorSnowflakeSnyk - The Developer Security Company Follow UsOur WebsiteOur LinkedIn
    --------  
    36:45
  • Securing The Future Of AI With Dr. Peter Garraghan
    Episode SummaryMachine learning has been around for decades, but as it evolves rapidly, the need for robust security grows even more urgent. Today on the Secure Developer, co-founder and CEO of Mindgard, Dr. Peter Garraghan, joins us to discuss his take on the future of AI. Tuning in, you’ll hear all about Peter’s background and career, his thoughts on deep neural networks, where we stand in the evolution of machine learning, and so much more! We delve into why he chooses to focus on security in deep neural networks before he shares how he performs security testing. We even discuss large language model attacks and why security is the responsibility of all parties within an AI organisation. Finally, our guest shares what excites him and scares him about the future of AI.Show NotesIn this episode of The Secure Developer, host Danny Allan welcomes Dr. Peter Garraghan, CEO and CTO of Mindgard, a company specializing in AI red teaming. He is also a chair professor in computer science at Lancaster University, where he specializes in the security of AI systems.Dr. Garraghan discusses the unique challenges of securing AI systems, which he began researching over a decade ago, even before the popularization of the transformer architecture. He explains that traditional security tools often fail against deep neural networks because they are inherently random and opaque, with no code to unravel for semantic meaning. He notes that AI, like any other software, has risks—technical, economic, and societal.The conversation delves into the evolution of AI, from early concepts of artificial neural networks to the transformer architecture that underpins large language models (LLMs) today. Dr. Garraghan likens the current state of AI adoption to a "great sieve theory," where many use cases are explored, but only a few, highly valuable ones, will remain and become ubiquitous. He identifies useful applications like coding assistance, document summarization, and translation.The discussion also explores how attacks on AI are analogous to traditional cybersecurity attacks, with prompt injection being similar to SQL injection. He emphasizes that a key difference is that AI can be socially engineered to reveal information, which is a new vector of attack. The episode concludes with a look at the future of AI security, including the emergence of AI security engineers and the importance of everyone in an organization being responsible for security. Dr. Garraghan shares his biggest fear—the anthropomorphization of AI—and his greatest optimism—the emergence of exciting and useful new applications.LinksMindgard - Automated AI Red Teaming & Security Testing‍Snyk - The Developer Security Company Follow UsOur WebsiteOur LinkedIn
    --------  
    38:19
  • The Future is Now with Michael Grinich (WorkOS)
    Episode SummaryWill AI replace developers? In this episode, Snyk CTO Danny Allan chats with Michael Grinich, the founder and CEO of WorkOS, about the evolving landscape of software development in the age of AI. Michael shares a fascinating analogy, comparing the shift in software engineering to the historical evolution of music, from every family having a piano to the modern era of digital creation with tools like GarageBand. They explore the concept of "vibe coding," the future of development frameworks, and how lessons from the browser wars—specifically the advent of sandboxing—can inform how we build secure AI-driven applications.Show NotesIn this episode, Danny Allan, CTO at Snyk, is joined by Michael Grinich, Founder and CEO of WorkOS, to explore the profound impact of AI on the world of software development. Michael discusses WorkOS's mission to enhance developer joy by providing robust, enterprise-ready features like authentication, user management, and security, allowing developers to remain in a creative flow state. The conversation kicks off with the provocative question of whether AI will replace developers. Michael offers a compelling analogy, comparing the current shift to the historical evolution of music, from a time when a piano was a household staple to the modern era where tools like GarageBand and Ableton have democratized music creation. He argues that while the role of a software engineer will fundamentally change, it won't disappear; rather, it will enable more people to create software in entirely new ways.The discussion then moves into the practical and security implications of this new paradigm, including the concept of "vibe coding," where applications can be generated on the fly based on a user's description. Michael cautions that you can't "vibe code" your security infrastructure, drawing a parallel to the early, vulnerable days of web browsers before sandboxing became a standard. He predicts that a similar evolution is necessary for the AI world, requiring new frameworks with tightly defined security boundaries to contain potentially buggy, AI-generated code.Looking to the future, Michael shares his optimism for the emergence of open standards in the AI space, highlighting the collaborative development around the Model Context Protocol (MCP) by companies like Anthropic, OpenAI, Cloudflare, and Microsoft. He believes this trend toward openness, much like the open standards of the web (HTML, HTTP), will prevent a winner-take-all scenario and foster a more innovative and accessible ecosystem. The episode wraps up with a look at the incredible energy in the developer community and how the challenge of the next decade will be distributing this powerful new technology to every industry in a safe, secure, and trustworthy manner.LinksWorkOS - Your app, enterprise readyWorkOS on YouTubeMITMCP Night 2025Snyk - The Developer Security Company Follow UsOur WebsiteOur LinkedIn
    --------  
    33:11

Meer Zaken en persoonlijke financiën podcasts

Over The Secure Developer

Securing the future of DevOps and AI: real talk with industry leaders.
Podcast website

Luister naar The Secure Developer, Dutch Dragons en vele andere podcasts van over de hele wereld met de radio.net-app

Ontvang de gratis radio.net app

  • Zenders en podcasts om te bookmarken
  • Streamen via Wi-Fi of Bluetooth
  • Ondersteunt Carplay & Android Auto
  • Veel andere app-functies

The Secure Developer: Podcasts in familie

Social
v8.1.2 | © 2007-2025 radio.de GmbH
Generated: 12/13/2025 - 10:01:32 PM