
Tech Lead Journal

Henry Suryawirawan
Latest episode

Available episodes

5 of 254
  • #241 - Your Code as a Crime Scene: The Psychology Behind Software Quality - Adam Tornhill
    Brought to you by Unleash. Unleash is a private, flexible, and scalable feature flag system that lets teams decouple deployments from releases. It reduces the risk of shipping new features and gives organizations real-time control over what reaches production. And as AI accelerates development, Unleash helps engineering teams move fast and stay stable with safe rollouts and instant kill switches. Start a free trial of Unleash at getunleash.io/pricing.

    Why do so many software projects still fail despite modern tools? The answer often lies in the psychology of the team, not the technology stack.

    Software development is often viewed purely as a technical challenge, yet many projects fail due to human factors and cognitive bottlenecks. In this episode, Adam Tornhill, CTO and Founder of CodeScene, shares his unique journey combining software engineering with psychology to solve these persistent industry problems. He explains the concept of “Your Code as a Crime Scene,” a method for using behavioral analysis to identify high-risk areas in a codebase that static analysis tools often miss.

    Adam covers the tangible business impact of code health, specifically how it drives predictability and development speed. He explains why 1-2% of our codebase accounts for up to 70% of our development work, and how focusing on these hotspots can make our team 2x faster and 10x more predictable (a minimal git-mining sketch of the hotspot idea appears after the episode list below). Adam also provides a critical reality check on the rise of AI in coding, exploring whether it will help reduce technical debt or accelerate it, and offers strategies for maintaining quality in an AI-assisted future.

    Key topics discussed:
    • Combining psychology and software engineering
    • Why predictability matters more than speed
    • Treating your codebase as a crime scene
    • Behavioral analysis vs. static analysis
    • The hidden danger of the “Bus Factor”
    • Will AI help or hurt code quality?
    • Why healthy code helps both humans and AI
    • Essential guardrails for AI-generated code

    Timestamps:
    (00:00) Trailer & Intro
    (01:29) Career Turning Point: From Developer to Psychologist
    (02:36) Combining Psychology and Software Engineering
    (04:00) Why Engineering Leaders Need Psychology Knowledge
    (05:46) The Root Cause of Failing Software Projects
    (07:43) Why Code Abstractness Makes Quality Hard to Measure
    (09:29) Aligning Code Quality with Business Outcomes
    (11:37) Code Health: 2x Speed, 10x Predictability
    (12:58) Why Predictability is Undervalued in Software
    (19:53) Introducing “Your Code as a Crime Scene”
    (21:57) Behavioral Code Analysis: Hotspot Analysis vs Static Code Analysis
    (24:06) Behavioral Code Analysis: Understanding Change Coupling
    (26:30) Dealing with God Classes
    (29:40) Behavioral Code Analysis: The Social Side of Code
    (31:33) Why Developers Aren’t Interchangeable
    (33:14) Introduction to CodeScene
    (36:48) Will AI Help or Hurt Code Quality?
    (39:14) Essential Guardrails for AI-Generated Code
    (42:06) Using CodeScene to Maintain Quality in the AI Era
    (43:06) How AI Accelerates Technical Debt at Scale
    (45:54) Why AI-Friendly Code is Human-Friendly Code
    (48:32) Documentation: Capturing the “Why” for Humans and AI
    (50:42) The Reality Check: Future of Software Development with AI
    (52:41) 3 Tech Lead Wisdom

    _____

    Adam Tornhill’s Bio
    Adam Tornhill is the founder and CTO of CodeScene and the best-selling author of Your Code as a Crime Scene. Combining degrees in engineering and psychology, Adam helps companies optimize software quality using AI-driven methodologies. He is an international keynote speaker and researcher who enjoys retro computing and martial arts in his spare time.

    Follow Adam:
    • LinkedIn – linkedin.com/in/adam-tornhill-71759b48
    • CodeScene – codescene.com
    • Your Code as a Crime Scene – pragprog.com/titles/atcrime2/your-code-as-a-crime-scene-second-edition

    Like this episode?
    • Show notes & transcript: techleadjournal.dev/episodes/241
    • Follow @techleadjournal on LinkedIn, Twitter, and Instagram.
    • Buy me a coffee or become a patron.
    --------  
    1:01:51
  • #240 - AI as Your Thought Partner: Break Boundaries & Do What You Never Could Before - Greg Shove
    Brought to you by Unleash. Unleash is a private, flexible, and scalable feature flag system that lets teams decouple deployments from releases. It reduces the risk of shipping new features and gives organizations real-time control over what reaches production. And as AI accelerates development, Unleash helps engineering teams move fast and stay stable with safe rollouts and instant kill switches. Start a free trial of Unleash at getunleash.io/pricing.

    Are you making critical decisions without consulting AI? Greg argues it’s now irresponsible for any leader to make high-stakes decisions without talking to AI first.

    In this episode, Greg Shove, CEO of Section and a multi-time founder with 30 years of entrepreneurial experience, shares how AI is fundamentally different from any previous technology wave. Unlike traditional software that makes us more productive within our existing boundaries, AI allows us to jump capability boundaries – enabling individuals and organizations to do things they simply couldn’t do before.

    Greg explains why most enterprise AI rollouts are failing (hint: they’re treating AI like software when it’s actually co-intelligence), how to cultivate resilience through multiple startup failures, and the practical strategies for getting teams to adopt AI (from simple hacks like putting a post-it note on your monitor to creating an entire AI-dedicated screen).

    This conversation goes beyond the hype to explore both the superpowers and limitations of AI, the real organizational outcomes you can expect (spoiler: it’s not just about layoffs), and why moving from efficiency to creation is the key to unlocking AI’s true potential in your organization.

    Key topics discussed:
    • Why AI breaks capability boundaries unlike any other tech
    • Treating AI as a thought partner, not just a productivity tool
    • Why most large organizations fail at AI deployment
    • Managing workforce anxiety during AI transformation
    • The four possible team outcomes when rolling out AI
    • Moving from efficiency (cut) to growth (create) with AI
    • The Post-it note hack that changed how teams use AI daily
    • Walking the walk: leading authentically in AI adoption

    Timestamps:
    (00:00:00) Trailer & Intro
    (00:02:44) Career Turning Points
    (00:06:03) Cultivating Entrepreneurial Resilience
    (00:07:49) Understanding the AI Wave: Scale and Transformation
    (00:12:29) Pivoting to AI: Section’s Transformation Journey
    (00:17:57) AI as a Thought Partner
    (00:22:57) Practical Tips for Leaders Using AI Daily
    (00:30:49) Rolling Out AI Organization-Wide: Managing Change and Anxiety
    (00:41:30) AI ROI: Beyond Efficiency to Creation
    (00:51:01) AI-Powered Education: The ProfAI Approach
    (00:57:53) 1 Tech Lead Wisdom

    _____

    Greg Shove’s Bio
    Greg Shove is a seven-time CEO, all in on AI. After first using ChatGPT in February 2023, he pivoted his company Section to be AI-powered. Now he helps enterprise organizations move from AI-anxious to AI-proficient with a proven playbook, delivered through keynote speaking and executive workshops. Greg is also the founder of Machine & Partners, an AI lab building custom enterprise AI applications, and co-author of Personal Math, a weekly newsletter sharing business insights for early-career leaders and founders.

    Follow Greg:
    • LinkedIn – linkedin.com/in/gregshove
    • Newsletter – personalmath.substack.com
    • Section AI – sectionai.com
    • Prof AI – prof.ai

    Like this episode?
    • Show notes & transcript: techleadjournal.dev/episodes/240
    • Follow @techleadjournal on LinkedIn, Twitter, and Instagram.
    • Buy me a coffee or become a patron.
    --------  
    1:06:31
  • #239 - Taming Your Technical Debt: Mastering the Trade-Off Problem - Andrew Brown
    Brought to you by Jellyfish. AI tools alone won’t transform your engineering org. Jellyfish provides insights into AI tool adoption, cost, and delivery impact – so you can make better investment decisions and build teams that use AI effectively. See for yourself at jellyfish.co/platform/ai-impact.

    Why do organizations constantly complain about having too much technical debt? Because they’re solving the wrong problem.

    In this episode, Dr. Andrew Brown, author of “Taming Your Dragon: Addressing Your Technical Debt,” reveals a profound insight: technical debt isn’t fundamentally a technical problem. It’s a trade-off problem rooted in human bias, organizational systems, and economic incentives. Through his innovative “Technical Debt Onion Model,” Andrew shows how decisions about code quality happen across five interconnected layers, from individual cognitive biases to wicked problem dynamics.

    Andrew explains why the financial debt analogy is dangerously misleading and, more importantly, how others can rack up debt you’ll eventually pay for. Drawing from behavioral economics, systems thinking, and organizational theory, he reveals why our emotions, not logic, drive most technical decisions, and how to work with this reality rather than against it.

    Key topics discussed:
    • Why technical debt is a trade-off problem, not technical
    • How emotions override logic in critical decisions
    • The Technical Debt Onion Model framework explained
    • Principal-agent problems sabotaging your codebase
    • Externalities: who pays for shortcuts taken today?
    • Why burning down debt is already too late
    • Ulysses contracts for managing future obligations (see the debt-ratchet sketch after the episode list below)
    • Systems thinking applied to software development
    • Wicked problems: why different teams see different solutions
    • AI’s impact on technical debt creation

    Timestamps:
    (00:00:00) Trailer & Intro
    (00:02:24) Career Turning Points
    (00:06:06) The Importance of Skilling Up in Tech
    (00:06:49) The Definition of Technical Debt
    (00:09:08) The Broken Analogy of Technical Debt as a Financial Debt
    (00:09:58) The Role of Human Bias and Organization Issues in Technical Debt
    (00:12:41) Tech Debt is a Trade-off Problem
    (00:13:07) Building a Healthier Relationship with Technical Debt
    (00:15:15) The Technical Debt Onion Model
    (00:18:17) The Onion Model: Trade-Off Layer
    (00:25:10) The Ulysses Contract for Managing Technical Debt
    (00:33:03) The Onion Model: Systems Layer
    (00:36:32) The Onion Model: Economics/Game-Theory Layer
    (00:41:50) The Onion Model: Wicked Problem Layer
    (00:48:10) How Organizations Can Start Managing Technical Debt Better
    (00:52:03) The AI Impact on Technical Debt
    (00:56:16) 3 Tech Lead Wisdom

    _____

    Andrew Brown’s Bio
    Andrew Richard Brown has worked in software since 1999, starting as an SAP programmer fixing Y2K bugs. He realized the biggest problems in software development were human, not technical, and has since helped teams improve performance by addressing these issues. Andrew coaches organizations on software development and quality engineering, focusing on technical debt, risk in complex systems, and project underestimation. He investigates how cognitive biases drive software problems and applies behavioral science techniques to solve them. His research has produced counterintuitive insights and fresh approaches. He regularly speaks at international conferences and runs a growing YouTube channel on these topics.

    Follow Andrew:
    • LinkedIn – linkedin.com/in/andrew-brown-4b38062
    • YouTube – @behaviouralsoftwareclub705
    • Email – [email protected]
    • Taming Your Dragon – https://www.amazon.com/Taming-Your-Dragon-Addressing-Technical/dp/B0CV4TTP32/

    Like this episode?
    • Show notes & transcript: techleadjournal.dev/episodes/239
    • Follow @techleadjournal on LinkedIn, Twitter, and Instagram.
    • Buy me a coffee or become a patron.
    --------  
    1:06:29
  • #238 - AI is Smart Until It's Dumb: Why LLM Will Fail When You Least Expect It - Emmanuel Maggiori
    Why does an AI that brilliantly generates code suddenly fail at basic math? The answer explains why your LLM will fail when you least expect it.

    In this episode, Emmanuel Maggiori, author of “Smart Until It’s Dumb” and “The AI Pocket Book,” cuts through the AI hype to reveal what LLMs actually do and, more importantly, what they can’t. Drawing from his experience building AI systems and witnessing multiple AI booms and busts, Emmanuel explains why machine learning works brilliantly until it makes mistakes no human would ever make.

    He shares why businesses repeatedly fail at AI adoption, how hallucinations are baked into the technology (a toy next-token sampling sketch after the episode list below illustrates the underlying reason), and what developers need to know about building reliable AI products.

    Whether you’re implementing AI at work or concerned about your career, this conversation offers a grounded perspective on navigating the current AI wave without getting swept away by unrealistic promises.

    Key topics discussed:
    • Why AI projects fail the same way repeatedly
    • How LLMs work and why they brilliantly fail
    • Why hallucinations can’t be fixed with better prompts
    • Why self-driving cars still need human operators
    • Adopting AI without falling into hype traps
    • How engineers stay relevant in the AI era
    • Why AGI predictions are mostly marketing
    • Building valuable products in boring industries

    Timestamps:
    (00:00:00) Trailer & Intro
    (00:02:32) Career Turning Points
    (00:06:41) Writing “Smart Until It’s Dumb” and “The AI Pocket Book”
    (00:08:14) The History of AI Booms & Winters
    (00:11:34) Why Generative AI Hype is Different Than the Past AI Waves
    (00:13:26) AI is Smart Until It’s Dumb
    (00:16:45) How LLM and Generative AI Actually Work
    (00:22:53) What Makes LLMs Smart
    (00:27:25) Foundational Model
    (00:30:01) RAG and Agentic AI
    (00:34:09) Tips on How to Adopt AI Within Companies
    (00:37:56) How to Reduce & Avoid AI Hallucination Problem
    (00:45:49) The Important Role of Benchmarks When Building AI Products
    (00:50:57) Advice for Software Engineers to Deal With AI Concerns
    (00:56:49) Advice for Junior Developers
    (00:59:34) Vibe Coders and Prompt Engineers: New Jobs or Just Hype?
    (01:01:55) The AGI Possibility
    (01:07:23) Three Tech Lead Wisdom

    _____

    Emmanuel Maggiori’s Bio
    Emmanuel Maggiori, PhD, is a software engineer and 10-year AI industry insider. He has developed AI for a variety of applications, from processing satellite images to packaging deals for holiday travelers. He is the author of the books Smart Until It’s Dumb, Siliconned, and The AI Pocket Book.

    Follow Emmanuel:
    • LinkedIn – linkedin.com/in/emaggiori
    • Website – emaggiori.com

    Like this episode?
    • Show notes & transcript: techleadjournal.dev/episodes/238
    • Follow @techleadjournal on LinkedIn, Twitter, and Instagram.
    • Buy me a coffee or become a patron.
    --------  
    1:16:25
  • #237 - Tackling AI and Modern Complexity with Deming's System of Profound Knowledge - John Willis
    Can decades-old management philosophy actually help us tackle AI’s biggest challenges?

    In this episode, John Willis, a foundational figure in the DevOps movement and co-author of the DevOps Handbook, takes us through Dr. W. Edwards Deming’s System of Profound Knowledge and its surprising relevance to today’s most pressing challenges. John reveals how Deming’s four-lens framework (theory of knowledge, understanding variation, psychology, and systems thinking) provides a practical approach to managing complexity.

    The conversation moves beyond theoretical management principles into real-world applications, including incident management mistakes that have killed people, the polymorphic nature of AI agents, and why most organizations are getting AI adoption dangerously wrong.

    Key topics discussed:
    • Deming’s System of Profound Knowledge and 14 Points of Management, and what they actually mean for modern organizations
    • How Deming influenced Toyota, DevOps, Lean, and Agile (and why the story is more nuanced than most people think)
    • The dangers of polymorphic agentic AI and what happens when quantum computing enters the picture
    • A practical framework for managing Shadow AI in your organization (learning from the cloud computing era)
    • Why incidents are “unplanned investments” and the fatal cost of dismissing P3 alerts
    • Treating AI as “alien cognition” rather than human-like intelligence
    • The missing piece in AI conversations: understanding the philosophy of AI, not just the technology

    Timestamps:
    (00:00:00) Trailer & Intro
    (00:02:27) Career Turning Points
    (00:05:31) Why Writing a Book About Deming
    (00:12:53) Deming’s Influence on Toyota Production System
    (00:19:31) Deming’s System of Profound Knowledge
    (00:28:12) The Importance of Systems Thinking in Complex Tech Organizations
    (00:31:43) Deming’s 14 Points of Management
    (00:44:17) The Impact of AI Through the Lens of Deming’s Profound Knowledge
    (00:49:56) The Danger of Polymorphic Agentic AI Processes
    (00:53:12) The Challenges of Getting to Understand AI Decisions
    (00:55:43) A Leader’s Guide to Practical AI Implementation
    (01:05:03) 3 Tech Lead Wisdom

    _____

    John Willis’ Bio
    John Willis is a prolific author and a foundational figure in the DevOps movement, co-authoring the seminal The DevOps Handbook. With over 45 years of experience in IT, his work has been central to shaping modern IT operations and strategy. He is also the author of Deming’s Journey to Profound Knowledge and Rebels of Reason, which explores the history leading to modern AI. John is a passionate mentor, a self-described “maniacal learner”, and a deep researcher into systems thinking, management theory, and the philosophical implications of new technologies like AI and quantum computing. He actively shares his insights through his “Dear CIO” newsletter (aicio.ai) and newsletters on LinkedIn covering Deming, AI, and Quantum.

    Follow John:
    • LinkedIn – linkedin.com/in/johnwillisatlanta
    • Twitter – x.com/botchagalupe
    • AI CIO – aicio.ai
    • Attention Is All You Need – linkedin.com/newsletters/attention-is-all-you-need-7167889892029505536
    • Profound – linkedin.com/newsletters/profound-7161118352210288640
    • Rebels of Uncertainty – linkedin.com/newsletters/rebels-of-uncertainty-7359198621222719490

    Like this episode?
    • Show notes & transcript: techleadjournal.dev/episodes/237
    • Follow @techleadjournal on LinkedIn, Twitter, and Instagram.
    • Buy me a coffee or become a patron.
    --------  
    1:09:53
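
The hotspot analysis discussed in episode #241 combines two signals: how often a file changes (taken from version-control history) and how complex it is. The Python script below is a minimal sketch of that idea only, not CodeScene's actual algorithm: it mines `git log` for per-file change counts and uses raw line count as a stand-in complexity proxy, and the product score plus the top-10 cutoff are arbitrary choices made for illustration. Run it from inside a clone; in practice you would also filter by file type, time window, and a better complexity measure.

    # Minimal hotspot sketch: rank files by how often they change (mined from git
    # history) multiplied by a crude size proxy (line count). Illustration only.
    import subprocess
    from collections import Counter
    from pathlib import Path

    def change_frequencies(repo: str) -> Counter:
        """Count how many commits touched each file, via `git log --name-only`."""
        out = subprocess.run(
            ["git", "-C", repo, "log", "--name-only", "--pretty=format:"],
            capture_output=True, text=True, check=True,
        ).stdout
        return Counter(line.strip() for line in out.splitlines() if line.strip())

    def line_count(path: Path) -> int:
        """Crude complexity proxy: current line count (0 if the file is gone)."""
        try:
            return sum(1 for _ in path.open(errors="ignore"))
        except OSError:
            return 0

    def hotspots(repo: str = ".", top: int = 10) -> list:
        """Return (file, change count, line count) tuples, highest score first."""
        scored = [
            (name, changes, line_count(Path(repo) / name))
            for name, changes in change_frequencies(repo).items()
        ]
        return sorted(scored, key=lambda t: t[1] * t[2], reverse=True)[:top]

    if __name__ == "__main__":
        for name, changes, loc in hotspots():
            print(f"{changes:5d} changes  {loc:6d} lines  {name}")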
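
One way to encode the "Ulysses contract" idea from episode #239 in tooling is a debt ratchet: a check that fails the build whenever a simple debt signal grows past a recorded baseline, while the baseline is only ever allowed to move down. This is my own illustration of the concept, not a method taken from Andrew's book; the marker list, the baseline file name, and the restriction to *.py files are all assumptions made for the sketch.

    # A debt ratchet as a "Ulysses contract": the team binds its future self by
    # failing CI when the count of known debt markers exceeds the stored baseline.
    # Paying debt down lowers the baseline automatically; adding more cannot raise it.
    import json
    import subprocess
    import sys
    from pathlib import Path

    BASELINE_FILE = Path("debt_baseline.json")   # assumed location, checked into the repo
    MARKERS = ("TODO", "FIXME", "HACK")          # assumed debt markers

    def count_debt_markers(root: str = ".") -> int:
        """Count marker lines in tracked Python files using `git grep -c`."""
        total = 0
        for marker in MARKERS:
            out = subprocess.run(
                ["git", "-C", root, "grep", "-c", marker, "--", "*.py"],
                capture_output=True, text=True,   # non-zero exit simply means no matches
            ).stdout
            total += sum(int(line.rsplit(":", 1)[1]) for line in out.splitlines() if ":" in line)
        return total

    def main() -> int:
        current = count_debt_markers()
        baseline = (json.loads(BASELINE_FILE.read_text())["markers"]
                    if BASELINE_FILE.exists() else current)
        if current > baseline:
            print(f"Debt ratchet violated: {current} markers > baseline {baseline}")
            return 1
        # Ratchet downward whenever debt has been paid off.
        BASELINE_FILE.write_text(json.dumps({"markers": current}))
        print(f"Debt markers: {current} (baseline {baseline})")
        return 0

    if __name__ == "__main__":
        sys.exit(main())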
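
Episode #238 stresses that hallucinations are baked into how LLMs generate text. The toy sampler below is a deliberately crude illustration with made-up probabilities, nothing like a production LLM: it shows that generation is just repeated sampling of a plausible next token, and nothing in the loop checks whether the sentence being built is true. Real models condition on the whole context and are far more coherent, but the objective is still next-token prediction rather than factual accuracy.

    # Toy next-token sampler: a hand-written probability table stands in for a
    # language model (all numbers are invented for illustration). The loop only
    # asks "what word plausibly comes next?" and never fact-checks the output.
    import random

    NEXT_TOKEN = {
        "the capital of": {"france": 0.6, "australia": 0.4},
        "france": {"is": 1.0},
        "australia": {"is": 1.0},
        "is": {"paris": 0.6, "sydney": 0.3, "canberra": 0.1},
    }

    def generate(prompt: str, steps: int = 3, temperature: float = 1.0) -> str:
        tokens = [prompt]
        for _ in range(steps):
            dist = NEXT_TOKEN.get(tokens[-1])
            if dist is None:
                break
            words = list(dist)
            # Temperature reshapes the distribution; it never adds a fact check.
            weights = [p ** (1.0 / temperature) for p in dist.values()]
            tokens.append(random.choices(words, weights=weights, k=1)[0])
        return " ".join(tokens)

    if __name__ == "__main__":
        random.seed(7)
        for _ in range(5):
            # Sometimes prints "the capital of australia is sydney": fluent, confident, wrong.
            print(generate("the capital of"))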

More Technology podcasts

About Tech Lead Journal

Great technical leadership requires more than just great coding skills. It requires a range of other skills that are not well defined and cannot be fully learned from any school or book. Hear from experienced technical leaders as they share their journeys and philosophies for building great technical teams and achieving technical excellence. Find out what makes them great and how to apply those lessons to your own work and team.
Podcast website

Listen to Tech Lead Journal, AI Report, and many other podcasts from around the world with the radio.net app
