The Vernon Richard Show

Vernon Richards and Richard Bradshaw
Latest episode

33 episodes

  • The Vernon Richard Show

    Six Principles of Automation in Testing: Still Relevant in 2026?

    23-02-2026 | 1 hr 3 min
    In this episode, Richard Bradshaw and Vernon discuss the relevance and application of the six principles of automation in testing in the context of AI advancements. They explore how these principles hold up in 2026, the challenges faced in automation, and the future of testing strategies.

    00:00 - Intro
    01:47 - Welcome (Richard is not at home 👀)
    02:07 - Ramadan, cooking without tasting, and plastic teeth 🦷
    04:01 - Today's topic: revisiting the AiT principles ahead of a keynote
    04:58 - What is Automation in Testing (AiT)?
    06:49 - Principle 1: Supporting Testing over Replicating Testing
    07:01 - Vernon's take: testing is a performance, not a click sequence
    08:22 - What the industry promised vs what automation actually does
    08:49 - The serendipity you lose when a human isn't testing
    09:59 - Agentic testing: observing more, but still not replicating humans
    10:56 - The danger of anthropomorphising AI output
    12:10 - LLMs always give an answer — and that's the problem
    13:03 - Principle 2: Testability over Automatability
    13:14 - Vernon's take: narrow vs broad — operate, control, observe
    14:38 - Making apps automatable for the robots but not the humans
    15:37 - The shiniest framework in a broken testing context
    16:40 - If it's testable, it's probably automatable — but not vice versa
    16:55 - Automation strategy vs testing strategy: when they compete, everyone loses
    17:46 - The problem has always been testing, not automation
    19:57 - Principle 3: Testing Expertise over Coding Expertise
    20:18 - Vernon's take: testing expertise lets you leverage the tools
    21:47 - The spoonfed tests problem: great at automating, lost without guidance
    22:36 - The "code school" era: everyone told to learn to code
    22:51 - Coding agents have changed the maths on this
    26:01 - The new nuance: test design and framework knowledge over writing the code
    28:44 - Evaluating code is a testing problem — and LLMs can help you do it
    30:43 - Are agents as good as a junior developer?
    31:42 - Outcome Engineering (O16G) and the race to write the AI principles
    32:13 - Simon Wardley: we're in the wild west again
    33:22 - Principle 4: Problems over Tools
    33:29 - Vernon's take: the hammer and the nail
    34:07 - Don't let your problems be shaped by the framework you have
    34:36 - New automation opportunities beyond testing: PRs, logs, story review
    35:30 - Principle 5: Risk over Coverage
    36:12 - Vernon's take: 100% coverage ≠ 100% risk coverage
    38:00 - The one test case, one automated test fallacy
    39:04 - Where in the system is the risk? Do you even know your layers?
    39:49 - Probabilistic vs non-deterministic: refining the language around AI
    40:53 - Coverage as intentional vs coverage as a number someone picked once
    43:15 - Principle 6: Observability over Understanding
    43:24 - Vernon's take: just-in-time understanding vs reading everything upfront
    44:12 - What the principle was actually about: making automation results observable
    47:00 - Does this principle belong in testing, or has it grown into quality?
    49:00 - So... what's missing?
    50:00 - The four pillars: Strategy, Creation, Usage, and Education
    57:05 - Automation in Quality: the bigger opportunity
    01:01:00 - Wrap up + Vern's Lead Dev panel

    Links to stuff we mentioned during the pod:
    04:00 - Automation in Testing (AiT)
    The principles live at automationintesting.com
    AiT was co-created by Richard Bradshaw and Mark Winteringham

    04:00 - Test Automation Days
    The conference where Richard is giving his keynote — testautomationdays.com
    24:48 - James Thomas
    The "kid in a candy shop" himself — James's blog and LinkedIn
    31:42 - Outcome Engineering (O16G)
    The article Richard shared before recording — worth tracking down if you're interested in where agentic development practices are heading
    32:13 - Simon Wardley
    If you're not following Simon Wardley, please follow Simon Wardley! His work on Wardley Maps and situational awareness in strategy is essential reading
    Simon's LinkedIn
    43:30 - Abby Bangser
    Vern's go-to person for all things observability. Abby's LinkedIn
    46:04 - Noah Susman
    As it turns out, the quote Vern references, about advanced monitoring being "indistinguishable from testing", was not by Noah! It was Ed Keyes at GTAC 2007.
    Noah's blog and LinkedIn
    59:30 - Angie Jones
    Vern's been reading Angie's work on testing AI-enabled applications here and here.
    Angie's website and LinkedIn
    01:01:30 - The Lead Dev panel Vernon will be part of
    "How to Measure the Business Impact of AI" — happening 25th February, free to sign up
    01:02:00 - Richard's Selenium Conf talk "Redefining Test Automation"
    The talk that the Test Automation Days keynote is shaping up to be a spiritual successor to.
  • The Vernon Richard Show

    This Was Supposed to Be About Testing

    26-01-2026 | 53 min
    This was supposed to be about testing. Instead, it turned into a conversation about burnout, money, leadership, community, AI, and what it actually takes to build a sustainable life in tech. Richard and Vernon kick off 2026 reflecting on what they’re changing, what they’re rebuilding, and how testing and quality fit into a future shaped by intention rather than hustle.
    Links to stuff we mentioned during the pod:
    05:19 - The Malazan Book of the Fallen by Steven Erikson
    14:59 - The $1k Challenge by Ali Abdaal, which Vernon took part in last year
    17:23 - The video from Daniel Pink on how to have a successful year
    Here's where Daniel talks about having a Challenger Network (but the whole video is 😙🤌🏾)

    18:46 - Toby Sinclair
    Toby's website
    Toby's LinkedIn

    19:24 - Keith Klain
    Keith's blog
    Keith's podcast
    Keith's LinkedIn

    19:25 - Agile Testing Days conference
    35:45 - What is Model Drift?
    41:06 - Glue work
    Tanya's Glue Work presentation, which you can read or watch
    Vernon's talk about how glue work impacts Quality Engineers, Testers, etc.

    48:06 - Gary "GaryVee" Vaynerchuk
    Gary's website
    Gary's YouTube

    00:00 - Intro
    00:54 - Greetings & where have we been?
    01:32 - The holidays
    02:34 - Rest & mood
    04:00 - Routines for success
    05:59 - Push-up challenge!
    08:35 - Dopamine detox
    10:28 - THE EPISODE BEGINS!
    10:29 - What are our personal 2026 themes (rather than resolutions)?
    10:59 - Rich's 2026 themes
    13:10 - Vern's themes
    17:58 - Friendship, loneliness, and being the initiator
    21:28 - Rich has two itches. One about writing...
    21:56 - ...and another about hats
    25:23 - Vern's leadership focus and testing foundations
    31:06 - AI work: data mindset, agents, and the vibe coding divide
    40:11 - Rant about AI testing being stuck in the past
    46:37 - Do "cool" shit and "talk" about it. How to stand out from AI Slop
    50:10 - Our podcast themes for 2026
  • The Vernon Richard Show

    Shifting Left: Agile vs. Waterfall in QA

    21-10-2025 | 1 u.
    In this episode of The Vernon Richard Show, the hosts engage in light-hearted banter about football before diving into a deep discussion on QA, QE, and testing. They explore the concept of 'shift left' in software development, comparing its application in agile versus waterfall methodologies. The conversation shifts to the evolving roles of QA and QE in the context of AI's impact on the industry, emphasizing the importance of task analysis and building a quality culture within teams. The episode concludes with reflections on managing expectations in QA roles and the future of jobs in the field.
    00:00 - Intro
    00:48 - Welcome and "Hey" (may contain traces of ⚽️)
    04:45 - Olly's first question: Does shift left lend itself more to waterfall (than other methodologies)?
    14:41 - Olly's second question: Does this limit how much agile can be used? Is there potentially a new methodology that can emerge from this?
    22:31 - Olly's third question (remixed by Rich a little): "...is it more now a case of making people aware that they can, should be considering things ahead of development?"
    34:24 - Olly's fourth question: How far can you shift-left before it becomes overstepping?
    51:53 - Olly's... which question is this now?! Next question! That works!: Where does the QA role end?

    Links to stuff we mentioned during the pod:
    04:26 - Olly Fairhall
    Olly's LinkedIn
    Here's a link to what Olly sent us

    04:45 - Waterfall (in software development)
    Wikipedia article about the history of the term
    This article goes into a little more detail about the different phases and characteristics of the model 

    07:29 - Dan Ashby's (yes DAN'S!) famous diagram is part of his often-cited "Continuous Testing" post
    07:50 - For folks who don't understand that reference, it's... taken (🥁) from a scene in the movie Taken
    08:10 - Rich's whiteboard used to get a lot more love 😞
    22:31 - Olly's questions and thoughts that are guiding our conversation. Thanks Olly!
    44:12 - The book "Who Not How" by Dan Sullivan and Dr. Benjamin Hardy
    46:33 - Elisabeth Hendrickson
    Get Elisabeth's excellent book Explore It!
    Elisabeth's LinkedIn

    46:49 - Alan Page
    Alan's newsletter
    Alan and Brent's podcast
    Alan's LinkedIn

    51:53 - Kelsey Hightower
    Kelsey did a Q&A at Cloud Native PDX and you can listen to the question and answer I was trying to describe here.
    I urge you to listen to the whole thing. Kelsey is an excellent orator, storyteller, and all-around human ❤️

    55:33 - Rob Sabourin
    My quick Perplexity search for Rob's public material on Task Analysis
    Rob's LinkedIn

    56:59 - Vernon's newsletter "Yeah But Does it Work?!"
    The issue mentioned is called "What Is The Vaughn Tan Rule and How Does It Impact Testing?" and talks about where we might start with unbundling
  • The Vernon Richard Show

    Measuring Software Testing When The Labels Don’t Fit

    01-10-2025 | 1 hr
    This episode is about the struggle to explain, measure, and name the work testers and quality advocates actually do — especially when traditional labels and metrics fall short.
    Links to stuff we mentioned during the pod:
    05:05 - Defect Detection Rate (DDR)
    The rate at which bugs are detected per test case (automated or manual):
    (No. of defects found by test team / No. of test cases executed) * 100
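    The DDR formula above is a few lines in any language; here is a minimal Python sketch (the function name and sample numbers are illustrative, not from the episode):

```python
def defect_detection_rate(defects_found: int, test_cases_executed: int) -> float:
    """DDR = (defects found by the test team / test cases executed) * 100."""
    if test_cases_executed <= 0:
        raise ValueError("test_cases_executed must be positive")
    return defects_found / test_cases_executed * 100

# e.g. 12 defects found across 400 executed test cases
print(defect_detection_rate(12, 400))  # → 3.0
```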

    15:06 - David Evans' LinkedIn
    24:57 - Janet Gregory
    Janet's website
    Janet's LinkedIn

    26:01 - Defect Prevention Rate
    Perplexity search results here

    28:28 - Jerry Weinberg
    Jerry's Wikipedia page (his books are highly recommended)

    49:33 - Shift-Left: The concept of moving testing activities earlier in the software development lifecycle.

    Some resources explaining the Shift-Left concept (Perplexity link)
    00:00 - Intro
    01:11 - Welcome & "woke" testing 😳
    03:15 - QA, QE, Testing… whatever we call it, how do we measure if we're doing a good job?
    03:44 - Vernon’s first experience with testing metrics: more = better?
    05:00 - Defect Detection Rate enters the chat
    06:41 - Rich reverse engineers quality skills needed in the AI era
    10:54 - How do we know if we’re doing any of this well?
    12:40 - Trigger warning: the topic of coverage is incoming 😅
    16:54 - Bugs in production
    21:09 - Automation metrics: flakiness, pass rates, and execution time
    24:29 - Can you measure something that didn’t happen? (Prevention metrics)
    27:43 - Do DORA metrics actually measure prevention?
    32:03 - Here comes Jerry!
    33:50 - The one metric the business cares about...
    36:23 - QA vs QE: whose “quality” are we "assuring"?
    39:25 - What's the story behind the numbers?
    48:29 - Rich brings in Shift Left Testing
    50:14 - Metrics that reach beyond engineering
    53:14 - Rich gets a new perspective on QE and the business
    56:50 - Who does this work? Testers? QEs? Or someone else?
  • The Vernon Richard Show

    When Everything Sounds Like Testing… How Do You Explain What You Really Do?

    09-09-2025 | 53 min
    In this episode, Richard and Vernon delve into the complexities of Quality Assurance (QA), Quality Engineering (QE), and testing in software development. They explore the evolution of these concepts, their interrelations, and the importance of metrics in assessing quality. The conversation highlights the need for a holistic approach to quality, emphasizing that both prevention and detection of bugs are essential. The hosts also discuss the challenges of defining these terms and the future of quality in the industry.
    Links to stuff we mentioned during the pod:
    08:50 - Dan Ashby
    We're referring to Dan's excellent post called "Continuous Testing" (featuring his famous diagram!)

    17:13 - Jit Gosai
    Jit's blog
    Jit's Quality Engineering Newsletter 
    Jit's LinkedIn

    19:24 - Quality Talks Podcast
    Stu's Quality Talks podcast, which he co-hosts with Chris Henderson
    Stu's LinkedIn
    Chris's LinkedIn

    19:55 - The Testing Peers podcast
    22:00 - DORA Metrics: DORA metrics are a set of key performance indicators developed by Google’s DevOps Research and Assessment team to measure the effectiveness of software delivery and DevOps processes, focusing on both throughput and stability.
    26:13 - A link from Episode 10 where Vern discusses Glue Work (be sure to check out the show notes on that episode)
    Quick overview of DORA metrics

    34:43 - The Credibility Playbook
    A video course by Vernon as he experiments with building digital products. Check it out and let him know what you think of it! 😊

    46:24 - Ali Abdaal
    Ali's website
    Ali's YouTube

    00:00 - Intro
    01:36 - Welcome
    02:40 - Today's topic: What the hell is QA? QE? Testing? And is it all changing?
    03:00 - Why is this bugging Rich?
    05:11 - Fruit fly tangent 🍌🍊🍎🪰🐝🦋
    06:27 - Rich's take on QA, QE, and Testing
    08:31 - Vern's take on QA, QE, and Testing
    11:15 - Is shift-left testing the same as QE?
    13:05 - When the team tests early... is that QE then?!
    16:18 - What's the big deal if we can’t define QE clearly?
    19:27 - Why the Efficiency Era makes this even harder
    22:55 - Trying to draw the Testing, QA, QE, Venn diagram
    27:24 - Getting the QA, QE, Testing blend just right. What's the right mix?
    29:52 - The kinds of work we take on as our careers grow
    34:08 - What Testers get rewarded for
    45:34 - How Ali Abdaal helped Vern think differently about quality
    48:18 - Rich talks measurement


About The Vernon Richard Show

Vernon Richards and Richard Bradshaw discuss all things software testing, quality engineering, and life in the world of software development. Plus our own personal journeys navigating our careers and lives.
Podcast website
