Reliability Enablers

Ash Patel & Sebastian Vietz

Available episodes

5 of 70
  • You (and AI) can't automate reliability away
    What if the hardest part of reliability has nothing to do with tooling or automation? Jennifer Petoff explains why real reliability comes from the human workflows wrapped around the engineering work.

    Everyone seems to think AI will automate reliability away. I keep hearing the same story: “Our tooling will catch it.” “Copilots will reduce operational load.” “Automation will mitigate incidents before they happen.”

    But here’s a hard truth to swallow: AI only automates the mechanical parts of reliability — the machine in the machine. The hard parts haven’t changed at all. You still need teams with clarity on system boundaries. You still need consistent approaches to resolution. You still need postmortems that drive learning rather than blame.

    AI doesn’t fix any of that. If anything, it exposes every organizational gap we’ve been ignoring. And that’s exactly why I wanted today’s guest on.

    Jennifer Petoff is Director of Program Management for Google Cloud Platform and Technical Infrastructure education. Every day, she works with SREs at Google, as well as with SREs at other companies through her public speaking and Google Cloud customer engagements. Even if you have never touched GCP, you have still been influenced by her work at some point in your SRE career. She is co-editor of Google’s original Site Reliability Engineering book from 2016. Yeah, that one!

    It was my immense pleasure to have her join me to discuss the internal dynamics behind successful reliability initiatives. Here are 5 highlights from our talk:

    3 issues stifling individual SREs’ work

    To start, I wanted to know from Jennifer the kinds of challenges she has seen individual SREs face when attempting to introduce or reinforce reliability improvements within their teams or the broader organization. She grouped these challenges into 3 main categories:
    * Cultural issues (with a look into Westrum’s typology of organizational culture)
    * Insufficient buy-in from stakeholders
    * Inability to communicate the value of reliability work

    Organizations with generative cultures have 30% better organizational performance.

    A key highlight from this topic came from her look at DORA research, an annual survey of thousands of tech professionals and the research upon which the book Accelerate is based. It showed that organizations with generative cultures have 30% better organizational performance. In other words, the best technology, tools, and processes will get you good results, but culture raises the bar further. A generative culture also makes it easier to implement the more technical aspects of DevOps or SRE that are associated with improved organizational performance.

    Hands-on is the best kind of training

    We then explored structured approaches that ensure consistency, build capability, and deliberately shape reliability culture. As they say, culture eats strategy for breakfast!

    One key example Jennifer gave was the hands-on approach they take at Google. She believes that adults learn by doing; in other words, SREs gain confidence through hands-on work. Where possible, training programs should move away from passive listening to lectures and toward hands-on exercises that mimic real SRE work, especially troubleshooting.

    One specific exercise that Google has built internally is simulating production breakages. Engineers undergoing that training get to troubleshoot a real system, built for this purpose, in a safe environment. The results have been profound: Jennifer’s team saw a tremendous boost in confidence in survey results. This confidence is focused on job-related behaviors, which, when repeated over time, reinforce a culture of reliability.

    Reliability is mandatory for everybody

    Another thing Jennifer told me Google did differently was making reliability a mandatory part of every engineer’s curriculum, not only SREs’. In her words: “When we first spun up the SRE Education team, our focus was squarely on our SREs. However, that’s like preaching to the choir. SREs are usually bought into reliability. A few years in, our leadership was interested in propagating the reliability-focused culture of SRE to all of Google’s development teams, a challenge an order of magnitude greater than training SREs.”

    How did they achieve this mandate?
    * They developed a short, engaging (and mandatory) production safety training
    * That training has now been taken by tens of thousands of Googlers
    * Jennifer attributes this initiative’s success to how they “SRE’d the program”: “We ran a canary followed by a progressive roll-out. We instituted monitoring and set up feedback loops so that we could learn and drive continuous improvement.”

    The result of this massive effort? A very respectable 80%+ net promoter score, with open-text feedback like “best required training ever.” What made this program successful is that Jennifer and her team SRE’d its design and iterative improvement. You can learn more about “How to SRE anything” (from work to life) using her rubric: https://www.reliablepgm.com/how-to-sre-anything/

    Reliability gets rewarded just like feature work

    Jennifer then talked about how Google mitigates a risk that I think every reliability engineer wishes could be solved at their organization: having great reliability work rewarded at the same level as great feature work. For development and operations teams alike at Google, this means making sure “grungy work” like tech debt reduction, automation, and other reliability-improving activities is rewarded equally with shiny new product features.

    Organizational reward programs that recognize outstanding work typically have committees. These committees not only look for excellent feature development work but also reward and celebrate foundational activities that improve reliability. This is explicitly built into the rubric for judging award submissions.

    Keep a scorecard of reliability performance

    Jennifer gave another example of how Google judges reliability performance, this time specifically for SRE teams. Google’s Production Excellence (ProdEx) program was created in 2015 to assess and improve production excellence (i.e. reliability improvements) across SRE teams. ProdEx acts like a central scorecard, aggregating metrics from various production health domains to provide a comprehensive overview of an SRE team’s health and the reliability of the services it manages.

    Here are some specifics from the program:
    * Domains include SLOs, on-call workload, alerting quality, and postmortem discipline
    * Reviews are conducted live every few quarters by senior SREs (directors or principal engineers) who are not part of the team’s direct leadership
    * There is a focus on coaching and accountability without shame, to foster psychological safety

    ProdEx serves various levels of the SRE organization by:
    * providing leadership with strategic situational awareness of organizational and system health, and
    * keeping forward momentum around reliability and surfacing team-level issues early to support engineers in addressing them

    Wrapping up

    Having an inside view of reliability mechanisms within a few large organizations, I know that few are actively doing all — or sometimes any — of the reliability enhancers that Google uses and Jennifer has graciously shared with us. It’s time to get the ball rolling. What will you do today to make it happen?

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit read.srepath.com
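    The episode stays at the program level and doesn’t describe ProdEx’s internals, but as a rough illustration of the scorecard idea, here is a minimal sketch of how a team-level reliability review in that spirit might aggregate domain scores. The domain names come from the episode; the weights, thresholds, and Python structure are purely hypothetical, not Google’s actual rubric.

```python
from dataclasses import dataclass

# Hypothetical review domains, loosely based on those named in the episode:
# SLOs, on-call workload, alerting quality, and postmortem discipline.
# The weights are illustrative only.
DOMAIN_WEIGHTS = {
    "slo_coverage": 0.30,           # critical services with defined, met SLOs
    "oncall_load": 0.25,            # pages per shift vs. an agreed budget
    "alert_quality": 0.25,          # fraction of pages that were actionable
    "postmortem_discipline": 0.20,  # postmortems written, action items closed
}

@dataclass
class TeamScorecard:
    team: str
    scores: dict[str, float]  # each domain scored 0.0-1.0 by reviewers

    def overall(self) -> float:
        """Weighted aggregate across all domains; missing domains score zero."""
        return sum(w * self.scores.get(d, 0.0) for d, w in DOMAIN_WEIGHTS.items())

    def weak_spots(self, threshold: float = 0.6) -> list[str]:
        """Domains below the threshold, i.e. where coaching should focus."""
        return [d for d in DOMAIN_WEIGHTS if self.scores.get(d, 0.0) < threshold]

card = TeamScorecard(
    team="payments-sre",
    scores={"slo_coverage": 0.9, "oncall_load": 0.7,
            "alert_quality": 0.4, "postmortem_discipline": 0.8},
)
# Prints something like: payments-sre: overall=0.70, weak spots=['alert_quality']
print(f"{card.team}: overall={card.overall():.2f}, weak spots={card.weak_spots()}")
```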
    --------  
    28:20
  • #67 Why the SRE Book Fails Most Orgs — Lessons from a Google Veteran
    A new or growing SRE team. A copy of the book. A company that says it cares about reliability. What happens next? Usually… not much.

    In this episode, I sit down with Dave O’Connor, a 16-year Google SRE veteran, to talk about what happens when organizations cargo-cult reliability practices without understanding the context they were born in.

    You might know him for his self-deprecating wit and legendary USENIX blurb about being “complicit in the development of the SRE function.”

    This one’s a treat — less “here’s a shiny new tool” and more “here’s what reliability actually looks like when you’ve seen it all.”

    ✨ No vendor plugs from Dave at all, just a good old-fashioned chat about what works and what doesn’t.

    Here’s what we dive into:
    * The adoption trap: Why SRE efforts often fail before they begin — especially when new hires care more about reliability than the org ever intended.
    * The SRE book dilemma: Dave’s take on why following the SRE book chapter by chapter is a trap for most companies (and what to do instead).
    * The cost of “caring too much”: How engineers burn out trying to force reliability into places it was never funded to live.
    * You build it, you run it (but should you?): Not everyone’s cut out for incident command — and why pretending otherwise sets teams up to fail.
    * Buying vs. building: The real reason even conservative enterprises are turning into software shops — and the reliability nightmare that follows.

    We also discuss the evolving role of reliability in organizations today, from being mistaken for “just ops” to becoming a strategic investment (when done right).

    Dave’s seen the waves come and go in SRE — and he’s still optimistic. That alone is worth a listen.

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit read.srepath.com
    --------  
    30:47
  • #66 - Unpacking 2025 SRE Report’s Damning Findings
    I know it’s already six months into 2025, but we recorded this almost three months ago. I’ve been busy with my foray into the world of tech consulting and training — and, well, editing these podcast episodes takes time and care.

    This episode was prompted by the 2025 Catchpoint SRE Report, which dropped some damning but all-too-familiar findings:
    * 53% of orgs still define reliability as uptime only, ignoring degraded experience and hidden toil
    * Manual effort is creeping back in, reversing five years of automation gains
    * 41% of engineers feel pressure to ship fast, even when it undermines long-term stability

    To unpack what this actually means inside organizations, I sat down with Sebastian Vietz, Director of Reliability Engineering at Compass Digital and co-host of the Reliability Enablers podcast. Sebastian doesn’t just talk about technical fixes — he focuses on the organizational frictions that stall change, burn out engineers, and leave “reliability” as a slide deck instead of a lived practice.

    We dig into:
    * How SREs get stuck as messengers of inconvenient truths
    * What it really takes to move from advocacy to adoption — without turning your whole org into a cost center
    * Why tech is more like milk than wine (Sebastian explains)
    * And how SREs can strengthen — not compete with — security, risk, and compliance teams

    This one’s for anyone tired of reliability theatrics. No kumbaya around K8s here. Just an exploration of the messy, human work behind making systems and teams more resilient.

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit read.srepath.com
    --------  
    30:16
  • #65 - In Critical Systems, 99.9% Isn’t Reliable — It’s a Liability
    Most teams talk about reliability with a margin for error. “What’s our SLO? What’s our budget for failure?” But in the energy sector? There is no acceptable downtime. Not even a little.

    In this episode, I talk with Wade Harris, Director of FAST Engineering in Australia, who’s spent 15+ years designing and rolling out monitoring and control systems for critical energy infrastructure: power stations, solar farms, SCADA networks, you name it.

    What makes this episode different is that Wade isn’t a reliability engineer by title, yet reliability is baked into everything his team touches. And that matters more than ever as software creeps deeper into operational technology (OT) and the cloud tries to stake its claim in critical systems.

    We cover:
    * Why 100% uptime is the minimum bar, not a stretch goal
    * How the rise of renewables has increased system complexity — and what that means for monitoring
    * Why bespoke integration and SCADA spaghetti are still normal (and here to stay)
    * The reality of cloud risk in critical infrastructure (“the cloud is just someone else’s computer”)
    * What software engineers need to understand if they want their products used in serious environments

    This isn’t about observability dashboards or DevOps rituals. This is reliability when the lights go out and people risk getting hurt if you get it wrong. And it’s a reminder: not every system lives in a feature-driven world. Some systems just have to work. Always. No matter what.

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit read.srepath.com
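    To put the episode title in perspective, a quick back-of-the-envelope calculation (mine, not from the episode) shows how much downtime even a 99.9% availability target quietly budgets for:

```python
# Error budget arithmetic: minutes of downtime permitted by an availability
# target over a given window.
def allowed_downtime_minutes(availability: float, window_days: float) -> float:
    return (1.0 - availability) * window_days * 24 * 60

for target in (0.999, 0.9999, 0.99999):
    monthly = allowed_downtime_minutes(target, 30)
    yearly = allowed_downtime_minutes(target, 365)
    print(f"{target:.3%} -> {monthly:.1f} min/month, {yearly:.1f} min/year")

# Prints (approximately):
#   99.900% -> 43.2 min/month, 525.6 min/year
#   99.990% -> 4.3 min/month, 52.6 min/year
#   99.999% -> 0.4 min/month, 5.3 min/year
# In other words, "three nines" still budgets for nearly nine hours of outage a
# year; a non-starter when downtime means the lights go out.
```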
    --------  
    28:28
  • #64 - Using AI to Reduce Observability Costs
    Exploring how to manage observability tool sprawl, reduce costs, and leverage AI to make smarter, data-driven decisions.

    It’s been a hot minute since the last episode of the Reliability Enablers podcast. Sebastian and I have been working on a few things in our realms. On a personal and work front, I’ve been to over 25 cities in the last 3 months and need a breather.

    Meanwhile, listen to this interesting vendor, Ruchir Jha from Cardinal, who is working on the cutting edge of o11y to keep costs from spiraling out of control. (To the skeptics: he did not pay me for this episode.)

    Here’s an AI-generated summary of what you can expect in our conversation:

    In this conversation, we explore cutting-edge approaches to FinOps, i.e. cost optimization for observability. You’ll hear about three pressing topics:
    * Managing tool sprawl: Insights into the common challenge of juggling 5-15 tools and how to identify which ones deliver real value.
    * Reducing observability costs: Techniques to track and trim waste, including how to uncover cost hotspots like overused or redundant metrics.
    * AI for observability decisions: Practical ways AI can simplify complex data, empowering non-technical stakeholders to make informed decisions.

    We also touch on the balance between open-source solutions like OpenTelemetry and commercial observability tools. Learn how these strategies, informed by Ruchir’s experience at Netflix, can help streamline observability operations and cut costs without sacrificing reliability.

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit read.srepath.com
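    As a rough, generic illustration of the “cost hotspot” idea discussed in this episode (not Cardinal’s product or any particular vendor’s API), a script like the following could rank metric names by estimated ingest cost to show where observability spend concentrates. All field names, numbers, and prices are hypothetical:

```python
from collections import defaultdict

# Hypothetical per-metric usage records, e.g. pulled from a metrics backend's
# usage or billing export. Everything here is made up for illustration.
usage_records = [
    {"metric": "http_request_duration_seconds", "series": 52_000, "samples_per_day": 450_000_000},
    {"metric": "container_cpu_usage_seconds", "series": 18_000, "samples_per_day": 150_000_000},
    {"metric": "debug_temp_counter", "series": 90_000, "samples_per_day": 700_000_000},
    {"metric": "orders_processed_total", "series": 1_200, "samples_per_day": 10_000_000},
]

COST_PER_MILLION_SAMPLES = 0.03  # assumed unit price, purely illustrative

def cost_hotspots(records, top_n=3):
    """Rank metric names by estimated daily ingest cost."""
    cost_by_metric = defaultdict(float)
    for r in records:
        cost_by_metric[r["metric"]] += r["samples_per_day"] / 1e6 * COST_PER_MILLION_SAMPLES
    return sorted(cost_by_metric.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

for metric, daily_cost in cost_hotspots(usage_records):
    print(f"{metric:35s} ~${daily_cost:,.2f}/day")

# High-cardinality leftovers like "debug_temp_counter" tend to float to the top,
# which is exactly the kind of waste the episode suggests trimming first.
```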
    --------  
    20:42


About Reliability Enablers

Software reliability is a tough topic for engineers in many organizations. The Reliability Enablers (Ash Patel and Sebastian Vietz) know this from experience. Join us as we demystify reliability jargon like SRE, DevOps, and more. We interview experts and share practical insights. Our mission is to help you boost your success in reliability-enabling areas like observability, incident response, release engineering, and more. read.srepath.com
