
52 Weeks of Cloud

Noah Gift
Latest episode

225 episodes

  • 52 Weeks of Cloud

    ELO Ratings Questions

    18-9-2025 | 3 Min.
    Key Argument
    Thesis: Using ELO for AI agent evaluation = measuring noise
    Problem: Wrong evaluators, wrong metrics, wrong assumptions
    Solution: Quantitative assessment frameworks
    The Comparison (00:00-02:00)
    Chess ELO
    FIDE arbiters: 120hr training
    Binary outcome: win/loss
    Test-retest: r=0.95
    Cohen's κ=0.92
    AI Agent ELO
    Random users: Google engineer? CS student? 10-year-old?
    Undefined dimensions: accuracy? style? speed?
    Test-retest: r=0.31 (coin flip)
    Cohen's κ=0.42 (see the κ sketch below)
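    For context on these κ values: Cohen's kappa measures how far two raters' agreement exceeds what chance alone would produce. A minimal TypeScript sketch of the computation; the rater labels below are hypothetical, not data from the episode:

    ```ts
    // Cohen's kappa for two raters labeling the same items:
    // kappa = (pO - pE) / (1 - pE), where pO is observed agreement
    // and pE is the agreement expected by chance.
    function cohensKappa(rater1: string[], rater2: string[]): number {
      const n = rater1.length;
      const labels = [...new Set([...rater1, ...rater2])];
      // Observed agreement: fraction of items both raters labeled identically.
      const pO = rater1.filter((label, i) => label === rater2[i]).length / n;
      // Chance agreement: sum over labels of P(rater1=label) * P(rater2=label).
      const pE = labels.reduce((acc, label) => {
        const p1 = rater1.filter((l) => l === label).length / n;
        const p2 = rater2.filter((l) => l === label).length / n;
        return acc + p1 * p2;
      }, 0);
      return (pO - pE) / (1 - pE);
    }

    // Hypothetical votes on which of two agent outputs is better:
    const expertA = ["A", "A", "B", "A", "B", "B", "A", "B"];
    const expertB = ["A", "A", "B", "A", "B", "A", "A", "B"];
    console.log(cohensKappa(expertA, expertB).toFixed(2)); // 0.75: substantial agreement
    ```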
    Cognitive Bias Cascade (02:00-03:30)
    Anchoring: 34% rating variance in first 3 seconds
    Confirmation: 78% selective attention to preferred features
    Dunning-Kruger: d=1.24 effect size
    Result: Circular preferences (A>B>C>A)
    The Quantitative Alternative (03:30-05:00)
    Objective Metrics
    McCabe complexity ≤20
    Test coverage ≥80% (both thresholds enforced in the gate sketch below)
    Big O notation comparison
    Self-admitted technical debt
    Reliability: r=0.91 vs r=0.42
    Effect size: d=2.18
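    A hedged sketch of what such an objective gate could look like: the two thresholds come from the episode, but the metrics interface and the file data are illustrative assumptions, and the numbers would normally come from a complexity analyzer and a coverage report.

    ```ts
    // Hypothetical quality gate over tool-reported metrics instead of preference votes.
    interface FileMetrics {
      path: string;
      cyclomaticComplexity: number; // McCabe complexity, max per file
      testCoverage: number;         // 0.0 - 1.0
    }

    function passesQualityGate(files: FileMetrics[]): boolean {
      return files.every(
        (f) => f.cyclomaticComplexity <= 20 && f.testCoverage >= 0.8
      );
    }

    const report: FileMetrics[] = [
      { path: "src/router.ts", cyclomaticComplexity: 12, testCoverage: 0.91 },
      { path: "src/parser.ts", cyclomaticComplexity: 24, testCoverage: 0.86 },
    ];
    console.log(passesQualityGate(report)); // false: parser.ts exceeds complexity 20
    ```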
    Dream Scenario vs Reality (05:00-06:00)
    Dream
    World's best engineers
    Annotated metrics
    Standardized criteria
    Reality
    Random internet users
    No expertise verification
    Subjective preferences
    Key Statistics
    Metric | Chess | AI Agents
    Inter-rater reliability | κ=0.92 | κ=0.42
    Test-retest | r=0.95 | r=0.31
    Temporal drift | ±10 pts | ±150 pts
    Hurst exponent | 0.89 | 0.31
    Takeaways
    Stop: Using preference votes as quality metrics
    Start: Automated complexity analysis
    ROI: 4.7 months to break even
    Citations Mentioned
    Kapoor et al. (2025): "AI agents that matter" - κ=0.42 finding
    Santos et al. (2022): Technical Debt Grading validation
    Regan & Haworth (2011): Chess arbiter reliability κ=0.92
    Chapman & Johnson (2002): 34% anchoring effect
    Quotable Moments
    "You can't rate chess with basketball fans"
    "0.31 reliability? That's a coin flip with extra steps"
    "Every preference vote is a data crime"
    "The psychometrics are screaming"
    Resources
    Technical Debt Grading (TDG) Framework
    PMAT (Pragmatic AI Labs MCP Agent Toolkit)
    McCabe Complexity Calculator
    Cohen's Kappa Calculator

    🔥 Hot Course Offers:
    🤖 Master GenAI Engineering - Build Production AI Systems
    🦀 Learn Professional Rust - Industry-Grade Development
    📊 AWS AI & Analytics - Scale Your ML in Cloud
    ⚡ Production GenAI on AWS - Deploy at Enterprise Scale
    🛠️ Rust DevOps Mastery - Automate Everything
    🚀 Level Up Your Career:
    💼 Production ML Program - Complete MLOps & Cloud Mastery
    🎯 Start Learning Now - Fast-Track Your ML Career
    🏢 Trusted by Fortune 500 Teams
    Learn end-to-end ML engineering from industry veterans at PAIML.COM
  • 52 Weeks of Cloud

    The 2X Ceiling: Why 100 AI Agents Can't Outcode Amdahl's Law

    17-9-2025 | 4 Min.
    AI coding agents face the same fundamental limitation as parallel computing: Amdahl's Law. Just as 10 cooks can't make soup 10x faster, 10 AI agents can't code 10x faster due to inherent sequential bottlenecks.
    📚 Key Concepts
    The Soup Analogy
    Multiple cooks can divide tasks (prep, boiling water, etc.)
    But certain steps MUST be sequential (can't stir before ingredients are in)
    Adding more cooks hits diminishing returns quickly
    Perfect metaphor for parallel processing limits
    Amdahl's Law Explained
    Mathematical principle: Speedup = 1 / (Sequential% + Parallel%/N)
    Asymptotic relationship = rapid plateau
    Sequential work becomes the hard ceiling
    Even infinite workers can't overcome sequential bottlenecks
    💻 Traditional Computing Bottlenecks
    I/O Operations - disk reads/writes
    Network calls - API requests, database queries
    Database locks - transaction serialization
    CPU waiting - can't parallelize waiting
    Result: 16 cores ≠ 16x speedup in real world
    🤖 Agentic Coding Reality: The New Bottlenecks
    1. Human Review (The New I/O)
    Code must be understood by humans
    Security validation required
    Business logic verification
    Can't parallelize human cognition
    2. Production Deployment
    Sequential by nature
    One deployment at a time
    Rollback requirements
    Compliance checks
    3. Trust Building
    Can't parallelize reputation
    Bad code = deleted customer data
    Revenue impact risks
    Trust accumulates sequentially
    4. Context Limits
    Human cognitive bandwidth
    Understanding 100k+ lines of code
    Mental model limitations
    Communication overhead
    📊 The Numbers (Theoretical Speedups)
    1 agent: 1.0x (baseline)
    2 agents: ~1.3x speedup
    10 agents: ~1.8x speedup
    100 agents: ~1.96x speedup
    ∞ agents: ~2.0x speedup (theoretical maximum; reproduced in the sketch below)
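    These figures are consistent with Amdahl's Law if roughly half the work is inherently sequential; that 50% fraction is an assumption for illustration, not a number from the episode. A minimal sketch:

    ```ts
    // Amdahl's Law: speedup = 1 / (s + (1 - s) / n), where s is the
    // fraction of work that is inherently sequential.
    function amdahlSpeedup(sequentialFraction: number, workers: number): number {
      return 1 / (sequentialFraction + (1 - sequentialFraction) / workers);
    }

    // Assuming ~50% of the work (review, deployment, trust) is sequential:
    for (const n of [1, 2, 10, 100, 1_000_000]) {
      console.log(`${n} agents: ${amdahlSpeedup(0.5, n).toFixed(2)}x`);
    }
    // 1 agents: 1.00x, 2: 1.33x, 10: 1.82x, 100: 1.98x, and ~2.00x as n grows
    ```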
    🔑 Key Takeaways
    AI Won't Fully Automate Coding Jobs
    More like enhanced assistants than replacements
    Human oversight remains critical
    Trust and context are irreplaceable

    Efficiency Gains Are Limited
    Real-world ceiling around 2x improvement
    Not the exponential gains often promised
    Similar to other parallelization efforts

    Success Factors for Agentic Coding
    Well-organized human-in-the-loop processes
    Clear review and approval workflows
    Incremental trust building
    Realistic expectations

    🔬 Research References
    Princeton AI research on agent limitations
    "AI Agents That Matter" paper findings
    Empirical evidence of diminishing returns
    Real-world case studies
    💡 Practical Implications
    For Developers:
    Focus on optimizing the human review process
    Build better UI/UX for code review
    Implement incremental deployment strategies
    For Organizations:
    Set realistic productivity expectations
    Invest in human-agent collaboration tools
    Don't expect 10x improvements from more agents
    For the Industry:
    Paradigm shift from "replacement" to "augmentation"
    Need for new metrics beyond raw speed
    Focus on quality over quantity of agents
    🎬 Episode Structure
    Hook: The soup cooking analogy
    Theory: Amdahl's Law explanation
    Traditional: Computing bottlenecks
    Modern: Agentic coding bottlenecks
    Reality Check: The 2x ceiling
    Future: Optimizing within constraints
    🗣️ Quotable Moments
    "10 agents don't code 10 times faster, just like 10 cooks don't make soup 10 times faster"
    "Humans are the new I/O bottleneck"
    "You can't parallelize trust"
    "The theoretical max is 2x faster - that's the reality check"
    🤔 Discussion Questions
    Is the 2x ceiling permanent or can we innovate around it?
    What's more valuable: speed or code quality?
    How do we optimize the human bottleneck?
    Will future AI models change these limitations?
    📝 Episode Tagline
    "When infinite AI agents hit the wall of human review, Amdahl's Law reminds us that some things just can't be parallelized - including trust, context, and the courage to deploy to production."

  • 52 Weeks of Cloud

    Plastic Shamans of AGI

    21-5-2025 | 10 Min.
    The plastic shamans of OpenAI
  • 52 Weeks of Cloud

    The Toyota Way: Engineering Discipline in the Era of Dangerous Dilettantes

    21-5-2025 | 14 Min.
    Dangerous Dilettantes vs. Toyota Way Engineering
    Core Thesis
    The influx of AI-powered automation tools creates dangerous dilettantes - practitioners who know just enough to be harmful. The Toyota Production System (TPS) principles provide a battle-tested framework for integrating automation while maintaining engineering discipline.
    Historical Context
    Toyota Way formalized ~2001
    DevOps principles derive from TPS
    Coincided with post-dotcom crash startups
    Decades of manufacturing automation parallels modern AI-based automation
    Dangerous Dilettante Indicators
    Promises magical automation without understanding systems
    Focuses on short-term productivity gains over long-term stability
    Creates interfaces that hide defects rather than surfacing them
    Lacks understanding of production engineering fundamentals
    Prioritizes feature velocity over deterministic behavior
    Toyota Way Implementation for AI-Enhanced Development
    1. Long-Term Philosophy Over Short-Term Gains
    ```js
    // Anti-pattern: Brittle automation script
    let quick_fix = agent.generate_solution(problem, {
      optimize_for: "immediate_completion",
      validation: false
    });

    // TPS approach: Sustainable system design
    let sustainable_solution = engineering_system
      .with_agent_augmentation(agent)
      .design_solution(problem, {
        time_horizon_years: 2,
        observability: true,
        test_coverage_threshold: 0.85,
        validate_against_principles: true
      });
    ```
    Build systems that remain maintainable across years
    Establish deterministic validation criteria before implementation
    Optimize for total cost of ownership, not just initial development
    2. Create Continuous Process Flow to Surface Problems
    Implement CI pipelines that surface defects immediately:
    Static analysis validation
    Type checking (prefer strong type systems)
    Property-based testing
    Integration tests
    Performance regression detection

    Build flow: make lint → make typecheck → make test → make integration → make benchmark
    Fail fast at each stage
    Force errors to surface early rather than be hidden by automation
    Agent-assisted development must enhance visibility, not obscure it
    3. Pull Systems to Prevent Overproduction
    Minimize code surface area - only implement what's needed
    Prefer refactoring to adding new abstractions
    Use agents to eliminate boilerplate, not to generate speculative features
    ```ts
    // Prefer minimal implementations
    function processData<T>(data: T[]): Result {
      // Use an agent to generate only the exact transformation needed
      // Not to create a general-purpose framework
    }
    ```
    4. Level Workload (Heijunka)
    Establish consistent development velocity
    Avoid burst patterns that hide technical debt
    Use agents consistently for small tasks rather than large sporadic generations
    5. Build Quality In (Jidoka)
    Automate failure detection, not just production
    Any failed test/lint/check = full system halt
    Every team member empowered to "pull the andon cord" (stop integration)
    AI-assisted code must pass same quality gates as human code
    Quality gates should be more rigorous with automation, not less
    6. Standardized Tasks and Processes
    Uniform build system interfaces across projects
    Consistent command patterns: make format, make lint, make test, make deploy
    Standardized ways to integrate AI assistance
    Documented patterns for human verification of generated code
    7. Visual Controls to Expose Problems
    Dashboards for code coverage
    Complexity metrics
    Dependency tracking
    Performance telemetry
    Use agents to improve these visualizations, not bypass them
    8. Reliable, Thoroughly-Tested Technology
    Prefer languages with strong safety guarantees (Rust, OCaml, TypeScript over JS)
    Use static analysis tools (clippy, eslint)
    Property-based testing over example-based
    ```rust
    #[test]
    fn property_based_validation() {
        proptest!(|(input: Vec<u8>)| {
            let result = process(&input);
            // Must hold for all inputs
            assert!(result.is_valid_state());
        });
    }
    ```
    9. Grow Leaders Who Understand the Work
    Engineers must understand what agents produce
    No black-box implementations
    Leaders establish a culture of comprehension, not just completion
    10. Develop Exceptional Teams
    Use AI to amplify team capabilities, not replace expertise
    Agents as team members with defined responsibilities
    Cross-training to understand all parts of the system
    11. Respect Extended Network (Suppliers)
    Consistent interfaces between systems
    Well-documented APIs
    Version guarantees
    Explicit dependencies
    12. Go and See (Genchi Genbutsu)
    Debug the actual system, not the abstraction
    Trace problematic code paths
    Verify agent-generated code in context
    Set up comprehensive observability
    ```go
    // Instrument code to make the invisible visible
    func ProcessRequest(ctx context.Context, req *Request) (*Response, error) {
        start := time.Now()
        // Closure so latency is measured at return time, not when defer is declared
        defer func() { metrics.RecordLatency("request_processing", time.Since(start)) }()

        // Log entry point
        logger.WithField("request_id", req.ID).Info("Starting request processing")

        // Processing with tracing points (resp and err are produced here)
        // ...

        // Verify exit conditions
        if err != nil {
            metrics.IncrementCounter("processing_errors", 1)
            logger.WithError(err).Error("Request processing failed")
        }
        return resp, err
    }
    ```
    13. Make Decisions Slowly by Consensus
    Multi-stage validation for significant architectural changes
    Automated analysis paired with human review
    Design documents that trace requirements to implementation
    14. Kaizen (Continuous Improvement)
    Automate common patterns that emerge
    Regular retrospectives on agent usage
    Continuous refinement of prompts and integration patterns
    Technical Implementation Patterns
    AI Agent Integration
    ```ts
    interface AgentIntegration {
      // Bounded scope
      generateComponent(spec: ComponentSpec): Promise<GeneratedComponent>;

      // Surface problems
      validateGeneration(code: string): Promise<ValidationResult>;

      // Continuous improvement
      registerFeedback(generation: string, feedback: Feedback): void;
    }
    ```
    Safety Control Systems
    Rate limiting (sketched below)
    Progressive exposure
    Safety boundaries
    Fallback mechanisms
    Manual oversight thresholds
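    A minimal sketch of the first item, rate limiting, as a token bucket gating agent invocations; the class, the limits, and the fallback behavior are illustrative assumptions, not from the episode:

    ```ts
    // Token-bucket rate limiter bounding how often an agent may be invoked.
    class TokenBucket {
      private tokens: number;
      private lastRefill = Date.now();

      constructor(private capacity: number, private refillPerSec: number) {
        this.tokens = capacity;
      }

      tryAcquire(): boolean {
        const now = Date.now();
        // Refill proportionally to elapsed time, capped at capacity.
        this.tokens = Math.min(
          this.capacity,
          this.tokens + ((now - this.lastRefill) / 1000) * this.refillPerSec
        );
        this.lastRefill = now;
        if (this.tokens >= 1) {
          this.tokens -= 1;
          return true;
        }
        return false; // Caller queues the request or falls back to manual process.
      }
    }

    const agentLimiter = new TokenBucket(5, 0.5); // burst of 5, ~1 call per 2s sustained
    if (!agentLimiter.tryAcquire()) {
      console.log("Agent budget exhausted; escalating to manual oversight");
    }
    ```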
    Example: CI Pipeline with Agent Integration
    ```yaml
    # ci-pipeline.yml
    stages:
      - lint
      - test
      - integrate
      - deploy

    lint:
      script:
        - make format-check
        - make lint
        # Agent-assisted code must pass same checks
        - make ai-validation

    test:
      script:
        - make unit-test
        - make property-test
        - make coverage-report
        # Coverage thresholds enforced
        - make coverage-validation

    # ...
    ```
    Conclusion
    Agents provide useful automation when bounded by rigorous engineering practices. The Toyota Way principles offer proven methodology for integrating automation without sacrificing quality. The difference between a dangerous dilettante and an engineer isn't knowledge of the latest tools, but understanding of fundamental principles that ensure reliable, maintainable systems.

  • 52 Weeks of Cloud

    DevOps Narrow AI Debunking Flowchart

    16-5-2025 | 11 Min.
    Extensive Notes: The Truth About AI and Your Coding Job
    Types of AI
    Narrow AI
    Not truly intelligent
    Pattern matching and full text search
    Examples: voice assistants, coding autocomplete
    Useful but contains bugs
    Multiple narrow AI solutions compound bugs
    Get in, use it, get out quickly

    AGI (Artificial General Intelligence)
    No evidence we're close to achieving this
    May not even be possible
    Would require human-level intelligence
    Needs consciousness to exist
    Consciousness: the ability to recognize what's happening in the environment
    No concept of this in narrow AI approaches
    Pure fantasy and magical thinking

    ASI (Artificial Super Intelligence)
    Even more fantasy than AGI
    No evidence at all it's possible
    More science fiction than reality

    The DevOps Flowchart Test
    Can you explain what DevOps is?
    If no → You're incompetent on this topic
    If yes → Continue to next question

    Does your company use DevOps?
    If no → You're inexperienced and a magical thinker
    If yes → Continue to next question

    Why would you think narrow AI has any form of intelligence?
    Anyone claiming AI will automate coding jobs while understanding DevOps is likely:
    A magical thinker
    Unaware of scientific process
    A grifter

    Why DevOps Matters
    Proven methodology similar to Toyota Way
    Based on continuous improvement (Kaizen)
    Look-and-see approach to reducing defects
    Constantly improving build systems, testing, linting
    No AI component other than basic statistical analysis
    Feedback loop that makes systems better
    The Reality of Job Automation
    People who do nothing might be eliminated
    Not AI automating a job if they did nothing

    Workers who create negative value
    People who create bugs at 2AM
    Their elimination isn't AI automation

    Measuring Software Quality
    High churn files correlate with defects
    Constant changes to the same file indicate not knowing what you're doing
    DevOps patterns help identify issues through (see the churn sketch after this list):
    Tracking file changes
    Measuring complexity
    Code coverage metrics
    Deployment frequency
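    A minimal sketch of the first of these, measuring per-file churn from git history; it assumes a Node/TypeScript environment run at the root of a git repository, and the ranking heuristic is illustrative:

    ```ts
    // Rank files by churn (number of commits touching them) over a window.
    import { execSync } from "node:child_process";

    function fileChurn(sinceDays = 90): Map<string, number> {
      // Emits only file names, with blank lines separating commits.
      const log = execSync(
        `git log --since="${sinceDays} days ago" --name-only --pretty=format:`,
        { encoding: "utf8" }
      );
      const churn = new Map<string, number>();
      for (const file of log.split("\n").filter((l) => l.trim() !== "")) {
        churn.set(file, (churn.get(file) ?? 0) + 1);
      }
      return churn;
    }

    // Files changed most often are the first candidates for defect review.
    const top = [...fileChurn().entries()].sort((a, b) => b[1] - a[1]).slice(0, 10);
    console.table(top);
    ```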

    Conclusion
    Very early stages of combining narrow AI with DevOps
    Narrow AI tools are useful but limited
    Need to look beyond magical thinking
    Opinions don't matter if you:
    Don't understand DevOps
    Don't use DevOps
    Claim to understand DevOps but believe narrow AI will replace developers

    Raw Assessment
    If you don't understand DevOps → Your opinion doesn't matter
    If you understand DevOps but don't use it → Your opinion doesn't matter
    If you understand and use DevOps but think AI will automate coding jobs → You're likely a magical thinker or grifter



About 52 Weeks of Cloud

A weekly podcast on technical topics related to cloud computing, including MLOps, LLMs, AWS, Azure, GCP, multi-cloud, and Kubernetes.
Podcast website
