
Tech Transformed

EM360Tech
Latest episode

343 episodes

  • Tech Transformed

    Why Are Companies Struggling to Integrate AI Models into Business Workflows?

    11-05-2026 | 27 min
    Podcast: Tech Transformed
    Guests: Maxim Fateev, Co-Founder and CTO of Temporal Technologies, and Cornelia Davis, Developer Advocate at Temporal Technologies
    Host: Kevin Petrie, VP of Research at BARC
    Artificial Intelligence (AI) models have been breaking new ground over the last three years, with platforms like OpenAI, Anthropic, and Google’s Gemini racing to boost capabilities month by month. However, for many enterprises, the main challenge is not creating AI prototypes; it's ensuring they can reliably support real business processes.
    In a recent episode of the Tech Transformed podcast, Kevin Petrie, VP of Research at BARC, hosted a discussion with Maxim Fateev, Co-Founder and CTO of Temporal Technologies, and Cornelia Davis, Developer Advocate at Temporal Technologies. They talked about why enterprises find it hard to transition AI from experimentation to production and how infrastructure must change to support autonomous systems.
    Why AI Demos Break in the Real World
    According to Davis, many organisations make a common mistake: they focus on the "happy path" during experiments and overlook real-world operational challenges. “We have always ignored the non-functional requirements until we go to prod at our peril,” Davis said. “A lot of our experimentation is so focused on the models that we forget about the non-functional requirements.”
    This means developers often prioritise model performance but neglect reliability, scaling, and system resilience. Agent frameworks used in experiments—usually lightweight Python or TypeScript libraries—add to the issue.
    “What you’re really building is a highly distributed system that’s calling Large Language Models (LLMs) that will be rate-limited… networks are going to go down,” Davis explained. “When we move into prod, we haven’t considered scale or instability.”
    As enterprises expand AI into their workflows, these overlooked details become critical. A single outage, rate limit, or infrastructure failure can disrupt a complex workflow that involves multiple AI steps.
    Also Watch: Developer Productivity 5X to 10X: Is Durable Execution the Answer to AI Orchestration Challenges?
    What Risks are Surfacing Since the Rise of Agentic Systems?
    The transition from simple AI workflows to autonomous agents adds a new layer of complexity. Traditional AI applications have predictable flows—such as summarising documents, tagging data, or creating recommendations. In contrast, agentic systems choose tools and decide on actions dynamically.
    “When we move from non-agentic to agentic, we introduce unpredictability,” Davis said. “The tools and the order they run in are unpredictable. Whether we go through the agentic loop once or a hundred times is unpredictable.”
    Such unpredictability creates new governance and compliance challenges, especially in regulated industries. “Enterprises are still responsible for predictable outcomes,” Davis noted. “We need stronger audit trails to understand why the agent made the decisions it did.”
    For enterprises, this means AI systems must ensure traceability, accountability, and compliance, even when decision paths differ from one interaction to another.
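    The audit-trail requirement Davis raises can be pictured with a minimal sketch. The class and field names below are hypothetical, not any product's API; the point is simply that each tool invocation in the agentic loop is recorded with the agent's stated rationale, so reviewers can later reconstruct why decisions were made:

    ```python
    import json
    import time
    import uuid


    class AgentAuditTrail:
        """Toy audit log for an agentic loop (illustrative sketch only).

        Records each tool choice, its inputs, the agent's stated rationale,
        and the outcome. Production systems would add tamper-evident storage,
        retention policies, and access controls on top of this idea.
        """

        def __init__(self):
            self.run_id = str(uuid.uuid4())  # ties all events to one agent run
            self.events = []

        def record(self, tool, reason, inputs, outcome):
            self.events.append({
                "run_id": self.run_id,
                "ts": time.time(),
                "tool": tool,
                "reason": reason,    # the rationale the agent gave for this step
                "inputs": inputs,
                "outcome": outcome,
            })

        def export(self):
            # A reviewable, serialisable record of the whole decision path.
            return json.dumps(self.events, indent=2)
    ```

    Even a structure this simple makes the unpredictable agentic loop reviewable after the fact: compliance teams can ask not just what the agent did, but what it claimed it was trying to do at each step.
    
    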
    Why Is Durable Execution the New Foundation for Enterprise AI?
    Fateev argues that to manage such newly surfacing risks, enterprises need a new architectural layer focused on reliability. His concept, “Durable Execution,” aims to ensure that complex workflows keep running even when infrastructure fails.
    “You write code as if failures don’t exist,” Fateev explained. “If a process crashes, we recover all the state and continue executing.” In practical terms, Durable Execution allows long-running AI workflows to survive interruptions—from network outages to system crashes—without losing progress or data.
    This is essential as agents start interacting with real systems and taking real actions. “The moment agents start acting on the external world—changing files, submitting orders—you absolutely don’t want those things to get lost,” Fateev said.
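    The recovery model Fateev describes can be illustrated with a toy sketch. This is not Temporal's actual SDK; it is a simplified, hypothetical journal-based approach showing the core idea: once each step's result is persisted, a restarted process can replay the journal and resume where it left off instead of repeating side effects:

    ```python
    import json
    import os


    class DurableWorkflow:
        """Toy sketch of durable execution (illustrative only).

        Each completed step's result is persisted to a journal, so a process
        restarted after a crash replays recorded results instead of redoing
        work. Real engines like Temporal also handle retries, timers, and
        distributed workers, which this sketch omits.
        """

        def __init__(self, journal_path):
            self.journal_path = journal_path
            self.journal = []
            if os.path.exists(journal_path):
                with open(journal_path) as f:
                    self.journal = json.load(f)  # recover state after a crash
            self.step_index = 0

        def run_step(self, name, fn, *args):
            # If this step already completed before a crash, reuse its result.
            if self.step_index < len(self.journal):
                result = self.journal[self.step_index]["result"]
            else:
                result = fn(*args)  # the side effect happens exactly once
                self.journal.append({"step": name, "result": result})
                with open(self.journal_path, "w") as f:
                    json.dump(self.journal, f)
            self.step_index += 1
            return result
    ```

    Under this model, "you write code as if failures don't exist": the workflow body is an ordinary sequence of steps, and recovery is the journal's job, not the application author's.
    
    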
    The Temporal co-founder further emphasised that enterprise AI will not completely replace traditional software systems.
    “You will always have deterministic code,” he said. “You can’t imagine banks dynamically deciding what a money transfer means.”
    Instead, the future architecture will combine deterministic software with agents that interact through controlled tools and reliable communication layers.
    Also Watch: How Do You Make AI Agents Reliable at Scale?
    Key Takeaways
    AI projects fail in production when non-functional requirements are ignored.
    Agentic systems bring unpredictability, making governance, traceability, and auditability essential.
    Lightweight experimentation frameworks aren't suited for enterprise workloads.
    Durable execution enables reliable AI workflows, ensuring processes continue despite infrastructure failures.
    Enterprise AI will blend deterministic software with agents.

    Chapters
    00:00 Introduction to AI's Impact on Business
    03:53 Challenges in Integrating AI into Business Workflows
    13:00 Understanding Non-Functional Requirements in AI
    19:14 The Role of Orchestration in AI Systems
    24:26 Exploring Durable Execution in AI Workflows
    30:28 Future Architectures for Autonomous AI Systems
    36:05 Key Takeaways for Executives in AI Implementation

    For more information, please visit em360tech.com and temporal.io.
    To learn more about Temporal and Durable Execution, follow:
    Temporal LinkedIn: Temporal Technologies
    Temporal X: @Temporalio
    Temporal YouTube: @Temporalio
    EM360Tech YouTube: @enterprisemanagement360
    EM360Tech LinkedIn: @EM360Tech
    EM360Tech X: @EM360Tech
    #DurableExecution #EnterpriseAI #AIToProduction #AIOrchestration #TemporalTech #AutonomousAgents #SystemReliability #LLMs #TechTransformed #AIWorkflows
  • Tech Transformed

    Why Data Sovereignty Now Drives Enterprise Resilience and Autonomy

    05-05-2026 | 22 min
    For years, data sovereignty was treated as a compliance requirement, focused mainly on keeping data within specific geographic borders. Today, that definition is no longer sufficient. True data sovereignty now encompasses control, visibility, and accountability over data wherever it resides, moves, or is processed.
    In an era shaped by AI adoption and increasingly fragmented cloud environments, sovereignty has become a core driver of enterprise resilience and operational autonomy rather than a regulatory checkbox. In this episode of The Security Strategist, Tim Pfaelzer, Senior Vice President and General Manager, EMEA at Veeam, explains how the meaning of data sovereignty has fundamentally changed.
    From Compliance Concept to Strategic Priority
    A decade ago, data lived in well-defined corporate environments managed by internal IT teams. Today, it is distributed across public cloud platforms, SaaS ecosystems, edge devices, and third-party suppliers. This distribution has expanded the attack surface while making ownership and control significantly harder to define.
    As a result, organisations are being forced to rethink sovereignty not as a legal constraint, but as a foundation for resilience, security, and trust.
    Why Data Sovereignty Requires Cultural Change
    One of the key arguments Pfaelzer makes is that data sovereignty cannot be solved through technology alone. It requires organisational alignment and executive ownership.
    Data is now created and consumed across every business function, which means governance must extend beyond IT. Leadership teams must treat data as a critical business asset, with clear accountability structures across its lifecycle.
    This shift is reinforced by regulatory pressure. Frameworks such as GDPR, the EU Data Act, and emerging AI governance rules now require organisations to demonstrate not only where data is stored, but how it is accessed, processed, and protected.
    The Five Dimensions of Modern Data Control
    Pfaelzer outlines five core dimensions that define effective data sovereignty today:
    Visibility: Knowing where all data exists, including backups and third-party copies
    Ownership: Clear accountability for data across its lifecycle
    Access governance: Controlled and regularly reviewed permissions
    Portability: The ability to move data without vendor lock-in
    Compliance readiness: Continuous compliance rather than audit-only validation

    Together, these determine how much real control an organisation has over its data estate.
    Data Sovereignty as the Foundation of Resilience
    Modern resilience is no longer defined by backup alone. It is defined by recovery speed, completeness, and operational continuity. A prolonged outage or ransomware incident can cause significant damage, but the difference between minutes and days of downtime often comes down to recovery architecture and how rigorously it has been tested under real-world conditions. In this context, sovereignty and resilience are directly linked. Without control over data, there is no predictable recovery.
    AI Has Raised the Stakes
    Artificial intelligence has introduced a new layer of data risk that many organisations are still underestimating. As AI systems increasingly automate decision-making and customer interactions, the quality and integrity of training and operational data become critical. If that data is corrupted, incomplete, or outdated, the impact can spread silently across business processes before detection.
    Unlike infrastructure failures, AI-driven data issues are not always immediately visible. This makes governance even more important. Pfaelzer argues that AI systems should operate under the same strict data controls as human users, including lineage tracking, access controls, and continuous validation of data integrity.
    Why Data Sovereignty Now Defines Enterprise Autonomy
    Ultimately, data sovereignty has become a measure of enterprise independence. Organisations that understand, govern, and control their data are better positioned to manage risk, comply with regulation, and adopt new technologies such as AI safely. Those that fail to do so risk becoming dependent on opaque systems where visibility and control are limited. In 2026 and beyond, sovereignty is no longer just about where data lives. It is about who controls it, how it is used, and how quickly an organisation can recover when things go wrong.
    Takeaways
    Data sovereignty beyond geographic boundaries
    Risks of data fragmentation across cloud and edge environments
    Strategies for rapid data recovery and resilience
    Ensuring data integrity and trust in AI systems
    Control and ownership of data in a distributed landscape

    Chapters
    00:00 Introduction to Data Sovereignty and Resilience
    02:49 The Evolution of Data Management
    06:03 Control, Risk Exposure, and Accountability in Data
    08:57 Data Sovereignty Beyond Geography
    12:04 Ensuring Data Integrity in AI Systems
    15:05 Human Error and Data Management
    18:02 Case Study: University of Manchester's Data Strategy
    21:01 Non-Negotiables for Building a Resilient Data Strategy
  • Tech Transformed

    How Can Enterprises Turn Fragmented Data into Strategic AI Advantage?

    22-04-2026 | 27 min
    Podcast: Tech Transformed
    Guest: John Newton, Chief Innovation Strategist at Hyland
    Host: Dana Gardner, President and Principal Analyst at Interarbor Solutions
    Enterprise leaders rushing to integrate artificial intelligence (AI) into their operations often think the biggest challenge is the technology itself. In reality, the issue is much closer to home. It’s in the piles of unstructured enterprise data spread across documents, systems, and repositories.
    In a recent episode of the Tech Transformed podcast, John Newton, Chief Innovation Strategist at Hyland, sat down with host Dana Gardner, President and Principal Analyst at Interarbor Solutions. They discussed how enterprises can unlock the full value of enterprise AI by addressing fragmented information and building stronger governance frameworks.
    Their conversation highlights that unstructured data is not an obstacle; it is the foundation for next-generation AI-driven productivity. As Newton stated, “The opportunity to truly use AI and use it effectively in your organisation really depends on that unstructured information.”
    For companies looking to adopt AI on a large scale, the real work is in organising and contextualising their internal knowledge.
    Is Unstructured Data the Hidden Fuel for Enterprise AI?
    Most enterprise data does not sit neatly in structured databases. Instead, it exists in contracts, reports, emails, videos, policies, and operational documents, creating a vast amount of unstructured content.
    This sheer volume of unstructured data creates a challenge for AI projects that rely solely on foundation models. Large language models (LLMs) may be trained on public data, but they cannot inherently access proprietary business intelligence.
    Newton argued that enterprise AI must therefore be built around internal knowledge systems. “Foundation models can’t train on your internal information,” he explained. “What you really want is that information to be part of the AI when you’re answering questions, doing research, or executing business processes.”
    This change requires organisations to rethink how information flows across the enterprise. Instead of isolated systems—CRM platforms, ERP databases, content repositories—companies need an interconnected information structure that connects multiple sources in real time.
    Such a structure enables AI systems and AI agents to find the right data at the right time. This also improves decision-making, automation, and operational intelligence.
    How to Reorganise Chaotic Unstructured Data?
    If unstructured data is the fuel, curation is the engine that drives effective AI. Newton emphasised that an enterprise data strategy must start with mapping, organising, and cleaning information assets. The aim is to reduce noise and increase clarity.
    “I like to look at things from a signal-to-noise perspective,” Newton says. “Curation is the key to removing uncertainty in the information.”
    In practice, this typically means combining several enterprise technologies: content management platforms, business process management (BPM), and AI agents backed by LLMs.
    Combined, these strategies make enterprise data more valuable. Enterprises can implement AI models to automate workflows, enhance knowledge discovery, and speed up processes across departments—from finance and manufacturing to customer operations.
    Importantly, Newton noted that this work also allows flexibility in the AI ecosystem. With a solid information foundation, companies can use open-source models, hyperscaler services, or internal AI deployments without tying themselves to a single vendor.
    In other words, an enterprise AI strategy should first focus on data readiness, not model selection.
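    Newton's signal-to-noise framing can be made concrete with a toy curation pass. This is an illustrative sketch, not Hyland's method: it only shows the basic idea of removing noise (near-empty and duplicate documents) before content reaches an AI indexing pipeline:

    ```python
    import hashlib
    import re


    def curate(docs, min_words=20):
        """Toy curation pass to raise signal-to-noise (illustrative only).

        Drops near-empty documents and exact duplicates before indexing.
        Real curation also covers metadata, access rights, freshness, and
        near-duplicate detection, which this sketch omits.
        """
        seen, keep = set(), []
        for doc in docs:
            text = re.sub(r"\s+", " ", doc).strip()  # normalise whitespace
            if len(text.split()) < min_words:
                continue  # too short to carry useful signal
            digest = hashlib.sha256(text.lower().encode()).hexdigest()
            if digest in seen:
                continue  # exact duplicate adds noise, not signal
            seen.add(digest)
            keep.append(text)
        return keep
    ```

    The design choice worth noting is that curation runs before model selection: the same cleaned corpus can then feed open-source models, hyperscaler services, or internal deployments interchangeably.
    
    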
    Key Takeaways
    Unstructured data is the foundation for effective enterprise AI.
    Data curation improves AI accuracy and reduces information noise.
    Connecting enterprise systems enables AI to deliver real-time insights.
    AI guardrails help manage security, compliance, and data governance.
    AI automation boosts employee productivity by reducing repetitive work.

    Chapters
    00:00 Unlocking AI's Potential with Unstructured Data
    05:20 Signal to Noise: The Clarity Challenge
    11:21 Guardrails for AI: Balancing Control and Flexibility
    14:41 Harnessing the Enterprise Context Engine
    17:48 Real-World Applications: Case Studies in AI
    20:37 Curation: The Key to Effective Automation
    22:21 Future Business Value: Productivity and Beyond

    For more information, please visit hyland.com
    To stay up to date on B2B tech, follow EM360Tech:
    YouTube: @enterprisemanagement360
    LinkedIn: @EM360Tech
    X: @EM360Tech
    Follow Hyland on all its major platforms:
    YouTube: @HylandAI
    LinkedIn: Hyland
    X: @Hyland
    #UnstructuredData #EnterpriseAI #DataCuration #AIGuardrails #LLMs #AIAutomation #FragmentedData #InformationManagement #SignalToNoise #EnterpriseContext #TechTransformedPodcast #Hyland #B2BTech
  • Tech Transformed

    Building Resilience in Ecommerce Revenue and Conversion Growth

    23-03-2026 | 31 min
    Ecommerce no longer rewards scale alone. As customer expectations rise and margins tighten, revenue growth and conversion optimisation depend on how well organisations use their data, align their teams, and simplify their technology stack. Brands that fail to adapt are discovering that being data-rich but insight-poor is no longer a survivable position.
    In this episode of Tech Transformed, host Christina Stathopoulos, Founder of Dare to Data, speaks with Kailin Noivo, President and Co-Founder of Noibu, and Rohit Nathany, Chief Product and Technology Officer at Mejuri. Together, they unpack what is holding ecommerce teams back from sustained revenue and conversion growth and what actually works in practice.
    Ecommerce Revenue Growth in a High-Cost, High-Expectation Market
    Today’s ecommerce environment is shaped by rising acquisition costs, operational sprawl, and customers who expect speed, relevance, and reliability by default. Rohit points to macroeconomic pressure, tariffs, and shifting buying behaviour as forces that are squeezing margins while raising the bar for customer experience.
    At the same time, brands are struggling to connect the dots between marketing spend, on-site behaviour, and conversion outcomes. Personalisation is widely discussed, but execution often breaks down when teams cannot see how customer interactions move from ad click to checkout. Kailin describes this as a “perfect storm”: “infrastructure scaled rapidly during the pandemic, and now needs consolidation, optimisation, and clearer ownership.”
    Customer Experience, Team Alignment, and the Practical Use of AI
    Improving customer experience at scale requires more than simply adopting new technology. Organisations also need the right data, processes, and operational alignment to turn those tools into meaningful customer outcomes. It requires teams to work from the same signals and trust the same data. Both Kailin and Rohit stress that AI and automation only deliver value when they remove friction from day-to-day operations rather than adding another layer of complexity.
    Used well, AI can support data analytics by automating routine monitoring, surfacing patterns that matter, and freeing teams to focus on higher-value work. Used poorly, it becomes just another disconnected tool. The difference comes down to team alignment and culture: clear ownership, shared goals, and a willingness to continuously refine how decisions are made.
    For ecommerce leaders, this is less about digital transformation as a slogan and more about operational discipline. Simplifying the stack, aligning teams around outcomes, and treating customer experience as a measurable business driver are what sustain revenue growth when conditions are uncertain.
    If you would like to find out more, visit: https://www.noibu.com/
    Takeaways
    Building resilience in revenue and conversion growth is crucial.
    Ecommerce leaders face a perfect storm of challenges.
    AI is central to enhancing customer experience in ecommerce.
    Data-rich environments often lead to insight-poor outcomes.
    Connecting the dots between data and decisions is essential.
    A strong culture of experimentation fosters innovation.
    Tool consolidation can streamline operations and reduce costs.
    Visibility in data access is critical for effective decision-making.
    Speed of action is influenced by organisational culture.
    Establishing a KPI tree helps unify team efforts.

    Chapters
    00:00 Introduction to Ecommerce Challenges
    06:04 Real-World Applications of Ecommerce Analytics & Monitoring
    11:50 The Role of AI in Ecommerce
    17:57 Data Utilisation and Decision Making
    24:13 Culture and Team Alignment in Ecommerce
    29:56 Practical Strategies for Ecommerce Leaders
  • Tech Transformed

    The New Economics of SaaS: Why Usage-Based Models Are Reshaping Software Pricing

    11-03-2026 | 31 min
    SaaS companies moving toward usage-based and hybrid pricing models are discovering that revenue is no longer secured when the contract is signed.
    Instead, revenue is earned continuously through product usage, introducing new challenges for finance teams around billing accuracy, revenue visibility, forecasting, and managing increasingly complex cost structures driven by AI-powered products.
    In the latest episode of Tech Transformed, host Dana Gardner speaks with Lee Greene, Vice President of Sales at Vayu, about how AI and usage-based pricing are reshaping the economics of SaaS and why many companies are discovering that their pricing strategy is only as strong as the infrastructure behind it.
    One idea from the conversation
    “Pricing strategy is only as strong as the infrastructure behind it.”
    What you will learn in this episode
    • Why usage-based pricing exposes hidden revenue leakage in many SaaS companies
    • How AI-driven products introduce unpredictable cost structures and margin pressure
    • Why disconnected CRM, product, and ERP systems break revenue visibility
    • What finance and revenue teams need to support scalable usage-based billing and forecasting

    Why SaaS Economics Are Breaking Away From Fixed Subscriptions
    Greene argues that usage-based pricing isn’t simply an emerging trend. It is a response to assumptions that no longer hold true.
    Traditional SaaS subscription models were built around predictable costs and relatively stable product usage. AI-driven products have fundamentally changed that equation. Each interaction with an AI-powered system can create variable cost, making static pricing models increasingly difficult to sustain.
    This shift is also changing buyer expectations. Customers increasingly resist flat pricing structures and instead prefer models that reflect the value they actually receive. Usage-based pricing aligns economic benefit with real consumption, allowing buyers to justify spend internally while pushing vendors to be accountable for measurable outcomes rather than bundled feature sets.
    AI’s Double Role
    The conversation also highlights how AI is introducing a structural challenge for SaaS finance and revenue teams.
    Usage-based pricing generates enormous volumes of data across product usage, customer behaviour, and cost inputs. Traditional billing systems were not designed to process this level of complexity.
    At the same time, AI is also becoming the only scalable way to manage it. Automated usage tracking, dynamic pricing logic, and real-time billing reconciliation are increasingly necessary to maintain operational accuracy and financial control.
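    The metering-and-reconciliation problem Greene describes can be pictured with a small sketch. The graduated tiers and rates below are entirely made up for illustration; the point is that revenue in a usage-based model is computed from a stream of usage events rather than fixed at contract signature:

    ```python
    from collections import defaultdict

    # Hypothetical graduated tiers: (upper bound in units, rate per 1,000 units).
    # All numbers are illustrative, not any vendor's real pricing.
    TIERS = [(100_000, 0.50), (1_000_000, 0.40), (float("inf"), 0.30)]


    def meter(events):
        """Aggregate raw usage events (customer_id, units) per customer."""
        usage = defaultdict(int)
        for customer_id, units in events:
            usage[customer_id] += units
        return usage


    def bill(units):
        """Price usage through graduated tiers, like tax brackets:
        the first band is billed at the first rate, the next band at
        the next rate, and so on."""
        total, remaining, lower = 0.0, units, 0
        for upper, rate_per_k in TIERS:
            band = min(remaining, upper - lower)
            total += band / 1000 * rate_per_k
            remaining -= band
            lower = upper
            if remaining <= 0:
                break
        return round(total, 2)
    ```

    Even this toy version shows why disconnected systems leak revenue: if some usage events never reach `meter`, the shortfall is invisible in the invoice, which is exactly the visibility gap usage-based models expose.
    
    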
    Treating AI solely as a product capability, rather than embedding it into revenue operations, can leave organizations exposed to billing errors, misaligned pricing models, and revenue leakage.
    Revenue Management Shifts From Contracts to Operations
    One of Greene’s key observations is that usage-based pricing does not necessarily create revenue leakage. Instead, it reveals problems that already existed.
    The difference is visibility.
    In traditional SaaS models, revenue was largely secured at the moment of contract signature. In usage-based models, revenue must be earned continuously through product consumption. This means billing accuracy, system integration, and data flow directly influence financial performance.
    Disconnected CRM, product, and ERP systems can create gaps that lead to misbilling, delayed revenue recognition, and customer disputes. As a result, the infrastructure supporting revenue operations becomes inseparable from pricing strategy itself.
    What SaaS Leaders Must Build to Stay Economically Viable
    The discussion concludes with a broader perspective on how SaaS companies must evolve to support this new economic model.
    The future belongs to organizations that design their pricing and revenue systems for variability. Pricing models must adapt to changing demand, and the systems behind them must support that flexibility without relying on heavy manual processes.
    Automation and no-code AI tools are increasingly enabling finance and revenue teams to adjust pricing models as usage patterns evolve. This agility is not simply about speed. It is about maintaining control in an environment where AI-driven cost structures and product usage can shift rapidly.
    Usage-based pricing is doing more than changing how SaaS products are sold. It is reshaping how companies think about value, risk, and revenue itself, making flexibility, intelligent automation, and data-driven decision making central to long-term success.
    About Vayu
    Vayu helps SaaS companies manage complex usage-based and hybrid revenue models by connecting product usage data, billing systems, and finance infrastructure.
    Learn more at: https://www.withvayu.com/
    Takeaways
    The shift from fixed subscription models to usage-based pricing driven by AI
    How AI is both creating and solving new pricing and billing challenges
    Why revenue infrastructure plays a critical role in preventing revenue leakage
    The importance of flexible pricing models that adapt to demand and usage patterns
    The growing role of automation and AI in modern revenue operations

    Chapters
    00:00 – Introduction
    02:30 – The economic shift in SaaS: Moving toward usage-based models
    05:00 – The role of AI in transforming SaaS pricing and revenue streams
    06:47 – Buyer preferences and evolving value quantification
    08:38 – Infrastructure's role in supporting flexible billing models
    11:49 – How finance teams can shape technology to control revenue
    14:24 – Process reengineering and AI-driven automation
    17:15 – Adaptable SaaS infrastructure and market signals
    20:30 – Preparing for the unknown: sandboxing and scenario modeling
    24:49 – Opportunities in connecting SaaS apps and managing data flow
    28:54 – Building automated, scalable billing and integration flow


About Tech Transformed

Explore how tech is shaping the future of business and share best practices for implementing these innovations. With expert interviews, in-depth analysis, and practical advice, you'll stay ahead of the curve and make informed decisions for your enterprise. Join us to debunk myths, dive into the latest trends, and cut through the AI noise with “Tech Transformed.” Tune in and transform your understanding of technology and its potential.
Podcast website

