
AI Engineering Podcast

Tobias Macey

Available episodes

5 of 57
  • Navigating the AI Landscape: Challenges and Innovations in Retail
    Summary
    In this episode of the AI Engineering Podcast, machine learning engineer Shashank Kapadia explores the transformative role of generative AI in retail. Shashank shares his journey from an engineering background to becoming a key player in ML, highlighting the excitement of understanding human behavior at scale through AI. He discusses the challenges and opportunities presented by generative AI in retail, where it complements traditional ML by enhancing explainability and personalization, predicting consumer needs, and driving autonomous shopping agents and emotional commerce. Shashank elaborates on the architectural and operational shifts required to integrate generative AI into existing systems, emphasizing orchestration, safety nets, and continuous learning loops, while also addressing the balance between building and buying AI solutions, considering factors like data privacy and customization.

    Announcements
    - Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems
    - Your host is Tobias Macey and today I'm interviewing Shashank Kapadia about applications of generative AI in retail

    Interview
    - Introduction
    - How did you get involved in machine learning?
    - Can you summarize the main applications of generative AI that you are seeing the most benefit from in retail/ecommerce?
    - What are the major architectural patterns that you are deploying for generative AI workloads?
    - Working at an organization like Walmart, you already had a substantial investment in ML/MLOps. What are the elements of that organizational capability that remain the same, and what are the catalyzed changes as a result of generative models?
    - When working at the scale of Walmart, what are the different types of bottlenecks that you encounter which can be ignored at smaller orders of magnitude?
    - Generative AI introduces new risks around brand reputation, accuracy, trustworthiness, etc. What are the architectural components that you find most effective in managing and monitoring the interactions that you provide to your customers?
    - Can you describe the architecture of the technical systems that you have built to enable the organization to take advantage of generative models?
    - What are the human elements that you rely on to ensure the safety of your AI products?
    - What are the most interesting, innovative, or unexpected ways that you have seen generative AI break at scale?
    - What are the most interesting, unexpected, or challenging lessons that you have learned while working on AI?
    - When is generative AI the wrong choice?
    - What are you paying special attention to over the next 6 - 36 months in AI?

    Contact Info
    - LinkedIn

    Parting Question
    - From your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?

    Closing Announcements
    - Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
    - Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
    - If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.
    - To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

    Links
    - Walmart Labs

    The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra/CC BY-SA 3.0
    --------  
    52:09
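The "orchestration, safety nets, and continuous learning loops" pattern mentioned in this episode's summary can be illustrated with a minimal sketch. This is not Walmart's or the guest's actual system; every name here (`generate_reply`, `BLOCKED_TERMS`, `safe_generate`) is hypothetical, and a real deployment would use a proper policy engine and moderation models rather than a keyword list.

```python
# Hypothetical sketch: wrap a generative model call with pre- and
# post-checks (the "safety nets") and a safe fallback response.

BLOCKED_TERMS = {"ssn", "credit card"}  # stand-in for a real policy engine
FALLBACK = "I'm sorry, I can't help with that. Let me connect you to support."

def generate_reply(prompt: str) -> str:
    """Stand-in for a call to a hosted generative model."""
    return f"Here is a helpful answer about: {prompt}"

def violates_policy(text: str) -> bool:
    """Very crude policy check; real systems use classifiers/moderation APIs."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

def safe_generate(prompt: str) -> str:
    # Safety net 1: screen the incoming request before spending model calls.
    if violates_policy(prompt):
        return FALLBACK
    reply = generate_reply(prompt)
    # Safety net 2: screen the model output before it reaches the customer.
    if violates_policy(reply):
        return FALLBACK
    return reply
```

In a production "continuous learning loop", the blocked and allowed interactions would additionally be logged and periodically reviewed to refine the policy.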
  • The Anti-CRM CRM: How Spiro Uses AI to Transform Sales
    Summary
    In this episode of the AI Engineering Podcast, Adam Honig, founder of Spiro AI, talks about using AI to automate CRM systems, particularly in the manufacturing sector. Adam shares his journey from running a consulting company focused on Salesforce to founding Spiro, and discusses the challenges of traditional CRM systems where data entry is often neglected. He explains how Spiro addresses this issue by automating data collection from emails, phone calls, and other communications, providing a rich dataset for machine learning models to generate valuable insights. Adam highlights how Spiro's AI-driven CRM system is tailored to the manufacturing industry's unique needs, where sales are relationship-driven rather than funnel-based, and emphasizes the importance of understanding customer interactions and order histories to predict future business opportunities. The conversation also touches on the evolution of AI models, leveraging powerful third-party APIs, managing context windows, and platform dependencies, with Adam sharing insights into Spiro's future plans, including product recommendations and dynamic data modeling approaches.

    Announcements
    - Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems
    - Your host is Tobias Macey and today I'm interviewing Adam Honig about using AI to automate CRM maintenance

    Interview
    - Introduction
    - How did you get involved in machine learning?
    - Can you describe what Spiro is and the story behind it?
    - What are the specific challenges posed by the manufacturing industry with regards to sales and customer interactions?
    - How does the type of manufacturing and target customer influence the level of effort and communication involved in the sales and customer service cycles?
    - Before we discuss the opportunities for automation, can you describe the typical interaction patterns and workflows involved in the care and feeding of CRM systems?
    - Spiro has been around since 2014, long pre-dating the current era of generative models. What were your initial targets for improving efficiency and reducing toil for your customers with the aid of AI/ML?
    - How have the generational changes of deep learning and now generative AI changed the ways that you think about what is possible in your product?
    - Generative models reduce the level of effort to get a proof of concept for language-oriented workflows. How are you pairing them with more narrow AI that you have built?
    - Can you describe the overall architecture of your platform and how it has evolved in recent years?
    - While generative models are powerful, they can also become expensive, and the costs are hard to predict. How are you thinking about vendor selection and platform risk in the application of those models?
    - What are the opportunities that you see for the adoption of more autonomous applications of language models in your product? (e.g. agents)
    - What are the confidence building steps that you are focusing on as you investigate those opportunities?
    - What are the most interesting, innovative, or unexpected ways that you have seen Spiro used?
    - What are the most interesting, unexpected, or challenging lessons that you have learned while working on AI in the CRM space?
    - When is AI the wrong choice for CRM workflows?
    - What do you have planned for the future of Spiro?

    Contact Info
    - LinkedIn

    Parting Question
    - From your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?

    Closing Announcements
    - Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
    - Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
    - If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.
    - To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

    Links
    - Spiro
    - Deepgram
    - Cognee Episode
    - Agentic Memory
    - GraphRAG Podcast Episode
    - OpenAI Assistant API

    The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra/CC BY-SA 3.0
    --------  
    46:48
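The core idea behind the automated CRM data capture described in this episode, replacing manual data entry by extracting structured fields from the communications a salesperson already produces, can be sketched in a few lines. This is an illustrative toy, not Spiro's implementation: the field names, the example email, and the parsing rules are all hypothetical, and a real pipeline would handle many more header formats and use ML for entity extraction.

```python
# Hypothetical sketch: derive a CRM record from an email message
# instead of asking a salesperson to type the data in by hand.
import re
from email.message import EmailMessage

def extract_crm_record(msg: EmailMessage) -> dict:
    sender = msg["From"] or ""
    # Parse "Display Name <address>" style From headers.
    match = re.match(r"(?P<name>[^<]+)<(?P<email>[^>]+)>", sender)
    name = match.group("name").strip() if match else ""
    address = match.group("email").strip() if match else sender
    # The sender's domain is a cheap proxy for the company account.
    domain = address.split("@")[-1] if "@" in address else ""
    return {
        "contact_name": name,
        "contact_email": address,
        "company_domain": domain,
        "subject": msg["Subject"] or "",
    }

msg = EmailMessage()
msg["From"] = "Ada Lovelace <ada@examplemfg.com>"
msg["Subject"] = "Re: Q3 order of machined parts"
record = extract_crm_record(msg)
```

The payoff of this pattern, as discussed in the episode, is that every captured interaction enriches the dataset that downstream models draw on, without adding work for the sales team.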
  • Unlocking AI Potential with AMD's ROCm Stack
    Summary
    In this episode of the AI Engineering Podcast, Anush Elangovan, VP of AI software at AMD, discusses the strategic integration of software and hardware at AMD. He emphasizes the open-source nature of their software, fostering innovation and collaboration in the AI ecosystem, and highlights AMD's performance and capability advantages over competitors like NVIDIA. Anush addresses challenges and opportunities in AI development, including quantization, model efficiency, and future deployment across various platforms, while also stressing the importance of open standards and flexible solutions that support efficient CPU-GPU communication and diverse AI workloads.

    Announcements
    - Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems
    - Your host is Tobias Macey and today I'm interviewing Anush Elangovan about AMD's work to expand the playing field for AI training and inference

    Interview
    - Introduction
    - How did you get involved in machine learning?
    - Can you describe what your work at AMD is focused on?
    - A lot of the current attention on hardware for AI training and inference is focused on the raw GPU hardware. What is the role of the software stack in enabling and differentiating that underlying compute?
    - CUDA has gained a significant amount of attention and adoption in the numeric computation space (AI, ML, scientific computing, etc.). What are the elements of platform risk associated with relying on CUDA as a developer or organization?
    - The ROCm stack is the key element in AMD's AI and HPC strategy. What are the elements that comprise that ecosystem?
    - What are the incentives for anyone outside of AMD to contribute to the ROCm project?
    - How would you characterize the current competitive landscape for AMD across the AI/ML lifecycle stages? (pre-training, post-training, inference, fine-tuning)
    - For teams who are focused on inference compute for model serving, what do they need to know/care about in regards to AMD hardware and the ROCm stack?
    - What are the most interesting, innovative, or unexpected ways that you have seen AMD/ROCm used?
    - What are the most interesting, unexpected, or challenging lessons that you have learned while working on AMD's AI software ecosystem?
    - When is AMD/ROCm the wrong choice?
    - What do you have planned for the future of ROCm?

    Contact Info
    - LinkedIn

    Parting Question
    - From your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?

    Closing Announcements
    - Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
    - Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
    - If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.
    - To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

    Links
    - ImageNet
    - AMD
    - ROCm
    - CUDA
    - HuggingFace
    - Llama 3
    - Llama 4
    - Qwen
    - DeepSeek R1
    - MI300X
    - Nokia Symbian
    - UALink Standard
    - Quantization
    - HIPIFY
    - ROCm Triton
    - AMD Strix Halo
    - AMD Epyc
    - Liquid Networks
    - MAMBA Architecture
    - Transformer Architecture
    - NPU == Neural Processing Unit
    - llama.cpp
    - Ollama
    - Perplexity Score
    - NUMA == Non-Uniform Memory Access
    - vLLM
    - SGLang

    The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra/CC BY-SA 3.0
    --------  
    42:18
  • Applying AI To The Construction Industry At Buildots
    Summary
    In this episode of the AI Engineering Podcast, Ori Silberberg, VP of Engineering at Buildots, talks about transforming the construction industry with AI. Ori shares how Buildots uses computer vision and AI to optimize construction projects by providing real-time feedback, reducing delays, and improving efficiency. Learn about the complexities of digitizing the construction industry, the technical architecture of Buildots, and how its AI-driven solutions create a digital twin of construction sites. Ori emphasizes the importance of explainability and actionable insights in AI decision-making, highlighting the potential of generative AI to further enhance the construction process from planning to execution.

    Announcements
    - Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems
    - Your host is Tobias Macey and today I'm interviewing Ori Silberberg about applications of AI for optimizing building construction

    Interview
    - Introduction
    - How did you get involved in machine learning?
    - Can you describe what Buildots is and the story behind it?
    - What types of construction projects are you focused on? (e.g. residential, commercial, industrial, etc.)
    - What are the main types of inefficiencies that typically occur on those types of job sites?
    - What are the manual and technical processes that the industry has typically relied on to address those sources of waste and delay?
    - In many ways the construction industry is as old as civilization. What are the main ways that the information age has transformed construction?
    - What are the elements of the construction industry that make it resistant to digital transformation?
    - Can you describe how you are applying AI to this complex and messy problem?
    - What are the types of data that you are able to collect?
    - How are you automating that data collection so that construction crews don't have to add extra work or distractions to their day?
    - For construction crews that are using Buildots, can you talk through how it integrates into the overall process from site planning to project completion?
    - Can you describe the technical architecture of the Buildots platform?
    - Given the safety critical nature of construction, how does that influence the way that you think about the types of AI models that you use and where to apply them?
    - What are the most interesting, innovative, or unexpected ways that you have seen Buildots used?
    - What are the most interesting, unexpected, or challenging lessons that you have learned while working on Buildots?
    - What do you have planned for the future of AI usage at Buildots?

    Contact Info
    - LinkedIn

    Parting Question
    - From your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?

    Closing Announcements
    - Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
    - Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
    - If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.
    - To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

    Links
    - Buildots
    - CAD == Computer Aided Design
    - Computer Vision
    - LIDAR
    - GC == General Contractor
    - Kubernetes

    The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra/CC BY-SA 3.0
    --------  
    49:29
  • The Future of AI Systems: Open Models and Infrastructure Challenges
    Summary
    In this episode of the AI Engineering Podcast, Jamie de Guerre, founding SVP of product at Together.ai, explores the role of open models in the AI economy. As a veteran of the AI industry, including his time leading product marketing for AI and machine learning at Apple, Jamie shares insights on the challenges and opportunities of operating open models at speed and scale. He delves into the importance of open source in AI, the evolution of the open model ecosystem, and how Together.ai's AI acceleration cloud is contributing to this movement with a focus on performance and efficiency.

    Announcements
    - Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems
    - Your host is Tobias Macey and today I'm interviewing Jamie de Guerre about the role of open models in the AI economy and how to operate them at speed and at scale

    Interview
    - Introduction
    - How did you get involved in machine learning?
    - Can you describe what Together AI is and the story behind it?
    - What are the key goals of the company?
    - The initial rounds of open models were largely driven by massive tech companies. How would you characterize the current state of the ecosystem that is driving the creation and evolution of open models?
    - There was also a lot of argument about what "open source" and "open" means in the context of ML/AI models, and the different variations of licenses being attached to them (e.g. the Meta license for Llama models). What is the current state of the language used and understanding of the restrictions/freedoms afforded?
    - What are the phases of organizational/technical evolution from initial use of open models through fine-tuning, to custom model development?
    - Can you outline the technical challenges companies face when trying to train or run inference on large open models themselves?
    - What factors should a company consider when deciding whether to fine-tune an existing open model versus attempting to train a specialized one from scratch?
    - While Transformers dominate the LLM landscape, there's ongoing research into alternative architectures. Are you seeing significant interest or adoption of non-Transformer architectures for specific use cases? When might those other architectures be a better choice?
    - While open models offer tremendous advantages like transparency, control, and cost-effectiveness, are there scenarios where relying solely on them might be disadvantageous? When might proprietary models or a hybrid approach still be the better choice for a specific problem?
    - Building and scaling AI infrastructure is notoriously complex. What are the most significant technical or strategic challenges you've encountered at Together AI while enabling scalable access to open models for your users?
    - What are the most interesting, innovative, or unexpected ways that you have seen open models/the Together AI platform used?
    - What are the most interesting, unexpected, or challenging lessons that you have learned while working on powering AI model training and inference?
    - Where do you see the open model space heading in the next 1-2 years? Any specific trends or breakthroughs you anticipate?

    Contact Info
    - LinkedIn

    Parting Question
    - From your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?

    Closing Announcements
    - Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
    - Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
    - If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.
    - To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

    Links
    - Together AI
    - Fine Tuning
    - Post-Training
    - Salesforce Research
    - Mistral
    - Agentforce
    - Llama Models
    - RLHF == Reinforcement Learning from Human Feedback
    - RLVR == Reinforcement Learning from Verifiable Rewards
    - Test Time Compute
    - HuggingFace
    - RAG == Retrieval Augmented Generation Podcast Episode
    - Google Gemma
    - Llama 4 Maverick
    - Prompt Engineering
    - vLLM
    - SGLang
    - Hazy Research lab
    - State Space Models
    - Hyena Model
    - Mamba Architecture
    - Diffusion Model Architecture
    - Stable Diffusion
    - Black Forest Labs Flux Model
    - Nvidia Blackwell
    - PyTorch
    - Rust
    - Deepseek R1
    - GGUF
    - Pika Text To Video

    The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra/CC BY-SA 3.0
    --------  
    51:01


About AI Engineering Podcast

This show is your guidebook to building scalable and maintainable AI systems. You will learn how to architect AI applications, apply AI to your work, and the considerations involved in building or customizing new models. Everything that you need to know to deliver real impact and value with machine learning and artificial intelligence.
Podcast website

