Knowledge Graph Insights

Larry Swanson
Latest episode

21 episodes

  • Knowledge Graph Insights

    Quentin Reul: Solving Business Problems with Neuro-Symbolic AI – Episode 44

    16-2-2026 | 29 Min.
    Quentin Reul

    The complementary nature of knowledge graphs and LLMs has become clear, and long-time knowledge engineering professionals like Quentin Reul now routinely combine them in hybrid neuro-symbolic AI systems.

    While it's tempting to get caught up in the details of rapidly advancing AI technology, Quentin emphasizes the importance of always staying focused on the business problems your systems are solving.

    We talked about:

    his extensive background in semantic technologies, dating back to the early 2000s
    his contribution to the SKOS standard
    an overview of the strengths and weaknesses of LLMs
    the importance of entity resolution, especially when working with the general information that LLMs are trained on
    how LLMs accelerate knowledge graph creation and population
    his take on the scope of symbolic AI, in which he includes expert systems and rule-based systems
    his approach to architecting neuro-symbolic systems, which always starts with, and stays focused on, the business problem he's trying to solve
    his advice to avoid the temptation to start projects with technology, and instead always focus on the problems you're solving
    the importance of staying abreast of technology developments so that you're always able to craft the most efficient solutions

    Quentin's bio
    Dr. Quentin Reul is an AI Strategy & Innovation Executive who bridges the gap between high-level business goals and deep technical implementation. As a Director of AI Strategy & Solutions at expert.ai, he specializes in the convergence of Generative AI, Knowledge Graphs, and Agentic Workflows. His focus is moving companies beyond "PoC Purgatory" into production-grade systems that deliver measurable ROI.

    Unlike traditional strategists, he remains deeply hands-on, continuously prototyping with emerging AI research to stress-test its real-world impact. He doesn't just advocate for AI; he builds the technical roadmaps that translate the latest lab breakthroughs into safe, scalable, and high-value enterprise solutions.
    Connect with Quentin online

    LinkedIn
    BlueSky
    YouTube
    Medium

    Video
    Here’s the video version of our conversation:

    https://youtu.be/J8fgIezoNxE
    Podcast intro transcript
    This is the Knowledge Graph Insights podcast, episode number 44. We're far enough along now in the development of both generative AI learning models and symbolic AI technology like knowledge graphs to see the strengths and weaknesses of each. Quentin Reul has worked with both technologies, and the technologies that preceded them, for many years. He now builds systems that combine the best of both types of AI to deliver solutions that make it easier for people to discover and explore the knowledge and information that they need.
    Interview transcript
    Larry:
    Hi, everyone. Welcome to episode number 44 of the Knowledge Graph Insights podcast. I am really delighted today to welcome to the show Quentin Reul. Quentin is the director of AI Strategy and Solutions at expert.ai, based in Chicago in the US. So welcome, Quentin. Tell the folks a little bit more about what you're up to these days.

    Quentin:
    Hi, thank you, Larry, for having me on your podcast. So my name is Quentin Reul. I've actually been around RDF and knowledge graphs since before they were cool, back in the early 2000s. And today I'm helping people in news, media, and entertainment see how they can leverage all of the unstructured data they have, give it structure, and make their content more findable and discoverable as part of what they offer to their customers.

    Larry:
    Nice. And I love that you've been doing this forever. And one of the things we talked about before we went on the air was your early involvement in the SKOS standard. Can you talk a little bit about your contribution to that project?

    Quentin:
    Yeah. So for this, we should know what SKOS stands for: Simple Knowledge Organization System. It's a standard that was created by the W3C around 2005. And being at the University of Aberdeen in Scotland, we had a lot of involvement with the W3C's work on the Web Ontology Language and SKOS.

    Quentin:
    For SKOS, I was actually working on my PhD, and the idea of my PhD was to look at two ontologies and try to map entities from one ontology to the entities in the other one. A lot of the approaches taken at the time were leveraging philosophical kinds of representation, and there wasn't really much that was looking at linguistics. So the approach we were taking was to look at WordNet, use the structure of WordNet, and map that to the linguistic information, the labels that were associated with nodes in the taxonomy.

    Quentin:
    But to do that, we needed a structure that was transitive. At the time, SKOS only had broader and narrower, and those didn't have the transitive property. So my contribution was to push for the SKOS standard to include skos:broaderTransitive and skos:narrowerTransitive, so that if A broader B and B broader C, then A broader C also holds, with the description logic structure that enables that.
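    To make the transitivity point concrete, here's a minimal sketch in Python with rdflib (illustrative only, with hypothetical concepts, not Quentin's implementation). With only one-hop skos:broader statements in the graph, a SPARQL property path recovers the full ancestry that skos:broaderTransitive is meant to express:

```python
# Minimal sketch: hypothetical concepts A, B, C in an example namespace.
from rdflib import Graph, Namespace
from rdflib.namespace import SKOS

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.A, SKOS.broader, EX.B))  # A broader B (one hop)
g.add((EX.B, SKOS.broader, EX.C))  # B broader C (one hop)

# The '+' property path computes the transitive closure that
# skos:broaderTransitive captures, so A reaches both B and C.
query = "SELECT ?ancestor WHERE { ex:A skos:broader+ ?ancestor }"
for row in g.query(query, initNs={"ex": EX, "skos": SKOS}):
    print(row.ancestor)  # http://example.org/B, then http://example.org/C
```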

    Larry:
    Well, that's so cool. I love that your ideas are ensconced in this 20-year-old standard now. But hey, what I wanted to talk about today and really focus on, and why I was excited to get you on the show, is that you're doing a lot of work in the area of neuro-symbolic AI, the idea of integrating LLMs and other machine learning technologies with knowledge graphs and other symbolic AI stuff.

    Larry:
    It's one of those things that everybody's talking about, but I haven't had the chance to talk on the podcast with many people who are actually doing it. So I'm hoping that you can help the listeners take the leap from a conceptual understanding of their natural complementary nature to actually putting them together in an enterprise architecture. I guess maybe start with the strengths and weaknesses of each of the kinds of AI that we're talking about here.

    Quentin:
    Yeah. So if we look at the history of AI, symbolic AI was a thing that came up in the '70s and led to the first AI winter, and the second AI winter for that matter. But where those systems were very good was in structure and explainability. So if you had a very well-defined set of rules or a predictive kind of aspect, they would apply it consistently, repeatably, and all of that type of thing.

    Quentin:
    Now, when you tried to adapt a rule-based system to new data, it would fall over, because it had never seen that data; with a new set of rules or a new set of business requirements, it would just not handle that. And that's where machine learning really helped in making the transition to where we are today.

    Quentin:
    And LLMs contribute further to that. Machine learning was pretty good at dealing with new patterns, as long as they were similar to the data you were training with. I think one thing where LLMs have really shone is in the way they're able to surface things that you were not predicting from the data.

    Quentin:
    One thing I think we could have seen from the data, if we had had LLMs back in 2020, is the topic of COVID emerging a bit earlier than it did. And the reason is that an LLM is very good at surfacing things it's never seen before. It's able to interpret and analyze language and its structure, and from the sentence structure it can understand that things are very similar even when different words are used for them, so now you're able to interpret them.

    Quentin:
    So if we think about information retrieval in the '90s, 2000s, and even the 2010s, the way we were doing a lot of these things was using controlled vocabularies, thesauri, or other dictionaries, and they were used to do query expansion. So you had a keyword, you looked it up in the dictionary, the dictionary did an expansion, and then you added something else.

    Quentin:
    Well, now with the LLM, that kind of expansion is intuitive to the LLM itself, because it has seen so many different aspects and so many occurrences of text that it can actually predict and see how these different terms are associated with a holistic concept.
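    To ground that pre-LLM pattern, here's a minimal sketch of thesaurus-based query expansion (illustrative only, with a made-up controlled vocabulary, not anything from the episode):

```python
# Minimal sketch: expand a search query using a hypothetical controlled vocabulary.
THESAURUS = {
    "car": ["automobile", "vehicle"],
    "flu": ["influenza", "grippe"],
}

def expand_query(terms):
    """Return the original terms plus any related terms from the thesaurus."""
    expanded = set(terms)
    for term in terms:
        expanded.update(THESAURUS.get(term.lower(), []))
    return sorted(expanded)

print(expand_query(["flu", "vaccine"]))
# ['flu', 'grippe', 'influenza', 'vaccine']
```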

    Quentin:
    Now, that's a good thing. On the bad side, the LLMs don't have ... Well, they have a cutoff point, a knowledge cutoff point, which means that when they are trained, they are trained on information from the past. So they're not always that great at predicting, especially current events; for information about things that are happening today, they're not very good.

    Quentin:
    I think if I look at the data, generally the gap between the release of a new model and its knowledge cutoff point is about six months to a year. That gap is getting a bit shorter now, but you have to remember the time it takes to train these models: we're speaking about days, weeks, and sometimes months, as opposed to hours with machine learning models. So they're expensive as well from that perspective.

    Quentin:
    Another thing they don't have is a knowledge base, to take it up a level from a knowledge graph. So they're not able to disambiguate information across a large corpus. They're very good at doing entity linking within the context of one document.

    Quentin:
    So if you pass it one document, let's say a financial document, and it refers to Acme as an enterprise, if Acme is mentioned several times during the document, it will infer that there is only one entity and that entity is Acme.

    Quentin:
    But now, imagine that you have a group of financial reports, and these financial reports refer to Acme, a bakery in Illinois, and Acme, a construction company in Maryland.
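    Here's a minimal sketch of that cross-document disambiguation problem (illustrative only, not Quentin's system): two entities that share the surface name "Acme" are separated using context attributes such as state and industry.

```python
# Minimal sketch: group mentions into entities using context, not just the name.
from collections import defaultdict

mentions = [  # hypothetical extractions from separate financial reports
    {"doc": "report-1", "name": "Acme", "state": "IL", "industry": "bakery"},
    {"doc": "report-2", "name": "Acme", "state": "IL", "industry": "bakery"},
    {"doc": "report-3", "name": "Acme", "state": "MD", "industry": "construction"},
]

def entity_key(mention):
    """Disambiguate by name plus context; a real resolver would use richer
    features and fuzzy matching."""
    return (mention["name"].lower(), mention["state"], mention["industry"])

entities = defaultdict(list)
for m in mentions:
    entities[entity_key(m)].append(m["doc"])

for key, docs in entities.items():
    print(key, "->", docs)
# ('acme', 'IL', 'bakery') -> ['report-1', 'report-2']
# ('acme', 'MD', 'construction') -> ['report-3']
```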
  • Knowledge Graph Insights

    Jim Hendler: Scaling AI and Knowledge with the Semantic Web – Episode 43

    22-1-2026 | 54 Min.
    Jim Hendler

    As the World Wide Web emerged in the late 1990s, AI experts like Jim Hendler spotted an opportunity to imbue the new medium, in a scalable way, with knowledge about the information on the web, alongside its simple representation as content.

    With his colleagues Tim Berners-Lee, the inventor of the web, and Ora Lassila, an early expert on AI agents, Jim set out their vision in the famous "Semantic Web" article for the May 2001 issue of Scientific American magazine.

    Since then, semantic web implementations have blossomed, deployed in virtually every large enterprise on the planet and adding meaning to the web by appearing in the majority of pages on the internet.

    We talked about:

    his academic and administrative history at the University of Maryland, Rensselaer Polytechnic Institute, and DARPA
    the origins of his assertion that "a little semantics goes a long way"
    his early thinking on the role of memory in AI and its connections to knowledge representation and to SHOE, the first semantic web language
    his goal to scale up knowledge representation in his work as a grant administrator at DARPA
    how different departments in the US Air Force used different language to describe airplanes
    the origins and development of his relationship with Tim Berners-Lee and how his use of URLs in SHOE caused it to click
    how he and Berners-Lee brought Ora Lassila into the semantic web article
    how his and Berners-Lee's shared interest in scale contributed to the "a little semantics goes a long way" idea
    why he lives in awe of Tim Berners-Lee
    Berners-Lee's insight that a scalable web needed the 404 error code
    how including an inverse functionality property like in a relational database would have ruined the semantic web
    how they came to open the Scientific American paper with an anecdote about agents
    his early involvement in the AI agent community along with Ora Lassila
    their shared conviction of the foundational importance of interoperability in their conception of the semantic web
    how the lack of interoperability between big internet players now is part of the reason for the inability to fully execute on the agent vision they set out in the SciAm article
    the impact of LLMs on the semantic web
    early examples of semantic web linked data interoperability
    Google's reclamation of the term "knowledge graph"
    the reason that the shape of the semantic web was always in their mind a graph
    how the growth of enterprise data led to their adoption of semantic web technology
    how the answer to so many modern AI questions is, "knowledge"

    Jim's bio
    James Hendler is the Tetherless World Professor of Computer, Web and Cognitive Sciences at RPI where he also serves as a special academic advisor to the Provost and the Head of the Cognitive Science Department. He also serves as a member of the Board, and former chair of the UK’s charitable Web Science Trust. Hendler is a long-time researcher in the widespread use of experimental AI techniques including semantics on the Web, scientific data integration, and data policy in government. One of the originators of the Semantic Web, he has authored over 500 books, technical papers, and articles in the areas of Open Data, the Semantic Web, AI, and data policy and governance. He is the former Chief Scientist of the Information Systems Office at the US Defense Advanced Research Projects Agency (DARPA) and was awarded a US Air Force Exceptional Civilian Service Medal in 2002. In 2010, Hendler was selected as an “Internet Web Expert” by the US government, helping in the development and launch of the US data.gov open data website and from 2015 to 2024 served as an advisor to DHS and DoE board. From 2021-2024 he served as chair of the ACM’s global Technology Policy Council. Hendler is a Fellow of the AAAI, AAIA, AAAS, ACM, BCS, IEEE and the US National Academy of Public Administration. In 2025, Hendler was awarded the Feigenbaum Prize by the Association for the Advancement of Artificial Intelligence, recognizing a “sustained record of high-impact seminal contributions to experimental AI research.”
    Connect with Jim online

    RPI faculty page

    People and resources mentioned in this interview

    Tim Berners-Lee
    Ora Lassila
    Deb McGuinness
    The Semantic Web, Scientific American, May 2001
    Introducing the Knowledge Graph: things, not strings
    Massively Parallel Artificial Intelligence paper
    Attention Is all You Need paper
    Vision conference
    Is There An Agent in Your Future? article
    "And then a miracle occurs" cartoon

    Jim's SHOE (simple HTML ontology extensions) t-shirt
    Video
    Here’s the video version of our conversation:

    https://youtu.be/DpQki6Y0zx0
    Podcast intro transcript
    This is the Knowledge Graph Insights podcast, episode number 43. Twenty-five years ago, as AI experts like Jim Hendler navigated the new World Wide Web, they saw an opportunity to imbue the medium, in a scalable way, with more knowledge than was included in the text on web pages. Jim combined forces with the web's inventor, Tim Berners-Lee, and their mutual friend Ora Lassila, an expert on AI agents, to set out their vision in the now-famous "Semantic Web" article for Scientific American magazine. The rest, as they say, is history.
    Interview transcript
    Larry:
    Hi everyone. Welcome to episode number 43 of the Knowledge Graph Insights Podcast. I am super extra delighted today to welcome to the show, Jim Hendler. Jim, I think it's fair to say he literally needs no introduction. He was one of the co-authors of the original Semantic Web article in Scientific American. He's been a longtime well-known professor at Rensselaer Polytechnic Institute. So welcome, Jim. Tell the folks a little bit more about what you're up to these days.

    Jim:
    Sure. Just to go back a little further in history, I've been doing AI a long time and my first paper was about '77, but a lot of the work we're going to be talking today happened when I was a professor at the University of Maryland, which was from '86 to 2007. And then from 2007 on, I've been at RPI where I was really hired to create a lab that really would be a visionary lab on semantic web and related technologies. I think the president of the university saw the data science revolution coming and saw that that was a key part of it.

    Jim:
    So who am I? What am I? Really, what happened was, very early in the days of AI, I was working on a lot of different things. I started under Roger Schank at Yale, took a few years off to work professionally at Texas Instruments, which had the first industrial AI lab outside of the well-known ones at Xerox PARC and the like. Then I decided, no, I really was an academic at heart. So I came back, went to grad school with Gene Charniak at Brown, and went from there to the University of Maryland. So you know my job life history. I've bumped around during that time. Living in Maryland, you tend to bump into the Defense Department, and funding, and things like that. I was on a few committees, and eventually I was asked to come to DARPA for a few years, which is really where a lot of our conversation today probably starts.

    Jim:
    And then again, just because it was successful and we had a visionary president here at RPI, she asked me to come and said, "Not only do I want to hire you, but I want you to hire a couple of other people you'll work with who'll help put us on the map in this stuff." And I hired Deb McGuinness, and I'm sure that'll come up later. And then the past 15 years have been a combination of research and administration. So I've done both, doing my own work, working with my students, and also trying to set up a significant presence of AI on our campus, AI and beyond.

    Larry:
    Nice. Yeah, and we'll talk definitely more about your research work and everything. But hey, I want to set a little bit of context about how we met, because I know Dean Allemang from the Knowledge Graph Conference community, and we'll talk a little bit more about the book that you wrote with him later on. But one of the things that he famously says, and always attributes it to you, is that phrase "A little semantics goes a long way." I'd love to open up by talking a little bit about that.

    Jim:
    So early on in AI, it was becoming very, very clear to me, and now I'm talking the '70s and early '80s, a long time before scaling meant what it does today, that a lot of the problem with AI was that it didn't scale. And meanwhile, I was seeing these other technologies coming along, the ones that really led to the web, that were looking at a much, much broader thing than the typical AI system. So one of the things I started asking is, how do we scale up AI? And we were looking at traditional knowledge representation languages. I actually have a paper from the '80s. I did a book with Hiroaki Kitano, who's now the... I believe he's still the vice president for research at Sony, if not something higher. And Kitano-san and I actually had a book called Massively Parallel Artificial Intelligence in the '80s, but it became clear to me that the machines were part of the story, but the lots and lots of people doing lots and lots of different things was the much more interesting part of the story.

    Jim:
    And then also, I've always been intrigued by human memory. You ask me a question and I not only answer that question, but what I'm doing right now is associating a million things in my mind. And what I'm really doing is winnowing, rather than trying to come up with the precise answer. And so I started thinking about how AI memory could start to look more like human memory. In those days, a thousand and then 10,000 and then a million "axioms" were very, very large things, and that's what I wanted to do. And then the web was coming along and I saw that, well, if I'm going to get a million facts about something,
  • Knowledge Graph Insights

    Brad Bolliger: Pragmatic Semantic Modeling for Government Data – Episode 42

    12-1-2026 | 34 Min.
    Brad Bolliger

    Brad Bolliger entered the knowledge graph space via enterprise software system design and data analytics. That background informs their pragmatic and strategic approach to the use of semantic technology in systems that facilitate information exchange across government agencies.

    We talked about:

    their work at EY (Ernst & Young) on data and analytics strategy assessments and enterprise software design and as a co-chair of the NIEMOpen Technical Architecture Committee
    how their work on EY's Unified Justice Platform introduced them to the knowledge graph world
    a quick overview of entity resolution
    the NIEM standard, its origin in the wake of 9/11, its scope, how it's built and managed, and how governments use it
    their pragmatic approach to ontology and vocabulary management
    the benefits of the extensibility of the RDF format and knowledge graph technology
    how entity-centric data modeling accelerates and facilitates systems evolution
    their take on "analytics enablement engineering"
    their approach to crafting AI-ready data and building AI-aware enterprise solutions
    some of the neuro-symbolic AI architectures they have seen and implemented
    their call for more systems thinking and systems analysis to create more effective services that work together in a more ethical and effective way

    Brad's bio
    Bradley Bolliger (they/them) works in the AI & Data practice of Ernst & Young and serves as co-chair of the NIEMOpen Technical Architecture Committee, an OASIS open standards project for data interoperability.

    Brad assists clients across various industries with optimizing data platform ecosystems, enhancing customer relationships, and leveraging advanced analytics tools and techniques in their digital transformation efforts. In addition to designing data platforms and AI/NLP systems, Brad has served in lead analyst roles for public sector information system modernization efforts, including major contact center data ecosystems and integrated criminal justice system environments, the latter of which would lead to the development of the Unified Justice Platform.
    Connect with Brad online

    LinkedIn
    Unified Justice Platform

    Video
    Here’s the video version of our conversation:

    https://youtu.be/8XCmF3qXv1E
    Podcast intro transcript
    This is the Knowledge Graph Insights podcast, episode number 42. When you have to account for the people and other entities involved in high-stakes situations, you need a system that delivers accurate, unambiguous information. Brad Bolliger does this in their work on EY's Unified Justice Platform. Brad is relatively new to the graph world and has adopted a pragmatic approach to semantic modeling and knowledge graphs, focusing on applying lessons learned in their extensive experience in enterprise systems design and data analytics.
    Interview transcript
    Larry:
    Hi, everyone. Welcome to episode number 42 of the Knowledge Graph Insights podcast. I am really delighted today to welcome to the show Brad Bolliger. Brad works in the AI and data practice at EY, the big consultancy, in Chicago, and also helps co-chair NIEMOpen, the National Information Exchange Model standard. Welcome, Brad. Tell the folks a little bit more about what you're up to these days.

    Brad:
    Thanks for having me, Larry. I'm thrilled to be talking to you today. Yeah, I'm non-binary. I use they/them pronouns, and I work in the AI and data practice at Ernst & Young, as you said, where I do data and analytics strategy assessments and enterprise software design, things like that. I'm also co-chair of the NIEMOpen Technical Architecture Committee, which is an OASIS open standard primarily for sharing data in public services, a specification for developing information exchanges. And I'm working on semantics and software design more generally.

    Larry:
    Yeah. And you kind of not stumbled, but you had semantics thrust upon you in this new role, I understand, 'cause one of the projects you work on, I don't know if you're still working on it, was the Unified Justice Platform at EY. Can you talk a little bit about that and how it brought you into the semantics world?

    Brad:
    Yeah, that's right. It spun out of an assessment from a county government wanting to overhaul their integrated justice system, which is the collection of actors who collaborate, or have an adversarial relationship, to administer the process of justice in their jurisdiction. And because very often these are separately elected officials with their own budgets, they have their own software to fulfill their own functions. That means they are inherently operating a distributed system, sending messages back and forth to say, "Hey, we booked this person into the jail. Hey, we've got this court date coming up. Hey, we're filing these charges." And they need to orchestrate complex operational processes across multiple software systems and multiple groups of people, again, across jurisdictions or enclaves. And that was, of course, a really interesting systems analysis process that led to the development of a solution to the problem we were trying to assess, which we later called the Unified Justice Platform: an event-driven architecture for building an entity-resolved knowledge graph as an operational data store, programmatically, as messages are exchanged between the stakeholders in the enclave.

    Larry:
    Yeah. And you used a couple of words in there. I want to clarify for folks who might be new to them. The notion of entity resolution, the entity-resolved knowledge graph, I'll just point out that we met through our mutual friend, Paco Nathan, who works for Senzing, a company that just does entity resolution. And can you talk a little bit about entity resolution, how that fits into the needs of this distributed system and how you implement it in the platform?

    Brad:
    Yeah. Actually, I'll plug that almost two years ago, we did a webinar with someone from Senzing and talked about the fundamental utility of entity resolution, and its relevance, I suppose, as a problem more generally. Entity resolution, for me, is essentially about creating a high-quality master index of whatever kind of data it is that you're looking at. So in this case, we were talking about a master person index, so that you have a more reliable picture of the same natural person, no matter which software system is representing the data that describes the person subject to judicial proceedings in particular. But thinking about entity-centric data modeling more generally, if you've got a different type of entity, you still need to disambiguate which location you're talking about, which person you're talking about, which entity that really is. And if there are different representations, different records that relate to the same underlying entity, that process of entity resolution has this really broad systemic benefit to data management and data engineering in particular, because ultimately it's about the master index at the end of the day.
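    Here's a minimal sketch of that master-index idea (illustrative only, not EY's or Senzing's implementation): records from different systems are matched on a crudely normalized name plus date of birth and assigned a single master ID.

```python
# Minimal sketch: build a master person index across hypothetical systems.
import uuid

records = [
    {"system": "jail",  "name": "SMITH, JOHN A.", "dob": "1980-04-12"},
    {"system": "court", "name": "John Smith",     "dob": "1980-04-12"},
    {"system": "court", "name": "Jane Doe",       "dob": "1975-09-30"},
]

def match_key(record):
    """Crude normalization; production systems use probabilistic matching."""
    tokens = record["name"].replace(",", " ").replace(".", " ").lower().split()
    tokens = [t for t in tokens if len(t) > 1]  # drop middle initials
    return (tuple(sorted(tokens)), record["dob"])

master_index = {}
for rec in records:
    master_id = master_index.setdefault(match_key(rec), str(uuid.uuid4()))
    print(rec["system"], rec["name"], "->", master_id)
# The two John Smith records resolve to one master ID; Jane Doe gets her own.
```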

    Larry:
    Yeah. And as you talked about that, you mentioned that it's like this canonical record of entities. And how does NIEM fit into that? Because that's a vocabulary, as I understand it.

    Brad:
    That's right.

    Larry:
    Yeah. Can you talk a little bit about NIEM and how that works with entity resolution?

    Brad:
    Yeah, very briefly on NIEM: NIEM spun out of the post-September 11th realization that public services needed to share data to collaborate more effectively, to actually solve emergencies, but just problems in general. And what they realized was that they needed a common language to collaborate more effectively. Again, because systems, machines, software systems, have this really concrete definition of, we use these particular terms and they mean something in our enclave, but you could have a person's full name in one record and a person's first name and last name in another record, and actually they're the same real person. So NIEM came out of an attempt to at least address some of that ambiguity. And what is most interesting to me about NIEM, honestly, is that it is a collaboratively defined vocabulary. So we actually get domain participants involved and they decide, we use these terms and they mean these things.

    Brad:
    And so it's an attempt to reduce the amount of complexity in the ways you could describe the same person, and communicate the same meaning without losing the information that's entailed in some data record. But I'm digressing a little bit, probably. What NIEM is, is a framework for building message specifications, APIs if you like, or other types of data structures in general: a community-agreed-upon set of terms that have some kind of core relevance, person, entity, organization, or some domain-specific function, like subject or something in human services, and so on.

    Larry:
    Interesting. Yeah. And as you talk about that, that attempt to align people on vocabulary is such a notoriously difficult problem. And I don't know how many jurisdictions we're talking about here, but every little town in America has a police department and other social services that they do. What is the scope or the scale of that? And is it facilitated in any way by existing standards or vocabularies?

    Brad:
    Oh, very much so. In fact, the problem is even worse than you've described it very charitably, I think. Just in the United States alone, I'm told that there are over 18,000 law enforcement agencies, just law enforcement agencies. Nevermind how ... Anyway, so NIEM is a voluntary open standard. So it is something that is available, but is usually not mandated. There are some places where it is mandated for specific types of services. So the scale of the problem that we're talking about really depends on who's included in the conversation.
  • Knowledge Graph Insights

    Tara Raafat: Human-Centered Knowledge Graph and Metadata Leadership – Episode 41

    15-12-2025 | 30 Min.
    Tara Raafat

    At Bloomberg, Tara Raafat applies her extensive ontology, knowledge graph, and management expertise to create a solid semantic and technical foundation for the enterprise's mission-critical data, information, and knowledge.

    One of the keys to the success of her knowledge graph projects is her focus on people. She of course employs the best semantic practices and embraces the latest technology, but her knack for engaging the right stakeholders and building the right kinds of teams is arguably what distinguishes her work.

    We talked about:

    her history as a knowledge practitioner and metadata strategist
    the serendipitous intersection of her knowledge work with the needs of new AI systems
    her view of a knowledge graph as the DNA of enterprise information, a blueprint for systems that manage the growth and evolution of your enterprise's knowledge
    the importance of human contributions to LLM-augmented ontology and knowledge graph building
    the people you need to engage to get a knowledge graph project off the ground: executive sponsors, skeptics, enthusiasts, and change-tolerant pioneers
    the five stars you need on your team to build a successful knowledge graph: ontologists, business people, subject matter experts, engineers, and a KG product owner
    the importance of balancing the desire for perfect solutions with the pragmatic and practical concerns that ensure business success
    a productive approach to integrating AI and other tech into your professional work
    the importance of viewing your knowledge graph as not just another database, but as the very foundation of your enterprise knowledge

    Tara's bio
    Dr. Tara Raafat is Head of Metadata and Knowledge Graph Strategy in Bloomberg’s CTO Office, where she leads the development of Bloomberg’s enterprise Knowledge Graph and semantic metadata strategy, aligning it with AI and data integration initiatives to advance next-generation financial intelligence. With over 15 years of expertise in semantic technologies, she has designed knowledge-driven solutions across multiple domains including but not limited to finance, healthcare, industrial symbiosis, and insurance. Before Bloomberg, Tara was Chief Ontologist at Mphasis and co-founded NextAngles™, an AI/semantic platform for regulatory compliance. Tara holds a PhD in Information System Engineering from the UK. She is a strong advocate for humanitarian tech and women in STEM and a frequent speaker at international conferences, where she delivers keynotes, workshops, and tutorials.
    Connect with Tara online

    LinkedIn
    email: traafat at bloomberg dot net

    Video
    Here’s the video version of our conversation:

    https://youtu.be/yw4yWjeixZw
    Podcast intro transcript
    This is the Knowledge Graph Insights podcast, episode number 41. As groundbreaking new AI capabilities appear on an almost daily basis, it's tempting to focus on the technology. But advanced AI leaders like Tara Raafat focus as much, if not more, on the human side of the knowledge graph equation. As she guides metadata and knowledge graph strategy at Bloomberg, Tara continues her career-long focus on building the star-shaped teams of humans who design and construct a solid foundation for your enterprise knowledge.
    Interview transcript
    Larry:
    Hi everyone. Welcome to episode number 41 of the Knowledge Graph Insights podcast. I am really excited today to welcome to the show Tara Raafat. She's the head of metadata and knowledge graph strategy at Bloomberg, and a very accomplished ontologist and knowledge graph practitioner. So welcome to the show, Tara. Tell the folks a little bit more about what you're doing these days.

    Tara:
    Hi, thank you so much, Larry. I'm super-excited to be here and chatting with you. We always have amazing chats, so I'm looking forward to this one as well. Well, as Larry mentioned, I'm currently working for Bloomberg and I've been in the space of knowledge graphs and ontology creation for a pretty long time. So I've been in this community, I've seen a lot. And my interest has always been in the application of ontologies and knowledge graphs in industry, and I have worked in so many different industries, from banking and financial to insurance to medical. So I've touched upon a lot of different domains with the application of knowledge graphs. And currently at Bloomberg, I am also leading their metadata strategy and the knowledge graph strategy, so basically semantic metadata. And we're looking at how we are connecting all the different data sources and data silos that we have within Bloomberg to make our data ready for all the interesting, exciting AI stuff that we're doing. And making sure that we have a great representation of our data.

    Larry:
    Something that comes up all the time in my conversations lately is that people have done this work for years for very good reasons, all those things you just talked about, the importance of this kind of work in finance and insurance and medical fields and things like that. But it turns out that it makes you AI-ready as well. So is that just a happy coincidence, or are you doing even more to make your metadata more AI-ready these days?

    Tara:
    Yeah. In a sense, you could say happy coincidence, but I think from the very beginning, when you think about ontologies and knowledge graphs, the goal was always to make your data machine-understandable. So whenever people ask me, "You're an ontologist, what does that even mean?" my explanation was always, I take all the information in your head and put it in a way that is machine-understandable, so it's now encoded in that way. So when we're thinking about the AI era, we're basically thinking that if AI is operating on our information, on our data, it needs to have the right context and the right knowledge. So it becomes a perfect fit here. If data is available and ready in your knowledge graph format, it means that it's machine-understandable. It has the right context. It has the extra information that an AI system, specifically in the LLM and generative AI era, needs in order to make sure that the answers it gives are more grounded and based in facts, or have better provenance. And it's more accurate in quality.
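    Here's a minimal sketch of that grounding pattern (illustrative only, not Bloomberg's system): facts retrieved from a small rdflib graph are placed into a prompt, so whichever LLM you use answers against known, provenance-backed data rather than from memory alone.

```python
# Minimal sketch: pull facts from a tiny knowledge graph and build a grounded prompt.
from rdflib import Graph, Namespace, Literal

EX = Namespace("http://example.org/")  # hypothetical namespace and facts
g = Graph()
g.add((EX.AcmeCorp, EX.hasTicker, Literal("ACME")))
g.add((EX.AcmeCorp, EX.headquarteredIn, Literal("Chicago")))

def facts_about(subject):
    """Serialize the subject's triples as plain-text context for a prompt."""
    return "\n".join(
        f"{s.split('/')[-1]} {p.split('/')[-1]} {o}"
        for s, p, o in g.triples((subject, None, None))
    )

question = "Where is Acme Corp headquartered?"
prompt = (
    "Answer using only these facts:\n"
    + facts_about(EX.AcmeCorp)
    + "\n\nQuestion: " + question
)
print(prompt)  # this string would then be sent to an LLM
```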

    Larry:
    Yeah, that's right. You just reminded me, it's not so much serendipity or a happy coincidence. It's like, no, it's just what we do. Because we make things accessible. The whole beauty of this is the-

    Tara:
    We knew what's coming, right? The word AI has changed so much. It's the same thing. It just keeps popping up in different contexts, but yeah.

    Larry:
    So you're actually a visionary futurist, as all of us are in this field. Yeah. In your long experience, one of the things I love most, and there's a lot I love about your work, I even wrote about it after KGC and summarized one of your talks, and I think it's on your LinkedIn profile now, is this great definition you have of a knowledge graph. And you liken it to a biological concept that I like. So can you talk a little bit about that?

    Tara:
    Sure. I see the knowledge graph as the DNA of data, or the DNA of our information. And the reason I started thinking about it that way is that when you think about human DNA, you're literally thinking of the structure and relationships of the organism, how it operates and how it evolves. So there's a blueprint of its operation and how it will grow and evolve. And for me, that's very similar to when we start creating a knowledge graph representation of our data, because we're again capturing the structure and relationships in our data. And we're actually encoding the context and the rules that are needed to allow our data to grow and evolve as our business grows and evolves. So there's a real similarity for me there. And it also brings that human touch to this whole concept of knowledge graphs, because when I think about knowledge graphs and talk about ontologies, it comes from a philosophical background. And it's a lot more social and human.

    Tara:
    And at the end of the day, the foundation of it is how we as humans interpret the world and interpret information, and how then, by the use of technology, we encode it, but the interpretation is still very human. So that's why this link is actually very interesting for me. And one more thing I would add: I do this comparison to also emphasize the fact that knowledge graphs are not just another database or another data store. I don't like companies to look at them from that perspective. They really should look at a knowledge graph as the foundation on which their data grows and evolves as their business grows.

    Larry:
    Yeah. And that foundational role just keeps coming up, again related to AI a lot. With the LLM stuff, I've heard a lot of people talk about the factual foundation for your AI infrastructure and that kind of thing. And again, it's another one of those things where, yeah, it just happens to be really good at that, and it was purpose-built for that from the start.

    Larry:
    You mentioned a lot in there, the human element. And that's what I was so enamored of with your talk at KGC and other talks you've done and we've talked about this. And one of the things that, just a quick personal aside, one of the things that drives me nuts about the current AI hype cycle is this idea like, "Oh, we can just get rid of humans. It's great. We'll just have machines instead." I'm like, "Have you not heard..." Every conversation, I've done about 300 different interviews over the years. Every single one of them talks about how it's not technical, it's not procedural or management wisdom. It's always people stuff. It's like change management and working with people. Can you talk about how the people stuff manifests in your work in metadata strategy and knowledge graph construction? I know that's a lot.

    Tara:
    Sure.
  • Knowledge Graph Insights

    Alexandre Bertails: The Netflix Unified Data Architecture – Episode 40

    03-11-2025 | 31 Min.
    Alexandre Bertails

    At Netflix, Alexandre Bertails and his team have adopted the RDF standard to capture the meaning in their content in a consistent way and generate consistent representations of it for a variety of internal customers.

    The keys to their system are a Unified Data Architecture (UDA) and a domain modeling language, Upper, that let them quickly and efficiently share complex data projections in the formats that their internal engineering customers need.

    We talked about:

    his work at Netflix on the content engineering team, the internal operation that keeps the rest of the business running
    how their search for "one schema to rule them all" and the need for semantic interoperability led to the creation of the Unified Data Architecture (UDA)
    the components of Netflix's knowledge graph
    Upper, their domain modeling language
    their focus on conceptual RDF, resulting in a system that works more like a virtual knowledge graph
    his team's decision to "buy RDF" and its standards
    the challenges of aligning multiple internal teams on ontology-writing standards and how they led to the creation of UDA
    their two main goals in creating their Upper domain modeling language - to keep it as compact as possible and to support federation
    the unique nature of Upper and its three essential characteristics - it has to be self-describing, self-referencing, and self-governing
    their use of SHACL and its role in Upper
    how his background in computer science and formal logic and his discovery of information science brought him to the RDF world and ultimately to his current role
    the importance of marketing your work internally and using accessible language to describe it to your stakeholders - for example describing your work as a "domain model" rather than an ontology
    UDA's ability to permit the automatic distribution of semantically precise data across their business with one click
    how reading the introduction to the original 1999 RDF specification can help prepare you for the LLM/gen AI era

    Alexandre's bio
    Alexandre Bertails is an engineer in Content Engineering at Netflix, where he leads the design of the Upper metamodel and the semantic foundations for UDA (Unified Data Architecture).
    Connect with Alex online

    LinkedIn
    bertails.org

    Resources mentioned in this interview

    Model Once, Represent Everywhere: UDA (Unified Data Architecture) at Netflix
    Resource Description Framework (RDF) Schema Specification (1999)

    Video
    Here’s the video version of our conversation:

    https://youtu.be/DCoEo3rt91M

    Podcast intro transcript
    This is the Knowledge Graph Insights podcast, episode number 40. When you're orchestrating data operations for an enormous enterprise like Netflix, you need all of the automation help you can get. Alex Bertails and his content engineering team have adopted the RDF standard to build a domain modeling and data distribution platform that lets them automatically share semantically precise data across their business, in the variety of formats that their internal engineering customers need, often with just one click.
    Interview transcript
    Larry:
    Hi, everyone. Welcome to episode number 40 of the Knowledge Graph Insights podcast. I am really excited today to welcome to the show, Alex Bertails. Alex is a software engineer at Netflix, where he's done some really interesting work. We'll talk more about that later today. But welcome, Alex, tell the folks a little bit more about what you're up to these days.

    Alex:
    Hi, everyone. I'm Alex. I'm part of the content engineering side of Netflix. Just to make it more concrete: most people will think about the streaming product, and that's not us. We are more on the enterprise side, so essentially the people helping the business run, more internal operations. I'm a software engineer. I've been part of the initiative called UDA for a few years now, and we published that blog post a few months ago, and that's what most people want to talk about.

    Larry:
    Yeah, it's amazing, the excitement about that post and how many people are talking about it. But one thing, I think I inferred it from the article, but I don't recall a really explicit statement of the problem you were trying to solve. Can you talk a little bit about the business prerogatives that drove you to create UDA?

    Alex:
    Yeah, totally. Before UDA, there wasn't one clear problem that we had to solve, and really, people won't realize this, but we had been thinking about it for a very long time. Essentially, on the enterprise side, you have to think about lots of teams having to represent the same business concepts, think movie, actor, region, but really hundreds of them, across different systems. It's not necessarily that people don't agree on what a movie is, although that happens, but it's really, what is the movie across a GraphQL service, a data mesh source, an Iceberg table, resulting in duplicated effort and definitions that end up not aligning. A few years ago, we went in search of this one-schema kind of concept that would actually rule them all, and that's how we got into domain modeling, and how we can do that kind of domain modeling across all representations.

    Alex:
    So that was one part of it. The other part is that we needed to enable what's called semantic interoperability. Once we have the ability to talk about concepts and domain models across all of the representations, the next question is, how can we actually move, and help our users move, between all of those data representations? There is one thing to remember from the article, and it's actually in the title: that concept of model once, represent everywhere. The core idea with all of that is to say, once we've been able to capture a domain model in one place, then we have the ability to project and generate consistent representations. In our case, we are focused on GraphQL, Avro, Java, and SQL. That's what we have today, but we are looking into adding support for other representations.

    Larry:
    Interesting. And I think every enterprise will have its own mix of data structures like that that they're mapping things to. I love the way you use the word project. I think different people talk about what they do with the end results of such systems in different ways. You have two concepts you talk about here: the notion of mappings, which we were just talking about with the data stuff, and also that notion of projection. That's sort of like, once you've instantiated something out of this system, you project it out to the end user. Is that kind of how it works?

    Alex:
    Yes, so we do use the term projection in the more mathematical sense; some people would call these denotations. So essentially, once you have a domain model, and you can reason about it, and we actually have a formal representation of the domain models, maybe we'll talk about that a little bit later. But then you can actually define how it's supposed to look, the exact same thing with the same data semantics, but as an API, for example, in GraphQL, or as a data product in Iceberg, in the data warehouse, or as a log-compacted Kafka topic in our data mesh infrastructure as Avro. So for us, we have to make sure that it's, quote, unquote, "the same thing," regardless of the data representation that the user is actually interested in.
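    Here's a minimal sketch of that "model once, represent everywhere" idea (illustrative only, not Netflix's Upper or UDA): one hypothetical domain model is projected into a GraphQL type and a SQL table, so the two representations cannot drift apart.

```python
# Minimal sketch: project one hypothetical domain model into two representations.
MOVIE = {
    "name": "Movie",
    "fields": {"movieId": "String", "title": "String", "runtimeMinutes": "Int"},
}

SQL_TYPES = {"String": "VARCHAR", "Int": "INTEGER"}

def to_graphql(model):
    """Project the model as a GraphQL type definition."""
    fields = "\n".join(f"  {name}: {type_}" for name, type_ in model["fields"].items())
    return f"type {model['name']} {{\n{fields}\n}}"

def to_sql(model):
    """Project the same model as a SQL CREATE TABLE statement."""
    cols = ",\n".join(f"  {name} {SQL_TYPES[type_]}" for name, type_ in model["fields"].items())
    return f"CREATE TABLE {model['name'].lower()} (\n{cols}\n);"

print(to_graphql(MOVIE))
print(to_sql(MOVIE))
```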

    Alex:
    To put everything together, you talked about the mappings. What's really interesting for us is that the mappings are just one of the three main components that we have in our knowledge graph, because at the end of the day, UDA at its core is really a knowledge graph, which is made out of the domain models. We've talked about that. Then the mappings: the mappings are themselves objects in that knowledge graph, and they are there to connect the world of concepts from the domain models to the world of data containers, which in our case could represent things like an Iceberg table, so we would want to know the coordinates of the Iceberg table and we would want to know the schema. But that applies as well to the data mesh source abstraction and the Avro schema that goes with it.

    Alex:
    That would apply as well, and that's a tricky part that very few people actually try to solve, but that would apply to the GraphQL APIs. We want to be able to say and know, oh, there is a type resolver for that GraphQL type that exists in that domain graph service and it's located exactly over there. So that's the kind of granularity that we actually capture in the knowledge graph.

    Larry:
    Very cool. And this is the Knowledge Graph Insights podcast, which is how we ended up talking about this. But that notion of the models, and then the mappings, and then the data containers that actually hold everything, I'm just trying to get my head around the scale of this knowledge graph. You said, and you teased it out, that it doesn't have to do with the streaming services or the customer-facing part of the business; it's about the content and media data assets that you need to manage on the back end. Are you sort of an internal service? Is that how it's conceived?

    Alex:
    That's a good question. So we are not so much into the binary data; that's not at all what UDA is about. Again, it's a knowledge graph podcast, for sure, but even more precisely, when we say knowledge graph, we really mean conceptual RDF, and we are very, very clear about that. That means quite a few things for us. The knowledge graph, in our case, needs to be able to capture the data wherever it lives. We do not necessarily want to be RDF all the way through, but at the very core of it, there is a lot of RDF. I'm trying to remember how we talk about it. But yeah, think about a graph representation of connected data. And again, it has to work across all of the data representations,


About Knowledge Graph Insights

Interviews with experts on semantic technology, ontology design and engineering, linked data, and the semantic web.
Podcast website
