BI 210 Dean Buonomano: Consciousness, Time, and Organotypic Dynamics
Support the show to get full episodes, full archive, and join the Discord community.
Dean Buonomano runs the Buonomano lab at UCLA. Dean was a guest on Brain Inspired way back on episode 18, where we talked about his book Your Brain is a Time Machine: The Neuroscience and Physics of Time, which details much of his thought and research about how centrally important time is for virtually everything we do, different conceptions of time in philosophy, and how brains might tell time. That was almost 7 years ago, and his work on time and dynamics in computational neuroscience continues.
One thing we discuss today, later in the episode, is his recent work using organotypic brain slices to test the idea that cortical circuits implement timing as a computational primitive: keeping time is something they do by their very nature. Organotypic brain slices sit between what I think of as traditional brain slices and full-on organoids. Brain slices are extracted from an organism and maintained in a brain-like fluid while you perform experiments on them. Organoids start with a small number of cells that you culture, letting them divide, grow, and specialize until you have a mass of cells that has grown into an organ of some sort, on which you then perform experiments. Organotypic brain slices are extracted from an organism, like brain slices, but then also cultured for some time to let them settle back into some sort of near-homeostatic point - to get them as close as you can to what they're like in the intact brain... and then you perform experiments on them. Dean and his colleagues use optogenetics to train their brain slices to predict the timing of stimuli, and they find that populations of neurons do indeed learn to predict that timing, and that they exhibit replay of those sequences similar to the replay seen in brain areas like the hippocampus.
But, we begin our conversation talking about Dean's recent piece in The Transmitter, which I'll point to in the show notes, called The brain holds no exclusive rights on how to create intelligence. There he argues that modern AI is likely to continue its recent successes despite the ongoing divergence between AI and neuroscience. This is in contrast to what many folks in NeuroAI believe.
We then talk about his recent chapter with physicist Carlo Rovelli, titled Bridging the neuroscience and physics of time, in which Dean and Carlo examine where neuroscience and physics disagree and where they agree about the nature of time.
Finally, we discuss Dean's thoughts on the integrated information theory of consciousness, or IIT. IIT has seen a little controversy lately. Over 100 scientists, a large part of that group calling themselves IIT-Concerned, have expressed concern that IIT is actually unscientific. This has caused backlash and counter-backlash, and all sorts of spirited commentary from many interested people. Dean explains his own views about why he thinks IIT is not in the purview of science - namely that it doesn't play well with the existing ontology of physics, with what physics says about the world. What I just said doesn't do justice to his arguments, which he articulates much better.
Buonomano lab.
Twitter: @DeanBuono.
Related papers
The brain holds no exclusive rights on how to create intelligence.
What makes a theory of consciousness unscientific?
Ex vivo cortical circuits learn to predict and spontaneously replay temporal patterns.
Bridging the neuroscience and physics of time.
BI 204 David Robbe: Your Brain Doesn’t Measure Time
Read the transcript.
0:00 - Intro
8:49 - AI doesn't need biology
17:52 - Time in physics and in neuroscience
34:04 - Integrated information theory
1:01:34 - Global neuronal workspace theory
1:07:46 - Organotypic slices and predictive processing
1:26:07 - Do brains actually measure time? David Robbe
--------
1:50:33
BI 209 Aran Nayebi: The NeuroAI Turing Test
Support the show to get full episodes, full archive, and join the Discord community.
The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.
Read more about our partnership.
Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released.
To explore more neuroscience news and perspectives, visit thetransmitter.org.
Aran Nayebi is an Assistant Professor at Carnegie Mellon University in the Machine Learning Department. He was there in the early days of using convolutional neural networks to explain how our brains perform object recognition, and since then he's had a whirlwind trajectory through different AI architectures and algorithms and how they relate to biological architectures and algorithms, so we touch on some of what he has studied in that regard. But he also recently started his own lab at CMU, and he plans to integrate much of what he has learned to eventually develop autonomous agents that perform the tasks we want them to perform, in ways at least similar to how our brains perform them. So we discuss his ongoing plans to reverse-engineer our intelligence to build useful cognitive architectures of that sort.
We also discuss Aran's suggestion that, at least in the NeuroAI world, the Turing test needs to be updated to include some measure of the similarity of the internal representations models use to achieve the various tasks they perform. By internal representations, as we discuss, he means the population-level activity in the neural networks, not the mental representations that philosophy of mind often refers to, or other philosophical notions of the term representation.
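To make "similarity of internal representations" a bit more concrete, here is a minimal sketch of representational similarity analysis (RSA), one common way to compare a model's population-level activity to neural recordings. This is an illustrative example under my own assumptions, not Aran's specific benchmark; the function names, array shapes, and data are hypothetical.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(activity):
    # Representational dissimilarity matrix (condensed form):
    # pairwise correlation distance between stimulus-evoked
    # population patterns. activity: (n_stimuli, n_units) array.
    return pdist(activity, metric="correlation")

def representational_similarity(brain_activity, model_activity):
    # Spearman correlation between the two RDMs; higher means the
    # model's internal representational geometry better matches
    # the neural data.
    rho, _ = spearmanr(rdm(brain_activity), rdm(model_activity))
    return rho

# Toy usage with random stand-ins for recorded neural responses and
# model-layer activations to the same 50 stimuli (hypothetical data).
rng = np.random.default_rng(0)
brain = rng.normal(size=(50, 200))  # 50 stimuli x 200 neurons
model = rng.normal(size=(50, 512))  # 50 stimuli x 512 units
print(representational_similarity(brain, model))
```

Note that comparing dissimilarity structures, rather than raw activity, sidesteps the fact that a brain area and a model layer have different numbers of units.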
Aran's Website.
Twitter: @aran_nayebi.
Related papers
Brain-model evaluations need the NeuroAI Turing Test.
Barriers and pathways to human-AI alignment: a game-theoretic approach.
0:00 - Intro
5:24 - Background
20:46 - Building embodied agents
33:00 - Adaptability
49:25 - Marr's levels
54:12 - Sensorimotor loop and intrinsic goals
1:00:05 - NeuroAI Turing Test
1:18:18 - Representations
1:28:18 - How to know what to measure
1:32:56 - AI safety
--------
1:43:59
BI 208 Gabriele Scheler: From Verbal Thought to Neuron Computation
Support the show to get full episodes, full archive, and join the Discord community.
Gabriele Scheler co-founded the Carl Correns Foundation for Mathematical Biology. Carl Correns was her great-grandfather, one of the early pioneers in genetics. Gabriele is a computational neuroscientist whose goal is to build models of cellular computation, with much of her focus on neurons.
We discuss her theoretical work building a new kind of single-neuron model. She, like Dmitri Chklovskii a few episodes ago, believes we've been stuck with essentially the same family of neuron models for a long time, despite minor variations on those models. The model Gabriele is working on, for example, respects not only the computations going on externally, via spiking, which has been the only game in town forever, but also the computations going on within the cell itself. Gabriele is in line with previous guests like Randy Gallistel, David Glanzman, and Hessam Akhlaghpour, who argue that we need to pay attention to how neurons compute various things internally and how that affects our cognition. Gabriele also believes the new neuron model she's developing will improve AI, essentially simplifying the models drastically by providing them with smarter neurons.
We also discuss the importance of neuromodulation, her interest in understanding how we think via our internal verbal monologue, her lifelong interest in language in general, what she thinks about LLMs, why she decided to start her own foundation to fund her science, and what that experience has been like so far. Gabriele has been working on these topics for many years, and as you'll hear in a moment, she was there when computational neuroscience was just starting to pop up in a few places, when it was a nascent field, unlike its current ubiquity in neuroscience.
Gabriele's website.
Carl Correns Foundation for Mathematical Biology.
Neuro-AI spinoff
Related papers
Sketch of a novel approach to a neural model.
Localist neural plasticity identified by mutual information.
Related episodes
BI 199 Hessam Akhlaghpour: Natural Universal Computation
BI 172 David Glanzman: Memory All The Way Down
BI 126 Randy Gallistel: Where Is the Engram?
0:00 - Intro
4:41 - Gabriele's early interests in verbal thinking
14:14 - What is thinking?
24:04 - Starting one's own foundation
58:18 - Building a new single neuron model
1:19:25 - The right level of abstraction
1:25:00 - How a new neuron would change AI
--------
1:35:08
BI 207 Alison Preston: Schemas in our Brains and Minds
Support the show to get full episodes, full archive, and join the Discord community.
The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.
Read more about our partnership.
Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released.
To explore more neuroscience news and perspectives, visit thetransmitter.org.
The concept of a schema goes back at least to the philosopher Immanuel Kant in the 1700s, who used the term to refer to a kind of built-in mental framework for organizing sensory experience. But it was the psychologist Frederic Bartlett in the 1930s who used the term schema in a psychological sense, to explain how our memories are organized and how new information gets integrated into our memory. Fast forward almost another 100 years to today, and we have a podcast episode with my guest today, Alison Preston, who runs the Preston Lab at the University of Texas at Austin. On this episode, we discuss her neuroscience research explaining how our brains might carry out the processing that fits with our modern conception of schemas, and how our brains do that in different ways as we develop from childhood to adulthood.
I just said, "our modern conception of schemas," but like everything else, there isn't complete consensus among scientists exactly how to define schema. Ali has her own definition. She shares that, and how it differs from other conceptions commonly used. I like Ali's version and think it should be adopted, in part because it helps distinguish schemas from a related term, cognitive maps, which we've discussed aplenty on brain inspired, and can sometimes be used interchangeably with schemas. So we discuss how to think about schemas versus cognitive maps, versus concepts, versus semantic information, and so on.
Last episode, Ciara Greene discussed schemas, how they underlie our memories, learning, and predictions, and how they can lead to inaccurate memories and predictions. Today Ali explains how circuits in the brain might adaptively underlie this process as we develop, and how to go about measuring it in the first place.
Preston Lab
Twitter: @preston_lab
Related papers:
Concept formation as a computational cognitive process.
Schema, Inference, and Memory.
Developmental differences in memory reactivation relate to encoding and inference in the human brain.
Read the transcript.
0:00 - Intro
6:51 - Schemas
20:37 - Schemas and the developing brain
35:03 - Information theory, dimensionality, and detail
41:17 - Geometry of schemas
47:26 - Schemas and creativity
50:29 - Brain connection pruning with development
1:02:46 - Information in brains
1:09:20 - Schemas and development in AI
--------
1:29:47
Quick Announcement: Complexity Group
Here's the link to learn more and sign up:
Complexity Group Email List.
Neuroscience and artificial intelligence work better together. Brain Inspired is a celebration and exploration of the ideas driving our progress to understand intelligence. I interview experts about their work at the interface of neuroscience, artificial intelligence, cognitive science, philosophy, psychology, and more: the symbiosis of these overlapping fields, how they inform each other, where they differ, what the past brought us, and what the future brings. Topics include computational neuroscience, supervised machine learning, unsupervised learning, reinforcement learning, deep learning, convolutional and recurrent neural networks, decision-making science, AI agents, backpropagation, credit assignment, neuroengineering, neuromorphics, emergence, philosophy of mind, consciousness, general AI, spiking neural networks, data science, and a lot more. The podcast is not produced for a general audience. Instead, it aims to educate, challenge, inspire, and hopefully entertain those interested in learning more about neuroscience and AI.