Tag: bioelectricity

Transcript of Lex Fridman’s interview with Michael Levin

{This is my version of an annotated transcript of Lex Fridman’s interview with Michael Levin, episode 325, on 1 October 2022. I used the transcripts for Lex Fridman episodes that Andrej Karpathy made using OpenAI Whisper as a starting point. I stripped out the text and then listened to the YouTube interview and started editing. Mainly, I created the dialog, added paragraphs, and corrected a few typos. New material I added is included in braces {}. I added the YouTube headings and an occasional screenshot. I decided to put this on the Internet in case it is useful for others. I welcome any suggestions for improvement.

Why did I do this? Because this is an interview that I wanted to study deeply. I will also share what I learned from Lex’s interview with Michael Levin. One of my goals for 2023 was to listen to all of the available Lex Fridman interviews. }

{0:00 – Introduction}

{ML} It turns out that if you train a planarian and then cut their heads off, the tail will regenerate a brand new brain that still remembers the original information. I think planaria hold the answer to pretty much every deep question of life. For one thing, they’re similar to our ancestors. So they have true symmetry, they have a true brain, they’re not like earthworms, they’re, you know, they’re much more advanced life forms. They have lots of different internal organs, but they’re these little, they’re about, you know, maybe a centimeter to two centimeters in size. And they have a head and a tail.

And the first thing is planaria are immortal. So they do not age. There’s no such thing as an old planarian. So that right there tells you that these theories of thermodynamic limitations on lifespan are wrong. It’s not that, well, over time everything degrades. No, planaria can keep it going for probably, you know, how long have they been around, 400 million years, right? So the planaria in our lab are actually in physical continuity with planaria that were here 400 million years ago.

{LF} The following is a conversation with Michael Levin, one of the most fascinating and brilliant biologists I’ve ever talked to. He and his lab at Tufts University work on novel ways to understand and control complex pattern formation in biological systems. Andrej Karpathy, a world class AI researcher, is the person who first introduced me to Michael Levin’s work. I bring this up because these two people make me realize that biology has a lot to teach us about AI, and AI might have a lot to teach us about biology.

This is the Lex Fridman podcast. To support it, please check out our sponsors in the description. And now, dear friends, here’s Michael Levin. 

{1:40 – Embryogenesis}

{LF} Embryogenesis is the process of building the human body from a single cell. I think it’s one of the most incredible things that exists on earth, from a single embryo. So how does this process work?

{ML} Yeah, it is an incredible process. I think it’s maybe the most magical process there is. And I think one of the most fundamentally interesting things about it is that it shows that each of us takes the journey from so-called just physics to mind, right? Because we all start life as a single quiescent, unfertilized oocyte, and it’s basically a bag of chemicals, and you look at that and you say, okay, this is chemistry and physics. And then nine months and some years later, you have an organism with high level cognition and preferences and an inner life and so on. 

And what embryogenesis tells us is that that transformation from physics to mind is gradual. It’s smooth. There is no special place where, you know, a lightning bolt says, boom, now you’ve gone from physics to true cognition. That doesn’t happen. And so we can see in this process the whole mystery, you know, the biggest mystery of the universe, basically: how you get mind from matter.

{LF} From just physics, in quotes. Yeah. So where’s the magic that enters into the thing? How do we get from information encoded in DNA to making physical reality out of that information?

{ML} So one of the things that I think is really important if we’re going to bring in DNA into this picture is to think about the fact that what DNA encodes is the hardware of life. DNA contains the instructions for the kind of micro level hardware that every cell gets to play with. So all the proteins, all the signaling factors, the ion channels, all the cool little pieces of hardware that cells have, that’s what’s in the DNA. The rest of it is in so-called generic laws. And these are laws of mathematics. These are laws of computation. These are laws of physics, of all kinds of interesting things that are not directly in the DNA. 

And that process, you know, I think the reason I always put just physics in quotes is because I don’t think there is such a thing as just physics. I think that thinking about these things in binary categories, like this is physics, this is true cognition, this is “as if,” it’s only faking these kinds of things, I think that’s what gets us in trouble. I think that we really have to understand that it’s a continuum, and we have to work out the scaling, the laws of scaling. And we can certainly talk about that. There’s a lot of really interesting thoughts to be had there.

{LF} So the physics is deeply integrated with the information. So the DNA doesn’t exist on its own. The DNA is integrated, in some sense, in response to the laws of physics at every scale, the laws of the environment it exists in.

{ML} Yeah, the environment and also the laws of the universe. I mean, the thing about the DNA is that once evolution discovers a certain kind of machine, if the physical implementation is appropriate, it sort of, and this is hard to talk about because we don’t have a good vocabulary for this yet, but it’s a very kind of platonic notion that if the machine is there, it pulls down interesting things that you do not have to evolve from scratch because the laws of physics give it to you for free. So just as a really stupid example, if you’re trying to evolve a particular triangle, you can evolve the first angle and you evolve the second angle, but you don’t need to evolve the third. You know what it is already. Now, why do you know? That’s a gift for free from geometry in a particular space. You know what that angle has to be.

And if you evolve an ion channel, which is, ion channels are basically transistors, right? They’re voltage gated current conductances. If you evolve that ion channel, you immediately get to use things like truth tables. You get logic functions. You don’t have to evolve the logic function. You don’t have to evolve a truth table. It doesn’t have to be in the DNA. You get it for free, right? And the fact that if you have NAND gates, you can build anything you want, you get that for free. All you have to evolve is that first step, that first little machine that enables you to couple to those laws. And there’s laws of adhesion and many other things. 
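{A small illustration I added here, not from the interview: NAND is functionally complete, which is the precise sense in which evolving one NAND-like element “gives you” every other logic function and its truth table for free. This is a textbook construction sketched in Python; the function names are mine.}

```python
# Editor-added sketch: once you have a NAND primitive, every other Boolean
# function can be composed from it without anything new being "evolved".
def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a: bool) -> bool:
    return nand(a, a)

def and_(a: bool, b: bool) -> bool:
    return not_(nand(a, b))

def or_(a: bool, b: bool) -> bool:
    return nand(not_(a), not_(b))

def xor(a: bool, b: bool) -> bool:
    s = nand(a, b)
    return nand(nand(a, s), nand(b, s))

if __name__ == "__main__":
    # The truth tables fall out of the composition for free.
    for a in (False, True):
        for b in (False, True):
            print(int(a), int(b), "AND:", int(and_(a, b)), "OR:", int(or_(a, b)), "XOR:", int(xor(a, b)))
```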

And this is all that interplay between the hardware that’s set up by the genetics and the software that’s made, right? The physiological software that basically does all the computation and the cognition and everything else is a real interplay between the information in the DNA and the laws of physics, of computation, and so on.

{LF} So is it fair to say, just like this idea that the laws of mathematics are discovered, they’re latent within the fabric of the universe in that same way the laws of biology are kind of discovered? 

{ML} Yeah, I think that’s absolutely, and it’s probably not a popular view, but I think that’s right on the money. Yeah. 

{LF} Well, I think that’s a really deep idea. Then embryogenesis is the process of revealing, of embodying, of manifesting these laws. You’re not building the laws. 

{ML}  Yeah. 

{LF} You’re just creating the capacity to reveal. 

{ML} Yes. I think, again, not the standard view of molecular biology by any means, but I think that’s right on the money. I’ll give you a simple example. Some of our latest work with these xenobots, right? So what we’ve done is to take some skin cells off of an early frog embryo and basically ask about their plasticity. If we give you a chance to sort of reboot your multicellularity in a different context, what would you do? Because what you might assume by… The thing about embryogenesis is that it’s super reliable, right? It’s very robust. And that really obscures some of its most interesting features. We get used to it. We get used to the fact that acorns make oak trees and frog eggs make frogs. And we say, well, what else is it going to make? That’s what it makes. That’s a standard story. 

But the reality is… And so you look at these skin cells and you say, well, what do they know how to do? Well, they know how to be a passive boring two dimensional outer layer, keeping the bacteria from getting into the embryo. That’s what they know how to do. Well, it turns out that if you take these skin cells and you remove the rest of the embryo, so you remove all of the rest of the cells and you say, well, you’re by yourself now, what do you want to do? 

So what they do is they form this little multicellular creature that runs around the dish. They have all kinds of incredible capacities. They navigate through mazes. They have various behaviors that they do both independently and together. Basically, they implement von Neumann’s dream of self-replication, because if you sprinkle a bunch of loose cells into the dish, what they do is they run around, they collect those cells into little piles. They sort of mush them together until those little piles become the next generation of xenobots. So you’ve got this machine that builds copies of itself from loose material in its environment.

None of these are things that you would have expected from the frog genome. In fact, the genome is wild type. There’s nothing wrong with their genetics. Nothing has been added, no nanomaterials, no genomic editing, nothing.

And so what we have done there is engineering by subtraction. What you’ve done is you’ve removed the other cells that normally basically bully these cells into being skin cells. And you find out that what they really want to do, their default behavior, is to be a xenobot. But in vivo, in the embryo, they get told to be skin by these other cell types.

And so now here comes this really interesting question that you just posed. When you ask where does the form of the tadpole and the frog come from, the standard answer is, well, it’s selection. So over millions of years, it’s been shaped to produce the specific body that’s fit for froggy environments. Where does the shape of the xenobot come from? 

There’s never been any xenobots. There’s never been selection to be a good xenobot. These cells find themselves in the new environment. In 48 hours, they figure out how to be an entirely different protoorganism with new capacities like kinematic self replication. That’s not how frogs or tadpoles replicate. We’ve made it impossible for them to replicate their normal way. Within a couple of days, these guys find a new way of doing it that’s not done anywhere else in the biosphere. 

{9:08 – Xenobots: biological robots}

{LF} Well, actually, let’s step back and define, what are xenobots? 

{ML} So a xenobot is a self-assembling little protoorganism. It’s also a biological robot. Those things are not distinct. It’s a member of both classes. 

{LF} How much of it is biology? How much of it is robot?

{ML} At this point, most of it is biology because what we’re doing is we’re discovering natural behaviors of the cells and also of the cell collectives. Now, one of the really important parts of this was that we’re working together with Josh Bongard’s group at the University of Vermont. They’re computer scientists, they do AI, and they’ve basically been able to use a simulated evolution approach to ask, how can we manipulate these cells, give them signals, not rewire their DNA, so not hardware, but experiences, signals? So can we remove some cells? Can we add some cells? Can we poke them in different ways to get them to do other things?

So in the future, and this is future unpublished work, we’re exploring all sorts of interesting ways to reprogram them to new behaviors. But before you can start to reprogram these things, you have to understand what their innate capacities are.

{LF} Okay, so that means engineering, programming, you’re engineering them in the future. And in some sense, the definition of a robot is something you in part engineer versus evolve. I mean, it’s such a fuzzy definition anyway, in some sense, many of the organisms within our body are kinds of robots. 

{ML}  Yes, yes.

{LF} And I think robots is a weird line because it’s, we tend to see robots as the other. I think there will be a time in the future when there’s going to be something akin to the civil rights movements for robots, but we’ll talk about that later perhaps. 

{ML}  Sure.

{LF} Anyway, so how do you, can we just linger on it? How do you build a xenobot? What are we talking about here? When does it start, and how does it become the glorious xenobot?

{ML} Yeah, so just to take one step back, one of the things that a lot of people get stuck on is they say, well, you know, engineering requires new DNA circuits or it requires new nanomaterials. The thing is, we are now moving from old school engineering, which used passive materials, right? Those things, you know, wood, metal, things like this, where basically the only thing you could depend on is that they were going to keep their shape. That’s it. They don’t do anything else. It’s on you as an engineer to make them do everything they’re going to do. And then there were active materials and now computational materials.

This is a whole new era. These are agential materials. This is you’re now collaborating with your substrate because your material has an agenda. These cells have, you know, billions of years of evolution. They have goals. They have preferences. They’re not just going to sit where you put them. 

{LF} That’s hilarious that you have to talk your material into keeping its shape. 

{ML} That’s it. That is exactly right. That is exactly right. Stay there.

{LF}  It’s like getting a bunch of cats or something and trying to organize the shape out of them. It’s funny. 

{ML} We’re on the same page here because in a paper, and this has currently just been accepted at Nature Bioengineering, one of the figures I have is building a tower out of Legos versus dogs, right? So think about the difference, right? If you build out of Legos, you have full control over where it’s going to go. But if somebody knocks it over, it’s game over. With the dogs, you cannot just come and stack them. They’re not going to stay that way. But the good news is that if you train them, then if somebody knocks it over, they’ll get right back up. So it’s all right.

So as an engineer, what you really want to know is what you can depend on this thing to do, right? A lot of people have definitions of robots as far as what they’re made of or how they got here, you know, designed versus evolved, whatever. I don’t think any of that is useful.

I think, as an engineer, what you want to know is how much can I depend on this thing to do when I’m not around to micromanage it? What level of dependency can I give this thing? How much agency does it have? Which then tells you what techniques you use. So do you use micromanagement, like you put everything where it goes? Do you train it? Do you give it signals? Do you try to convince it to do things, right? How intelligent is your substrate? And so now we’re moving into this area where you’re working with agential materials. That’s a collaboration. That’s not old style engineering.

{LF} What’s the word you’re using? Agential? 

{ML} Agential. Yeah.

{LF} What’s that mean?

{ML} Agency. It comes from the word agency. So basically the material has agency, meaning that it has some level of, obviously not human level, but some level of preferences, goals, memories, ability to remember things, to compute into the future, meaning anticipate. You know, when you’re working with cells, they have all of that to various degrees.

{LF} Is that empowering or limiting, having material with a mind of its own, literally?

{ML} I think it’s both, right? So it raises difficulties because it means that if you’re using the old mindset, which is a linear kind of extrapolation of what’s going to happen, you’re going to be surprised and shocked all the time, because biology does not do what we linearly expect materials to do. On the other hand, it’s massively liberating, in the following way: I’ve argued that advances in regenerative medicine require us to take advantage of this, because what it means is that you can get the material to do things that you don’t know how to micromanage.

So just as a simple example, right? If you had a rat and you wanted this rat to do a circus trick, put a ball in the little hoop, you can do it the micromanagement way, which is try to control every neuron and try to play the thing like a puppet, right? And maybe someday that’ll be possible, maybe. Or you can train the rat. And this is why humanity, for thousands of years before we knew any neuroscience, when we had no idea what’s between the ears of any animal, was able to train these animals: because once you recognize the level of agency of a certain system, you can use appropriate techniques. If you know the currency of motivation, reward and punishment, you know how smart it is, you know what kinds of things it likes to do, you are searching a much smoother, much nicer problem space than if you try to micromanage the thing.

And in regenerative medicine, when you’re trying to get, let’s say, an arm to grow back or an eye to repair a birth defect or something, do you really want to be controlling tens of thousands of genes at each point to try to micromanage it? Or do you want to find the high level modular controls that say, build an arm here. You already know how to build an arm. You did it before, do it again. So I think it’s both: it’s difficult, and it challenges us to develop new ways of engineering, and it’s hugely empowering.

{LF} Okay. So how do you do it? I mean, maybe sticking with the metaphor of dogs and cats, I presume you have to figure out, find the dogs and dispose of the cats. Because, you know, like the old saying, herding cats is an issue. So you may be able to train dogs. I suspect you will not be able to train cats. Or if you do, you’re never going to be able to trust them. So is there a way to figure out which material is amenable to herding? Is it in the lab work or is it in simulation?

{ML} Right now it’s largely in the lab, because our simulations do not yet capture the most interesting and powerful things about biology. What we’re pretty good at simulating are feed-forward emergent types of things, right? So cellular automata, if you have simple rules and you sort of roll those forward for every agent or every cell in the simulation, then complex things happen, you know, ant colony algorithms, things like that. We’re good at that. And that’s fine.

The difficulty with all of that is that it’s incredibly hard to reverse. So this is a really hard inverse problem, right? If you look at a bunch of termites and they make a, you know, a thing with a single chimney and you say, well, I like it, but I’d like two chimneys, how do you change the rules of behavior for each termite so that they make two chimneys, right? Or if you say, here are a bunch of cells that are creating this kind of organism, I don’t think that’s optimal, I’d like to repair that birth defect, how do you control all the individual low level rules, right? All the protein interactions and everything else. Rolling it back from the anatomy that you want to the low level hardware rules is in general intractable. It’s an inverse problem that’s generally not solvable.
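{A minimal illustration I added of the “feed-forward emergent” simulations Levin is describing: a one-dimensional cellular automaton where each cell just applies a fixed local rule to itself and its two neighbors. Rolling the rule forward is a few lines of Python; going backwards from a desired global pattern to the local rule that would produce it is the hard inverse problem he’s pointing at. The rule number and grid size are arbitrary choices of mine.}

```python
# Editor-added sketch: rolling a simple local rule forward (easy) versus
# inverting from a desired global outcome back to local rules (hard).
def step(cells, rule):
    """One update of an elementary cellular automaton with wrap-around edges."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (center << 1) | right   # encode neighborhood as 0..7
        out.append((rule >> index) & 1)               # look up that bit of the rule
    return out

def run(rule=110, width=64, steps=32):
    cells = [0] * width
    cells[width // 2] = 1                             # single seed cell
    history = [cells]
    for _ in range(steps):
        cells = step(cells, rule)
        history.append(cells)
    return history

if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))
    # The forward direction is trivial; asking "which local rule gives me two
    # chimneys instead of one?" has no comparably direct recipe.
```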

So right now it’s mostly in the lab, because what we need to do is understand how biology uses top down controls. So the idea is not bottom up emergence, but the idea of things like goal directed, test-operate-exit kinds of loops, where it’s basically an error minimization function over a new space, not a space of gene expression but, for example, a space of anatomy.

So just as a simple example, if you have a salamander and it’s got an arm, you can amputate that arm anywhere along the length. It will grow exactly what’s needed and then it stops. That’s the most amazing thing about regeneration: it stops, it knows when to stop. When does it stop? It stops when a correct salamander arm has been completed.

So that tells you, right, that’s a means-ends kind of analysis, where it has to know what the correct limb is supposed to look like, right? So it has a way to ascertain the current shape. It has a way to measure the delta from what shape it’s supposed to be. And it will keep taking actions, meaning remodeling and growing and everything else, until that’s complete.
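{A toy version I added of the error-minimization loop just described: compare the current state to a stored target, act until the error is within tolerance, then stop. The scalar “limb length” and all the numbers are illustrative stand-ins of mine; the real system measures and remodels whole anatomies, not a single number.}

```python
# Editor-added toy model: regeneration as a homeostatic loop that keeps acting
# until the measured state matches a remembered target ("setpoint"), then stops.
def regenerate(current_length: float,
               target_length: float = 10.0,
               growth_per_step: float = 0.5,
               tolerance: float = 0.25) -> float:
    """Grow or remodel until the shape matches the stored target, then exit."""
    while abs(target_length - current_length) > tolerance:   # test
        error = target_length - current_length
        current_length += growth_per_step if error > 0 else -growth_per_step  # operate
    return current_length                                     # exit: correct arm, stop

if __name__ == "__main__":
    # "Amputate" anywhere along the length; the loop restores the same target and stops.
    for amputated_at in (2.0, 5.5, 9.0):
        print(amputated_at, "->", regenerate(amputated_at))
```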

So once you know that, and we’ve taken advantage of this in the lab to do some really wild things with both planaria and frog embryos and so on, you can start playing with that homeostatic cycle. You can ask, for example, well, how does it remember what the correct shape is? And can we mess with that memory? Can we give it a false memory of what the shape should be and let the cells build something else? Or can we mess with the measurement apparatus, right?

So the idea is to basically appropriate a lot of the approaches and concepts from cognitive neuroscience and behavioral science into things that previously were taken to be dumb materials. And, you know, you get yelled at in class for being anthropomorphic if you say, well, my cells want to do this and my cells want to do that. And I think that’s a major mistake that leaves a ton of capabilities on the table.

{LF} So thinking about biological systems as things that have memory, have almost something like cognitive ability. But I mean, how incredible is it, you know, that the salamander arm is being rebuilt, not with a dictator. It’s kind of like the cellular automata system. All the individual workers are doing their own thing. So where’s that top down signal that does the control coming from? Like, how can you find it?

{ML} Yeah.

{LF} Like, why does it stop growing? How does it know the shape? How does it have memory of the shape? And how does it tell everybody to be like, whoa, whoa, whoa, slow down, we’re done. 

{ML} So the first thing to think about, I think, is that there are no examples anywhere in this kind of science of a central dictator, because everything is made of parts. And so even though we feel like a unified, central sort of intelligence and point of cognition, we are a bag of neurons, right? All intelligence is collective intelligence. This is important to think about, because a lot of people think, okay, there’s real intelligence, like me, and then there’s collective intelligence, which is ants and flocks of birds and termites and things like that. And maybe it’s appropriate to think of them as an individual, and maybe it’s not, and a lot of people are skeptical about that and so on. But you’ve got to realize that there’s no such thing as this indivisible diamond of intelligence, this one central thing that’s not made of parts. We are all made of parts.

And so if you believe, which I think is hard to get around, that we in fact have a centralized set of goals and preferences and we plan and we do things and so on, you are already committed to the fact that a collection of cells is able to do this, because we are a collection of cells. There’s no getting around that. In our case, what we do is we navigate the three dimensional world and we have behavior. 

{LF} This is blowing my mind right now, because we are just a collection of cells. 

{ML} Oh yeah. 

{LF} So when I’m moving this arm, I feel like I’m the central dictator of that action, but there’s a lot of stuff going on. All the cells here are collaborating in some interesting way. They’re getting signal from the central nervous system. 

{ML} Well, even the central nervous system is misleadingly named because it isn’t really central. Again, it’s just a bunch of cells. I mean, all of them, right? There are no, there are no singular indivisible intelligences anywhere. We are all, every example that we’ve ever seen is a collective of something. It’s just that we’re used to it. We’re used to that. We’re used to, okay, this thing is kind of a single thing, but it’s really not. You zoom in, you know what you see. You see a bunch of cells running around. 

{LF} Is there something unifying, I mean, we’re jumping around, but something that you look at as the bioelectrical signal versus the biochemical, the chemistry versus the electricity, maybe the life is in that versus the cells. It’s like there’s an orchestra playing and the resulting music is the dictator.

{ML} That’s not bad. That’s Denis Noble’s kind of view of things. He has two really good books where he talks about this musical analogy, right? So I like it. I like it.

{LF}  Is it wrong though? 

{ML} No, I don’t think it’s wrong. I think the important thing about it is that we have to come to grips with the fact that a true, proper cognitive intelligence can still be made of parts; in fact, it has to be. And I think it’s a real shame, but I see this all the time: when you have a collective like this, whether it be a group of robots or a collection of cells or neurons or whatever, as soon as we gain some insight into how it works, meaning that, oh, I see, in order to take this action, here’s the information that got processed via this chemical mechanism or whatever, immediately people say, oh, well then that’s not real cognition. That’s just physics.

{22:55 – Sense of self}

 I think this is fundamentally flawed because if you zoom into anything, what are you going to see? Of course you’re just going to see physics. What else could be underneath, right? It’s not going to be fairy dust. It’s going to be physics and chemistry, but that doesn’t take away from the magic of the fact that there are certain ways to arrange that physics and chemistry and in particular the bioelectricity, which I like a lot, to give you an emergent collective with goals and preferences and memories and anticipations that do not belong to any of the subunits. 

So I think what we’re getting into here, and we can talk about how this happens during embryogenesis and so on, what we’re getting into is the origin of a self with a capital S. So we ourselves, there are many other kinds of selves, and we can tell some really interesting stories about where selves come from and how they become unified. 

{LF} Yeah, is this the first, or at least humans tend to think that this is the level at which the self with a capital S is first born, and we really don’t want to see human civilization or Earth itself as one living organism.

{ML} Yeah.

{LF} that’s very uncomfortable to us. It is, yeah. But is, yeah, where’s the self born? 

{ML} We have to grow up past that. So what I like to do is, I’ll tell you two quick stories about that. I like to roll backwards. So if you start and you say, okay, here’s a paramecium, it’s a single cell organism, you see it doing various things, and people will say, okay, I’m sure there’s some chemical story to be told about how it’s doing it, so a paramecium, that’s not true cognition, right? And people will argue about that.

{ML} I like to work it backwards. I say, let’s agree that you and I, as we sit here, are examples of true cognition; if anything is true cognition, we are examples of it. Now let’s just roll back slowly, right? So you roll back to the time when you were a small child, doing whatever you used to do, and then just sort of day by day, you roll back, and eventually you become more or less that paramecium, and then you go even below that, right, to an unfertilized oocyte. And, to my knowledge, no one has come up with any convincing discrete step at which my cognitive powers disappear, right? The biology doesn’t offer any specific step. It’s incredibly smooth and slow and continuous. And so I think this idea that it just sort of magically shows up at one point, and then, you know, humans have true selves that don’t exist elsewhere, I think it runs against everything we know about evolution, everything we know about developmental biology. These are all slow continua.

And the other really important story I want to tell is where embryos come from. So think about this for a second. Amniote embryos, so this is humans, mammals and birds and so on. Imagine a flat disk of cells, maybe 50,000 cells. So when you get a fertilized egg, let’s say you buy a fertilized egg from a farm, right? That egg will have about 50,000 cells in a flat disk; it looks like a tiny little frisbee. And in that flat disk, what’ll happen is one set of cells will become special, and it will tell all the other cells, I’m going to be the head, you guys don’t be the head. And so through symmetry breaking and amplification, you get one embryo; some neural tissue and some other stuff forms.

Now, you say, okay, I had one egg and one embryo, and there you go, what else could it be? Well, the reality is, and I did all of this as a grad student, if you take a little needle and you make a scratch in that blastoderm, in that disk, such that the cells can’t talk to each other for a while, it heals up, but for a while, they can’t talk to each other. What will happen is that both regions will decide that they can be the embryo, and there will be two of them. And then when they heal up, they become conjoined twins, and you can make two, you can make three, you can make lots. So the question of how many selves are in there cannot be answered until it’s actually played all the way through. It isn’t necessarily that there’s just one; there can be many.

So what you have is this medium, this undifferentiated, I’m sure there’s a psychological version of this somewhere, I don’t know the proper terminology, but you have this ocean of potentiality. You have these thousands of cells, and some number of individuals are going to be formed out of it, usually one, sometimes zero, sometimes several. And they form out of these cells because a region of these cells organizes into a collective that will have goals, goals that individual cells don’t have, for example, make a limb, make an eye. How many eyes? Well, exactly two. So individual cells don’t know what an eye is, they don’t know how many eyes you’re supposed to have, but the collective does. The collective has goals and memories and anticipations that the individual cells don’t. And the establishment of that boundary, with its own ability to maintain and pursue certain goals, that’s the origin of selfhood.

{LF} But is that goal in there somewhere? Were they always destined? Like, are they discovering that goal? Like, where the hell did evolution discover this, when you went from the prokaryotes to eukaryotic cells, and then they started making groups? And when you make a certain group, and it’s such a tricky thing to try to understand, you make it sound like these cells didn’t get together and come up with a goal. But the very act of them getting together revealed the goal that was always there. There was always that potential for that goal.

{ML} So the first thing to say is that there are way more questions here than certainties. Okay, so everything I’m telling you is cutting edge, developing stuff. So it’s not as if any of us know the answer to this. But here’s my opinion on this. I don’t think that evolution produces solutions to specific problems, in other words, specific environments, like here’s a frog that can live well in a froggy environment. I think what evolution produces is problem solving machines that will solve problems in different spaces, not just three dimensional space.

This goes back to what we were talking about before. The brain is, evolutionarily, a late development. It’s a system that is able to pursue goals in three dimensional space by giving commands to muscles. Where did that system come from? That system evolved from a much more ancient system, evolutionarily much more ancient, where collections of cells gave instructions for cell behaviors, meaning cells moving, dividing, dying, changing into different cell types, to navigate morphospace, the space of anatomies, the space of all possible anatomies. And before that, cells were navigating transcriptional space, which is the space of all possible gene expressions. And before that, metabolic space.

So what evolution has done, I think, is produce hardware that is very good at navigating different spaces using a bag of tricks, right, many of which I’m sure we can steal for autonomous vehicles and robotics and various things. And what happens is that they navigate these spaces without a whole lot of commitment to what the space is. In fact, they don’t know what the space is, right? We are all brains in a vat, so to speak. Every cell does not know, right? Every cell is some other cell’s external environment, right?

So where is that border between you and the outside world? You don’t really know where that is, right? Every collection of cells has to figure that out from scratch. And evolution requires all of these things to figure out what they are, what effectors they have, what sensors they have, where it makes sense to draw a boundary between me and the outside world. The fact that you have to build all that from scratch, this autopoiesis, is what defines the border of a self. Now, biology uses a multi-scale competency architecture, meaning that every level has goals. So molecular networks have goals, cells have goals, tissues, organs, colonies. And it’s the interplay of all of those that enables biology to solve problems in new ways, for example, in xenobots and various other things.

This is, you know, exactly as you said: in many ways, the cells are discovering new ways of being. But at the same time, evolution certainly shapes all this. So evolution is very good at this agential bioengineering, right? When evolution is discovering a new way of being an animal or a plant or something, sometimes it’s by changing the hardware, you know, changing proteins, protein structure, and so on. But much of the time, it’s not by changing the hardware, it’s by changing the signals that the cells give to each other. It’s doing what we as engineers do, which is try to convince the cells to do various things by using signals, experiences, stimuli. That’s what biology does. It has to, because it’s not dealing with a blank slate.

Every time, if you’re evolution and you’re trying to make an organism, you’re not dealing with a passive material that is fresh and that you have to fully specify; it already wants to do certain things. So the easiest way to do that search, to find whatever is going to be adaptive, is to find the signals that are going to convince cells to do various things, right?

{LF} Your sense is that evolution operates both in the software and the hardware. And it’s just easier, more efficient to operate in the software. 

{ML} Yes. And I should also say, I don’t think the distinction is sharp. In other words, I think it’s a continuum. But I think it’s a meaningful distinction, where you can make changes to a particular protein, and now the enzymatic function is different, and it metabolizes differently, and whatever, and that will have implications for fitness. Or you can change the huge amount of information in the genome that isn’t structural at all. It’s signaling; it’s when and how cells say certain things to each other. And that can have massive changes as far as how it’s going to solve problems.

{32:26 – Multi-scale competency architecture}

{LF} I mean, this idea of multi hierarchical competency architecture, which is incredible to think about. So this hierarchy that evolution builds, I don’t know who’s responsible for this. I also see the incompetence of bureaucracies of humans when they get together. So how the hell does evolution build this, where at every level, only the best get to stick around, they somehow figure out how to do their job without knowing the bigger picture. 

{ML} Yeah.

{LF} And then there’s like the bosses that do the bigger thing somehow, or that you can now abstract away the small group of cells as an organ or something. And then that organ does something bigger in the context of the full body or something like this. How is that built? Is there some intuition you can kind of provide of how that’s constructed, that hierarchical competence architecture? I love that competence, just the word competence is pretty cool in this context, because everybody’s good at their job. 

{ML} Yeah, no, it’s really key. And the other nice thing about competency is, so, my central belief in all of this is that engineering is the right perspective on all of this stuff, because it gets you away from subjective terms. You know, people talk about sentience and this and that; those things are very hard to define, or people argue about them philosophically.

I think that engineering terms like competency, like, you know, pursuit of goals, right, all of these things are empirically incredibly useful, because you know it when you see it, and it helps you build. If I can pick the right level and say, I believe this thing has X level of competency, I think it’s like a thermostat, or I think it’s like a better thermostat, or I think it’s like various other kinds of, you know, many different kinds of complex systems, if that helps me to control and predict and build such systems, then that’s all there is to say. There’s no more philosophy to argue about.

So I like competency in that way, because you can quantify it. In fact, you have to: you have to make a claim, competent at what? And if I tell you it has a goal, the question is, what’s the goal? And how do you know? And I say, well, because every time I deviate it from this particular state, that’s what it spends energy to get back to; that’s the goal. And we can quantify it, and we can be objective about it.

So we’re not used to thinking about this. I give a talk sometimes called Why Don’t Robots Get Cancer, right? And the reason robots don’t get cancer is because, generally speaking, with a few exceptions, our architectures have been: you’ve got a bunch of dumb parts, and you hope that if you put them together, the overlying machine will have some intelligence and do something or other, right? But the individual parts don’t care; they don’t have an agenda.

Biology isn’t like that: every level has an agenda. And the final outcome is the result of cooperation and competition, both within and across levels. So for example, during embryogenesis, your tissues and organs are competing with each other. And it’s actually a really important part of development; there’s a reason they compete with each other. They’re not all just, you know, sort of helping each other; they’re also competing for information, under limited metabolic constraints.

But to get back to your other point, which is, you know, that this seems really efficient and good and so on compared to some of our human efforts. We also have to keep in mind that what happens here is that each level bends the option space for the level beneath, so that your parts basically don’t see the geometry. I’m using, and I think I take this seriously, terminology from relativity, right, where the space is literally bent. So the option space is deformed by the higher level so that the lower levels, all they really have to do is go down their concentration gradient. In fact, they can’t know what the big picture is. But if you bend the space just right, if they do what locally seems right, they end up doing your bidding; they end up doing things that are optimal in the higher space. Conversely, because the components are good at getting their job done, you as the higher level don’t need to try to compute all the low level controls. All you’re doing is bending the space; you don’t know or care how they’re going to do it.
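{A cartoon I added of “bending the option space”: the higher level never micromanages the parts, it just reshapes the cost landscape they feel, and each part only ever follows its local gradient. The quadratic landscape, the goal positions, and the step sizes are arbitrary illustrative choices of mine.}

```python
# Editor-added cartoon: low-level parts only probe their immediate neighborhood
# and move downhill, yet end up wherever the higher level has bent the landscape.
import random

def bent_landscape(x: float, goal: float) -> float:
    """Cost felt by a part; the higher level chooses where the minimum sits."""
    return (x - goal) ** 2

def local_descent(x: float, goal: float, step: float = 0.1, iters: int = 300) -> float:
    for _ in range(iters):
        left = bent_landscape(x - step, goal)    # the part sees only its local slope,
        right = bent_landscape(x + step, goal)   # never the big picture
        if left < right:
            x -= step
        elif right < left:
            x += step
    return x

if __name__ == "__main__":
    random.seed(1)
    parts = [random.uniform(-10, 10) for _ in range(5)]
    goal = 3.0   # set by the "higher level"
    print([round(local_descent(p, goal), 1) for p in parts])
    # Re-bend the space and the same dumb local rule delivers a different outcome:
    print([round(local_descent(p, -4.0), 1) for p in parts])
```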

I’ll give you a super simple example in the tadpole. We found that, okay, so tadpoles need to become frogs, and to go from a tadpole head to a frog head, you have to rearrange the face. So the eyes have to move forward, the jaws have to come out, the nostrils move; like, everything moves. It used to be thought that because all tadpoles look the same and all frogs look the same, if every piece just moves in the right direction the right amount, then you get your frog, right?

So we decided to test this. I had this hypothesis that, actually, the system is probably more intelligent than that. So what did we do? We made what we call Picasso tadpoles. These are tadpoles where everything is scrambled. So the eyes are on the back of the head, the jaws are off to the side, everything is scrambled. Well, guess what: they make pretty normal frogs, because all the different things move around in novel paths and configurations until they get to the correct froggy frog face configuration, and then they stop.

So the thing about that is, now imagine evolution, right? You make some sort of mutation, and, like every mutation, it does many things. So something good comes of it, but it also moves your mouth off to the side, right? Now, if there wasn’t this multi-scale competency, you can see where this is going: the organism would be dead, your fitness is zero, because you can’t eat. And you would never get to explore the other beneficial consequences of that mutation. You’d have to wait until you find some other way of doing it without moving the mouth, and that’s really hard. So the fitness landscape would be incredibly rugged; evolution would take forever. One of the reasons it works so well is because you can do that and, no worries, the mouth will find its way to where it belongs, right?

So now you get to explore. What that means is that all of these mutations that otherwise would be deleterious are now neutral, because the competency of the parts makes up for all kinds of things. So all the noise of development, all the variability in the environment, all these things, the competency of the parts makes up for it. So that’s all fantastic, right? That’s all great.

The only other thing to remember when we compare this to human efforts is this. Every component has its own goals in various spaces, usually with very little regard for the welfare of the other levels. So as a simple example, you know, you as a complex system will go out and do, you know, jiu jitsu or whatever, you’ll have some goal, you have to go rock climbing, and you scrape a bunch of cells off your hands. And then you’re happy as a system, right? You come back, and you’ve accomplished some goals, and you’re really happy. Those cells are dead. They’re gone. Right? Did you think about those cells? Not really, right? You had some, you had some bruising…

{LF} You selfish SOB. 

{ML} That’s it. And so that’s the thing to remember, you know, and we know this from history: just being a collective isn’t enough. Because what the goals of that collective will be relative to the welfare of the individual parts is a massively open question.

{LF} The ends justify the means I’m telling you, Stalin was onto something. 

{ML} No, that’s the danger. 

{LF} Exactly, but that’s the danger. For us humans, we have to construct ethical systems under which we don’t take the full mechanism of biology seriously and apply it to the way the world functions, which is an interesting line we’ve drawn. The world that built us is the one we reject, in some sense,

{ML} Yeah.

{LF} when we construct human societies. The idea that this country was founded on, that all men are created equal, that’s such a fascinating idea. It’s like you’re fighting against nature and saying, well, there’s something bigger here than a hierarchical competency architecture.

{ML} Yeah.

{LF} But there’s so many interesting things you said. So from an algorithmic perspective, the act of bending the option space, that’s really profound. Because if you look at the way AI systems are built today, there’s a big system, like I said with robots, and it has a goal, and it gets better and better at optimizing that goal, at accomplishing that goal. But biology built a hierarchical system where everything is doing computation and everything is accomplishing its goal; not only that, it’s kind of dumb, you know, with the limited, with the bent option space, it’s just doing the thing that’s the easiest thing for it in some sense. And somehow that allows you to have turtles on top of turtles, literally dumb systems on top of dumb systems, that as a whole create something incredibly smart.

{ML} Yeah, I mean, every system has some degree of intelligence in its own problem domain. So, cells will have problems they’re trying to solve in physiological space and transcriptional space. And then I can give you some cool examples of that. 

But the collective is trying to solve problems in anatomical space, right, in forming a, you know, a creature and growing your blood vessels and so on. And then the collective, the whole body, is solving yet other problems; they may be in social space and linguistic space and three dimensional space. And who knows, you know, the group might be solving problems in, I don’t know, some sort of financial space or something.

So one of the major differences with most AIs today is (A) the kind of flatness of the architecture, but also the fact that they’re constructed from the outside; their borders are, you know, given to them. So, to a large extent, and of course there are counterexamples now, but to a large extent, our technology has been such that you create a machine or a robot, it knows what its sensors are, it knows what its effectors are, it knows the boundary between it and the outside world, although this is given from the outside. Biology constructs this from scratch.

Now, the best example of this originally in robotics was actually Josh Bongard’s work in 2006, where he made these robots that did not know their shape to start with. So like a baby, they sort of floundered around, they made some hypotheses: well, I did this and I moved in this way, so maybe I have wheels, or maybe I have six legs, or whatever, right? And they would make a model and eventually would crawl around.

So that’s, I mean, that’s really good. That’s part of the autopoiesis, but we can go a step further. And some people are doing this, and we’re sort of working on some of this too, this idea that, let’s even go back further: you don’t even know what sensors you have, you don’t know where you end and the outside world begins.

All you have is certain things like active inference, meaning you’re trying to minimize surprise, right? You have some metabolic constraints; you don’t have all the energy you need, you don’t have all the time in the world to think about everything you want to think about. So that means that you can’t afford to be a micro-reductionist. All this data coming in, you have to coarse-grain it and say, I’m going to take all this stuff and I’m going to call that a cat. I’m going to take all this and I’m going to call that the edge of the table I don’t want to fall off of. I don’t want to know anything about the micro states; what I want to know is, what is the optimal way to cut up my world? And by the way, this thing over here, that’s me. And the reason that’s me is because I have more control over this than I have over any of this other stuff.

So that’s self construction: figuring out, making models of the outside world, and then turning that inwards and starting to make a model of yourself, right, which immediately starts to get into issues of agency and control.
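{A toy I added to make “that’s me, because I have more control over this than over any of this other stuff” concrete: an agent issues random motor commands and labels as “self” the signals whose changes track its own commands, and as “world” the ones that don’t. The signals, noise levels, and threshold are all invented for illustration, not a model from the interview.}

```python
# Editor-added toy: drawing the self/world boundary by asking which signals
# the agent's own actions reliably control.
import random

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def classify_self(steps: int = 500, threshold: float = 0.5):
    random.seed(0)
    commands, arm, ambient_light = [], [], []
    for _ in range(steps):
        c = random.choice([-1.0, 1.0])                 # motor command the agent issues
        commands.append(c)
        arm.append(c + random.gauss(0, 0.3))           # follows the command: controllable
        ambient_light.append(random.gauss(0, 1.0))     # ignores the command: uncontrollable
    return {
        name: ("self" if abs(correlation(commands, signal)) > threshold else "world")
        for name, signal in (("arm", arm), ("ambient_light", ambient_light))
    }

if __name__ == "__main__":
    print(classify_self())   # expected: {'arm': 'self', 'ambient_light': 'world'}
```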

{43:57 – Free will}

Because in order to… if you are under metabolic constraints, meaning you don’t have all the energy in the world, right, you have to be efficient, that immediately forces you to start telling stories about coarse grained agents that do things. You don’t have the energy to, like Laplace’s demon, calculate every possible state that’s going to happen; you have to coarse-grain, and you have to say, that is the kind of creature that does things, either things that I avoid or things that I will go towards, that’s a mate or food or whatever it’s going to be.

And so right at the base, very simple organisms starting to make models of agents doing things, that is the origin of models of free will, basically, right, because you see the world around you as having agency. And then you turn that on yourself and you say, wait, I have agency too. I do things, right? And then you make decisions about what you’re going to do. So one model is to view all of those kinds of things as being driven by that early need to determine what you are, and then to take actions in the most energetically efficient space possible.

{LF} Right. So free will emerges when you try to simplify, tell a nice narrative about your environment. 

{ML} I think that’s very plausible. Yeah. 

{LF} Do you think free will is an illusion? So you’re kind of implying that it’s a useful hack.

{ML} Well, I’ll say two things. The first thing is, I think it’s very plausible to say that any organism, or any agent, whether it’s biological or not, any agent that self-constructs under energy constraints is going to believe in free will; we’ll get to whether it has free will momentarily. But I think what it definitely drives is a view of yourself and the outside world as agential. I think that’s inescapable.

{LF} So that’s true for even primitive organisms? 

{ML} I think so. Now, obviously, you have to scale down, right? So they don’t have the kinds of complex metacognition that we have, so they can’t do long term planning and thinking about free will and so on and so on. But

{LF}  the sense of agency is really useful to accomplish tasks simple or complicated. 

{ML} That’s right. In all kinds of spaces, not just in obvious three dimensional space. I mean, the thing is, humans are very good at detecting agency of, like, medium sized objects moving at medium speeds in the three dimensional world, right? We see a bowling ball and we see a mouse, and we immediately know what the difference is, right? And how we’re going to

{LF} Mostly things you can eat or get eaten by. 

{ML} Yeah, yeah. That’s our training set, right? From the time you’re little, your training set is visual data on this little chunk of your experience. But imagine if, from the time that we were born, we had innate senses of our blood chemistry. If you could feel your blood chemistry the way you can see, right, you had a high bandwidth connection and you could feel your blood chemistry, and you could sense all the things that your organs were doing, your pancreas, your liver, all the things.

If we had that, we would be very good at detecting intelligence in physiological space. We would know the level of intelligence that our various organs were deploying to deal with things that were coming, to anticipate the stimuli, you know. But we’re just terrible at that. In fact, people don’t even, you know, you talk about intelligence in these {unintelligible} spaces, and a lot of people think that’s just crazy, because all we know is motion.

{LF} We do have access to that information. So it’s actually possible that, so evolution could, if we wanted to, construct an organism that’s able to perceive….

{ML} Most certainly.

{LF} the flow of blood through your body, the way you see an old friend and say, yo, what’s up? How’s the wife and the kids? In that same way, you would see that you would feel like a connection to the liver. 

{ML} Yeah, yeah, I think, you know, 

{LF} maybe other people’s liver and not just your own, because you don’t have access to other people’s liver. 

{ML} Not yet. But you could imagine some really interesting connection, right? 

{LF} Like sexual selection, like, oh, that girl’s got a nice liver. Well, that’s like, the way her blood flows, the dynamics of the blood is very interesting. It’s novel. I’ve never seen one of those. 

{ML} But you know, that’s exactly what we’re trying to half-ass when we judge beauty by facial symmetry and so on. That’s a half-assed assessment of exactly that. Because if your cells could not cooperate enough to keep your organism symmetrical, you can make some inferences about what else is wrong, right? Like, that’s a very basic assessment.

{LF} Interesting. Yeah. So in some deep sense, that actually is what we’re doing. We’re trying to infer how healthy, we use the word healthy, but basically how functional is this biological system I’m looking at, so I can hook up with that one and make offspring?

{ML} Yeah, yeah. Well, what kind of hardware might their genomics give me that might be useful in the future? 

{LF} I wonder why evolution didn’t give us a higher resolution signal. Like why the whole peacock thing with the feathers? It doesn’t seem, it’s a very low bandwidth signal for sexual selection. 

{ML} I’m not an expert on this stuff, on peacocks, but I’ll take a stab at the reason. I think that it’s because it’s an arms race. You see, you don’t want everybody to know everything about you. And in fact, there’s another interesting part of this arms race, which is, if you think about it, the most adaptive, evolvable system is one that has the highest level of top down control, right?

It’s really easy to say to a bunch of cells, make another finger, versus, okay, here are 10,000 gene expression changes that you need to make to change your finger, right? The system with good top down control, a system that has memory, and we need to get back to that, by the way, that’s a question I neglected to answer, about where the memory is and so on, a system that uses all of that is really highly evolvable, and that’s fantastic. But guess what? It’s also highly subject to hijacking by parasites, by cheaters of various kinds, by conspecifics.

Like we found, and this goes back to the story of the pattern memory in these planaria, there’s a bacterium that lives on these planaria. That bacterium has an input into how many heads the worm is going to have, because it hijacks that control system; it’s able to make a chemical that basically interfaces with the system that calculates how many heads you’re supposed to have, and it can make them have two heads. And so you can imagine that you want to be understandable, for your own parts to understand each other, but you don’t want to be too understandable, because you’ll be too easily controllable. And so my guess is that that opposing pressure keeps us from being a super high bandwidth kind of thing where we can just look at somebody and know everything about them.

{LF} So it’s a kind of biological game of Texas hold ’em. You’re showing some cards and you’re hiding other cards, and there’s bluffing and all that. And then there are probably whole species that do way too much bluffing. That’s probably where peacocks fall. There’s a book, I don’t remember if I read it or just summaries of it, but it’s about the evolution of beauty in birds. Where is that from? Is that a book, or does Richard Dawkins talk about it? But basically, some species start to, like, over-select for beauty. Not over-select, they just for some reason select for beauty. There is a case to be made, actually, now I’m starting to remember, I think Darwin himself made the case, that you can select based on beauty alone. There’s a point where beauty doesn’t represent some underlying biological truth. 

{ML} That’s right, that’s right.

{LF} You start to select for beauty itself. And I think the deep question is there some evolutionary value to beauty, but it’s an interesting kind of thought that can we deviate completely from the deep biological truth to actually appreciate some kind of the summarization in itself. 

Let me get back to memory because this is a really interesting idea. How do a collection of cells remember anything? How do biological systems remember anything? How is that akin to the kind of memory we think of humans as having within our big cognitive engine? 

{ML} Yeah. One of the ways to start thinking about bioelectricity is to ask ourselves: where did neurons, and all these cool tricks that the brain uses to run these amazing problem-solving abilities on basically an electrical network, come from? They didn’t just appear out of nowhere. They must have evolved from something. 

And what they evolved from was a much more ancient ability of cells to form networks to solve other kinds of problems. For example, to navigate morphospace, to control the body shape. And so all of the components of neurons, ion channels, neurotransmitter machinery, electrical synapses, all this stuff is way older than brains, way older than neurons, in fact older than multicellularity. It was already there in bacterial biofilms; there’s some beautiful work from UCSD on brain-like dynamics in bacterial biofilms. So evolution figured out very early on that electrical networks are amazing at having memories, at integrating information across distance, at different kinds of optimization tasks, image recognition and so on, long before there were brains. 

{LF} Can you actually just step back? We’ll return to it.

{53:27 – Bioelectricity}

What is bioelectricity? What is biochemistry? What are electrical networks? I think a lot of the biology community focuses on the chemicals as the signaling mechanisms that make the whole thing work. You, I think to a large degree uniquely, maybe you can correct me on that, have focused on bioelectricity, which is using electricity for signaling. There’s also probably mechanical. Sure, sure. Like knocking on the door. So what’s the difference? And what’s an electrical network? 

{ML} Yeah, so I want to make sure and give credit where credit is due. As far back as 1903, and probably the late 1800s already, people were thinking about the importance of electrical phenomena in life. So I’m for sure not the first person to stress the importance of electricity. There were waves of research in the 30s, in the 40s, and then again in the 70s, 80s, and 90s, from the pioneers of bioelectricity, who did some amazing work on all this.

I think what we’ve done that’s new, and I’ll describe what the bioelectricity is, is to step away from the idea that, well, here’s another piece of physics that you need to keep track of to understand physiology and development, and to really start looking at it and saying: no, this is a privileged computational layer that gives you access to the actual cognition of the tissue, to basal cognition. So, merging that developmental biophysics with ideas from cognition and computation and so on, I think that’s what we’ve done that’s new. 

But people have been talking about bioelectricity for a really long time. So I’ll define it. What happens is that if you have a single cell, the cell has a membrane; in that membrane are proteins called ion channels, and those proteins allow charged ions, potassium, sodium, chloride, to go in and out under certain circumstances. And when there’s an imbalance of those ions, there’s a voltage gradient across that membrane. So all living cells try to hold a particular voltage difference across the membrane, and they spend a lot of energy to do so. So that’s a single cell. 

When you have multiple cells, the cells sitting next to each other, they can communicate their voltage state to each other via a number of different ways. But one of them is this thing called a gap junction, which is basically like a little submarine hatch that just kind of docks, right? And the ions from one side can flow to the other side, and vice versa. So… 
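{A minimal sketch I added to make this concrete: two cells each hold a membrane voltage, leak back toward a resting potential, and share charge through a gap junction, so their electrical states pull toward one another. The equations, conductances, and numbers are illustrative assumptions of mine, not parameters from Levin’s work.}

```python
# Toy model (illustrative only): two cells hold membrane voltages, leak toward
# a resting potential, and a gap junction lets charge flow between them, so
# their electrical states converge. Units and constants are arbitrary.

def simulate(v1=-70.0, v2=-20.0, g_gap=0.05, g_leak=0.01,
             v_rest=-70.0, dt=1.0, steps=300):
    """Euler integration of two leaky cells coupled by a gap junction."""
    for _ in range(steps):
        dv1 = g_leak * (v_rest - v1) + g_gap * (v2 - v1)  # leak + junction current
        dv2 = g_leak * (v_rest - v2) + g_gap * (v1 - v2)
        v1 += dt * dv1
        v2 += dt * dv2
    return v1, v2

if __name__ == "__main__":
    v1, v2 = simulate()
    # The coupled cells converge toward a shared voltage state.
    print(f"cell 1: {v1:.1f} mV, cell 2: {v2:.1f} mV")
```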

{LF} Isn’t it incredible that this evolved? Isn’t that wild? Because that didn’t exist. 

{ML} Correct. This had to be, this had to be evolved. 

{LF} It had to be invented. That’s right. Somebody invented electricity in the ocean. When did this get invented? 

{ML} Yeah. So, I mean, it is incredible. The guy who discovered gap junctions, Werner Loewenstein, I visited him. He was really old. 

{LF} A human being? He discovered them. Because who really discovered them lived probably four billion years ago. 

{ML} Good point. So you give credit where credit is due, I’m just saying. He rediscovered gap junctions. But when I visited him in Woods Hole, maybe 20 years ago now, he told me that he was writing, and unfortunately, he passed away, and I think this book never got written. He was writing a book on gap junctions and consciousness. And I think it would have been an incredible book, because gap junctions are magic. I’ll explain why in a minute. 

What happens is that, just imagine, the thing about both these ion channels and these gap junctions is that many of them are themselves voltage sensitive. So that’s a voltage sensitive current conductance. That’s a transistor. And as soon as you’ve invented one, immediately, you now get access to, from this platonic space of mathematical truths, you get access to all of the cool things that transistors do. So now, when you have a network of cells, not only do they talk to each other, but they can send messages to each other, and the differences of voltage can propagate. 

Now, to neuroscientists, this is old hat, because you see this in the brain, right? These action potentials, the electricity. There are these awesome movies where you can take a transparent animal, like a zebrafish, and you can literally look down and see all the firings as the fish is making decisions about what to eat and things like this. It’s amazing. Well, your whole body is doing that all the time, just much slower. 

So there are very few things that neurons do that all the cells in your body don’t do. They all do very similar things, just on a much slower timescale. And whereas your brain is thinking about how to solve problems in three dimensional space, the cells in an embryo are thinking about how to solve problems in anatomical space. They’re trying to have memories like, hey, how many fingers are we supposed to have? Well, how many do we have now? What do we do to get from here to there? That’s the kind of problems they’re thinking about. 

And the reason that gap junctions are magic is, imagine, right, from the earliest time. Here are two cells. How can they communicate? Well, the simple version is this cell could send a chemical signal; it floats over and it hits a receptor on this cell, right? Because it comes from outside, this cell can very easily tell that it came from outside. Whatever information is coming, that’s not my information. That information is coming from the outside. So I can trust it, I can ignore it, I can do various things with it, whatever, but I know it comes from the outside. 

Now imagine instead that you have two cells with a gap junction between them. Something happens, let’s say this cell gets poked, there’s a calcium spike, the calcium spike or whatever small molecule signal propagates through the gap junction to this cell. There’s no ownership metadata on that signal. This cell does not know now that it came from outside because it looks exactly like its own memories would have looked like of whatever had happened, right? 

So gap junctions to some extent wipe ownership information on data, which means that if I can’t, if you and I are sharing memories and we can’t quite tell who the memories belong to, that’s the beginning of a mind meld. That’s the beginning of a scale up of cognition from here’s me and here’s you to no, now there’s just us. 

{LF} So gap junctions enforce a collective intelligence. 

{ML} That’s right. It helps. It’s the beginning. It’s not the whole story by any means, but it’s the start. 

{LF} Where is the state of the system stored? Is it in part in the gap junctions themselves? Is it in the cells? 

{ML} There are many, many layers to this as always in biology. So there are chemical networks. So for example, gene regulatory networks, right? Which, or basically any kind of chemical pathway where different chemicals activate and repress each other, they can store memories. So in a dynamical system sense, they can store memories. They can get into stable states that are hard to pull them out of. So that becomes, once they get in, that’s a memory, a permanent memory or a semi permanent memory of something that’s happened. 

There are cytoskeletal structures that store memories in their physical configuration. 

There are electrical memories, like flip-flops, where nothing physical moves. I show my students this example of a flip-flop. The reason that it stores a zero or a one is not because some piece of the hardware moved. It’s because there’s a cycling of the current in one side of the thing. If I come over and hold the other side at a high voltage for a brief period of time, it flips over, and now it’s here. But none of the hardware moved. The information is in a stable dynamical state. And if you were to x-ray the thing, you couldn’t tell me if it was a zero or a one, because all you would see is where the hardware is. You wouldn’t see the energetic state of the system. So there are bioelectrical states that are held in exactly that way, basically like volatile RAM, in the electrical state. 
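{To illustrate the flip-flop point, here is a toy dynamical system I added: a single state variable with positive feedback has two stable attractors, so it stores a bit without any hardware change, and a brief input pulse flips which attractor it sits in. The equation and numbers are my own illustrative choices, not a model of any real cell or circuit.}

```python
# Toy bistable "flip-flop" (illustrative only): a state variable x with
# positive self-feedback settles into one of two stable attractors near +1
# or -1. A brief input pulse flips which attractor it sits in; the "hardware"
# (the equation itself) never changes.
import math

def step(x, drive=0.0, gain=3.0, dt=0.1):
    # dx/dt = -x + tanh(gain * x) + drive  ->  two stable states when gain > 1
    return x + dt * (-x + math.tanh(gain * x) + drive)

def run(x=1.0, steps=400, pulse=(150, 180), pulse_value=0.0):
    for t in range(steps):
        drive = pulse_value if pulse[0] <= t < pulse[1] else 0.0
        x = step(x, drive)
    return x

if __name__ == "__main__":
    print(round(run(pulse_value=0.0), 2))   # no pulse: stays near +1 ("remembers 1")
    print(round(run(pulse_value=-3.0), 2))  # brief pulse flips it near -1 ("now remembers 0")
```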

{LF} It’s very akin to the different ways that memory is stored in a computer. So there’s RAM, there’s a hard drive. 

{ML} You can make that mapping, right? So I think the interesting thing is that, based on the biology, we can have a more sophisticated view; I think we can revise some of our computer engineering methods, because there are some interesting things in biology that we haven’t done yet. But that mapping is not bad. I mean, I think it works in many ways. 

{LF} Yeah, I wonder because I mean, the way we build computers at the root of computer science is the idea of proof of correctness. We program things to be perfect, reliable. You know, this idea of resilience and robustness to unknown conditions is not as important. So that’s what biology is really good at. So I don’t know what kind of systems. I don’t know how we go from a computer to a biological system in the future. 

{ML} Yeah, I think that, you know, the thing about biology is all about making really important decisions really quickly on very limited information. I mean, that’s what biology is all about. You have to act, you have to act now. The stakes are very high, and you don’t know most of what you need to know to be perfect. And so there’s not even an attempt to be perfect or to get it right in any sense. There are just things like active inference, minimize surprise, optimize some efficiency and some things like this that guides the whole business. 

{LF} I mentioned to you offline that somebody who’s a fan of your work is Andrej Karpathy. And he’s, amongst many things, also someone who occasionally writes a great blog. He came up with this idea, I don’t know if he coined the term, but of software 2.0, where the programming is done in the space of configuring these artificial neural networks. Is there some sense in which that would be the future of programming for us humans, where we’re doing less Python-like programming and more… What would that look like? Basically doing the hyperparameters of something akin to a biological system, watching it go, adjusting it, and creating some kind of feedback loop within the system so it corrects itself, and then we watch it over time accomplish the goals we want it to accomplish. Is that kind of the dream of the dogs that you described in the Nature paper? 

{ML} Yeah. I mean, that’s what you just painted is a very good description of our efforts at regenerative medicine as a kind of somatic psychiatry. So the idea is that you’re not trying to micromanage. I mean, think about the limitations of a lot of the medicines today. We try to interact down at the level of pathways. So we’re trying to micromanage it. What’s the problem? 

Well, one problem is that for almost every medicine other than antibiotics, once you stop it, the problem comes right back. You haven’t fixed anything. You were addressing symptoms. You weren’t actually curing anything, again, except for antibiotics. That’s one problem. 

The other problem is you have a massive amount of side effects because you were trying to interact at the lowest level. It’s like, I’m going to try to program this computer by changing the melting point of copper. Maybe you can do things that way, but my God, it’s hard to program at the hardware level. 

So what I think we’re starting to understand is that, and by the way, this goes back to what you were saying before, about how we could have access to our internal state. So people who practice that kind of stuff, yoga and biofeedback and so on, those are all the people who will uniformly say things like, well, the body has an intelligence and this and that. Those two sets overlap perfectly, because that’s exactly right. Once you start thinking about it that way, you realize that the better locus of control is not always at the lowest level. This is why we don’t all program with a soldering iron. 

We take advantage of the high level intelligences that are there, which means trying to figure out, okay, which of your tissues can learn? What can they learn? Why is it that certain drugs stop working after you take them for a while, with this habituation, right? And so can we understand habituation, sensitization, associative learning, these kinds of things, in chemical pathways? 

We’re going to have a completely different way. I think we’re going to have a completely different way of using drugs and of medicine in general when we start focusing on the goal states and on the intelligence of our subsystems as opposed to treating everything as if the only path was micromanagement from chemistry upwards.

{LF} Well, can you speak to this idea of somatic psychiatry? What are somatic cells? How do they form networks that use bioelectricity to have memory and all those kinds of things? 

{ML} Yeah. 

{LF} What are somatic cells like basics here? 

{ML} Somatic cells just means the cells of your body. Soma just means body, right? So somatic cells are just the… I’m not even specifically making a distinction between somatic cells and stem cells or anything like that. I mean, basically all the cells in your body, not just neurons, but all the cells in your body. They form electrical networks during embryogenesis, during regeneration. 

What those networks are doing in part is processing information about what our current shape is and what the goal shape is. Now, how do I know this? Because I can give you a couple of examples. 

{1:06:44 – Planaria}

One example is when we started studying this, we said, okay, here’s a planarian. A planarian is a flatworm. It has one head and one tail normally. And there are several amazing things about planaria; I think planaria hold the answer to pretty much every deep question of life. For one thing, they’re similar to our ancestors. So they have true symmetry, they have a true brain, they’re not like earthworms; they’re a much more advanced life form. They have lots of different internal organs, but they’re little, about a centimeter or two in size. They have a head and a tail. And the first thing is, planaria are immortal. 

So they do not age. There’s no such thing as an old planarian. So that right there tells you that these theories of thermodynamic limitations on lifespan are wrong. It’s not that over time everything degrades. No, planaria can keep it going for, well, how long have they been around, 400 million years? So the planaria in our lab are actually in physical continuity with planaria that were here 400 million years ago. 

{LF} So there’s planaria that have lived that long essentially. What does it mean: physical continuity? 

{ML} Because what they do is they split in half. The way they reproduce is they split in half. So the planaria, the back end grabs the petri dish, the front end takes off and they rip themselves in half. 

{LF} But isn’t it some sense where like you are a physical continuation? 

{ML} Yes, except that we go through a bottleneck of one cell, which is the egg. They do not. I mean, they can. There’s certain planaria. 

{LF} Got it. So we go through a very ruthless compression process and they don’t. 

{ML} Yes. Like an auto encoder, you know, sort of squashed down to one cell and then back out. These guys just tear themselves in half. 

And so the other amazing thing about them is they regenerate. You can cut them into pieces. The record is, I think, 276 or something like that, by Thomas Hunt Morgan. And each piece regrows a perfect little worm. Every piece knows exactly what’s missing, what needs to happen. In fact, if you chop it in half, as it grows the other half, the original tissue shrinks, so that when the new tiny head shows up, they’re proportional. So it keeps perfect proportion. If you starve them, they shrink. If you feed them again, they expand. Their anatomical control is just insane. 

{LF} Somebody cut them into over 200 pieces? 

{ML} Yeah. Thomas Hunt Morgan did. 

{LF} Hashtag science. Amazing. 

{ML} And maybe more. I mean, they didn’t have antibiotics back then. I bet he lost some due to infection. I bet it’s actually more than that. I bet you could do more than that. 

{LF} Humans can’t do that. Well, yes. I mean, again, true, except that… 

{ML} Maybe you can at the embryonic level. Well, that’s the thing, right? So when I talk about this, I say, just remember that as amazing as it is to grow a whole planarian from a tiny fragment, half of the human population can grow a full body from one cell. So development is really, you can look at development as just an example of regeneration. 

{LF} Yeah. We’ll talk about regenerative medicine, but there’s some sense in which we could be like that worm in, like, 500 years, where I can just go regrow a hand. 

{ML} Yep. With given time, it takes time to grow large things. 

{LF} For now. Yeah, I think so. I think. You can probably… Why not accelerate? Oh, biology takes its time?

{ML} I’m not going to say anything is impossible, but I don’t know of a way to accelerate these processes. I think it’s possible. I think we are going to be regenerative, but I don’t know of a way to make it faster. 

{LF} I can just imagine people a few centuries from now being like, well, they used to have to wait a week for the hand to regrow. It’s like when the microwave was invented. You can toast your… What’s that called when you put cheese on toast? It’s delicious is all I know. I’m blanking. Anyhow. All right. So planaria, why were we talking about the magical planaria that hold the mystery of life? 

{ML} Yeah. So the reason we’re talking about planaria is not only are they immortal, not only do they regenerate every part of the body, they generally don’t get cancer, which we can talk about why that’s important. They’re smart. They can learn things. You can train them. And it turns out that if you train a planaria and then cut their heads off, the tail will regenerate a brand new brain that still remembers the original information. 

{LF} Do they have a biological network going on or no? 

{ML} Yes. 

{LF} So their somatic cells are forming a network. And that’s what you mean by a true brain? What’s the requirement for a true brain? 

{ML} Like everything else, it’s a continuum, but a true brain has certain characteristics as far as the density, like a localized density of neurons that guides behavior. 

{LF} In the head. 

{ML} Exactly. Exactly. If you cut their head off, the tail doesn’t do anything. It just sits there until a new brain regenerates. They have all the same neurotransmitters that you and I have. 

But here’s why we’re talking about them in this context. So here’s your planaria. You cut off the head. You cut off the tail. You have a middle fragment. That middle fragment has to make one head and one tail. How does it know how many of each to make? And where do they go? How come it doesn’t switch? How come, right? 

So we did a very simple thing. And we said, okay, let’s make the hypothesis that there’s a somatic electrical network that remembers the correct pattern, and that what it’s doing is recalling that memory and building to that pattern. 

So what we did was we used a way to visualize electrical activity in these cells, right? It’s a variant of what people use to look for electricity in the brain. And we saw that that fragment has a very particular electrical pattern. You can literally see it, once we developed the technique. It has a very particular electrical pattern that shows you where the head and the tail go, right? You can just see it. 

And then we said, okay, well now let’s test the idea that that’s a memory that actually controls where the head and the tail goes. Let’s change that pattern. So basically, incept the false memory. 

And you can do that in many different ways. One way is with drugs that target ion channels. You pick these drugs and you say, okay, I’m going to do it so that instead of this one-head, one-tail electrical pattern, you have a two-headed pattern, right? You’re just editing the electrical information in the network. When you do that, guess what the cells build? They build a two-headed worm. 

And the coolest thing about it, no genetic changes. So we haven’t touched the genome. The genome is totally wild type. But the amazing thing about it is that when you take these two headed animals and you cut them into pieces again, some of those pieces will continue to make two headed animals.

So that information, that memory, that electrical circuit, not only does it hold the information for how many heads, not only does it use that information to tell the cells what to do to regenerate, but it stores it. Once you’ve reset it, it keeps. And we can go back, we can take a two headed animal and put it back to one headed. 

So now, there are a couple of interesting things here that have implications for understanding what genomes do and things like that. Imagine I take this two-headed animal. Oh, and by the way, when they reproduce, when they tear themselves in half, you still get two-headed animals. So imagine I take them and I throw them in the Charles River over here. A hundred years later, some scientists come along and they scoop up some samples and they go, oh, there’s a single-headed form and a two-headed form. Wow, a speciation event. Cool. Let’s sequence the genome and see what happened. The genomes are identical. There’s nothing wrong with the genome. 

So if you ask the question, and this goes back to your very first question, where do body plans come from, right? How does the planarian know how many heads it’s supposed to have? Now it’s interesting, because you could say DNA, but as it turns out, the DNA produces a piece of hardware that by default says one head, the way that when you turn on a calculator, by default it’s a zero every single time, right? When you turn it on, it just says zero, but it’s a programmable calculator, as it turns out. So once you’ve changed that, the next time it won’t say zero, it’ll say something else. And it’s the same thing here. 

So you can make one-headed, two-headed, you can make no-headed worms. We’ve done some other things along these lines, some other really weird constructs. So again, the hardware-software distinction is really important, because the hardware is essential; without proper hardware, you’re never going to get to the right physiology of having that memory. But once you have it, it doesn’t fully determine what the information is going to be. You can have other information in there, and it’s reprogrammable by us, by bacteria, by various parasites, probably things like that. 
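{My own sketch of the programmable-calculator idea, purely illustrative and not Levin’s actual model: the hardware below never changes, but regeneration reads a rewritable stored setpoint, so editing that setpoint changes what gets built, and the edit persists through further rounds of cutting.}

```python
# Illustrative sketch (mine, not a biological model): the "hardware" (this code)
# never changes, but it reads a rewritable setpoint that says how many heads
# to build. Regeneration rebuilds toward whatever the stored setpoint says.

class Planarian:
    def __init__(self):
        self.head_setpoint = 1   # default pattern memory: one head (the calculator's "zero")
        self.heads = 1

    def rewrite_pattern_memory(self, n_heads):
        # Analogous to editing the bioelectric pattern (e.g., with ion-channel
        # drugs): no change to the "genome" here, only to the stored target state.
        self.head_setpoint = n_heads

    def cut_and_regenerate(self):
        # Each fragment regenerates toward the remembered target, not the default.
        fragment = Planarian()
        fragment.head_setpoint = self.head_setpoint  # the memory persists
        fragment.heads = fragment.head_setpoint
        return fragment

if __name__ == "__main__":
    worm = Planarian()
    worm.rewrite_pattern_memory(2)           # incept the "two-headed" memory
    piece = worm.cut_and_regenerate()
    print(piece.heads)                       # 2: same hardware, different stored pattern
    print(piece.cut_and_regenerate().heads)  # 2 again: the edit keeps propagating
```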

The other amazing thing about these planarias, think about this, most animals, when we get a mutation in our bodies, our children don’t inherit it, right? So you can go on, you could run around for 50, 60 years getting mutations. Your children don’t have those mutations because we go through the egg stage. 

Planaria tear themselves in half, and that’s how they reproduce. So for 400 million years, they keep every mutation that they’ve had that doesn’t kill the cell that it’s in. So when you look at these planaria, their bodies are what’s called mixoploid, meaning that every cell might have a different number of chromosomes. They look like a tumor. The genome is an incredible mess, because they accumulate all this stuff. And yet their body structure, they are the best regenerators on the planet. Their anatomy is rock solid, even though their genome is always full of all kinds of crap. 

So this is kind of a scandal, right? We learn that, well, genomes determine your body. Okay, then why does the animal with the worst genome have the best anatomical control, the most cancer resistance, the most regenerative ability, right? Really, we’re just beginning to understand this relationship between the genomically determined hardware and… and by the way, just as of a couple of months ago, I think I now somewhat understand why this is, but it’s really a major puzzle. 

{LF} I mean, that really throws a wrench into the whole nature versus nurture thing, because you usually associate electricity with the nurture and the hardware with the nature. And there’s just this weird integrated mess that propagates through generations. 

{ML} Yeah. It’s much more fluid. It’s much more complex. You can imagine what’s happening here; just imagine the evolution of an animal like this. This goes back to this multi-scale competency, right? 

Imagine that you have an animal whose tissues have some degree of multi-scale competency. So for example, like we saw in the tadpole, if you put an eye on its tail, it can still see out of that eye, right? There’s incredible plasticity. 

So if you have an animal and it comes up for selection and the fitness is quite good, evolution doesn’t know whether the fitness is good because the genome was awesome or because the genome was kind of junky, but, but the competency made up for it, right? And things kind of ended up good. 

So what that means is that the more competency you have, the harder it is for selection to pick the best genomes; it hides information, right? And so what happens is that all the hard work goes into increasing the competency, because it’s harder and harder to see the genomes. And so I think in planaria, what happened is that there’s this runaway phenomenon where all the effort went into the algorithm. We know you’ve got a crappy genome; we can’t clean up the genome, we can’t keep track of it. So what survives are the algorithms that can create a great worm no matter what the genome is. 

So everything went into the algorithm, which of course then reduces the pressure on keeping a clean genome. Different animals have this to different levels, but this is the idea of putting energy into an algorithm that does not overtrain on priors, right? It can’t assume. I mean, I think biology is this way in general; evolution doesn’t take the past too seriously, because it makes these basically problem-solving machines, as opposed to machines built to deal with exactly what happened last time.
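{A tiny simulation I added to illustrate the argument that competency hides genome quality from selection. The fitness model and numbers are assumptions of mine, not from any published work: competency repairs part of the shortfall caused by a poor genome, so as competency rises, the fitness that selection observes tells it less and less about the underlying genome.}

```python
# Toy illustration (my own assumptions, not a published model): each individual
# has a "genome quality" in [0, 1] and a "competency" that corrects part of the
# anatomical shortfall before selection ever sees it. The higher the competency,
# the less the observed fitness reveals about the underlying genome.
import random

def observed_fitness(genome_quality, competency, noise=0.05):
    # Competency repairs a fraction of the gap to a perfect outcome;
    # selection also sees a bit of environmental noise.
    corrected = genome_quality + competency * (1.0 - genome_quality)
    return corrected + random.gauss(0.0, noise)

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

if __name__ == "__main__":
    random.seed(0)
    genomes = [random.random() for _ in range(2000)]
    for comp in (0.0, 0.5, 0.95):
        fitness = [observed_fitness(g, comp) for g in genomes]
        # As competency rises, selection acting on fitness can no longer
        # "see" which genomes were good: the correlation collapses.
        print(f"competency={comp:.2f}  corr(genome, observed fitness)="
              f"{correlation(genomes, fitness):.2f}")
```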

{LF} Yeah. Problem solving versus memory recall. So a little memory, but a lot of problem solving. 

{ML} I think so. Yeah. In many cases, yeah. 

{LF} Problem solving. I mean, it’s incredible that those kinds of systems are able to be constructed, um, especially how much they contrast with the way we build problem solving systems in the AI world. 

{1:18:33 – Building xenobots}

Back to Xenobots. I’m not sure if we ever described how Xenobots are built, but you have a paper titled Biological Robots: Perspectives on an Emerging Interdisciplinary Field. And in the beginning you mentioned that the word Xenobots is, like, controversial. Do you guys get in trouble for using Xenobots, or what? Do people not like the word Xenobots? Are you trying to be provocative with the word Xenobots versus biological robots? I don’t know. Is there some drama that we should be aware of? 

{ML} Yeah, there’s a little bit of drama. I think the drama is basically related to people having very fixed ideas about what terms mean. And I think in many cases these ideas are completely out of date with where science is now. And for sure they’re out of date with what’s coming; I mean, these concepts are not going to survive the next couple of decades. 

So if you ask a person, including a lot of people in biology, who kind of want to keep a sharp distinction between biologicals and robots, right? What’s a robot? Well, a robot comes out of a factory. It’s made by humans. It is boring, meaning that you can predict everything it’s going to do. It’s made of metal and certain other inorganic materials. Living organisms are magical. They arise, right? And so on. 

So these distinctions, I think, were never good, but they’re going to be completely useless going forward. And so there are a couple of papers, that’s one, and there’s another one that Josh Bongard and I wrote {paper}, where we really attack the terminology. And we say these binary categories are based on very nonessential, kind of surface limitations of technology and imagination that were true before, but they’ve got to go. And so we call them Xenobots, Xeno for Xenopus laevis, the frog that these guys are made of. But we think it’s an example of a biobot technology, because ultimately, once we understand how to communicate and manipulate the inputs to these cells, we will be able to get them to build whatever we want them to build. And that’s robotics, right? It’s the rational construction of machines that have useful purposes. I absolutely think that this is a robotics platform, whereas some biologists don’t.

{LF} But it’s built in a way that, uh, all the different components are doing their own computation. So in a way that we’ve been talking about, so you’re trying to do top down control in that biological system. 

{ML} That’s exactly right. And in the future, all of this will merge together, because of course at some point we’re going to throw in synthetic biology circuits, right? New transcriptional circuits to get them to do new things. Of course we’ll throw some of that in, but we specifically stayed away from all of that, because in the first few papers, and there are some more coming down the pike that I think are going to be pretty dynamite, we want to show what the native cells are made of. 

Because what happens is, if you engineer the heck out of them, right, if we were to put in new transcription factors and some new metabolic machinery and whatever, people will say, well, okay, you engineered this and you made it do whatever, and fine. I wanted to show, and the whole team wanted to show, the plasticity and the intelligence in the biology. What does it do that’s surprising, before you even start manipulating the hardware in that way? 

{LF} Yeah. Don’t try to, uh, over control the thing. Let it flourish. The full beauty of the biological system. Why Xenopus laevis? How do you pronounce it? 

{ML} The frog, Xenopus laevis. Yeah. It’s very popular; it’s been used since, I think, the fifties. It’s just very convenient, because we keep the adults in this very fine frog habitat. They lay eggs, tens of thousands of eggs at a time. The eggs develop right in front of your eyes. It’s the most magical thing you can see, because normally, if you were to deal with mice or rabbits or whatever, you don’t see the early stages, right? Everything’s inside the mother. Here, everything’s in a Petri dish at room temperature. You have an egg, it’s fertilized, and you can just watch it divide and divide and divide, and all the organs form; you can just see it. And at that point, the community has developed lots of different tools for understanding what’s going on, and also for manipulating, right? So people use it for understanding birth defects and neurobiology and cancer immunology. 

{LF} So you get the whole, uh, embryogenesis in the Petri dish. That’s so cool to watch. Is there videos of this? 

{ML} Oh yeah, there are amazing videos online. I mean, mammalian embryos are super cool too. For example, monozygotic twins are what happens when you cut a mammalian embryo in half. You don’t get two half bodies; you get two perfectly normal bodies, because it’s a regeneration event, right? Development is really just a kind of regeneration. 

{LF} And why this particular frog? Is it just because they were using it in the fifties and…

{ML} It breeds well, it’s easy to raise in the laboratory, it’s very prolific, and for decades people have been developing tools for it. Some people use other frogs, but I have to say, and this is important, Xenobots are fundamentally not anything about frogs. I can’t say too much about this because it’s not published and peer reviewed yet, but we’ve made Xenobots out of other things that have nothing to do with frogs. This is not a frog phenomenon. We started with frog because it’s so convenient, but this plasticity is not related to the fact that they’re frogs. 

{LF} What happens when you kiss it? Does it turn into a prince? No. Or a princess? Which way? Uh, prince. Yeah. Prince should be a prince. 

{ML} Yeah. Uh, that’s an experiment that I don’t believe we’ve done. And if we have, I don’t want to collaborate, 

{LF} I can take the lead on that effort. Okay, cool. How do the cells coordinate? Let’s focus in on just the embryogenesis. So there’s one cell, and it divides; doesn’t it have to be very careful about what each cell starts doing once they divide? 

{ML} Yes. 

{LF} And like, when there’s three of them, it’s like the cofounders or whatever, like, well, like slow down, you’re responsible for this. When do they become specialized and how do they coordinate that specialization? 

{ML} So, this is the basic science of developmental biology. There’s a lot known about all of that, but I’ll tell you what I think is kind of the most important part, which is: yes, it’s very important who does what. However, this goes back to this issue of why I made the claim that biology doesn’t take the past too seriously. And what I mean by that is it doesn’t assume that everything is the way it’s expected to be, right? 

{ML} And here’s an example of that. This was an old experiment going back to the forties, but basically, imagine a newt, a salamander, and it’s got these little tubules that go to the kidneys, right? It’s a little tube. Take a cross section of that tube: you see eight to ten cells that have cooperated to make this little tube in cross section, right? 

So one amazing thing you can do is you can mess with very early cell divisions to make the cells gigantic, bigger. You can force them to be different sizes. And if you make the cells different sizes, the whole newt is still the same size. So if you take a cross section through that tubule, instead of eight to ten cells, you might have four or five, or you might have three, until you make the cells so enormous that one single cell wraps around itself and gives you that same large-scale structure with a completely different molecular mechanism. So now, instead of cell-to-cell communication to make a tubule, it’s one cell using the cytoskeleton to bend itself around. 

So think about what that means, talk about top-down control, right? In the service of a large-scale anatomical feature, different molecular mechanisms get called up. So now think about this: you’re a newt cell trying to make an embryo. If you had a fixed idea of who was supposed to do what, you’d be screwed, because now your cells are gigantic; nothing would work. There’s an incredible tolerance for changes in the size of the parts and the amount of DNA in those parts. Life is highly interoperable. You can put electrodes in there, you can put in weird nanomaterials; it still works. This is that problem-solving action, right? It’s able to do what it needs to do even when circumstances change. That is the hallmark of intelligence, right? William James defined intelligence as the ability to get to the same goal by different means. That’s this: you get to the same goal by completely different means.

 And so why am I bringing this up? Just to say that, yeah, it’s important for the cells to do the right stuff, but they have incredible tolerances for things not being what you expect, and they still get their job done. All of these things are not hardwired. 

There are organisms that might be hardwired. For example, the nematode C. elegans: in that organism, every cell is numbered, meaning that every C. elegans has exactly the same number of cells as every other C. elegans. They’re all in the same place, they all divide, there’s literally a map of how it works. That sort of system is much more cookie-cutter, but most organisms are incredibly plastic in that way. 

{LF} Is there something particularly magical to you about the whole developmental biology process? Is there something you could say? Because you just said it, they’re very good at accomplishing the goal of the job they need to do, the competency thing, but you get a fricking organism from one cell. It’s very hard to intuit that whole process, to even think about reverse engineering that process. 

{ML} Right. Very hard, to the point where I sometimes ask my students to do this thought experiment. Imagine you were shrunk down to the scale of a single cell and you were in the middle of an embryo, looking around at what’s going on: the cells running around, some cells dying, and every time you look it’s kind of a different number of cells, for most organisms. And I think that if you didn’t know what embryonic development was, you would have no clue that what you’re seeing is always going to make the same thing. Never mind knowing what that thing is; never mind being able to say, even with full genomic information, what the hell they are building. We have no way to do that. But just even to guess that, wow, the outcome of all this activity is that it’s always going to build the same thing. 

{LF} The imperative to create the final you, as you are now, is there already. So you start from the same embryo, you create a very similar organism. 

{ML} Yeah, except for cases like the Xenobots: when you give them a different environment, they come up with a different way to be adaptive in that environment. But overall, to kind of summarize it, I think what evolution is really good at is creating hardware that has a very stable baseline mode, meaning that left to its own devices, it’s very good at doing the same thing. But it has a bunch of problem-solving capacity, such that if any assumptions don’t hold, if your cells are a weird size, or you get the wrong number of cells, or somebody stuck an electrode halfway through the body, whatever, it will still get most of what it needs to do done. 

{LF} You’ve talked about the magic and the power of biology here. If we look at the human brain, how special is the brain in this context? You’re kind of minimizing the importance of the brain, or lessening it… We think all the special computation happens in the brain and everything else is, like, the help. You’re kind of saying that the whole thing is doing computation. But nevertheless, how special is the human brain in this full context of biology? 

{ML} Yeah, I mean, look, there’s no getting away from the fact that the human brain allows us to do things that we could not do without it. 

{LF} You can say the same thing about the liver… the heart. 

{ML} Yeah, no, this is true. And so, you know, my goal is not… no, you’re right. My goal is…

{LF} You’re just being polite to the brain right now. Like being a politician: listen, everybody has a role. Yeah, it’s a very important role. 

{ML} That’s right. 

{LF} We have to acknowledge the importance of the brain, you know, 

{ML} There are more than enough people who are cheerleading the brain, right? So I don’t feel like… nothing I say is going to reduce people’s excitement about the human brain. And so…

{LF} So you want to emphasize other things, give them credit. 

{ML} I don’t think it gets too much credit; I think other things don’t get enough credit. I think the human brain is incredible and special and all that. I think other things need more credit. And I’m sort of this way about everything: I don’t like binary categories for almost anything. I like a continuum. And the thing about the human brain is that by accepting it as some kind of important category or essential thing, we end up with all kinds of weird pseudo-problems and conundrums. 

So for example, when we talk about ethics and other things like that, there’s this idea that surely, if we look out into the universe, we don’t believe that this human brain is the only way to be sentient, right? Surely we don’t believe it’s the only way to have high level cognition. I just can’t even wrap my mind around the idea that that is the only way to do it. No doubt there are other architectures made of completely different principles that achieve the same thing. 

And once we believe that, then that tells us something important. It tells us that things that are not quite human brains, or chimeras of human brains and other tissue, or other kinds of brains in novel configurations, or things that are sort of brains but not really, or plants or embryos or whatever, might also have important cognitive status. So that’s the only thing.

 I think we have to be really careful about treating the human brain as if it was some kind of like sharp binary category. You know, you are or you aren’t. I don’t believe that exists. 

{LF} So when we look out at all the beautiful variety of human brains, semi-biological architectures out there in the universe, how many intelligent alien civilizations do you think are out there? 

{ML} Boy, I have no expertise in that whatsoever. 

{LF} You haven’t met any? 

{ML} I have met the ones we’ve made. 

{LF} I think that I mean, exactly. In some sense with synthetic biology, are you not creating aliens? 

{ML} I absolutely think so, because look, all of life, all standard model systems, are an N of 1, one course of evolution on Earth, right? And trying to make conclusions about biology from looking at life on Earth is like testing your theory on the same data that generated it. It’s all kind of locked in. So we absolutely have to create novel examples that have no history on Earth. Xenobots have no history of selection to be a good xenobot. The cells have selection for various things, but the xenobot itself never existed before. 

And so we can make chimeras; we make frog-axolotls that are sort of half frog, half axolotl. You can make all sorts of hybrots, right, constructions of living tissue with robots and whatever. We need to be making these things until we find actual aliens, because otherwise we’re just looking at an N of 1 set of examples, all kinds of frozen accidents of evolution and so on. We need to go beyond that to really understand biology. 

{LF} But we’re still even when you do synthetic biology, you’re locked in to the basic components of the way biology is done on this Earth. 

{ML} Yeah, right. 

{LF} And also the basic constraints of the environment; even the artificial environments we construct in the lab are tied to the environment. I mean, okay, what I think is there’s a nearly infinite number of intelligent civilizations, living or dead, out there. If you pick one out of the box, what do you think it would look like? So when you think about synthetic biology, or creating synthetic organisms, how hard is it to create something that’s very different? 

{ML} Yeah, I think it’s very hard to create something that’s very different, right? We are just locked in, both experimentally and in terms of our imagination. It’s very hard. 

{LF} And you also emphasize several times that the idea of shape. 

{ML} Yeah 

{LF} The individual cells get together with other cells, and they’re going to build a shape. So it’s shape and function, but shape is a critical thing. 

{ML} Yeah. So here, I’ll take a stab. I mean, I agree with you. To whatever extent we can say anything, I do think that there’s probably an infinite number of different architectures with interesting cognitive properties out there. 

What can we say about them? I don’t think we can rely on any of the typical stuff, you know, carbon-based, none of that. I think all of that is just us having a lack of imagination. 

But I think the things that are going to be universal, if anything is, are things driven, for example, by resource limitation: the fact that you are fighting a hostile world, and you have to draw a boundary between yourself and the world somewhere; the fact that that boundary is not given to you by anybody, you have to assume it, estimate it yourself; the fact that you have to coarse-grain your experience; and the fact that you’re going to try to minimize surprise. These are the things that I think are fundamental about biology. The facts about the genetic code, or even the fact that we have genes, or the biochemistry of it, I don’t think any of those things are fundamental. It’s going to be a lot more about the information and about the creation of the self. So in my framework, selves are demarcated by the scale of the goals that they can pursue, from little tiny local goals to massive, planetary-scale goals for certain humans, and everything in between. 

So you can draw this cognitive light cone that determines the scale of the goals you could possibly pursue. I think those kinds of frameworks, like that, like active inference and so on, are going to be universally applicable, but none of the other things that are typically discussed. 
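{For reference, I added the standard way "minimize surprise" is formalized in the active inference literature: an agent minimizes a variational free energy F, which upper-bounds surprise, the negative log probability of its observations o, where s are hidden states and q is the agent’s approximate posterior over them:}

```latex
F \;=\; \mathbb{E}_{q(s)}\big[\log q(s) - \log p(o, s)\big]
  \;=\; -\log p(o) \;+\; D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big]
  \;\ge\; -\log p(o)
```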

{LF} Quick pause, do you need a bathroom break? 

{ML} We were just talking about aliens and all that. That’s a funny thing: I don’t know if you’ve seen it, but there’s a kind of debate that goes on about cognition in plants, and what you can say about different kinds of computation and cognition in plants. And I always look at that like: if you’re weirded out by cognition in plants, you’re not ready for exobiology, right? If something that’s that similar, here on Earth, is already freaking you out, then I think there’s going to be all kinds of cognitive life out there that we’re going to have a really hard time recognizing. 

{LF} I think robots will help us….

{ML} yeah 

{LF} ….like expand our mind about cognition. Either that, or work like the xenobots. And maybe it becomes the same thing: when a human engineers the thing, at least in part, and then it’s able to achieve some kind of cognition that’s different from what you’re used to, then you start to understand, like, oh, every living organism is capable of cognition. Oh, I need to kind of broaden my understanding of what cognition is. But do you think plants, like, when you eat them, are they screaming? 

{ML} I don’t know about screaming. I think you have to…. 

{LF} That’s what I think when I eat a salad. 

{ML} Yeah, good. I think you have to scale down the expectations, right? So probably they’re not screaming in the way that we would be screaming. However, there’s plenty of data on plants being able to do anticipation and certain kinds of memory and so on. 

I think, you know, what you just said about robots, I hope you’re right. But there are two ways that people can take that, right? So one way is exactly what you just said, to try to kind of expand their notion of that category. 

The other way people often go is they just sort of define the term so that if it’s not a natural product, it’s just faking, right? It’s not really intelligence if it was made by somebody else, because it’s the same thing as a magic trick: they can see how it’s done, and once you see how it’s done, it’s not as fun anymore. And I think people have a real tendency for that, which I find really strange, in the sense that if somebody said to me, we have this sort of blind hill-climbing search, and then we have a really smart team of engineers, which one do you think is going to produce a system that has good intelligence? I think it’s really weird to say that it can only come from the blind search, right, that it can’t be done by people, who, by the way, can also use evolutionary techniques if they want to, but also rational design. I think it’s really weird to say that real intelligence only comes from natural evolution. So I hope you’re right. I hope people take it the other way. 

{LF} But there’s a nice shortcut. So I work with legged robots a lot now, for my own personal pleasure. Not in that way, internet. So, four legs. And one of the things that changes my experience with the robots a lot is when I can’t understand why it did a certain thing. And there’s a lot of ways to engineer that. Me, the person that created the software that runs it, there’s a lot of ways for me to build that software in such a way that I don’t exactly know why it made a certain basic decision. 

Of course, as an engineer, you can go in and start to look at logs. You can log all kinds of data, sensory data, the decisions it made, all the outputs of the neural networks and so on. But I also try to really experience that surprise, to experience it the way another person would who totally doesn’t know how it’s built. And I think the magic is there in not knowing how it works. I think biology does that for you through the layers of abstraction. 

{ML} Yeah, 

{LF} Because nobody really knows what’s going on inside the biological system. Each component is clueless about the big picture. 

{ML} I think there are actually really cheap systems that can illustrate that kind of thing, like fractals, right? You have a very small, short formula in z, and you look at it and there’s no magic; you’re just going to crank through it, z squared plus c, whatever. But the result of it is this incredibly rich, beautiful image, right? Just like, wow, all of that was in this, like, ten-character-long string. Amazing. 

So the fact that you can know everything there is to know about the details and the process and all the parts, there’s literally no magic of any kind there, and yet the outcome is something that you would never have expected, and it’s incredibly rich and complex and beautiful. So there’s a lot of that. 
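{The "formula in z" here is the Mandelbrot set iteration, z → z² + c. A minimal sketch I added of literally cranking through it; the grid size and iteration cap are arbitrary choices of mine.}

```python
# The whole "incredibly rich image" comes from iterating z -> z*z + c and
# checking whether the orbit escapes. Resolution and iteration count are arbitrary.

def escape_time(c, max_iter=50):
    z = 0j
    for n in range(max_iter):
        z = z * z + c            # the entire "formula in z"
        if abs(z) > 2.0:         # escaped: c is outside the Mandelbrot set
            return n
    return max_iter              # never escaped: treat as inside

if __name__ == "__main__":
    # Coarse ASCII rendering of the set over part of the complex plane.
    for row_index in range(21):
        im = 1.2 - 0.12 * row_index
        row = ""
        for col_index in range(64):
            re = -2.0 + 0.046875 * col_index
            row += "#" if escape_time(complex(re, im)) == 50 else " "
        print(row)
```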

{1:42:08 – Unconventional cognition}

{LF} You write that you work on developing conceptual frameworks for understanding unconventional cognition. So the kind of thing we’ve been talking about, I just like the term unconventional cognition. And you want to figure out how to detect, study and communicate with the thing. You’ve already mentioned a few examples, but what is unconventional cognition? Is it as simply as everything else outside of what we define usually as cognition, cognitive science, the stuff going on between our ears? Or is there some deeper way to get at the fundamentals of what is cognition? 

{ML} Yeah, and I’m certainly not the only person who works on unconventional cognition. 

{LF} So that’s the term used? 

{ML} Yeah, so I’ve coined a number of weird terms, but that’s not one of mine; that’s an existing thing. So for example, somebody like Andy Adamatzky, who I don’t know if you’ve had on, and if you haven’t, you should, he’s a very interesting guy. He’s a computer scientist, and he does unconventional cognition in slime molds, all kinds of weird stuff. He’s a real weird cat, really interesting. Anyway, there are a bunch of terms that I’ve come up with, but that’s not one of mine. 

{ML} So I think, like many terms, that one is really defined by the times, meaning that things that are unconventional cognition today are not going to be considered unconventional cognition at some point. It’s one of those things. And so it’s this really deep question of how do you recognize, communicate with, and classify cognition when you cannot rely on the typical milestones, right? 

So typically, again, if you stick with the history of life on Earth, with these exact model systems, you would say, ah, here’s a particular structure of the brain, and this one has fewer of those, and this one has a bigger frontal cortex, right? These are landmarks that we’re used to, and they allow us to make rapid judgments about things. But if you can’t rely on that, either because you’re looking at a synthetic thing, or an engineered thing, or an alien thing, then what do you do? And so that’s what I’m really interested in. I’m interested in mind in all of its possible implementations, not just the obvious ones that we know from looking at brains here on Earth. 

{LF} Whenever I think about something like unconventional cognition, I think about cellular automata. I’m just captivated by the beauty of the thing. The fact that from simple little objects, you can create such beautiful complexity that very quickly you forget about the individual objects, and you see the things that it creates as their own organisms. That blows my mind every time. Like, honestly, I could full time just eat mushrooms and watch cellular automata. Don’t even have to do mushrooms. Just cellular automata. From the engineering perspective, I love when a very simple system captures something really powerful, because then you can study that system to understand something fundamental about complexity, about life on Earth. Anyway, how do I communicate with a thing? If cellular automata can do cognition, if a plant can do cognition, if a xenobot can do cognition, how do I, like, whisper in its ear and get an answer back? How do I have a conversation? 
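{For readers who have not played with cellular automata: here is a minimal example I added, an elementary one-dimensional automaton in the Wolfram style. Nothing here is specific to the systems discussed in the interview; it just shows how little machinery is needed.}

```python
# Minimal elementary cellular automaton (Wolfram Rule 110): each cell looks only
# at itself and its two neighbors, yet the global pattern is famously rich
# (Rule 110 is even Turing complete).
RULE = 110
rule_bits = [(RULE >> i) & 1 for i in range(8)]  # lookup table for the 8 neighborhoods

width, steps = 79, 40
row = [0] * width
row[width // 2] = 1                      # start from a single live cell

for _ in range(steps):
    print("".join("#" if c else " " for c in row))
    row = [
        rule_bits[(row[(i - 1) % width] << 2) | (row[i] << 1) | row[(i + 1) % width]]
        for i in range(width)
    ]
```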

{ML} Yeah.

{LF} How do I have a xenobot on a podcast? 

{ML} It’s a really interesting line of investigation that opens up. I mean, we’ve thought about this. So you need a few things. 

You need to understand the space in which they live. So not just the physical modality, like can they see light, can they feel vibration? That’s important, of course, because that’s how you deliver your message. But it’s not just the physical medium of communication, it’s also saliency, right? What’s important to this system? And systems have all kinds of different levels of sophistication in terms of what you could expect to get back. And I think what’s really important, I call this the spectrum of persuadability, which is this idea that when you’re looking at a system, you can’t assume where on the spectrum it is. You have to do experiments. 

And so for example, if you look at a gene regulatory network, which is just a bunch of nodes that turn each other on and off at various rates, you might look at that and you say, well, there’s no magic here. I mean, clearly this thing is as deterministic as it gets. It’s a piece of hardware. The only way we’re going to be able to control it is by rewiring it, which is the way molecular biology works, right? We can add nodes, remove nodes, whatever. 

Well, we’ve done simulations, and now we’re doing this in the lab, showing that biological networks like that have associative memory. So they can actually learn, they can learn from experience. They have habituation, they have sensitization, they have associative memory, which you wouldn’t have known if you assumed that they have to be on the left side of that spectrum. 
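{To make concrete what kind of object a gene regulatory network is, here is a toy three-node network I added: nodes that turn each other up and down at various rates, driven for a while by a transient stimulus. This is only a generic sketch with invented parameters, not the networks or the training protocols from Levin’s papers.}

```python
# Toy gene regulatory network: three genes whose expression levels turn each other
# up or down at various rates. Generic illustration only; all parameters invented.
import math

# W[i][j] = effect of gene j on gene i (positive = activation, negative = repression)
W = [[0.0, 2.0, -1.5],
     [-1.0, 0.0, 2.0],
     [1.5, -2.0, 0.0]]
decay = 1.0
dt = 0.05

def step(x, stimulus=(0.0, 0.0, 0.0)):
    """One Euler step of dx_i/dt = sigmoid(sum_j W_ij x_j + s_i) - decay * x_i."""
    new = []
    for i in range(3):
        drive = sum(W[i][j] * x[j] for j in range(3)) + stimulus[i]
        act = 1.0 / (1.0 + math.exp(-drive))
        new.append(x[i] + dt * (act - decay * x[i]))
    return new

x = [0.1, 0.1, 0.1]
for t in range(400):
    # transient "training" input to gene 0 between t=100 and t=200
    s = (3.0, 0.0, 0.0) if 100 <= t < 200 else (0.0, 0.0, 0.0)
    x = step(x, s)
print("final expression levels:", [round(v, 3) for v in x])
```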

So when you’re going to communicate with something, Charles Abramson and I have even written a paper on behaviorist approaches to synthetic organisms, meaning that if you’re given something and you have no idea what it is or what it can do, how do you figure out what its psychology is, what its level is, what it can do? And so we literally lay out a set of protocols, starting with the simplest things and then moving up to more complex things, where you make no assumptions about what this thing can do. You have to start and you’ll find out. 

So here’s one way to communicate with something: if you can train it, that’s a way of communicating. If you can figure out what the currency of reward, of positive and negative reinforcement, is, and you can get it to do something it wasn’t doing before based on experiences you’ve given it, you have taught it one thing. You have communicated one thing: that such and such an action is good, and some other action is not good. That’s like a basic, primitive atom of communication. 

{LF} What about in some sense, if it gets you to do something you haven’t done before, is it answering back? 

{ML} Yeah, most certainly. I’ve seen cartoons, I think maybe Gary Larson or somebody had a cartoon of these rats in the maze, and the one rat says to the other, “Look at this: every time I walk over here, he starts scribbling on that clipboard he has.” It’s awesome. 

{LF} If we step outside ourselves and really measure how much, like if I actually measure how much I’ve changed because of my interaction with certain cellular automata, you really have to take into consideration that these things are changing you too. Yes, I know how it works and so on, but you’re being changed by the thing. 

{ML} Yeah, absolutely. I think I read, I don’t know any details, but I think I read something about how wheat and other crops have domesticated humans, in the sense that their properties changed human behavior and societal structures. 

{LF} In that sense, cats are running the world because they took over. So first, while not giving a shit about humans, clearly, with every ounce of their being, they’ve somehow got millions and millions of humans to take them home and feed them. And it’s not only the physical space they took over; they took over the digital space. They dominate the internet in terms of cuteness, in terms of memeability. They got themselves literally inside the memes; they become viral and spread on the internet. And they’re the ones that are probably controlling humans. That’s my theory. That’s a follow-up paper after the frog kissing one. 

Okay. I mean, you mentioned sentience and consciousness. You have a paper titled Generalizing Frameworks for Sentience Beyond Natural Species. So beyond normal cognition, if we look at sentience and consciousness, and I wonder if you draw an interesting distinction between those two, elsewhere, outside of humans, and maybe outside of Earth. Do you think aliens have sentience? And if they do, how do we think about it? So when you have this framework, what is this paper? What is the way you propose to think about sentience? 

{ML} Yeah, that particular paper was a very short commentary on another paper that was written about crabs. It was a really good paper laying out a rubric of different types of behaviors that could be applied to different creatures, and they were trying to apply it to crabs and so on. Consciousness we can talk about if you want, but it’s a whole separate kettle of fish. I almost never talk about…

{LF} Except in crabs. 

{ML} In this case, yes. I almost never talk about consciousness per se. I’ve said very, very little about it, but we can talk about it if you want. Mostly what I talk about is cognition, because I think that’s much easier to deal with in a rigorous experimental way. I think all of these terms, sentience and so on, have different definitions, and fundamentally, as long as people specify what they mean ahead of time, I think they can define them in various ways. The only thing that I really insist on is that the right way to think about all this stuff is from an engineering perspective: does it help me to control, predict, and does it help me do my next experiment? That’s not a universal perspective. 

Some people have philosophical underpinnings, and those are primary, and if anything runs against them, then it must automatically be wrong. Some people will say, I don’t care about anything else; if your theory says to me that thermostats have little tiny goals, I’m out, that’s it. That’s my philosophical preconception: thermostats do not have goals, and that’s it. That’s one way of doing it, and some people do it that way. I do not do it that way, and I don’t think we can know much of anything from the philosophical armchair. I think that all of these theories and ways of doing things stand or fall based on basically one set of criteria: does it help you run a rich research program? That’s it. 

{LF} I agree with you totally, but forget philosophy. What about the poetry of ambiguity? What about, at the limits of the things you can engineer, using terms that can be defined in multiple ways and living within that uncertainty in order to play with words until something lands that you can engineer? That’s, to me, where consciousness sits currently. Nobody really understands the hard problem of consciousness, the subjective experience, what it feels like, because it really does feel like something to be this biological system. This conglomerate of a bunch of cells in this hierarchy of competencies feels like something, and I feel like one thing. Is that just a side effect of a complex system, or is there something more that humans have, or is there something more that any biological system has? Some kind of magic, not just a sense of agency, but a real Sense, with a capital S, of agency. 

{ML} Yeah. Ah, boy, yeah, that’s a deep question.

{LF}  Is there room for poetry in engineering or no? 

{ML} No, there definitely is, and a lot of the poetry comes in when we realize that none of the categories we deal with are as sharp as we think they are, right? And the different areas of all these spectra are where a lot of the poetry sits. I have many new theories about things, but I, in fact, do not have a good theory of consciousness that I plan to trot out. 

{LF} And you almost don’t see it as useful for your current work to think about consciousness? 

{ML} I think it will come. I have some thoughts about it, but I don’t feel like they’re going to move the needle yet on that. 

{LF} And you want to ground it in engineering always. 

{ML} So, well, I mean, so if we really tackle consciousness per se, in the terms of the hard problem, that isn’t necessarily going to be groundable in engineering, right? That aspect of cognition is, but actual consciousness per se, first person perspective, I’m not sure that that’s groundable in engineering. 

And I think specifically what’s different about it is there’s a couple of things. So let’s, you know, here we go. I’ll say a couple of things about consciousness. One thing is that what makes it different is that for every other thing, other aspect of science, when we think about having a correct or a good theory of it, we have some idea of what format that theory makes predictions in. So whether those be numbers or whatever, we have some idea. We may not know the answer, we may not have the theory, but we know that when we get the theory, here’s what it’s going to output, and then we’ll know if it’s right or wrong. 

For actual consciousness, not behavior, not neural correlates, but actual first person consciousness. If we had a correct theory of consciousness, or even a good one, what the hell would, what format would it make predictions in, right? Because all the things that we know about basically boil down to observable behaviors. 

So the only thing I can think of when I think about that is that it’ll be poetry, or it’ll be something like this: if I ask you, okay, you’ve got a great theory of consciousness, and here’s this creature, maybe it’s a natural one, maybe it’s an engineered one, whatever, and I want you to tell me what your theory says about this being, what it’s like to be this being. The only thing I can imagine you giving me is some piece of art, a poem or something, that once I’ve taken it in, I now share a similar state to whatever it’s like to be that being. That’s about as good as I can come up with. 

{LF} Well, it’s possible that once you have a good understanding of consciousness, it would be mapped to some things that are more measurable. So for example, it’s possible that a conscious being is one that’s able to suffer. So you start to look at pain and suffering. You can start to connect it closer to things that you can measure that, in terms of how they reflect themselves in behavior and problem solving and creation and attainment of goals, for example, which I think suffering is one of the, you know, life is suffering. It’s one of the big aspects of the human condition. And so if consciousness is somehow a, maybe at least a catalyst for suffering, you could start to get like echoes of it. 

You start to see, like, the actual effects of consciousness in behavior. That it’s not just about subjective experience; it’s really deeply integrated in the problem solving and decision making of a system, something like this. But also, it’s possible that we realize, and this is not a philosophical statement, philosophers can write their books, I welcome it. You know, I take the Turing test really seriously. I don’t know why people don’t like it. When a robot convinces you that it’s intelligent, I think that’s a really incredible accomplishment. And there’s some deep sense in which that is intelligence. If it looks like it’s intelligent, it is intelligent. And I think there’s some deep sense in which a system that appears to be conscious is conscious. At least for me, we have to consider that possibility. And a system that appears to be conscious is an engineering challenge. 

{ML} Yeah, I don’t disagree with any of that. I mean, especially intelligence, I think, is a publicly observable thing. Science fiction has dealt with this for a century or much more, maybe. This idea that when you are confronted with something that just doesn’t meet any of your typical assumptions, so you can’t look in the skull and say, oh, well, there’s that frontal cortex, so then I guess we’re good. So this thing lands on your front lawn, and the little door opens, and something trundles out, and it’s shiny and aluminum looking, and it hands you this poem that it wrote while it was flying over, and how happy it is to meet you. What’s going to be your criteria for whether you get to take it apart and see what makes it tick, or whether you have to be nice to it and whatever? All the criteria that we have now and that people are using, and as you said, a lot of people are down on the Turing test and things like this, but what else have we got? Because measuring the cortex size isn’t going to cut it in the broader scheme of things. So I think it’s a wide open problem. 

Our solution to the problem of other minds, it’s very simplistic. We give each other credit for having minds just because we’re sort of on an anatomical level, we’re pretty similar, and so it’s good enough. But how far is that going to go? So I think that’s really primitive. So yeah, I think it’s a major unsolved problem.

{LF}  It’s a really challenging direction of thought to the human race that you talked about, like embodied minds. If you start to think that other things other than humans have minds, that’s really challenging. Because all men are created equal starts being like, all right, well, we should probably treat not just cows with respect, but like plants, and not just plants, but some kind of organized conglomerates of cells in a petri dish. 

{LF} In fact, with some of the work we’re doing, that you’re doing and the whole community of science is doing with biology, people might be like, we were really mean to viruses. 

{ML} Yeah. I mean, the thing is, you’re right. And I certainly get phone calls from people complaining about frog skin and so on. But I think we have to separate the deep philosophical aspects from what actually happens. 

So what actually happens on Earth is that people with exactly the same anatomical structure kill each other on a daily basis. So I think it’s clear that simply knowing that something else is equally or maybe more cognitive or conscious than you are is not a guarantee of kind behavior; that much we know. And then we look at commercial farming of mammals and various other things. So I think on a practical basis, long before we get to worrying about things like frog skin, we have to ask ourselves what we can do about the way that we’ve been behaving towards creatures which we know for a fact, because of our similarities, are basically just like us. That’s kind of a whole other social thing. 

But fundamentally, of course, you’re absolutely right in that we are also, think about this, we are on this planet in some way, incredibly lucky. It’s just dumb luck that we really only have one dominant species. It didn’t have to work out that way. So you could easily imagine that there could be a planet somewhere with more than one equally or maybe near equally intelligent species. But they may not look anything like each other. 

So there may be multiple ecosystems where there are things with similar, human-like intelligence. And then you’d have all kinds of issues about how you relate to them when they’re not physically like you at all, but in terms of behavior and culture and whatever, it’s pretty obvious that they’ve got as much on the ball as you have. Or imagine that there was another group of beings that was on average 40 IQ points lower. We’re pretty lucky in many ways, even though we still act badly in many ways. The fact is, all humans are more or less in that same range, but it didn’t have to work out that way. 

{LF} Well, but I think that’s part of the way life works on Earth, maybe the way human civilization works: it seems like we want ourselves to be quite similar. And within that, everybody’s about the same in relative IQ, intelligence, problem-solving capabilities, even physical characteristics. But then we’ll find some aspect of that that’s different. 

And that seems to be like, I mean, it’s really dark to say, but that seems to be the, not even a bug, but like a feature of the early development of human civilization. You pick the other, your tribe versus the other tribe and you war, it’s a kind of evolution in the space of memes, a space of ideas, I think, and you war with each other. So we’re very good at finding the other, even when the characteristics are really the same. And that’s, I don’t know what that, I mean, I’m sure so many of these things echo in the biological world in some way. 

{ML} Yeah. There’s a fun experiment that I did. My son actually came up with this and we did a biology unit together. He’s a homeschooler. And so we did this a couple of years ago. We did this thing where, imagine you get this slime mold, right? Physarum polycephalum, and it grows on a Petri dish of agar and it sort of spreads out and it’s a single cell protist, but it’s like this giant thing. And so you put down a piece of oat and it wants to go get the oat and it sort of grows towards the oat. 

So what you do is you take a razor blade and you just separate the piece of the whole culture that’s growing towards the oat. You just kind of separate it. And so now think about the interesting decision making calculus for that little piece. I can go get the oat and therefore I won’t have to share those nutrients with this giant mass over there. So the nutrients per unit volume is going to be amazing. I should go eat the oat. But if I first rejoin, because Physarum, once you cut it, has the ability to join back up. If I first rejoin, then that whole calculus becomes impossible because there is no more me anymore. There’s just we and then we will go eat this thing, right? 

So it’s interesting: you can imagine a kind of game theory where the number of agents isn’t fixed, and it’s not just cooperate or defect, but also merge and so on, right? 

{LF} Yeah. So that computation, how does it do that decision making? 

{ML} Yeah. So it’s really interesting. And so empirically, what we found is that it tends to merge first. It tends to merge first and then the whole thing goes. But it’s really interesting that that calculus, I mean, I’m not an expert in the economic game theory and all that, but maybe there’s some sort of hyperbolic discounting or something. 

But maybe there’s this idea that the actions you take not only change your payoff, but they change who or what you are, and that you could take an action after which you don’t exist anymore, or you are radically changed, or you are merged with somebody else. That’s a whole different thing. As far as I know, we’re still missing a formalism for even knowing how to model any of that. 
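{A toy payoff calculation I added for the slime-mold scenario, with invented numbers, just to show where the standard cooperate-or-defect framing runs out: once "merge" is an option, the agent that would collect the payoff can stop existing as a separate agent.}

```python
# Toy payoff sketch for the slime-mold fragment. The numbers are invented; the
# real point is the second option, where the chooser stops being a separate agent,
# which classical game theory has no standard slot for.
oat_nutrients = 10.0
fragment_volume = 1.0
colony_volume = 50.0

# Option 1: stay separate and eat the oat alone -> high nutrients per unit volume.
eat_alone = oat_nutrients / fragment_volume

# Option 2: rejoin first, then the merged colony eats the oat -> diluted payoff...
rejoin_then_eat = oat_nutrients / (fragment_volume + colony_volume)

print(f"payoff if it stays separate : {eat_alone:.2f} per unit volume")
print(f"payoff if it rejoins first  : {rejoin_then_eat:.2f} per unit volume")
# ...but payoff to whom? After merging there is no longer a separate agent to
# collect it, which is exactly the missing formalism Levin points at.
```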

{2:06:39 – Origin of evolution}

{LF} Do you see evolution, by the way, as a process that applies here on Earth? Where did evolution come from? Yeah. So this thing from the very origin of life that took us to today, what the heck is that? 

{ML} I think evolution is inevitable. Basically, I think one of the most useful things that was done in early computing, I guess in the 60s, when evolutionary computation started, was just showing how simple it is: if you have heredity, imperfect heredity, and competition, or selection, those three things, that’s it. Now you’re off to the races. And it’s not just on Earth, because it can be done in the computer, it can be done in chemical systems, and Lee Smolin says it works on cosmic scales. So I think that kind of thing is incredibly pervasive and general. It’s a general feature of life. 
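{A minimal sketch of those three ingredients, heredity, imperfect heredity, and selection, in the spirit of early evolutionary computation. The match-a-target-bit-string fitness function is my own arbitrary choice for illustration, not anything from the interview.}

```python
# Minimal evolutionary loop: heredity (copying), imperfect heredity (mutation),
# and selection (keep the better scorers). The target-string fitness function is
# just a toy problem for illustration.
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]
POP, MUT_RATE = 30, 0.05

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    # heredity with occasional copying errors
    return [1 - g if random.random() < MUT_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)        # selection pressure
    if fitness(population[0]) == len(TARGET):
        print(f"target reached at generation {generation}")
        break
    survivors = population[: POP // 2]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(POP - len(survivors))]
```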

It’s interesting to think about, you know, the standard thought about this is that it’s blind, right? Meaning that the intelligence of the process is zero, it’s stumbling around. And I think that back in the day, when the options were it’s dumb like machines, or it’s smart like humans, then of course, the scientists went in this direction, because nobody wanted creationism. They said, okay, it’s got to be like completely blind. I’m not actually sure, right? Because I think that everything is a continuum. And I think that it doesn’t have to be smart with foresight like us, but it doesn’t have to be completely blind either. I think there may be aspects of it. And in particular, this kind of multi-scale competency might give it a little bit of look ahead maybe or a little bit of problem solving sort of baked in. But that’s going to be completely different in different systems. I do think it’s general. I don’t think it’s just on Earth. I think it’s a very fundamental thing. 

{LF} And it does seem to have a kind of direction that it’s taking us in, that perhaps is defined by the environment itself. It feels like we’re headed towards something. Like we’re playing out a script, just like a single cell defines the entire organism.

{ML}  Yeah.

{LF}  It feels like from the origin of Earth itself, it’s playing out a kind of script. You can’t really go any other way. 

{ML} I mean, this is very controversial, and I don’t know the answer. But people have argued about this; it’s called rewinding the tape of life, right? And some people have argued, I think Conway Morris maybe, that there’s a deep attractor, for example, to the human kind of structure, and that if you were to rewind it again, you’d basically get more or less the same thing. And then other people have argued that, no, it’s incredibly sensitive to frozen accidents, and once certain stochastic decisions are made, downstream everything is going to be different. I don’t know. 

You know, we’re very bad at predicting attractors in the space of complex systems, generally speaking. We don’t know. So maybe evolution on Earth has these deep attractors such that, no matter what happened, it would pretty likely end up there, or maybe not. I don’t know. 

{LF} It’s a really difficult idea to imagine that if you ran Earth a million times, 500,000 times you would get Hitler? Like, yeah, we don’t like to think like that. We think like, because at least maybe in America, you’d like to think that individual decisions can change the world. And if individual decisions could change the world, then surely any perturbation could result in a totally different trajectory. But maybe there’s a, in this competency hierarchy, it’s a self-correcting system. There’s just ultimately, there’s a bunch of chaos that ultimately is leading towards something like a super intelligent, artificial intelligence system that answers 42. I mean, there might be a kind of imperative for life that it’s headed to. And we’re too focused on our day to day life of getting coffee and snacks and having sex and getting a promotion at work, not to see the big imperative of life on Earth that is headed towards something. 

{ML} Yeah, maybe, maybe. It’s difficult. I think one of the things that’s important about chimeric bioengineering technologies and all of those things is that we have to start developing a better science of predicting the cognitive goals of composite systems. We’re just not very good at it, right? 

We don’t know if I create a composite system, and this could be the Internet of Things or swarm robotics or a cellular swarm or whatever. What is the emergent intelligence of this thing? First of all, what level is it going to be at? And if it has goal directed capacity, what are the goals going to be? Like, we are just not very good at predicting that yet. And I think that it’s an existential level need for us to be able to because we’re building these things all the time, right? We’re building both physical structures like swarm robotics, and we’re building social financial structures and so on, with very little ability to predict what sort of autonomous goals that system is going to have, of which we are now cogs. And so learning to predict and control those things is going to be critical. 

So in fact, if you’re right and there is some kind of attractor to evolution, it would be nice to know what that is and then to make a rational decision of whether we’re going to go along or we’re going to pop out of it or try to pop out of it because there’s no guarantee. I mean, that’s the other kind of important thing. A lot of people, I get a lot of complaints from people who email me and say, you know, what you’re doing, it isn’t natural. And I’ll say, look, natural, that’d be nice if somebody was making sure that natural was matched up to our values, but no one’s doing that. Evolution optimizes for biomass. That’s it. Nobody’s optimizing. It’s not optimizing for your happiness. I don’t think necessarily it’s optimizing for intelligence or fairness or any of that stuff. 

{LF} I’m going to find that person that emailed you, beat them up, take their place, steal everything they own and say, no, this is natural. This is natural. 

{ML} Yeah, exactly. Because it comes from an old worldview where you could assume that whatever is natural, that that’s probably for the best. And I think we’re long out of that garden of Eden kind of view. So I think we can do better. I think we, and we have to, right? Natural just isn’t great for a lot of life forms. 

{2:13:41 – Synthetic organisms}

{LF} What are some cool synthetic organisms that you think about, you dream about? When you think about embodied mind, what do you imagine? What do you hope to build? 

{ML} Yeah, on a practical level, what I really hope to do is to gain enough of an understanding of the embodied intelligence of the organs and tissues such that we can achieve a radically different regenerative medicine. I think about it in terms of, okay, what’s the end game for this whole thing? 

To me, the end game is something that you would call an anatomical compiler. So the idea is you would sit down in front of the computer and you would draw the body or the organ that you wanted. Not molecular details, but, yeah, this is what I want: I want a six-legged frog with a propeller on top, or I want a heart that looks like this, or I want a leg that looks like this. And what it would do, if we knew what we were doing, is convert that anatomical description into a set of stimuli that would have to be given to cells to convince them to build exactly that thing. I probably won’t live to see it, but I think it’s achievable. 

And I think with that, if we can have that, then that is basically the solution to all of medicine except for infectious disease. Birth defects, traumatic injury, cancer, aging, degenerative disease: if we knew how to tell cells what to build, all of those things go away. And the positive feedback spiral of economic costs, where all of the advances are increasingly heroic and expensive interventions on a sinking ship when you’re 90 and so on, all of that goes away, because instead of trying to fix you up as you degrade, you progressively regenerate; you apply the regenerative medicine early, before things degrade. So I think that will have massive economic impact compared to what we’re trying to do now, which is not at all sustainable. And that’s what I hope. I hope that we get it. 

So to me, yes, the xenobots will be doing useful things, cleaning up the environment, cleaning out your joints and all that kind of stuff. But more important than that, I think we can use these synthetic systems to try to develop a science of detecting and manipulating the goals of collective intelligences of cells specifically for regenerative medicine. 

And then sort of beyond that, if we think further beyond that, what I hope is that kind of like what you said, all of this drives a reconsideration of how we formulate ethical norms because this old school, so in the olden days, what you could do is if you were confronted with something, you could tap on it, right? And if you heard a metallic clanging sound, you’d say, ah, fine, right? So you could conclude it was made in a factory. I can take it apart. I can do whatever, right? If you did that and you got sort of a squishy kind of warm sensation, you’d say, ah, I need to be more or less nice to it and whatever. That’s not going to be feasible. It was never really feasible, but it was good enough because we didn’t have any, we didn’t know any better. That needs to go. And I think that by breaking down those artificial barriers, someday we can try to build a system of ethical norms that does not rely on these completely contingent facts of our earthly history, but on something much, much deeper that really takes agency and the capacity to suffer and all that takes that seriously. 

{LF} The capacity to suffer, and the deep questions I would ask of a system is, can I eat it and can I have sex with it? Which are the two fundamental tests of, again, the human condition. So I can basically do what DALL-E does, but in the physical space. So, like, 3D print Pepe the Frog with a propeller hat; that’s the dream. 

{ML} Well, yes and no. I mean, I want to get away from the 3D printing thing, because that will be available for some things much earlier. We can already do bladders and ears and things like that, because it’s micro-level control, right? When you 3D print, you are in charge of where every cell goes. And for some things, they had that, I think, 20 years ago or maybe earlier than that; you could do that. 

{LF} So yeah, I would like to emphasize the DALL-E part, where you provide a few words and it generates a painting. So here you say, I want a frog with these features, and then it would go direct a complex biological system to construct something like that. 

{ML} Yeah. The main magic would be, I mean, from looking at DALL-E and so on, it looks like the first part is kind of solved now, where you go from the words to the image; that seems more or less solved. The next step is really hard. This is what limits things like CRISPR and genomic editing and so on; that’s what limits all the impact for regenerative medicine. Because going back from, okay, this is the knee joint that I want, or this is the eye that I want, to what genes do I edit to make that happen, going back in that direction is really hard. 

So instead of that, it’s going to be: okay, I understand how to motivate cells to build particular structures. Can I rewrite the memory of what they think they’re supposed to be building, such that I can then take my hands off the wheel and let them do their thing? 

{LF} So some of that is experiment, but some of that, maybe AI can help with too. Just like with protein folding, this is exactly the problem that protein folding, in the simplest medium, has tried to solve with AlphaFold: how does the sequence of letters result in this three-dimensional shape? Although, I guess it didn’t solve the reverse: if you say, I want this shape, how do I then get the sequence of letters? The reverse engineering step is really tricky.

{ML} It is. And what we’re doing some of now is using AI to try and build actionable models of the intelligence of the cellular collectives, to help us gain models of them, and we’ve had some success in this. We did something like this for repairing birth defects of the brain in frogs. We’ve done some of this for normalizing melanoma, where you can really start to use AI to make models of how I would impact this thing if I wanted to, given all the complexities and given all the controls that it knows how to do. 

{2:20:27 – Regenerative medicine}

{LF} So when you say regenerative medicine, so we talked about creating biological organisms, but if you regrow a hand, that information is already there, right? The biological system has that information. So how does regenerative medicine work today? How do you hope it works? What’s the hope there? 

{ML} Yeah. 

{LF} Yeah. How do you make it happen? 

{ML} Well, today there’s a set of popular approaches. One is 3D printing. The idea is, I’m going to make a scaffold of the thing that I want, I’m going to seed it with cells, and then there it is, right? So it’s kind of direct, and that works for certain things. You can make a bladder that way, or an ear, something like that. 

The other idea is some sort of stem cell transplant. These are the ideas: if we put in stem cells with appropriate factors, we can get them to generate certain kinds of neurons for certain diseases and so on. All of those things are good for relatively simple structures, but when you want an eye or a hand or something else, I think, and this is maybe an unpopular opinion, the only hope we have in any reasonable kind of timeframe is to understand how the thing was motivated to get made in the first place. So what is it that made those cells, in the beginning, create a particular arm with a particular set of sizes and shapes and number of fingers and all that? And why is it that a salamander can keep losing theirs and keep regrowing theirs, and a planarian can do the same even more? 

So to me, the kind of ultimate regenerative medicine is when you can tell the cells to build whatever it is you need them to build, so that we can all be like planaria, basically.

{LF} Do you have to start at the very beginning, or can you do a shortcut? Because if we’re regrowing a hand, you’ve already got the whole organism. 

{ML} Yeah. So here’s what we’ve done. We’ve more or less solved that in frogs. Frogs, unlike salamanders, do not regenerate their legs as adults. And we’ve shown that with a kind of simple intervention. There are two things you need: you need a signal that tells the cells what to do, and then you need some way of delivering it. This is work together with David Kaplan, and I should make a disclosure here: we have a spin-off company called Morphoceuticals where we’re trying to address limb regeneration. 

So we’ve solved it in the frog and we’re now in trials in mice. We’re in mammals now. I can’t say anything about how it’s going, but the frog thing is solved. 

{LF} After which you have a little frog Luke Skywalker with an ever-growing hand. 

{ML} Yeah, basically. So we did it with legs instead of forearms. What you do after amputation, when normally they don’t regenerate, is put on a wearable bioreactor. It’s this thing that goes on the limb; Dave Kaplan’s lab makes these things, and inside it’s a very controlled environment: a silk gel that carries some drugs, for example ion channel drugs. And what you’re doing is saying to the cells, you should regrow what normally goes here. 

That whole thing is on for 24 hours, then you take it off and you don’t touch the leg again. This is really important, because what we’re not looking for is micromanagement, printing or controlling the cells; we want to trigger. We want to interact with it early on and then not touch it again, because we don’t know how to make a frog leg, but the frog knows how to make a frog leg. So 24 hours, then 18 months of leg growth after that, without us touching it again. And after 18 months, you get a pretty good leg. That kind of shows this proof of concept: early on, right after injury, when the cells are first making a decision about what they’re going to do, you can impact them. And once they’ve decided to make a leg, they don’t need you after that. They can do their own thing. So that’s an approach that we’re now taking. 

{2:24:13 – Cancer suppression}

{LF} What about cancer suppression? That’s something you mentioned earlier. How can all of these ideas help with cancer suppression? 

{ML} So let’s go back to the beginning and ask what cancer is. I think asking why there’s cancer is the wrong question. I think the right question is, why is there ever anything but cancer? 

In the normal state, you have a bunch of cells that are all cooperating towards a large-scale goal. If that process of cooperation breaks down and you’ve got a cell that is isolated from the electrical network that lets you remember what the big goal is, you revert back to your unicellular lifestyle. Now think about that border between self and world, right? Normally, when all these cells are connected by gap junctions into an electrical network, they are all one self, meaning that they have these large tissue-level goals and so on. As soon as a cell is disconnected from that, the self is tiny, right? 

And so at that point, a lot of people model cancer cells as being more selfish and all that. They’re not more selfish; they’re equally selfish. It’s just that their self is smaller. Normally the self is huge; now they’ve got tiny little selves. Now, what are the goals of tiny little selves? Well, proliferate and migrate to wherever life is good. And that’s proliferation and metastasis. 

So one thing we found, and people noticed years ago, is that when cells convert to cancer, the first thing you see is that they close the gap junctions. And I think it’s a lot like that experiment with the slime mold: until you close that gap junction, you can’t even entertain the idea of leaving the collective, because there is no you at that point, right? Your mind is melded with this whole other network. But as soon as the gap junction is closed, the boundary between you and the rest of the body changes; the rest of the body is just outside environment to you. You’re just a unicellular organism, and the rest of the body is environment. 

So we studied this process and we worked out a way to artificially control the bioelectric state of these cells to physically force them to remain in that network. And what that means is that nasty mutations like KRAS, these really tough oncogenic mutations that cause tumors: if you introduce them but then artificially control the bioelectrics, you greatly reduce tumorigenesis, or you normalize cells that had already begun to convert. They basically go back to being normal cells. And so this is another way, much like with the planaria, in which the bioelectric state kind of dominates what the genetic state is. If you sequence the nucleic acid, you’ll see the KRAS mutation and you’ll say, ah, that’s going to be a tumor, but there isn’t a tumor, because bioelectrically you’ve kept the cells connected and they’re just working on making nice skin and kidneys and whatever else. So we’ve started moving that to human glioblastoma cells, and we’re hoping in the future for interaction with patients. 
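{A cartoon I added of the "size of the self" idea: treat open gap junctions as edges in a graph, and call each connected component one self. A cell that closes its junctions becomes a self of size one. This is purely a conceptual illustration, not a model of the bioelectric experiments described above.}

```python
# Cartoon of "the self is smaller": cells joined by open gap junctions form one
# connected component (one large "self"); a cell that closes its junctions becomes
# a component of size one. Conceptual only, not a model of the real experiments.
from collections import defaultdict

def components(cells, junctions):
    graph = defaultdict(set)
    for a, b in junctions:
        graph[a].add(b)
        graph[b].add(a)
    seen, comps = set(), []
    for c in cells:
        if c in seen:
            continue
        stack, comp = [c], set()
        while stack:
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n)
            comp.add(n)
            stack.extend(graph[n] - seen)
        comps.append(comp)
    return comps

cells = ["c1", "c2", "c3", "c4", "c5"]
open_junctions = {("c1", "c2"), ("c2", "c3"), ("c3", "c4"), ("c4", "c5")}
print("connected selves:", sorted(len(c) for c in components(cells, open_junctions)))  # [5]

# "c3" closes its gap junctions (the first step toward the unicellular lifestyle)
closed = {j for j in open_junctions if "c3" not in j}
print("after closing   :", sorted(len(c) for c in components(cells, closed)))          # [1, 2, 2]
```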

{LF} So is this one of the possible ways in which we may, quote, cure cancer?

{ML} Yeah, I think so. I think the actual cure... I mean, there are other technologies. Immunotherapy, I think, is a great technology. Chemotherapy, I don’t think is a good technology. I think we’ve got to get off of that. 

{LF} So chemotherapy just kills cells. 

{ML} Yeah. Well, chemotherapy hopes to kill more of the tumor cells than of your cells. That’s it. It’s a fine balance. The problem is the cells are very similar because they are your cells. And so if you don’t have a very tight way of distinguishing between them, then the toll that chemo takes on the rest of the body is just unbelievable. 

{LF} And immunotherapy tries to get the immune system to do some of the work. 

{ML} Exactly. Yeah. I think that’s potentially a very good approach. If the immune system can be taught to recognize enough of the cancer cells, that’s a pretty good approach. But I think our approach is in a way more fundamental, because if you can keep the cells harnessed towards organ-level goals as opposed to individual-cell goals, then nobody will be making a tumor or metastasizing and so on. 

{2:28:15 – Viruses}

{LF} So we’ve been living through a pandemic. What do you think about viruses in this full beautiful biological context we’ve been talking about? Are they beautiful to you? Are they terrifying? Also maybe let’s say, are they, since we’ve been discriminating this whole conversation, are they living? Are they embodied minds? Embodied minds that are assholes? 

{ML} As far as I know, and I haven’t been able to find this paper again, but somewhere in the last couple of months I saw some papers showing an example of a virus that actually had physiology. So something was going on, I think proton flux or something, on the virus itself. 

But, barring that, generally speaking, viruses are very passive. They don’t do anything by themselves. And so I don’t see any particular reason to attribute much of a mind to them. I think, you know, they represent a way to hijack other minds for sure, like, like cells and other things. 

{LF} But that’s an interesting interplay though. If they’re hijacking other minds, you know, the way we were talking about living organisms, they can interact with each other and alter each other’s trajectory by having interacted. That’s a deep, meaningful connection between a virus and a cell. And I think both are transformed by the experience. And so in that sense, both are living. 

{ML} Yeah. You know, this question of what’s living and what’s not living, I’m really not sure about. And I know there are people that work on this and I don’t want to piss anybody off, but I have not found it particularly useful to try and make that a binary kind of distinction. I think level of cognition is very interesting as a continuum, but living and nonliving, I don’t really know what to do with that. I don’t know what you do next after making that distinction. 

{LF} That’s why I make the very binary distinctions: can I have sex with it or not, can I eat it or not? Because those are actionable, right? 

{ML} Yeah. Well, I think that’s a critical point that you brought up because how you relate to something is really what this is all about, right? As an engineer, how do I control it? But maybe I shouldn’t be controlling it. Maybe I should be, you know, can I have a relationship with it? Should I be listening to its advice? Like, like all the way from, you know, I need to take it apart all the way to, I better do what it says cause it seems to be pretty smart and everything in between, right? That’s really what we’re asking about. 

{LF} Yeah. We need to understand our relationship to it. We’re searching for that relationship, even in the most trivial senses. You came up with a lot of interesting terms. We’ve mentioned some of them. Agential material, that’s a really interesting one, a really interesting one for the future of computation and artificial intelligence and computer science and all of that. Let me go through some of them and see if they spark some interesting thought for you. There’s teleophobia, the unwarranted fear of erring on the side of too much agency when considering a new system. 

{ML} Yeah. 

{LF} That’s the opposite. I mean, being afraid of maybe anthropomorphizing the thing. 

{ML} This’ll get some people ticked off, I think. But I think the whole notion of anthropomorphizing is a holdover from a pre-scientific age where humans were magic, everything else wasn’t magic, and you were anthropomorphizing when you dared suggest that something else has some features of humans. And I think we need to be way beyond that. This charge of anthropomorphizing, I think it’s a cheap charge. I don’t think it holds any water at all, other than when somebody makes a cognitive claim. 

I think all cognitive claims are engineering claims, really. So when somebody says this thing knows or this thing hopes or this thing wants or this thing predicts, all you can say is fabulous. Give me the engineering protocol that you’ve derived using that hypothesis and we will see if this thing helps us or not. And then, and then we can, you know, then we can make a rational decision. 

{LF} I also like anatomical compiler: a future system representing the long-term end game of the science of morphogenesis that reminds us how far away from true understanding we are. Someday you will be able to sit in front of an anatomical compiler, specify the shape of the animal or plant that you want, and it will convert that shape specification into a set of stimuli that will have to be given to cells to build exactly that shape. No matter how weird it ends up being, you have total control. 

Just imagine the possibility for memes in the physical space. One of the glorious accomplishments of human civilizations is memes in digital space. Now this could create memes in physical space. I am both excited and terrified by that possibility. 

{2:33:28 – Cognitive light cones}

Cognitive light cone, I think we also talked about the outer boundary in space and time of the largest goal a given system can work towards. Is this kind of like shaping the set of options? 

{ML} It’s a little different than options. It’s really focused on… I first came up with this back in 2018, I want to say. There was a conference, a Templeton conference where they challenged us to come up with frameworks. I think actually it’s the Diverse Intelligence community. Summer Institute. Yeah, they had a Summer Institute. That’s the logo, the bee with some circuits. Yeah, it’s got different life forms. The whole program is called diverse intelligence. 

They challenged us to come up with a framework that was suitable for analyzing different kinds of intelligence together, because the kinds of things you do with a human are not good with an octopus, not good with a plant, and so on. I started thinking about this, and I asked myself, what do all cognitive agents, no matter what their provenance, no matter what their architecture is, have in common? It seems to me that what they have in common is some degree of competency to pursue a goal. So what you can do then is draw... what I ended up drawing was this thing that’s kind of like a backwards Minkowski cone diagram, where all of space is collapsed onto one axis and time is the other axis. Then, for any creature, you can semi-quantitatively estimate what are the spatial and temporal goals that it’s capable of pursuing. 

For example, if you are a tick or a bacterium and all you are really able to pursue is maximizing the level of some chemical in your vicinity, that’s all you’ve got, a tiny little cone; you’re a simple system. If you are something like a dog, well, you’ve got some ability to care about some spatial region, some temporal region. You can remember a little bit backwards, you can predict a little bit forwards, but you’re never ever going to care about what happens in the next town over four weeks from now. As far as we know, it’s just impossible for that kind of architecture. If you’re a human, you might be working towards world peace long after you’re dead. You might have a planetary-scale goal that’s enormous. Then there may be other, greater intelligences somewhere that can care in the linear range about huge numbers of creatures, some sort of Buddha-like character that can care about everybody’s welfare, really care, the way that we can’t. 

It’s not a mapping of what you can sense, how far you can sense. It’s not a mapping of how far you can act. It’s a mapping of how big are the goals you are capable of envisioning and working towards. I think that enables you to put synthetic kinds of constructs, AIs, aliens, swarms, whatever on the same diagram because we’re not talking about what you’re made of or how you got here. We’re talking about what are the size and complexity of the goals towards which you can work. 
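{One toy way I sketched of writing down those semi-quantitative estimates: each system gets a rough spatial extent and temporal extent for the largest goal it can pursue. The particular numbers below are invented placeholders, not measurements from Levin or anyone else.}

```python
# Toy encoding of cognitive light cones: for each system, the rough spatial and
# temporal extent of the largest goal it can work towards. All numbers are
# invented placeholders, only meant to show the kind of comparison described.
import math

light_cones = {
    # name: (spatial extent of largest goal in meters, temporal extent in seconds)
    "bacterium": (1e-5, 60),              # chemical gradient here and now
    "tick":      (1e-1, 3600),            # find a host nearby, soon
    "dog":       (1e3, 3600 * 24 * 7),    # territory and familiar people, days ahead
    "human":     (1e7, 3e9),              # planetary-scale goals, beyond a lifetime
}

for name, (space, time) in sorted(light_cones.items(),
                                  key=lambda kv: kv[1][0] * kv[1][1]):
    print(f"{name:10s} goal scale ~ 10^{math.log10(space * time):5.1f} (meter*seconds)")
```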

{LF} Is there any other terms that pop into mind that are interesting? 

{ML} I’m trying to remember. I have a list of them somewhere on my website. 

{LF} Target Morphology, yeah, definitely check it out. Morphoceutical, I like that one. Ionoceutical. 

{ML} Yeah. Those refer to different types of interventions in the regenerative medicine space. A morphoceutical is a kind of intervention that really targets the cells’ decision-making process about what they’re going to build. Ionoceuticals are like that, but focused more specifically on the bioelectrics. There are also, of course, biochemical, biomechanical, and who knows what else, maybe optical kinds of signaling systems there as well. 

Target morphology is interesting. It’s designed to capture this idea that it’s not just feedforward emergence. Of course that happens too, but in many cases in biology, the system is specifically working towards a target in anatomical morphospace. It’s a navigation task, really. These kinds of problem solving can be formalized as navigation tasks, where the system is really going towards a particular region. How do you know? Because you deviate them, and then they go back. 
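{A toy sketch I added of "navigation toward a target in morphospace": a state vector is pulled toward a stored setpoint each step, and even if you perturb it partway through, it returns, which is the hallmark described above. This is just a generic feedback loop, not Levin's formalism.}

```python
# Toy "target morphology" sketch: the system stores a setpoint in an abstract
# morphospace and reduces its error to it every step. Perturb it mid-course and
# it still converges back. A generic feedback controller, nothing more.
target = [1.0, 0.5, 2.0]          # stored target in a made-up 3-D morphospace
state = [0.0, 0.0, 0.0]
gain = 0.2

for step in range(60):
    if step == 30:                # experimenter deviates the system mid-course
        state = [2.0, 2.0, 0.0]
    state = [s + gain * (t - s) for s, t in zip(state, target)]

error = max(abs(t - s) for s, t in zip(state, target))
print("final state:", [round(s, 3) for s in state], "max error:", round(error, 4))
```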

{2:38:03 – Advice for young people}

{LF} Let me ask you, because you’ve really challenged a lot of ideas in biology in the work you do, probably because some of your rebelliousness comes from the fact that you came from a different field, computer engineering: could you give advice to young people today, in high school or college, who are trying to pave their life story, whether it’s in science or elsewhere, on how they can have a career, or a life, they can be proud of? 

{ML} Boy, it’s dangerous to give advice because things change so fast, but one central thing I can say, moving up and through academia and whatnot, you will be surrounded by really smart people. 

What you need to do is be very careful at distinguishing specific critique versus kind of meta advice. What I mean by that is if somebody really smart and successful and obviously competent is giving you specific critiques on what you’ve done, that’s gold. It’s an opportunity to hone your craft, to get better at what you’re doing, to learn, to find your mistakes. That’s great. 

If they are telling you what you ought to be studying, how you ought to approach things, what is the right way to think about things, you should probably ignore most of that. The reason I make that distinction is that a lot of really successful people are very well calibrated on their own ideas and their own field and their own area. They know exactly what works and what doesn’t and what’s good and what’s bad, but they’re not calibrated on your ideas. The things they will say, oh, this is a dumb idea, don’t do this and you shouldn’t do that, that stuff is generally worse than useless. It can be very demoralizing and really limiting. 

What I say to people is read very broadly, work really hard, know what you’re talking about, take all specific criticism as an opportunity to improve what you’re doing and then completely ignore everything else. I just tell you from my own experience, most of what I consider to be interesting and useful things that we’ve done, very smart people have said, this is a terrible idea, don’t do that. I think we just don’t know. We have no idea beyond our own. At best, we know what we ought to be doing. We very rarely know what anybody else should be doing. 

{LF} Yeah, and their ideas, their perspective has been also calibrated, not just on their field and specific situation, but also on a state of that field at a particular time in the past. There’s not many people in this world that are able to achieve revolutionary success multiple times in their life. So, whenever you say somebody is very smart, usually what that means is somebody who’s smart, who achieved a success at a certain point in their life and people often get stuck in that place where they found success. To be constantly challenging your worldview is a very difficult thing. 

Also, at the same time, and that’s the weird thing about life, if a lot of people tell you that something is stupid or is not going to work, that either means it’s stupid and not going to work, or it’s actually a great opportunity to do something new, and you don’t know which one it is, and it’s probably equally likely to be either. Well, I don’t know the probabilities. It depends how lucky you are, depends how brilliant you are, but you don’t know, and so you can’t take that advice as actual data. 

{ML} Yeah, you have to and this is kind of hard to describe and fuzzy, but I’m a firm believer that you have to build up your own intuition. So over time, you have to take your own risks that seem like they make sense to you and then learn from that and build up so that you can trust your own gut about what’s a good idea even when, and then sometimes you’ll make mistakes and they’ll turn out to be a dead end and that’s fine, that’s science, but what I tell my students is life is hard and science is hard and you’re going to sweat and bleed and everything and you should be doing that for ideas that really fire you up inside and really don’t let kind of the common denominator of standardized approaches to things slow you down. 

{2:42:47 – Death}

{LF} So you mentioned planaria being in some sense immortal. What’s the role of death in life? What’s the role of death in this whole process we have? Is it, when you look at biological systems, is death an important feature, especially as you climb up the hierarchy of competency? 

{ML} Boy, that’s an interesting question. I think that it’s certainly a factor that promotes change and turnover and an opportunity to do something different the next time for a larger scale system. So apoptosis, it’s really interesting. I mean, death is really interesting in a number of ways. 

One is, you could think about: what was the first thing to die? That’s an interesting question. What was the first creature that you could say actually died? It’s a tough thing because we don’t have a great definition for it. If you bring a cabbage home and you put it in your fridge, at what point are you going to say it’s died, right? So it’s kind of hard to know.

There’s one paper in which I talk about this idea. I mean, think about this: imagine that you have a creature that’s aquatic, let’s say it’s a frog or a tadpole, and the animal dies in the pond, for whatever reason. Most of the cells are still alive. So you could imagine that when it died, there was some sort of breakdown of the connectivity between the cells, and a bunch of cells crawled off; they could have a life as amoebas. Some of them could join together and become a xenobot and twiddle around, right? We know from planaria that there are cells that don’t obey the Hayflick limit and just sort of live forever. So you could imagine an organism where, when the organism dies, it doesn’t disappear; rather, the individual cells that are still alive crawl off and have a completely different kind of lifestyle, and maybe come back together as something else, or maybe they don’t. All of this, I’m sure, is happening somewhere on some planet.

So death, in any case... I mean, we already kind of knew this about the molecules; we know that when something dies, the molecules go through the ecosystem. But even the cells don’t necessarily die at that point; they might have another life in a different way.

You can think about something like HeLa, right? The HeLa cell line, you know, has had this incredible life. There are way more HeLa cells now than there were when she {Henrietta Lacks} was alive.

{LF} It seems like as organisms become more and more complex, like if you look at the mammals, their relationship with death becomes more and more complex. So the survival imperative starts becoming interesting, and humans are arguably the first species to have invented the fear of death: the understanding that you’re going to die. Let’s put it this way: not instinctual, like, I need to run away from the thing that’s going to eat me, but long-term, starting to contemplate the finiteness of life.

{ML} Yeah. I mean, one thing about the human cognitive light cone is that, as far as we know, for the first time you might have goals that are longer than your lifespan, that are not achievable, right?

So let’s say, and I don’t know if this is true, but let’s say you’re a goldfish and you have a 10-minute attention span. I’m not sure if that’s true, but let’s say there’s some organism with a short kind of cognitive light cone that way. All of your goals are potentially achievable, because you’re probably going to live the next 10 minutes. So whatever goals you have, they are totally achievable. If you’re a human, you could have all kinds of goals that are guaranteed not achievable because they just take too long; guaranteed you’re not going to achieve them. So I wonder, you know, is that a sort of perennial thorn in our psychology that drives some psychosis or whatever? I have no idea.

Another interesting thing about that, actually, and I’ve been thinking about this a lot in the last couple of weeks, is this notion of giving up. You would think that evolutionarily, the most adaptive way of being is that you fight as long as you physically can, and then when you can’t, you can’t. There are photographs, there are videos you can find, of insects crawling around where, like, most of the body is already gone, and it’s still sort of crawling, you know, Terminator style, right? As far as you physically can, you keep going. Mammals don’t do that.

So a lot of mammals, including rats, have this thing where, when they think it’s a hopeless situation, they literally give up and die when physically they could have kept going. I mean, humans certainly do this. And there are some really unpleasant experiments that this guy, I forget his name {Curt Richter}, did with drowning rats, where rats normally drown after a couple of minutes, but if you teach them that if you just tread water for a couple of minutes you’ll get rescued, they can tread water for like an hour. Otherwise, they literally just give up and die. Evolutionarily, that doesn’t seem like a good strategy at all, since what’s the benefit, ever, of giving up? You just do what you can, and, you know, one time out of 1,000 you’ll actually get rescued, right?

But this issue of actually giving up suggests some very interesting metacognitive controls, where you’ve now gotten to the point where survival actually isn’t the top drive, and for whatever reason there are other considerations that have, like, taken over. I think that’s uniquely a mammalian thing. But then, I don’t know.

{LF} Yeah, the Camus question, the existentialist question of why live. Just the fact that humans commit suicide is a really fascinating question from an evolutionary perspective.

{ML} And that’s the other thing: what was the first, or what is the simplest, system, whether evolved or natural or whatever, that is able to do that? Right? Like, you can think, you know, what other animals are actually able to do that? I’m not sure.

{LF} Maybe you could see animals over time, for some reason, gradually lowering the value of survival at all costs, until other objectives might become more important.

{ML} Maybe. I don’t know how, evolutionarily, that gets off the ground. It just seems like that would have such a strong pressure against it, you know. Just imagine a population: if you were a mutant in that population that had less of a survival imperative, would your genes outperform the others?

{LF} Is there such a thing as population selection? Because maybe suicide is a way for organisms to decide for themselves that they’re not fit for the environment, somehow?

{ML} Yeah, that’s really contentious, you know; population-level selection is kind of a deep, controversial area. But it’s tough, because on the face of it, if that was your genome, it wouldn’t get propagated, because you would die and then your neighbor who didn’t have it would have all the kids. {A toy simulation of this argument follows this exchange.}

{LF} It feels like there could be some deep truth there that we’re not understanding. 

{ML} Maybe.
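{Annotator’s note: the following toy simulation is my own addition, not part of the interview. It is a minimal sketch of the individual-level selection argument above: a hypothetical allele that lowers its carrier’s chance of surviving to reproduce gets driven out of the population, which is why “giving up” is hard to explain without appealing to something like group-level selection. The population size, survival probabilities, and generation count are invented purely for illustration.}

```python
import random

POP_SIZE = 1000        # constant population size (assumed)
GENERATIONS = 100      # number of generations to simulate (assumed)
# Assumed per-generation odds of surviving to reproduce for each genotype.
SURVIVAL = {"normal": 0.80, "low_drive": 0.70}

# Start with 10% carriers of the hypothetical "reduced survival imperative" allele.
population = ["low_drive"] * 100 + ["normal"] * 900

for gen in range(GENERATIONS):
    # Each individual survives to reproduce with a genotype-dependent probability.
    survivors = [g for g in population if random.random() < SURVIVAL[g]]
    if not survivors:
        break
    # The next generation is drawn (with replacement) from the survivors.
    population = [random.choice(survivors) for _ in range(POP_SIZE)]

carriers = population.count("low_drive")
print(f"After {GENERATIONS} generations: {carriers}/{POP_SIZE} low_drive carriers remain")
```

{When I run this, the low_drive carriers typically dwindle to zero well before 100 generations, which is the point being made above: on the face of it, such a trait should not get propagated.}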

{LF} What about you yourself as one biological system? Are you afraid of death? 

{ML} To be honest, especially now, getting older and having helped a couple of people pass, I’m more concerned with what’s a good way to go. Nowadays, I don’t know what that is. You know, sitting in a facility that sort of tries to stretch you out as long as it can, that doesn’t seem good. And there aren’t a lot of opportunities to sort of, I don’t know, sacrifice yourself for something useful, right? There aren’t terribly many opportunities for that in modern society. So I don’t know; I’m not particularly worried about death itself. But I’ve seen it happen, and it’s not pretty, and I don’t know what a better alternative is.

{LF} So the existential aspect of it does not worry you deeply? The fact that this ride ends? 

{ML} No, it began. I mean, the ride began, right? There were I don’t know how many billions of years before that when I wasn’t around. So that’s okay.

{LF} But isn’t the experience of life... it almost feels like you’re immortal, because of the way you make plans, the way you think about the future. I mean, if you look at your own personal, rich experience, yes, you can understand: okay, eventually I will die, as people I love have died. So surely I will die, and it hurts, and so on. But it sure doesn’t feel that way. It’s so easy to get lost in feeling like this is going to go on forever.

{ML} Yeah, it’s a little bit like the people who say they don’t believe in free will, right? I mean, you can say that, but when you go to a restaurant, you still have to pick a soup and stuff. I’ve actually seen that happen: I was at lunch with a well-known philosopher who didn’t believe in free will, and the waitress came around and he was like, well, let me see. I was like, what are you doing here? You’re going to choose a sandwich, right? So I think it’s one of those things. You can know that you’re not going to live forever, but it’s not practical to live that way. You know, you buy insurance and you do some stuff like that, but mostly, I think you just live as if you can make plans.

{2:52:17 – Meaning of life}

{LF} We talked about all kinds of life. We talked about all kinds of embodied minds. What do you think is the meaning of it all? What’s the meaning of all the biological lives we’ve been talking about here on Earth? Why are we here? 

{ML} I don’t know that that’s a well-posed question, other than the existential question you posed before.

{LF} Is that question hanging out with the question of what is consciousness, and they’re at a retreat somewhere…

{ML} I’m not sure because…

{LF} …sipping piña coladas, because they’re ambiguously defined.

{ML} Maybe. I’m not sure that any of these things really ride on the correctness of our scientific understanding. But just for an example, right, I’ve always found it weird that people get really worked up to find out realities about their bodies, for example. You’ve seen Ex Machina, right? There’s this great scene where he’s cutting his hand open to find out, you know, whether he’s full of cogs. Now, to me, if I open up and I find a bunch of cogs, my conclusion is not, oh crap, I must not have true cognition, that sucks. My conclusion is, wow, cogs can have true cognition. Great.

So it seems to me, I guess I’m with Descartes on this one: whatever the truth ends up being of what consciousness is and how we can be conscious, none of that is going to alter my primary experience, which is what it is. If a bunch of molecular networks can do it, fantastic. If it turns out that there’s a non-corporeal soul, you know, great, we’ll study that, whatever.

But the fundamental existential aspect of it is, you know, if somebody told me today, yeah, you were created yesterday and all your memories are sort of fake, kind of like Boltzmann brains, right, and, you know, human skepticism, all that. Yeah, okay. Well, but here I am now. So…

{LF} It’s the experience, it’s primal, so that’s the thing that matters. So the backstory doesn’t matter.

{ML} I think so. I think so, from a first-person perspective. Now, from a third-person perspective, like scientifically, it’s all very interesting. From a third-person perspective, I could say, wow, it’s amazing that this happens, and how does it happen, and whatever.

But from a first-person perspective, I couldn’t care less. What I learn from any of these scientific facts is: okay, well, then I guess that’s what is sufficient to give me my, you know, amazing first-person perspective.

{LF}  I think if you dig deeper and deeper and get surprising answers to why the hell we’re here, it might give you some guidance on how to live. 

{ML} Maybe, maybe. I don’t know. That would be nice. On the one hand, you might be right, because I don’t know what else could possibly give you that guidance, right? So you would think that it would have to be that; it would have to be science, because there isn’t anything else.

So maybe. On the other hand, I am really not sure how you go from, you know, what they call an ‘is’ to an ‘ought’, right, from any factual description of what’s going on. This goes back to the ‘natural’ thing, right. Just because somebody says, oh man, that’s completely not natural, it’s never happened on Earth before, I’m not impressed by that whatsoever. Whatever has or hasn’t happened, we are now in a position to do better if we can. Right.

{LF} Well, this is also because you said there’s science and there’s nothing else. It’s really tricky to know how to intellectually deal with a thing that science doesn’t currently understand, right? So the thing is, if you believe that science solves everything, you can too easily in your mind think that with our current understanding, we’ve solved everything.

{ML} Right. Right. Right. 

{LF} It jumps really quickly from science as a mechanism, as a process, to more like the science of today. You could just look at human history: throughout human history, physicists and everybody would claim we’ve solved everything.

{ML} Sure. Sure. 

{LF} Like, there are a few small things to figure out, and we’ve basically solved everything. Whereas in reality, I think asking, like, what is the meaning of life is resetting the palette

{ML}  Yeah.

{LF} of, like, we might be tiny and confused and not have anything figured out. It’s almost going to be hilarious a few centuries from now when they look back at how dumb we were.

{ML} Yeah, I 100% agree. So when I say science and nothing else, I certainly don’t mean the science of today, because I think overall we know very little. I think most of the things that we’re sure of now are, as you said, going to look hilarious down the line. So I think we’re just at the beginning of a lot of really important things.

When I say nothing but science, I also include the kind of first-person science that you do. The interesting thing, I think, about consciousness and studying consciousness and things like that in the first person is that it’s unlike doing science in the third person, where you as the scientist are minimally changed by it, maybe not at all. When I do an experiment, I’m still me; there’s the experiment, and whatever I’ve done, I’ve learned something, so that’s a small change, but overall, that’s it.

In order to really study consciousness, you are part of the experiment; you will be altered by that experiment, right? Whatever it is that you’re doing, whether it’s some sort of contemplative practice or some sort of psychoactive, you know, whatever, you are now your own experiment. And so I fold that in; I think that’s part of it. I think that exploring our own mind and our own consciousness is very important. Much of it is not captured by what is currently third-person science, for sure. But ultimately, I include all of that in science with a capital S, in terms of a rational investigation of both first- and third-person aspects of our world.

{LF} We are our own experiment; that’s beautifully put. And when two systems get to interact with each other, that’s a kind of experiment. So I’m deeply honored that you would do this experiment with me today.

{ML} Thanks so much.

{LF}  I’m a huge fan of your work. Likewise, thank you for doing everything you’re doing. I can’t wait to see the kind of incredible things you build. So thank you for talking. 

{ML} Really appreciate being here. Thank you. 

{LF} Thank you for listening to this conversation with Michael Levin. To support this podcast, please check out our sponsors in the description. And now let me leave you with some words from Charles Darwin in The Origin of Species: “From the war of nature, from famine and death, the most exalted object which we are capable of conceiving, namely, the production of the higher animals, directly follows. There is grandeur in this view of life. From so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved.”

Thank you for listening, and hope to see you next time.