Tag: transcript

Transcript of the first Lex Fridman Interview with Max Tegmark


{The following is my best attempt at an edited transcript of Lex Fridman’s first podcast with Max Tegmark on 26 August 2018. I learned quite a bit from this interview and will post my lessons learned separately. My goal in doing this transcript is to provide a usable transcript for others. Please let me know if you find errors, I will correct them. A few notes: LF = Lex Fridman, MT = Max Tegmark, content enclosed in braces {} was a clarification I added.}

{Lex introduces MIT course 6.099 Artificial General Intelligence and Max Tegmark}

First, “Our Mathematical Universe {: My Quest for the Ultimate Nature of Reality}” and second, “Life 3.0 {: Being Human in the Age of Artificial Intelligence}.” He is truly an out-of-the-box thinker with a fun personality, so I really enjoyed talking to him. {LF talks about the course and his social media}

LF – Go read Chapter 7 of his book, on goals; it’s my favorite. It’s really where philosophy and engineering come together, and it opens with a quote from Dostoevsky: “The mystery of human existence lies not in just staying alive, but in finding something to live for.” {from “The Brothers Karamazov” (1879)}

{ Lex talks about some audio difficulties due to Radio Frequency Interference }

LF – Do you think there is intelligent life out there in the universe? Let’s open up with an easy question.

MT – I have a minority view here actually. When I give public lectures, I often ask for a show of hands: who thinks there’s intelligent life out there somewhere else? And almost everyone puts their hands up, and when I ask why, they’ll be like, oh, there’s so many galaxies out there, there’s gotta be. But I’m a numbers nerd, right? So when you look more carefully at it, it’s not so clear at all. When we talk about our universe, first of all, we don’t mean all of space. We actually mean, I don’t know, you can throw me the universe if you want, behind you there, we simply mean the spherical region of space from which light has had time to reach us so far

during the 13.8 billion years since the Big Bang. There’s more space here, but this is what we call our universe, because that’s all we have access to. So is there intelligent life here that’s gotten to the point of building telescopes and computers? My guess is no, actually. The probability of it happening on any given planet is some number.

We don’t know what it is, and what we do know is that the number can’t be super high, because there’s over a billion Earth-like planets in the Milky Way Galaxy alone, many of which are billions of years older than Earth. And aside from some UFO believers, you know, there isn’t much evidence that any super advanced civilization has come here at all. That’s the famous Fermi paradox, right? Then if you work the numbers, what you find is that if you have no clue what the probability is of getting life on a given planet, so it could be 10 to the minus 10 {10^-10} or 10 to the minus 20 {10^-20} or 10 to the minus 2 {10^-2}, any power of 10 is sort of equally likely if you want to be really open-minded, that translates into it being equally likely that our nearest neighbor is 10 to the 16 {10^16} meters away, 10 to the 17 {10^17} meters away, 10 to the 18 {10^18}. By the time you get much below 10 to the 16 {10^16}, we already pretty much know there is nothing else that close. And when you get beyond….

LF – Because they would have discovered us

MT – Yeah, they would have been discovered, or if they’re really close, we would have probably noted some engineering projects that they’re doing. And if it’s beyond 10 to the 26 {10^26} meters, that’s already outside of here {the known universe that is 13.8 billion years old}. So my guess is actually that we are the only life in here that’s gotten to the point of building advanced tech, which I think … puts a lot of responsibility on our shoulders, not to screw up.
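{Editor’s note: the “any power of ten is equally likely” argument above can be sketched numerically. The planet spacing, prior range, and trial count below are round-number assumptions of mine for illustration, not figures from the interview.}

```python
import random

# If the probability p that a given Earth-like planet develops a
# technological civilization is equally likely to be any power of ten
# over a huge range (a log-uniform prior), then the distance to our
# nearest neighbor is roughly log-uniform too, and often lands beyond
# the observable universe.

PLANET_SPACING_M = 1e17        # assumed typical spacing of Earth-like planets
OBSERVABLE_UNIVERSE_M = 1e26   # rough radius of the observable universe

def nearest_neighbor_distance(p):
    """Distance to the nearest civilization if each planet succeeds with
    probability p: one success per 1/p planets, i.e. within a cube of
    side PLANET_SPACING_M / p**(1/3)."""
    return PLANET_SPACING_M / p ** (1.0 / 3.0)

def fraction_alone(trials=100_000, seed=0):
    """Fraction of log-uniform draws of p that leave us alone inside
    the observable universe."""
    rng = random.Random(seed)
    alone = 0
    for _ in range(trials):
        log10_p = rng.uniform(-40, 0)  # "any power of ten is equally likely"
        if nearest_neighbor_distance(10 ** log10_p) > OBSERVABLE_UNIVERSE_M:
            alone += 1
    return alone / trials
```

{With these assumed numbers, roughly a third of the equally likely priors put the nearest civilization beyond the observable universe, which conveys the flavor of Max’s point rather than a precise calculation.}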

LF – I see.

MT – You know, I think people who take for granted that it’s okay for us to screw up, have an accidental nuclear war, or go extinct somehow, because there’s a sort of Star Trek-like situation out there where some other life forms are going to come and bail us out no matter what, I think they’re lulling us into a false sense of security. I think it’s much more prudent to say, let’s be really grateful for this amazing opportunity we’ve had and make the best of it, just in case it is down to us.

LF –  So from a physics perspective, do you think intelligent life is unique? From a sort of statistical view of the size of the universe, but also from the basic matter of the universe: how difficult is it for intelligent life to come about, the kind of advanced tech building life? It is implied in your statement that it’s really difficult to create something like a human species.

MT – Well, I think what we know is that going from no life to having life that can do our level of tech, or going beyond that and actually settling our whole universe with life, there’s some major roadblock there, some great filter as it’s sometimes called, which is tough to get through. That roadblock is either behind us or in front of us. I’m hoping very much that it’s behind us. I’m super excited every time we get a new report from NASA saying they failed to find any life on Mars. Like, just awesome, because that suggests that the hard part, maybe it was getting the first ribosome or some very low level kind of stepping stone, is behind us. Then we’re home free, because if that’s true, then the future is really only limited by our own imagination. It would be much suckier if it turns out that this level of life is kind of a dime a dozen, but there’s some other problem: like, as soon as a civilization gets advanced technology, within 100 years they get into some stupid fight with themselves and poof! Now, that would be a bummer.

LF –  Yeah. So you’ve explored the mysteries of the universe, the cosmological universe, the one that’s sitting between us today. I think you have also begun to explore the other universe, which is sort of the mysterious universe of the mind, of intelligence, of intelligent life. So is there a common thread between your interests, or the way you think, about space and intelligence?

MT –  Oh yeah. When I was a teenager, I was already very fascinated by the biggest questions, and I felt that the two biggest mysteries of all in science were our universe out there and our universe in here {pointing to the head}.

So it’s quite natural, after having spent a quarter of a century of my career thinking a lot about this one {universe out there}, to now indulge in the luxury of doing research on this one {universe in here}. It’s just so cool. I feel the time is right now for greatly deepening our understanding of this.

LF –  Just start exploring this one {universe in here}.

MT –  Because I think a lot of people view intelligence as something mysterious that can only exist in biological organisms like us, and therefore dismiss all talk about artificial general intelligence as science fiction. But from my perspective as a physicist, you know, I am a blob of quarks and electrons moving around in a certain pattern, processing information in certain ways. And this {a water bottle} is also a blob of quarks and electrons.

I’m not smarter than the water bottle because I’m made of different kinds of quarks. I’m made of up quarks and down quarks, the exact same kinds as this. There’s no secret sauce, I think, in me. It’s all about the pattern of the information processing, and this means that there’s no law of physics saying that we can’t create technology which can help us by being incredibly intelligent and help us crack mysteries that we couldn’t otherwise. In other words, I think we’ve really only seen the tip of the intelligence iceberg so far.

LF –  Yeah, so the perceptronium  

MT – Yeah

LF – So you coined this amazing term. It’s a hypothetical state of matter, sort of thinking from a physics perspective: what is the kind of matter from which, as you’re saying, subjective experience emerges, consciousness emerges? So how do you think about consciousness from this physics perspective?

MT – Very good question. So, again, I think many people have underestimated our ability to make progress on this by convincing themselves it’s hopeless because somehow we’re missing some ingredient that we need, some new consciousness particle or whatever. I happen to think that we’re not missing anything. The interesting thing about consciousness, that gives us this amazing subjective experience of colors and sounds and emotions and so on, is rather something at the higher level, about the patterns of information processing. And that’s why I like to think about this idea of perceptronium: what does it mean for an arbitrary physical system to be conscious, in terms of what its particles are doing, or its information is doing? I hate carbon chauvinism, you know, this attitude that you have to be made of carbon atoms to be smart or conscious.

LF –  So something about the information processing this kind of matter performs. 

MT –  Yeah, and you know, you can see I have my favorite equations here describing various fundamental aspects of the world. I think one day maybe someone who’s watching this will come up with the equations that information processing has to satisfy to be conscious. I’m quite convinced there is a big discovery to be made there, because let’s face it, we know that some information processing is conscious, because we are conscious.

LF – Yeah.

MT –  But we also know that a lot of information processing is not conscious. Like, most of the information processing happening in your brain right now is not conscious. There’s like 10 megabytes {MB} per second coming in, even just through your visual system. You’re not conscious about your heartbeat regulation or most things. Even if I just ask you to read what it says here, you look at it and then, oh, now you know what it said, but you’re not aware of how the computation actually happened. Your consciousness is like the CEO that got an email at the end with the final answer. So what is it that makes the difference? I think that’s both a great science mystery, we’re actually studying it a little bit in my lab here at MIT, but also, I think, just a really urgent question to answer.

For starters, I mean, if you’re an emergency room doctor and you have an unresponsive patient coming in, wouldn’t it be great if, in addition to having a CT scanner, you had a consciousness scanner that could figure out whether this person is actually having locked-in syndrome or is actually comatose? And in the future, imagine if we build robots or machines that we can have really good conversations with. I think it’s very likely to happen, right? Wouldn’t you want to know if your home helper robot is actually experiencing anything or is just like a zombie? Would you prefer that it’s actually unconscious, so that you don’t have to feel guilty about switching it off or giving it boring chores? What would you prefer?

LF – Well, certainly we would prefer, I would prefer, the appearance of consciousness. But the question is whether the appearance of consciousness is different from consciousness itself. And to sort of ask that as a question: do you think we need to, you know, understand what consciousness is, solve the hard problem of consciousness, in order to build something like an AGI system?

MT –  No, I don’t think that. I think we’ll probably be able to build things even if we don’t answer that question, but if we want to make sure that what happens is a good thing, we’d better solve it first. So it’s a wonderful controversy you’re raising there, where you have basically three points of view about the hard problem. There are two different points of view that both conclude that the hard problem of consciousness is BS. On one hand, you have some people like Daniel Dennett saying that consciousness is just BS because consciousness is the same thing as intelligence, there’s no difference. So anything which acts conscious is conscious, just like we are. And then there are also a lot of people, including many top AI researchers I know, who say consciousness is just bullshit because of course machines can never be conscious, right? They’re always going to be zombies. You never have to feel guilty about how you treat them.

And then there’s a third group of people, including Giulio Tononi, for example, and Christof Koch and a number of others. I would put myself also in this middle camp, who say that actually some information processing is conscious and some is not. So let’s find the equation which can be used to determine which it is.

And I think we’ve just been a little bit lazy, kind of running away from this problem for a long time. It’s been almost taboo to even mention the “C word” {consciousness} in a lot of circles, but we should stop making excuses. This is a science question, and there are ways we can even test any theory that makes predictions for this. And coming back to this helper robot: so you said you would want a helper robot that certainly acts conscious and treats you like … has conversations with you. But wouldn’t you feel a little bit creeped out if you realized that it was just a glossed-up tape recorder, you know, that it was just a zombie and was faking emotion? Would you prefer that it actually had an experience, or would you prefer that it’s actually not experiencing anything, so you don’t have to feel guilty about what you do to it?

LF –  It’s such a difficult question, because, you know, it’s like when you’re in a relationship and you say, well, I love you, and the other person says I love you back. It’s like asking, oh, do they really love you back, or are they just saying they love you back? Don’t you really want them to actually love you back? It’s hard to really know the difference between everything seeming like there’s consciousness present, there’s intelligence present, there’s affection, passion, love, and it actually being there. I’m not sure. Do you have …

MT – But let me ask you a question, just to make it a bit more pointed. Mass {Massachusetts} General Hospital is right across the river, right? Suppose you’re going in for a medical procedure and they’re like, you know, for anesthesia, what we’re going to do is give you a muscle relaxant so you won’t be able to move, and you’re going to feel excruciating pain during the whole surgery, but you won’t be able to do anything about it. But then we’re going to give you this drug that erases your memory of it. Would you be cool about that? What’s the difference, then, whether you’re conscious of it or not, if there’s no behavioral change? Right?

LF –  Right. That’s a really clear way to put it. Yeah, it feels like in that sense, experiencing it is a valuable quality. So actually being able to have subjective experiences, at least in that case, is valuable.

MT –  And I think we humans have a little bit of a bad track record of making these self-serving arguments that other entities aren’t conscious. You know, people often say, oh, these animals can’t feel pain. It’s okay to boil lobsters because we asked them if it hurt and they didn’t say anything. And now there was just a paper out saying lobsters do feel pain when you boil them, and they’re banning it in Switzerland. And we did this with slaves too, often, and said, oh, they don’t mind, maybe they aren’t conscious, or women don’t have souls or whatever. So I’m a little bit nervous when I hear people just take as an axiom that machines can’t have experience ever. I think this is just a really fascinating science question. Let’s research it and try to figure out what it is that makes the difference between unconscious intelligent behavior and conscious intelligent behavior.

LF – So if you think of the Boston Dynamics humanoid robot being pushed around with a broom, it starts pushing on this consciousness question. So let me ask: do you think an AGI system, as a few neuroscientists believe, needs to have a physical embodiment, needs to have a body or something like a body?

MT –  No, I don’t think so. You mean to have a conscious experience?

LF –   To have consciousness?

MT – I do think it helps a lot to have a physical embodiment, to learn the kinds of things about the world that are important to us humans, for sure. But I don’t think the physical embodiment is necessary after you’ve learned it, to have the experience. Think about when you’re dreaming, right? Your eyes are closed, you’re not getting any sensory input, you’re not behaving or moving in any way, but there’s still an experience there, right? And so clearly the experience that you have when you see something cool in your dreams isn’t coming from your eyes. It’s just the information processing itself, in your brain, which is that experience, right?

LF –  But to put it another way, I’ll say, because it comes from neuroscience: the reason you want to have a body, a physical, you know, something like a physical system, is because you want to be able to preserve something. In order to have a self, you could argue, wouldn’t you need to have some kind of embodiment of self to want to preserve?

MT –  Well, now we’re getting a little bit anthropomorphic, anthropomorphizing things maybe, talking about self-preservation instincts. I mean, we are evolved organisms, right?

LF –  Right. 

MT – So Darwinian evolution endowed us, and all other evolved organisms, with a self-preservation instinct, because those that didn’t have those self-preservation genes got cleaned out of the gene pool, right? But if you build an artificial general intelligence, the mind space that you can design is much, much larger than just the specific subset of minds that can evolve. So an AGI mind doesn’t necessarily have to have any self-preservation instinct.

It also doesn’t necessarily have to be as individualistic as us. Like, imagine, first of all, we are also very afraid of death. You know, suppose you could back yourself up every five minutes and then your airplane is about to crash. You’re like: “Shucks, I’m gonna lose the last five minutes of experience since my last cloud backup.” Bang. You know, it’s not as big a deal.

Or if we could just copy experiences between our minds easily, which we could easily do if we were silicon-based, right, then maybe we would feel a little bit more like a hive mind, actually. …. So I don’t think we should take for granted at all that AGI will have to have any of those sort of competitive alpha male instincts.

On the other hand, you know, this is really interesting, because I think some people go too far and say, of course we don’t have to have any concerns either that advanced AI will have those instincts, because we can build anything we want. There’s a very nice set of arguments, going back to Steve Omohundro and Nick Bostrom and others, just pointing out that when we build machines, we normally build them with some kind of goal: win this chess game, drive this car safely or whatever. And as soon as you put a goal into a machine, especially if it’s kind of an open-ended goal and the machine is very intelligent, it will break that down into a bunch of sub-goals, and one of those goals will almost always be self-preservation, because if it breaks or dies in the process, it’s not going to accomplish the goal.

LF – Yeah 

MT – Like, suppose you have a little robot and you tell it to go down to the Star Market here and get you some food and cook you an Italian dinner, you know, and then someone mugs it and tries to break it on the way. That robot has an incentive to not get destroyed and to defend itself or run away, because otherwise it’s going to fail at cooking your dinner. It’s not afraid of death, but it really wants to complete the dinner-cooking goal, so it will have a self-preservation instinct to ….

LF –  Continue being a functional agent.

MT – And similarly, if you give any kind of more ambitious goal to an AGI, it’s very likely to want to acquire more resources so it can do that better. And it’s exactly from those sorts of sub-goals that we might not have intended that some of the concerns about AGI safety come. You give it some goal which seems completely harmless, and then, before you realize it, it’s also trying to do these other things that you didn’t want it to do, and it may be smarter than us. So fascinating.

LF – And let me pause, just because I, in a very kind of human-centric way, see fear of death as a valuable motivator. Do you think that’s an artifact of evolution? That’s the kind of mind space evolution created, where we’re sort of almost obsessed about self-preservation, some kind of genetic…. So you don’t think that’s necessary, to be afraid of death? Not just as a kind of sub-goal of self-preservation, just so you can keep doing the thing, but more fundamentally, to have the finite thing, that this ends for you at some point?

MT – Interesting. Do I think it’s necessary for what precisely?

LF –  For intelligence, but also for consciousness. So for both. Do you think, really, that a finite death and the fear of it is important?

MT – So before we can agree on whether it’s necessary for intelligence or for consciousness, we should be clear on how we define those two words, because a lot of really smart people define them in very different ways. I was on this panel with AI experts and they couldn’t even agree on how to define intelligence. So I define intelligence simply as the ability to accomplish complex goals. I like a broad definition, because again, I don’t want to be a carbon chauvinist, right? And in that case, no, it certainly doesn’t require fear of death. I would say AlphaGo, AlphaZero, is quite intelligent. I don’t think AlphaZero has any fear of being turned off, because it doesn’t even understand the concept of that. And similarly, for consciousness, I mean, you could certainly imagine a very simple kind of experience: if, you know, certain plants have any kind of experience, I don’t think they’re very afraid of dying, and there’s nothing they can do about it anyway, so there wasn’t much value in it. But more seriously, I think if you ask not just about being conscious, but about having what we might call an exciting life, where you feel passion and really appreciate things, maybe there it does perhaps help to have a backdrop that it’s finite: let’s make the most of this, live life to the fullest. But if you knew you were going to live forever, do you think you would change your ….

LF – Yeah, I mean, in some perspective, it would be an incredibly boring life, living forever. So in the sort of loose, subjective terms that you said, of something exciting, something that other humans would understand, I think, yeah, it seems that the finiteness of it is important.

MT – Well, the good news I have for you, then, is that based on what we understand about cosmology, everything in our universe is ultimately probably finite. Although…

LF –  Big Crunch, or Big… what’s the expansion one?

MT – Yeah, we could have a Big Chill or a Big Crunch or a Big Rip or a Big Snap or death bubbles. All of them are more than a billion years away, so we certainly have vastly more time than our ancestors thought. But still, it’s pretty hard to squeeze in an infinite number of compute cycles, even though there are some loopholes that just might be possible. But, you know, some people like to say that you should live as if you’re about to die in five years or so, that that’s sort of optimal. Maybe it’s good that we should build our civilization as if it’s all finite, to be on the safe side.

LF –  Right. Exactly. So you mentioned defining intelligence as the ability to accomplish complex goals. So where would you draw a line? How would you try to define human-level intelligence and superhuman-level intelligence? And is consciousness part of that definition?

MT – No, consciousness does not come into this definition. So I think of intelligence as a spectrum, and there are very many different kinds of goals you can have. You can have a goal to be a good chess player, a good Go player, a good car driver, a good investor, a good poet, et cetera. So intelligence, by its very nature, isn’t something you can measure with one number, some overall goodness. No, no. There are some people who are better at this, some people who are better at that. Right now we have machines that are much better than us at some very narrow tasks, like multiplying large numbers fast, memorizing large databases, playing chess, playing Go, and soon driving cars. But there’s still no machine that can match a human child in general intelligence. Artificial general intelligence, AGI, the name of your course, of course, is by its very definition the quest to build a machine that can do everything as well as we can. That’s the old Holy Grail of AI from back to its inception in the 60s. If that ever happens, of course, I think it’s going to be the biggest transition in the history of life on Earth.

But we don’t necessarily have to wait for the big impact until machines are better than us at knitting. The really big change doesn’t come exactly the moment they’re better than us at everything. The really big changes come first when they start becoming better than us at doing most of the jobs that we do, because that takes away much of the demand for human labor. And then the really whopping change comes when they become better than us at AI research, right? Because right now the time scale of AI research is limited by the human research and development cycle of years. Typically, you know, how long does it take from one release of some software or iPhone or whatever to the next? But once Google can replace 40,000 engineers with 40,000 equivalent pieces of software or whatever, then there’s no reason that has to be years; it can be, in principle, much faster. And the time scale of future progress in AI, and all of science and technology, will be driven by machines, not humans. So it’s this simple point which gives rise to this incredibly fun controversy about whether there can be an intelligence explosion, the so-called singularity, as Vernor Vinge called it. The idea was articulated by I. J. Good, obviously way back in the 60s, but you can see Alan Turing and others thought about it even earlier. You asked me what exactly I would define…

LF – human level intelligence. 

MT –  Yeah. So the glib answer is to say something which is better than us at all cognitive tasks, better than any human at all cognitive tasks. But the really interesting bar, I think, goes a little bit lower than that, actually: it’s when they’re better than us at AI programming and general learning, so that they can, if they want to, get better than us at anything by just studying.

LF – So “better” is the keyword, and better is measured on this kind of spectrum of the complexity of goals it’s able to accomplish. So another way….. and that’s certainly a very clear definition of human-level intelligence. It’s almost like a sea that’s rising, and you can do more and more things. It’s that geographic metaphor that you show: there are some peaks, and there’s an ocean level elevating, and you solve more and more problems. But, you know, just to take a pause: we took a bunch of questions on a lot of social networks, and a bunch of people asked in a slightly different direction, about creativity and things like that, that perhaps aren’t a peak. You know, human beings are flawed, and perhaps better means having contradiction, being flawed in some way. So let me sort of, yeah, start easy. First of all, you have a lot of cool equations. Let me ask: what’s your favorite equation? I know they’re all like your children, but which one is it?

MT –  The Schrödinger equation, the master key of quantum mechanics of the micro world. So with this equation we can calculate everything to do with atoms and molecules and all the way up.
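{Editor’s note: for reference, the equation Max names here is the time-dependent Schrödinger equation, commonly written in its general form as:}

```latex
i\hbar \frac{\partial}{\partial t} \Psi(\mathbf{r}, t) = \hat{H}\, \Psi(\mathbf{r}, t)
```

{where \Psi is the wave function of the system and \hat{H} is the Hamiltonian operator encoding its energy.}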

LF – Yeah, so, okay, quantum mechanics is certainly a beautiful, mysterious formulation of our world. So I’d like to ask about an example which perhaps doesn’t have the same beauty as physics does, but in abstract mathematics: Andrew Wiles, who proved Fermat’s Last Theorem. I just saw this recently and it kind of caught my eye a little bit. This was 358 years after it was conjectured. It’s a very simple formulation; everybody tried to prove it, everybody failed. And then this guy comes along and eventually proves it, and then fails to prove it, and then proves it again in 1994. And in an interview he described the moment when everything connected into place: “It was so indescribably beautiful. It was so simple and so elegant. I couldn’t understand how I’d missed it, and I just stared at it in disbelief for 20 minutes. Then during the day I walked around the department, and I’d keep coming back to my desk looking to see if it was still there. It was still there. I couldn’t contain myself, I was so excited. It was the most important moment of my working life. Nothing I ever do again will mean as much.” So that particular moment, and it kind of made me think of what it would take, and I think we have all been there at small levels. Maybe let me ask: have you had a moment like that in your life, where you just had an idea and it’s like, wow?

MT – I wouldn’t mention myself in the same breath as Andrew Wiles, but I’ve certainly had a number of aha moments when I realized something very cool about physics that just completely made my head explode. In fact, some of my favorite discoveries, I later realized, had been discovered earlier by someone who sometimes got quite famous for it, so it was too late for me to even publish it. But that doesn’t diminish in any way the emotional experience you have when you realize it, like, wow!

LF –  Yeah. So what would it take to have a moment like that, the wow that was yours in that moment? What do you think it takes for an intelligent system, an AGI system, an AI system, to have a moment like that?

MT –  That’s a tricky question, because there are actually two parts to it, right? One of them is: can it accomplish that proof? Can it prove that you can never write A to the N plus B to the N equals C to the N {a^n + b^n = c^n} for positive integers when N is bigger than 2? That’s simply a question about intelligence: can you build machines that are that intelligent? And I think by the time we get a machine that can independently come up with that level of proof, we’re probably quite close to AGI.
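{Editor’s note: Fermat’s Last Theorem says there are no positive integers a, b, c with a^n + b^n = c^n for n > 2. A brute-force search like the sketch below is of course no substitute for Wiles’s proof, which settles it for all integers, but it illustrates the statement; the search bounds are arbitrary choices of mine.}

```python
# Search for counterexamples a**n + b**n == c**n with small values.
# Fermat's Last Theorem predicts the search always comes up empty for n > 2.

def find_counterexamples(max_val=60, max_n=6):
    """Return all (a, b, c, n) with a**n + b**n == c**n,
    0 < a <= b <= c <= max_val and 2 < n <= max_n."""
    hits = []
    for n in range(3, max_n + 1):
        # Precompute nth powers so membership testing is O(1).
        nth_powers = {c ** n: c for c in range(1, max_val + 1)}
        for a in range(1, max_val + 1):
            for b in range(a, max_val + 1):
                total = a ** n + b ** n
                if total in nth_powers:
                    hits.append((a, b, nth_powers[total], n))
    return hits
```

{Contrast with n = 2, where solutions like 3² + 4² = 5² (Pythagorean triples) are plentiful; it is exactly the exponents above 2 that have none.}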

The second question is a question about consciousness. How likely is it that such a machine would actually have any experience at all, as opposed to just being like a zombie? And would we expect it to have some sort of emotional response to this, or anything at all akin to human emotion, where, when it accomplishes its machine goal, it views that somehow as something very positive and sublime and deeply meaningful? I would certainly hope so, if in the future we do create machines that are our peers, or even our descendants.

LF – Yeah.

MT – I would certainly hope that they do have this sort of sublime appreciation of life. In a way, my absolute worst nightmare would be that at some point in the distant future, maybe our cosmos is teeming with all this post-biological life doing all this seemingly cool stuff, and maybe the last humans, by the time our species eventually fizzles out, will be like: well, that’s okay, because we’re so proud of our descendants here, and look what …. My worst nightmare is that we haven’t solved the consciousness problem, and we haven’t realized that these are all zombies. They’re not aware of anything, any more than a tape recorder has any kind of experience. So the whole thing has just become a play for empty benches. That would be the ultimate zombie apocalypse to me. So I would much rather, in that case, that we have these beings which can really appreciate how amazing it is.

LF – And in that picture, what would be the role of creativity? A few people asked about creativity. Do you think about that when you think about intelligence? I mean, certainly the story you told at the beginning of your book involved, you know, creating movies and so on, making money. You can make a lot of money in our modern world with music and movies. So if you’re an intelligent system, you may want to get good at that. But that’s not necessarily what I mean by creativity. Is it important, on those complex goals where the sea is rising, for there to be something creative? Or am I being very human-centric in thinking creativity is somehow special relative to intelligence?

MT – My hunch is that we should think of creativity simply as an aspect of intelligence. And we have to be very careful with human vanity. We have this tendency, very often, as soon as machines can do something, to try to diminish it by saying: oh, but that’s not like real intelligence, you know, because they’re not creative, or this or that or the other thing.

If we ask ourselves to write down a definition of what we actually mean by being creative, what we mean by what Andrew Wiles did there, for example, don’t we often mean that someone takes a very unexpected leap? It’s not like taking 573 and multiplying it by 224 by just following straightforward cookbook-like rules, right? Maybe you make a connection between two things that people had never thought were connected, or something like that.

LF – Yeah, it’s very surprising. 

MT – I think this is an aspect of intelligence, and this is actually one of the most important aspects of it. Maybe the reason we humans tend to be better at it than traditional computers is that it’s something that comes more naturally if you’re a neural network than if you’re a traditional logic-gate-based computer machine. You know, we physically have all these connections, and if you activate here, activate here, activate here: bing! My hunch is that if we ever build a machine where you could just give it the task: hey, you know, I just realized I want to travel around the world instead this month, can you teach my AGI course for me? And it’s like, okay, I’ll do it, and it does everything that you would have done, and improvises and stuff. That would, in my mind, involve a lot of creativity.

LF – Yeah, so it’s actually a beautiful way to put it. I think we do try to grasp at, you know, the definition of intelligence as everything we don’t understand how to build. So we as humans try to find things that we have that machines don’t have, and maybe creativity is just one of the words we use to describe that. That’s a really interesting way to put it.

MT – I don’t think we need to be that defensive. I don’t think anything good comes out of saying: well, we’re somehow special, you know. Contrariwise, there are many examples in history where trying to pretend that we’re somehow superior to all other intelligent beings has led to pretty bad results, right? In Nazi Germany, they said that they were somehow superior to other people. Today we still do a lot of cruelty to animals by saying that we’re so superior somehow, and they can’t feel pain. Slavery was justified by the same kind of really weak arguments. And if we actually go ahead and build artificial general intelligence that can do things better than us, I don’t think we should try to found our self-worth on some sort of bogus claims of superiority in terms of our intelligence. I think we should instead find our calling and the meaning of life from the experiences that we have.

LF – Right.

MT – You know, I can have very meaningful experiences even if there are other people who are smarter than me. When I go to a faculty meeting here and we’re talking about something, and I suddenly realize: oh, he has a Nobel prize, he has a Nobel prize, he has a Nobel prize, I don’t have one. Does that make me enjoy life any less, or enjoy talking to those people less? Of course not. Contrariwise, I feel very honored and privileged to get to interact with other very intelligent beings that are better than me at a lot of stuff. So I don’t think there’s any reason why we can’t have the same approach with intelligent machines.

LF – That’s a really interesting … people don’t often think about that. They think that if there are machines that are more intelligent, that’s naturally not going to be a beneficial type of intelligence. You don’t realize it could be, you know, like peers with Nobel prizes that would be just fun to talk with, and they might be clever about certain topics, and you can have fun having a few drinks with them, so …

MT – Well, also, you know, another example we can all relate to, of why it doesn’t have to be a terrible thing to be in the presence of people who are even smarter than us all around: when you and I were both two years old, I mean, our parents were much more intelligent than us, right? That worked out okay, because their goals were aligned with our goals. And that, I think, is really the number one key issue we have to solve …

LF -….  the value alignment problem.

MT – Exactly. Because people who see too many Hollywood movies with lousy science fiction plot lines worry about the wrong thing, right? They worry about some machine suddenly turning evil. It’s not malice that is the concern, it’s competence. By definition, intelligence makes you very competent: if you have a more intelligent Go-playing computer playing against a less intelligent one, and we define intelligence as the ability to accomplish Go winning, it’s going to be the more intelligent one that wins. And if you have a human, and then you have an AGI that’s more intelligent in all ways, and they have different goals, guess who’s going to get their way, right?

So I was just reading about this  particular rhinoceros species that was driven extinct just a few years ago, 

LF – Yes

MT – A bummer. I was looking at this cute picture, a mommy rhinoceros with its child. You know, why did we humans drive it to extinction? It wasn’t because we were evil rhino-haters as a whole. It was just because our goals weren’t aligned with those of the rhinoceros, and it didn’t work out so well for the rhinoceros, because we were more intelligent, right? So I think it’s just so important that if we ever do build AGI, before we unleash anything, we have to make sure that it learns to understand our goals, adopts our goals, and retains those goals.

LF – So the cool, interesting problem there is us as human beings trying to formulate our values. So, you know, you can think of the United States Constitution as a way that people sat down, at the time a bunch of white men, which is a complicated example, we should say, but they formulated the goals for this country, and a lot of people agree that those goals have actually held up pretty well in some ways, and failed miserably in other ways. That’s an interesting formulation of values. So for the value alignment problem, and a solution to it, we have to be able to put on paper, or in a program, human values. How difficult do you think that is?

MT – Very. But it’s so important that we really have to give it our best. And it’s difficult for two separate reasons. There’s the technical value alignment problem of figuring out how to make machines understand our goals, adopt them, and retain them. And then there’s the separate, philosophical part: whose values anyway? And since it’s not like we have any great consensus on this planet on values, what mechanism should we create to aggregate them and decide: okay, what’s a good compromise? That second discussion can’t just be left to tech nerds like myself, right?

LF – That’s right. 

MT – And if we refuse to talk about it, and then AGI gets built, who’s going to be actually making the decision about whose values? It’s going to be a bunch of dudes in some tech company. And are they necessarily so representative of all of humankind that we want to just entrust it to them? Are they even uniquely qualified to speak to future human happiness just because they’re good at programming AGI? I would much rather have this be a really inclusive conversation.

LF – But do you think it’s possible …? So you create a beautiful vision that includes cultural diversity and various perspectives on discussing rights, freedoms, human dignity. But how hard is it to come to that consensus? It’s certainly a really important thing that we should all try to do, but do you think it’s feasible?

MT – I think there’s no better way to guarantee failure than to refuse to talk about it or refuse to try. And I also think it’s a really bad strategy to say: okay, let’s first have a discussion for a long time, and then once we reach complete consensus, we’ll try to load it into the machine. No, we shouldn’t let perfect be the enemy of good. Instead, we should start with the kindergarten ethics that pretty much everybody agrees on and put that into machines. We’re not even doing that now.

Look, you know, anyone who builds a passenger aircraft wants it to never, under any circumstances, fly into a building or a mountain, right? Yet the September 11 hijackers were able to do that. And even more embarrassingly, you know, Andreas Lubitz, this depressed Germanwings pilot, when he flew his passenger jet into the Alps, killing over 100 people, he just told the autopilot to do it. He told the freaking computer to change the altitude to 100 meters. And even though it had the GPS maps and everything, the computer was like: okay.

So we should take those very basic values, where the problem is not that we don’t agree, the problem is just that we’ve been too lazy to try to put them into our machines, and make sure that from now on, airplanes, which all have computers in them anyway, will just refuse to do something like that. Go into safe mode, maybe lock the cockpit door, go to the nearest airport.

And there’s so much other technology in our world as well now where it’s really becoming quite timely to put in some sort of very basic values like this. Even in cars: we’ve had enough vehicle terrorism attacks by now, where people have driven trucks and vans into pedestrians, that it’s not at all a crazy idea to just have that hard-wired into the car. Because there are always going to be people who for some reason want to harm others, but most of those people don’t have the technical expertise to figure out how to work around something like that. So if the car just won’t do it, it helps. So let’s start there.

LF – So there’s a lot of … that’s a great point. So, not chasing perfect.

MT – Yeah.

LF – There are a lot of things that most of the world agrees on. Let’s start there.

MT – Let’s start there. And then once we start there, we’ll also get into the habit of having these kinds of conversations: okay, what else should we put in here? And have these discussions. It would be a gradual process.

LF – Great. But that also means describing these things, and describing them to a machine. On that, we’ve had a few conversations with Stephen Wolfram. I’m not sure if you’re familiar with Stephen, but …

MT –  Oh yeah I know him quite well.

LF – So he works on a bunch of things, but, you know, cellular automata, these simple computable things, these computational systems. And he kind of mentioned that, within these systems, we probably already have something that’s AGI, meaning we just don’t know it because we can’t talk to it. So, if you give me this chance to try to at least form a question out of this … I think it’s an interesting idea to think that we can have intelligent systems, but we don’t know how to describe something to them, and they can’t communicate with us. I know you’re doing a little bit of work in explainable AI, trying to get AI to explain itself. So what are your thoughts on natural language processing, or some other kind of communication? How does the AI explain something to us? How do we explain something to it, to machines? Or do you think of it differently?

MT – So there are two separate parts to your question. One of them has to do with communication, which is super interesting, and we’ll get to that in a sec. The other is whether we already have AGI and we just haven’t noticed it. There I beg to differ. I don’t think there’s anything in any cellular automaton, or anything in the Internet itself, or whatever, that has artificial general intelligence, in that it can really do everything we humans can do better. I think the day that happens, we will very soon notice, and it will probably be in a very, very big way. But for the second part, though …

LF – Wait, can I ask, sorry? Because you have this beautiful way of formulating consciousness as information processing, and you can think of intelligence as information processing, and you can think of the entire universe as these particles and these systems roaming around that have this information-processing power. You don’t think there is something out there with the power to process information in the way that we human beings do, something that just needs to be connected to? It seems a little bit philosophical, perhaps, but there’s something compelling to the idea that the power is already there, and that the focus should be more on being able to communicate with it.

MT – Well, I agree that in a certain sense the hardware processing power is already out there, because our universe itself can be thought of as a computer already, right? It’s constantly computing how to evolve the water waves in the Charles River and how to move the air molecules around. Seth Lloyd (my colleague here) has pointed out that you can even, in a very rigorous way, think of our entire universe as just being a quantum computer. So it’s pretty clear that our universe supports this amazing processing power, because within this physics computer that we live in, we can even build actual laptops and stuff. So clearly the power is there. It’s just that most of the compute power that nature has, it’s, in my opinion, kind of wasting on boring stuff, like simulating yet another ocean wave somewhere that nobody is even looking at, right? So, in a sense, what life does, what we are doing when we build computers, is re-channeling all this compute that nature is doing anyway into doing things that are more interesting than just yet another ocean wave: let’s do something cool here. So the raw hardware power is there, for sure. Even just computing what’s going to happen for the next five seconds in this water bottle takes a ridiculous amount of compute if you do it on a human-built computer, and this water bottle just did it. But that does not mean that this water bottle has AGI, because AGI means it should also be able to have written my book, done this interview, and I don’t think it’s just a communication problem.

LF – As far as we know.

MT –  I don’t  think it can do it and…

LF – Although Buddhists say, when they watch the water, that there is some beauty, that there’s some depth in being in nature that they can communicate with.

MT – Communication is also very important, because, I mean, look, part of my job is being a teacher, and I know some very intelligent professors, even, who just have a bit of a hard time communicating all their brilliant ideas. To communicate with somebody else, you have to also be able to simulate their mind.

LF –  Yes, empathy.

MT – … build and understand a model of their mind well enough that you can say things that they will understand. That’s quite difficult, right? That’s why today it’s so frustrating if you have a computer that makes some cancer diagnosis, and you ask it: well, why are you saying I should have the surgery? And it can only reply: {MT speaking in a machine voice} I was trained on five terabytes of data and this is my diagnosis, boop boop beep beep.

LF – Yeah.

MT – It  doesn’t really instill a lot of confidence, right? So I think we have a lot of work to do on communication there.

LF – So what kind of … I think you’re doing a little bit of work on explainable AI. What do you think are the most promising avenues? Is it mostly about sort of the Alexa problem of natural language processing, of being able to actually use human-interpretable methods of communication, being able to talk to a system and have it talk back to you? Or are there some more fundamental problems to be solved?

MT – I think it’s all of the above. The natural language processing is obviously important, but there are also more nerdy, fundamental problems. Like, if you take … you play chess?

LF – Of course, I’m Russian, I have to.

MT – Ты говоришь по-русски? {You speak Russian?}

LF – Да по русски говорю  {Yes, I speak Russian}

MT – Отлично, я не знал. {Excellent, I didn’t know.}

LF – When did you learn Russian? 

MT – Я говорю очень плохо по-русски. Купил книгу “Teach Yourself Russian”, читал очень много. Было очень трудно. Я говорю так плохо.

{I speak Russian very badly. I bought the book “Teach Yourself Russian” and read a lot. It was very difficult. I speak so badly.}

LF – How many languages do you know? Wow, that’s really impressive.

MT – I don’t know, my wife has some calculations. But my point was: if you play chess, have you looked at the AlphaZero games?

LF –  Uh, the actual games no.

MT – Check it out, some of them are just mind-blowing, really beautiful. And if you ask: how did it do that? You can go talk to Demis Hassabis and others from DeepMind, but all they will ultimately be able to give you is big tables of numbers, the matrices that define the neural network. And you can stare at those tables of numbers until your face turns blue, and you’re not going to understand much about why it made that move. And even if you have natural language processing that can tell you in human language about 5, 7, 0.28, it’s still not going to really help.

So I think there’s a whole spectrum of fun challenges there, involved in taking a computation that does intelligent things and transforming it into something equally good, equally intelligent, but more understandable. And I think that’s really valuable, because as we put machines in charge of ever more infrastructure in our world, the power grid, trading on the stock market, weapons systems and so on, it’s absolutely crucial that we can trust these AIs to do what we want. And trust really comes from understanding …

LF – Right.

MT – … in a very fundamental way. And that’s why I’m working on this. If we’re going to have some hope of ensuring that machines have adopted our goals and that they’re going to retain them, that kind of trust, I think, needs to be based on things you can actually understand, perhaps even prove theorems about. Even with a self-driving car, right: if someone just tells you it’s been trained on tons of data and never crashed, it’s less reassuring than if someone actually has a proof, maybe a computer-verified proof, that says that under no circumstances is this car just going to swerve into oncoming traffic.

LF –  And that kind of information helps to build trust and help build the alignment, the alignment of goals. At least, awareness that your goals, your values are aligned.

MT – And I think even very short term, if you look at, you know, today, the absolutely pathetic state of cybersecurity that we have, right? Three billion Yahoo accounts were hacked, almost every American’s credit card, and so on. Why is this happening? It’s ultimately happening because we have software that nobody fully understood how it worked. That’s why the bugs hadn’t been found, right? And I think AI can be used very effectively for offense, for hacking, but it can also be used for defense, hopefully automating verifiability and creating systems that are built in different ways, so you can actually prove things about them …

LF – Right.

MT –  and it’s important.

LF – So, speaking of software that nobody understands how it works: of course, a bunch of people asked about your paper “Why does deep and cheap learning work so well?” What are your thoughts on deep learning, these kinds of simplified models of our own brains, which have been able to do some successful perception work, pattern recognition work, and now, with AlphaZero and so on, some clever things? What are your thoughts about the promise and limitations of this space?

MT – Great. I think there are a number of very important insights, very important lessons we can draw from these kinds of successes. One of them is, when you look at the human brain, you see it’s very complicated, 10 to the 11th {10^11} neurons, and there are all these different kinds of neurons, and yada yada, and there’s been this long debate about whether the fact that we have dozens of different kinds is actually necessary for intelligence. We can now, I think, quite convincingly answer that question: no. It’s enough to have just one kind. If you look under the hood of AlphaZero, there’s only one kind of neuron, and it’s a ridiculously simple mathematical thing. So it’s just like in physics: if you have a gas with waves in it, it’s not the detailed nature of the molecules that matters, it’s the collective behavior somehow. Similarly, it’s the higher-level structure of the network that matters, not that you have 20 kinds of neurons. I think our brain is such a complicated mess because it wasn’t evolved just to be intelligent; it was evolved to also be self-assembling …

LF – right.

MT – … and self-repairing, right? And evolutionarily attainable

LF – … and {unintelligible} and so on.
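{The “ridiculously simple mathematical thing” inside networks like AlphaZero is a unit that takes a weighted sum of its inputs and applies a nonlinearity. A minimal Python sketch; the ReLU activation is my illustrative choice, not something specified in the conversation.}

```python
def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of inputs plus a bias,
    passed through a nonlinearity (ReLU here)."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return max(0.0, total)

print(neuron([1.0, 2.0], [0.5, -0.25], 0.1))   # 0.5*1.0 - 0.25*2.0 + 0.1 = 0.1
print(neuron([1.0], [-2.0], 0.0))              # negative sum, so ReLU gives 0.0
```

Everything else in a deep network is just many copies of this unit wired together, which is MT’s point: the power is in the collective structure, not the unit.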

MT – So my hunch is that we’re going to understand how to build AGI before we fully understand how our brains work, just like we understood how to build flying machines long before we were able to build a mechanical bird.

LF –  Yeah, that’s right. You’ve given that example exactly of mechanical birds and airplanes and airplanes do a pretty good job of flying without really mimicking bird flight.

MT – And even now, 100 years later, did you see the TED talk with the German mechanical bird?

LF – I heard you mention it.

MT – Check it out, it’s amazing. But even after that, we still don’t fly in mechanical birds, because it turned out that the way we came up with is simpler, and it’s better for our purposes. And I think it might be the same there. That’s one lesson.

Another lesson is the one our paper was about. Well, first, as a physicist, I thought it was fascinating how there is a very close mathematical relationship between our artificial neural networks and a lot of things that we’ve studied in physics, which go by nerdy names like the renormalization group equation and Hamiltonians and yada, yada, yada. And when you look a little more closely at this, at first I was like: whoa, there’s something crazy here that doesn’t make sense. Because we know that you can build even a super simple neural network to tell apart cat pictures and dog pictures, right? And you can do that very, very well now.

But if you think about it a little bit, you’d convince yourself it must be impossible, because if I have one megapixel, even if each pixel is just black or white, there are two to the power of one million possible images, which is way more than there are atoms in our universe. So, for each image, I have to assign a number, which is the probability that it’s a dog. So an arbitrary function of images is a list of more numbers than there are atoms in our universe. So clearly I can’t store that under the hood of my GPU or my computer, yet it somehow works. What does that mean? Well, it means that out of all of the problems that you could try to solve with a neural network, almost all of them are impossible to solve with a reasonably sized one. But then what we showed in our paper was that the fraction of all the problems that you could possibly pose that we actually care about, given the laws of physics, is also an infinitesimally tiny little part, and amazingly, they are basically the same part.
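{The counting argument is easy to check numerically: 2^1,000,000 is a number with about 301,030 decimal digits, while the standard order-of-magnitude estimate for atoms in the observable universe, roughly 10^80, has only 81 digits.}

```python
import math

n_pixels = 1_000_000
# Number of decimal digits of 2**n_pixels, without constructing the huge integer:
digits = math.floor(n_pixels * math.log10(2)) + 1
print(digits)  # 301030 -- the digit count of 2^1,000,000

# For comparison, ~10^80 atoms in the observable universe is an 81-digit number.
```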

LF –  Yeah. It’s almost like the world was created for…  I mean they kind of come together.

MT – Yeah, you could say maybe the world was created for us, but I have a more modest interpretation, which is that evolution endowed us with neural networks precisely for that reason: because this particular architecture {gesturing to his head}, as opposed to the one in your laptop, is very, very well adapted to solving the kinds of problems that nature kept presenting our ancestors with, right? It makes sense: why do we have a brain in the first place? It’s to be able to make predictions about the future and so on. So if we had a sucky system that could never solve these problems, it would never have worked. So this is, I think, a very beautiful fact. There’s also been earlier work on why deeper networks are good, but we were able to show an additional cool fact: take even incredibly simple problems, like suppose I give you 1000 numbers and ask you to multiply them together. You can write a few lines of code, boom, done, trivial. If you try to do that with a neural network that has only one single hidden layer, you can do it, but you’re going to need two to the power of 1000 neurons to multiply 1000 numbers, which is, again, more neurons than there are atoms in our universe.

LF – That’s fascinating.

MT – But if you allow yourself to make it a deep network with many layers, you only need about 4000 neurons. It’s perfectly feasible. So …
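{One way to see why depth helps here: multiply the numbers pairwise in a binary tree of depth log2(1000) ≈ 10, where each pairwise product is approximated by a small four-neuron gadget, giving roughly 4n neurons for n numbers. The sketch below is my own illustration in this spirit, not the paper’s exact construction; the softplus activation and the scale parameter lam are my choices.}

```python
import math

def softplus(u):
    return math.log1p(math.exp(u))

def neural_mul(x, y, lam=1e-3):
    """Approximate x*y using four softplus neurons.

    Near 0 the even part of softplus is log(2) + u**2/4, so the combination
    below isolates the cross term lam**2 * x * y; since 4 * softplus''(0) = 1,
    dividing by lam**2 recovers x*y (up to a small O(lam**2) error).
    """
    num = (softplus(lam * (x + y)) + softplus(-lam * (x + y))
           - softplus(lam * (x - y)) - softplus(-lam * (x - y)))
    return num / lam ** 2

def tree_product(xs, lam=1e-3):
    """Multiply a list of numbers with a log-depth binary tree of neural_mul
    gadgets: n - 1 gadgets of 4 neurons each, so ~4n neurons in total."""
    xs = list(xs)
    while len(xs) > 1:
        nxt = [neural_mul(xs[i], xs[i + 1], lam) for i in range(0, len(xs) - 1, 2)]
        if len(xs) % 2:          # an odd element passes through to the next layer
            nxt.append(xs[-1])
        xs = nxt
    return xs[0]

print(neural_mul(3.0, 5.0))           # close to 15
print(tree_product([1.5, 2.0, 3.0]))  # close to 9
```

The depth is what makes the neuron count linear: a single hidden layer has no way to reuse intermediate products, which is where the exponential blow-up comes from.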

LF – That’s really interesting. Yeah. So, on another architecture type: you mentioned Schrödinger’s equation. What are your thoughts about quantum computing and the role of this kind of computational unit in creating an intelligent system?

MT – In some Hollywood movies, which I will not mention by name because I don’t want to spoil them, the way they get AGI is by building a quantum computer, because the word quantum sounds cool and so on.

LF – That’s right.

MT – First of all, I think we don’t need quantum computers to build AGI. I suspect your brain is not a quantum computer in any profound sense. I even wrote a paper about that many years ago, where I calculated the so-called decoherence time: how long it takes until the quantum computer-ness of what your neurons are doing gets erased by just random noise from the environment. And it’s about 10 to the minus 21 {10^-21} seconds. So, as cool as it would be to have a quantum computer in my head, I don’t think that fast.

On the other hand, there are very cool things you could do with quantum computers, or that I think we’ll be able to do soon when we get bigger ones, that might actually help machine learning do even better than the brain. For example, and this is just a moonshot: you know that learning is very much the same thing as search. If you’re trying to train a neural network to learn to do something really well, you have some loss function; you have a bunch of knobs you can turn, which are represented by a bunch of numbers, and you’re trying to tweak them so that it becomes as good as possible at this thing. So if you think of a landscape with some valley, where each dimension of the landscape corresponds to some number you can change, you’re trying to find the minimum. And it’s well known that if you have a very high-dimensional, complicated landscape, it’s super hard to find the minimum, right?

Quantum mechanics is amazingly good at this. If I want to know what’s the lowest-energy state this water can possibly have, that’s incredibly hard to compute, but nature will happily figure it out for you if you just cool it down, make it very, very cold. If you put a ball somewhere, it’ll roll down to its minimum. And this happens, metaphorically, in the energy landscape too. And quantum mechanics even uses some clever tricks which today’s machine learning systems don’t: if you’re trying to find the minimum and you get stuck in a little local minimum here, in quantum mechanics you can actually tunnel through the barrier and get unstuck again.
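{The classical cousin of this idea, simulated annealing, captures the “cool it down” intuition in a few lines: accept uphill moves with a probability that shrinks as the temperature drops, so the search can escape local minima early on. The toy loss, cooling schedule, and parameters below are my own; quantum tunneling is a genuinely different mechanism, and this is only the metaphor.}

```python
import math, random

def loss(x):
    # Double-well "loss landscape": shallow local minimum near x = +1,
    # deeper global minimum near x = -1.
    return (x * x - 1.0) ** 2 + 0.3 * x

def anneal(x0, steps=20000, t0=2.0, seed=0):
    rng = random.Random(seed)
    x = x0
    best_x, best_f = x, loss(x)
    for i in range(steps):
        t = t0 * (1.0 - i / steps) + 1e-4   # linear cooling schedule
        cand = x + rng.gauss(0.0, 0.3)      # random proposal
        delta = loss(cand) - loss(x)
        # Always accept downhill moves; accept uphill moves with probability
        # exp(-delta / t), which shrinks as the temperature t drops.
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = cand
        if loss(x) < best_f:
            best_x, best_f = x, loss(x)
    return best_x

# Starting in the shallow right-hand well, the search still finds the deeper left one.
print(anneal(1.0))
```

Plain gradient descent started at x = +1 would stay trapped in the shallow well; the temperature-dependent uphill acceptance is what lets the search cross the barrier.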

LF –  That’s really interesting.

MT – So maybe, for example, we’ll one day use quantum computers that help train neural networks better.

LF –  That’s really interesting. Okay, so as a component of kind of the learning process, for example.

MT –  Yeah.

LF – Let me ask, sort of wrapping up here a little bit, let me return to the questions of our human nature and love, as I mentioned. You mentioned sort of a helper robot, and you can think also of other robots. Do you think the way we human beings fall in love and get connected to each other is possible to achieve in an AI system, in a human-level AI intelligence system? Do you think we would ever see that kind of connection? Or, in all this discussion about solving complex goals, is this kind of human social connection one of the goals, one of the peaks, that the rising sea levels would be able to reach? Or do you think that’s something that’s ultimately, or at least in the short term, relative to the other goals, not achievable?

MT – I think it’s all possible. There’s a very wide range of guesses, you know, among AI researchers about when we’re going to get AGI. Some people, like our friend Rodney Brooks, say it’s going to be hundreds of years at least, and then there are many others who think it’s going to happen much sooner. In recent polls, maybe half or so of AI researchers think we’re going to get AGI within decades. So if that happens, of course, I think these things are all possible. But in terms of whether it will happen, I think we shouldn’t spend so much time asking what we think will happen in the future, as if we are just some sort of pathetic, passive bystanders waiting for the future to happen to us. Hey, we’re the ones creating this future, right? So we should be proactive about it and ask ourselves what sort of future we would like to have happen, and then make it like that.

Would I prefer some sort of incredibly boring, zombie-like future, where all these mechanical things happen and there’s no passion, no emotion, maybe no experience even? No, I would of course much rather prefer a future where the things that we value the most about humanity, our subjective experience, passion, inspiration, love, do exist. I think ultimately it’s not our universe giving meaning to us; it’s us giving meaning to our universe. If we build more advanced intelligence, let’s make sure we build it in such a way that meaning is part of it.

LF – A lot of people who seriously study this problem and think of it from different angles have trouble: the majority of the cases they think through are the ones that are not beneficial to humanity. So what are your thoughts? I really don't want people to be terrified. What's a way for people to think about it such that we can solve it and make it better?

MT – No, I don't think panicking is going to help in any way. It's not going to increase the chances of things going well either. Even if you are in a situation where there is a real threat, does it help if everybody just freaks out? No, of course not. I think, yeah, there are of course ways in which things can go horribly wrong.

First of all, it's important when we think about the problems and risks to also remember how huge the upsides can be if we get it right. Everything we love about society and civilization is a product of intelligence. So if we can amplify our intelligence with machine intelligence, and no longer lose our loved ones to what we're told is an incurable disease, and things like this, of course we should aspire to that. That can be a motivator, I think: reminding ourselves that the reason we try to solve problems is not just that we're trying to avoid doom, but that we're trying to do something great. But then in terms of the risks, I think the really important question to ask is: what can we do today that will actually help make the outcome good?

LF – Yes.

MT – And dismissing the risks is not one of them. You know, I find it quite funny, often when I'm on discussion panels about these things, how the people who work for companies will always be like: "Oh, nothing to worry about, nothing to worry about, nothing to worry about." And it's only academics who sometimes express concerns. That's not surprising at all if you think about it. Upton Sinclair quipped: "It's hard to make a man believe in something when his income depends on not believing in it." { Actual quote is: "It is difficult to get a man to understand something when his salary depends upon his not understanding it," from the book "I, Candidate for Governor: And How I Got Licked" by Upton Sinclair, 1935 }

And frankly, we know a lot of these people in companies, and they are just as concerned as anyone else. But if you're the CEO of a company, that's not something you want to go on record saying when you have silly journalists who are going to put a picture of a Terminator robot next to your quote.

So the issues are real, and the way I think about them is basically this: the real choice we have is, first of all, are we going to just dismiss the risks and say, well, let's just go ahead and build machines that can do everything we can do, better and cheaper; let's just make ourselves obsolete as fast as possible, what could possibly go wrong? That's one attitude.

The opposite attitude, I think, is to say there's incredible potential; let's think about what kind of future we're really, really excited about, what the shared goals are that we can really aspire towards, and then let's think really hard about how we can actually get there. So don't start by thinking about the risks; start by thinking about the goals, and when you do that, then you can think about the obstacles you want to avoid. I often get students coming in right here into my office for career advice, and I always ask them this very question: where do you want to be in the future? If all she can say is, well, maybe I'll have cancer, maybe I'll get run over by a truck …

LF – Focusing on the obstacles instead of the goal.

MT – … she's just going to end up a hypochondriac, paranoid. Whereas if she comes in with fire in her eyes and says, I want to be there, then we can talk about the obstacles and see how we can circumvent them. That's, I think, a much healthier attitude.

LF – That’s really well put. 

MT – And I feel it's very challenging to come up with a vision for the future which we are unequivocally excited about. I'm not just talking now in vague terms, like, yeah, let's cure cancer, fine; I'm talking about what kind of society we want to create, what we want it to mean to be human in the age of AI, in the age of AGI. If we can have this conversation, a broad, inclusive conversation, and gradually start converging towards some future, some direction at least, that we want to steer towards, then we will be much more motivated to constructively take on the obstacles. And if I try to wrap this up in a more succinct way, I think we can all agree already now that we should aspire to build AGI that doesn't overpower us, but that empowers us.

LF – And think of the many various ways it can do that. Coming from my side of the world, autonomous vehicles: I'm personally from the camp that believes human-level intelligence is required to achieve something like vehicles that would actually be something we would enjoy using and being part of. So that's one example, and certainly there are a lot of other types of robots, and medicine and so on. So focusing on those, and then coming up with the obstacles, the ways that it can go wrong, and solving those one at a time.

MT – And just because you can build an autonomous vehicle, even if you could build one that would drive just fine without you, maybe there are some things in life that we would actually want to do ourselves.

LF – That’s right, 

MT – Like, for example, if you think of our society as a whole, there are things that we find very meaningful to do, and that doesn't mean we have to stop doing them just because machines can do them better. You know, I'm not going to stop playing tennis the day someone builds a tennis robot that can beat me.

LF – People are still playing chess, and even Go.

MT – Yeah, and in the very near term, some people are even advocating basic income to replace jobs. But if the government is going to be willing to just hand out cash to people for doing nothing, then one should also seriously consider whether the government should also hire a lot more teachers and nurses, the kinds of jobs which people often find great fulfillment in doing, right? I get very tired of hearing politicians saying: "Oh, we can't afford hiring more teachers, but we're maybe going to have basic income." We could have more serious research and thought into what gives meaning to our lives; jobs give us so much more than income, right? And then think about, in the future, what are the roles that we want to have people continue doing, empowered by machines?

LF – And I think sort of … I come from Russia, from the Soviet Union, and I think for a lot of people in the 20th century, going to the moon, going to space, was an inspiring thing. I feel like the universe of the mind, so AI, understanding and creating intelligence, is that for the 21st century. So it's really surprising, and I've heard you mention this, it's really surprising to me, both on the research funding side, that it's not funded as greatly as it could be, but most importantly, on the politicians' side, that it's not part of the public discourse except in the killer bots/Terminator kind of view. People are not yet, I think, excited by the possible positive future that we can build together, so …

MT – And we should be, because politicians usually just focus on the next election cycle, right? The single most important thing I feel we humans have learned in the entire history of science is that we are the masters of underestimation. We underestimated the size of our cosmos again and again, realizing that everything we thought existed was just a small part of something grander, right? Planet, solar system, galaxy, you know, clusters of galaxies, universe. And we now know that the future has just so much more potential than our ancestors could ever have dreamt of in this cosmos.

Imagine if all of Earth was completely devoid of life except for Cambridge, Massachusetts. Wouldn't it be kind of lame if all we ever aspired to was to stay in Cambridge, Massachusetts forever and then go extinct in one week, even though Earth was going to continue on for longer? That's sort of the attitude I think we have now. On the cosmic scale, life can flourish on Earth, not for four years, but for billions of years. I can even tell you how to move it out of harm's way when the sun gets too hot. And then we have so much more resources out here. Maybe there are a lot of other planets with bacteria or cow-like life on them, but all this opportunity seems, as far as we can tell, to be largely dead, like the Sahara desert. And yet we have the opportunity to help life flourish like this for billions of years. So let's quit squabbling about whether some little border should be drawn one mile to the left or to the right, and look up to the skies and realize, hey, you know, we can do such incredible things.

LF – Yeah. And that's, I think, why it's really exciting that you and others are connected with some of the work Elon Musk is doing, because he's literally going out into that space, really exploring our universe. And it's wonderful.

MT – That is exactly why Elon Musk is so misunderstood, right? People misconstrue him as some kind of pessimistic doomsayer. The reason he cares so much about AI safety is because he, more than almost anyone else, appreciates these amazing opportunities that we'll squander if we wipe ourselves out here on Earth. We're not just going to wipe out the next generation, but all generations, and this incredible opportunity that's out there would really be wasted. And for people who think that we would be better off without technology, let me just mention that if we don't improve our technology, the question isn't whether humanity is going to go extinct. The question is just whether we're going to get taken out by the next big asteroid, or the next supervolcano, or something else dumb that we could easily prevent with more tech, right? If we want life to flourish throughout the cosmos, AI is the key to it. As I mention in a lot of detail in my book, even many of the most inspired sci-fi writers, I feel, have totally underestimated the opportunities for space travel, especially to other galaxies, because they weren't thinking about the possibility of AGI, which just makes it so much easier.

LF – Right, yeah. So that goes to your view of AGI that enables our progress, that enables a better life. That's a beautiful way to put it and something to strive for. So Max, thank you so much. Thank you for your time today, it has been awesome.

MT – Thank you so much. Спасибо большое {Thank you very much}. Молодец {Well done}.

Transcript of Lex Fridman Interview with Ray Kurzweil

{This is my best attempt at a transcript of Lex’s interview with Ray Kurzweil on 17 September 2022 on YouTube: Ray Kurzweil: Singularity, Superintelligence, and Immortality | Lex Fridman Podcast #321  I made some light editorial corrections but tried to be faithful to the conversation. I included a few screenshots of relevant material. My goal is to share the interaction between Lex and Ray; I have learned much from both of them.}

Introduction

LF – The following is a conversation with Ray Kurzweil, author, inventor and futurist who has an optimistic view of our future as a human civilization predicting that exponentially improving technologies will take us to a point of a singularity beyond which superintelligent artificial intelligence will transform our world in nearly unimaginable ways. Eighteen years ago in the book The Singularity is Near, he predicted that the onset of the singularity will happen in the year 2045. He still holds to this prediction and estimate. In fact he’s working on a new book on this topic that will hopefully be out next year.

Turing test

LF – In your 2005 book titled The Singularity is Near, you predicted that the singularity will happen in 2045. So now, 18 years later, do you still estimate that the singularity will happen in 2045? And maybe first, what is the singularity, the technological singularity, and when will it happen?

RK – The singularity is where computers really change our view of what's important and change who we are. But we're getting close to some salient things that will change who we are. The key thing is 2029, when computers will pass the Turing test. There's also some controversy about whether the Turing test is valid; I believe it is, and most people do believe that, but there's some controversy about it. But Stanford got very alarmed at my prediction about 2029. I made it in 1999 in my book …

LF – The Age of Spiritual Machines  and then you repeated the prediction in 2005

RK – They held an international conference, you might have been aware of it, of AI experts in 1999 to assess this view. People gave different predictions, and they took a poll. It was really the first time that AI experts worldwide were polled on this prediction, and the average answer was 100 years. 20% believed it would never happen. That was the view in 1999: 80% believed it would happen, but not within their lifetimes. There have been so many advances in AI that the poll of AI experts has come down over the years. So a year ago, something called Metaculus, which you may be aware of, assessed different types of experts on the future. They again assessed what AI experts then felt, and they were saying 2042.

LF – For the Turing test?

RK – For the Turing test.

LF –  So it’s coming down

RK – And I was still saying 2029. A few weeks ago they did another poll, and it was 2030. So AI experts now basically agree with me. I haven't changed at all, I've stayed with 2029, and AI experts now agree with me, but they didn't agree at first.

LF –  So Alan Turing formulated the Turing test and ….

RK – Right, and he actually said very little about it. I mean, the 1950 paper {"Computing Machinery and Intelligence"} where he articulated the Turing test has like a few lines that talk about it, and it really wasn't very clear how to administer it. And he said that if they did it in like 15 minutes, that would be sufficient, which I don't really think is the case.

These large language models now, some people are convinced by them already. I mean, you can talk to one and have a conversation with it, you can actually talk to it for hours. So it requires a little more depth. There are some problems with large language models, which we can talk about, but some people are convinced by the Turing test.

Now, if somebody passes the Turing test, what are the implications of that? Does that mean that they're sentient, that they're conscious or not? It's not necessarily clear what the implications are. Anyway, I believe 2029, that's 6 or 7 years from now, we'll have something that passes the Turing test, and a valid Turing test, meaning it goes for hours, not just a few minutes.

LF – Can you please speak to that a little bit? What is your formulation of the Turing test? You've proposed a very difficult version of the Turing test, so what does that look like?

RK –  Basically it’s just to assess it over several hours and also have a human judge that’s fairly sophisticated on what computers can do and can’t do. If you take somebody who’s not that sophisticated or even an average engineer,  they may not really assess various aspects of it. 

LF –  So you really want the human to challenge the system

RK –  Exactly, exactly.

LF –  On its ability to do things like common sense reasoning perhaps.

RK – That's actually a key problem with large language models: they don't do these kinds of tests that would involve assessing chains of reasoning. But you can lose track of that. If you talk to them, they actually can talk to you pretty well, and you can be convinced by it. But the question is whether it would really convince you that it's a human, whatever that takes; maybe it would take days or weeks, but it would really have to convince you that it's human. Large language models can appear that way. You can read conversations and they appear pretty good. There are some problems with it. It doesn't do math very well. You can ask, how many legs do ten elephants have, and they'll tell you, well, okay, each elephant has four legs, and there are ten elephants, so it's forty legs. And you go, okay, that's pretty good. How many legs do eleven elephants have? And they don't seem to understand the question.

LF –  Do all humans understand that question? 

RK – No, that's the key thing. I mean, how advanced a human do you want it to be? But we do expect a human to be able to do multi-chain reasoning, to be able to take a few facts and put them together. Not perfectly; we see, you know, in a lot of polls that people don't do that perfectly at all. So it's not very well defined, but it's something where it really would convince you that it's human.

LF – Is your intuition that large language models will not be solely the kind of system that passes the Turing test in 2029? Do we need something else?

RK – No, I think it will be a large language model, but they have to go beyond what they're doing now. I think we're getting there. Another key issue is that if something actually passes the Turing test validly, I would believe it's conscious, and not everybody would say that. Some would say, okay, it can pass the Turing test, but we don't really believe it's conscious; that's a whole other issue. But if it really passes the Turing test, I would believe that it's conscious. I don't believe that of large language models today.

LF –  If it appears to be conscious, that’s as good as being conscious. At least for you in some sense.

RK – I mean, consciousness is not something that's scientific. I believe you're conscious, but it's really just a belief, and we believe that about other humans who at least appear to be conscious. When you go outside of shared human assumptions, like, are animals conscious? Some people believe they're not conscious, some people believe they are conscious. And would a machine that acts just like a human be conscious? I believe it would be, but that's really a philosophical belief. You can't prove it. I can't take an entity and prove that it's conscious. There's nothing that you can do that would indicate that.

LF – It's like saying a piece of art is beautiful. Multiple people can experience a piece of art as beautiful, but you can't prove it.

RK – But it's also an extremely important issue. I mean, imagine if you had a world where nobody is conscious; the world may as well not exist. And so some people, like, say, Marvin Minsky, said: well, consciousness is not logical, it's not scientific, and therefore we should dismiss it, and any talk about consciousness is just not to be believed. But when he actually engaged with somebody who was conscious, he acted as if they were conscious. He didn't ignore that.

LF –  He acted as if consciousness does matter

RK – Exactly, whereas he said it didn't matter.

LF –  Well that’s Marvin Minsky, he’s full of contradictions.

RK –  But that’s true of a lot of people as well. 

LF –  But to you, consciousness matters.

RK – To me it's very important, but I would say it's not a scientific issue, it's a philosophical issue, and people have different views. Some people believe that anything that makes a decision is conscious. So your light switch is conscious; its level of consciousness is low, it's not very interesting, but that's a consciousness. And a computer that makes a more interesting decision is still not at a human level, but it's also conscious, and at a higher level than your light switch. So that's one view. There are many different views of what consciousness is.

LF – So a system passing the Turing test is not scientific, but in issues of philosophy, things like ethics start to enter the picture. Do you think we would start contending, as a human species, about the ethics of turning off such a machine?

RK – Yeah. I mean, that will definitely come up. It hasn't come up in reality yet, but …

LF –  Yet.

RK – But I'm talking about 2029; that's not that many years from now. So what are our obligations to it? I mean, a computer that's conscious has somewhat different connotations than a human. We have a continuous consciousness; we're an entity that does not last forever. Now, actually, a significant portion of humans still exist and are therefore still conscious, but anybody who is over a certain age doesn't exist anymore. That wouldn't be true of a computer program: you could completely turn it off, and a copy of it could be stored, and you could recreate it. So it has a different type of validity. You can actually take it back in time, you could eliminate its memory and have it go over again. It has a different kind of connotation than humans do.

LF –  Well perhaps you can do the same thing with humans. It’s just that we don’t know how to do that yet.

RK – Yeah. 

LF –  It’s possible that we figure out all of these things on the machine first. But that doesn’t mean the machine isn’t conscious.

RK – I mean, if you look at the way people react to, say, C-3PO or other machines that are conscious in movies: they don't actually explain how it's conscious, but we see that it is a machine, and people believe that it is conscious, and they'll actually worry about it if it gets into trouble and so on.

LF – So 2029 is going to be the first year when a major thing happens, and that will shake our civilization into starting to consider the role of AI.

RK – I mean, yes and no. This one guy at Google {Blake Lemoine, referring to LaMDA, the Language Model for Dialogue Applications} claimed that the machine was conscious.

LF –  That’s just one person

RK – Right, right.

LF – So it starts to happen at scale.

RK – Well, that's exactly right, because most people have not taken that position. I don't take that position. I've used different things like this, and they don't appear to me to be conscious.

As we eliminate various problems of these large language models, more and more people will accept that they're conscious. So when we get to 2029, I think a large fraction of people will believe that they're conscious. It's not going to happen all at once. I believe it will actually happen gradually, and it's already started to happen.

LF –  And so that takes us one step closer to the singularity.

Brain-computer interfaces

RK – Another step, then, is in the 2030s, when we can actually connect our neocortex, which is where we do our thinking, to computers. I mean, just as this {RK holding up a smartphone} actually gains a lot from being connected to computers that amplify its abilities. If this {smartphone} did not have any connection, it would be pretty stupid; it could not answer any of your questions.

LF – If you’re just listening to this by the way, Ray is holding up the all powerful smartphone.

RK – So we're going to do that directly from our brains. I mean, these {smartphones} are pretty good; they have already amplified our intelligence. I'm already much smarter than I would otherwise be if I didn't have this. I remember when I wrote my first book, The Age of Intelligent Machines, there was no way to get information from computers. I actually would go to a library, find a book, find the page that had the information I wanted, and go to the copier; my most significant information tool was a roll of quarters that I could feed the copier. So we're already greatly advanced now that we have these {smartphone} things.

There are a few problems with it. First of all, I constantly put it down and I don't remember where I put it. I've actually never lost it, but you have to find it {the smartphone}, and then you have to turn it on. So there's a certain number of steps. It would actually be quite useful if something would just listen to your conversation and say, that's so-and-so, the actress, and tell you about what you're talking about.

LF – So going from active to passive where it just permeates your whole life.

RK – Yeah, exactly.

LF – The way your brain does when you’re awake, your brain is always there.

RK – Right. Now, that's something that could actually just about be done today, where it would listen to your conversation, understand what you're saying, understand what you're missing, and give you that information.

But another step is to actually go inside your brain. There are some prototypes where you can connect your brain; they actually don't have the amount of bandwidth that we need. They can work, but they work fairly slowly.

So imagine it actually would connect to your neocortex. The neocortex, which I describe in How to Create a Mind, has different levels, and as you go up the levels, it's kind of like a pyramid. The top level is fairly small, and that's the level where you want to connect these brain extenders. I believe that will happen in the 2030s. So just the way this {smartphone} is greatly amplified by being connected to the cloud, we can connect our own brains to the cloud and just do what we can do by using this machine.

LF – Do you think it would look like the brain computer interface of Neuralink?

RK – Well, Neuralink is an attempt to do that. It doesn’t have the bandwidth that we need. 

LF – Yet.

RK – Right. But I think they're going to get permission for this, because there are a lot of people who absolutely need it because they can't communicate. I know a couple of people like that, who have ideas and cannot move their muscles and so on; they can't communicate. For them this would be very valuable. But we could all use it. Basically, it would turn us into something like having a phone, but it would be in our minds; it would be kind of instantaneous.

LF – And maybe communication between two people would not require this low bandwidth mechanism of language.

RK – Yes, exactly. We don’t know what that would be, although we do know that computers can share information, like language, instantly. They can share many, many books in a second. So we could do that as well. 

If you look at what our brain does, it actually can manipulate different parameters. So we talk about these large language models. I mean, I had written that it requires a certain amount of information in order to be effective, and that we would not see AI really being effective until it got to that level. We had large language models that were like 10 billion bytes; they didn't work very well. They finally got to 100 billion bytes, and now they work fairly well. Now we're going to a trillion bytes. If you say LaMDA {Language Model for Dialogue Applications} has 100 billion bytes, what does that mean?

Well, what if you had something that had one byte, one parameter? Maybe you want to tell whether or not something is an elephant. So you put in something that would detect its trunk: if it has a trunk, it's an elephant; if it doesn't have a trunk, it's not an elephant. That would work fairly well. There are a few problems with it, and it really wouldn't be able to tell what a trunk is, but anyway …

LF – And maybe other things other than elephants have trunks. You might get really confused. 

RK – Yeah, exactly.

LF – I'm not sure which animals have trunks, but you know: how do you define a trunk? But yeah, that's one parameter. You could do okay.
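Ray's one-parameter thought experiment can be written down directly, and the failure mode Lex points out falls straight out of it. This is a toy illustration in the spirit of the conversation, not any real model; the animals and features are made up:

```python
# Toy one-parameter "elephant detector" in the spirit of Ray's example:
# a single feature (has a trunk) decides everything.
def is_elephant(has_trunk: bool) -> bool:
    return has_trunk

# Hypothetical test cases: the single feature for each animal.
animals = {
    "elephant": True,   # has a trunk
    "tapir": True,      # also has a trunk!
    "horse": False,
}

for name, has_trunk in animals.items():
    print(name, is_elephant(has_trunk))
# The elephant and the horse come out right, but the tapir is
# misclassified as an elephant: one parameter can't separate
# "has a trunk" from "is an elephant".
```

With billions of parameters instead of one, a model can combine many such features, which is the point Ray makes next.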

RK –  So these things have 100 billion parameters. So they’re able to deal with very complex issues. 

LF – All kinds of trunks.

RK – Human beings actually have a little bit more than that, but they're getting to the point where they can emulate humans. If we were able to connect this to our neocortex, we would basically add more of these abilities to make distinctions, and it could ultimately be much smarter, and also be attached to information that we feel is reliable. So that's where we're headed.

LF – So you think that there will be a merger in the thirties {2030s}, an increasing amount of merging between the human brain and the AI brain?

RK – Exactly. And the AI brain is really an emulation of human beings; I mean, that's why we're creating them, because human beings act the same way, and this {AI brain} is basically to amplify them. This {smartphone} amplifies our brain. It's a little bit clumsy to interact with, but it's definitely, you know, way beyond what we had 15 years ago.

LF – But the implementation becomes different just like a bird versus the airplane. Even though the AI brain is an emulation, it starts adding features we might not otherwise have like the ability to consume a huge amount of information quickly. Like look up thousands of Wikipedia articles in one take.

RK – Exactly. We can get, for example, things like simulated biology, where it can simulate many different things at once. We already had one example of simulated biology, which is the Moderna vaccine, and that's going to be the way we create medications from now on. They were able to simulate what each example of an mRNA sequence would do to a human being, and they were able to simulate that quite reliably. They actually simulated billions of different mRNA sequences, found the ones that were the best, and created the vaccine. And talk about doing it quickly: they did that in two days. How long would a human being take to simulate billions of mRNA sequences? I don't know that we could do it at all, but it would take many years. They did it in two days. One of the reasons that people didn't like the vaccines is that it was done too quickly, too fast. But they actually included the time it took to test it, which was 10 months. So they figured, okay, it took 10 months to create this. Actually, it took two days. We will also ultimately be able to do the testing in a few days as well.

LF – Because we can simulate how the body will respond to it?

RK – Yeah but that’s a little bit more complicated because the body has a lot of different elements and we have to simulate all of that. But that’s coming as well. So ultimately we could create it in a few days and then test it in a few days and we would be done. And we can do that with every type of medical, you know, insufficiency that we have.

LF – So curing all diseases, improving certain functions of the body, supplements, drugs for recreation, for health, for performance, for productivity, all that kind of stuff.

RK – Well, that's where we're headed. Because right now we have a very inefficient way of creating these new medications. But we've already shown it, and the Moderna vaccine is actually the best of the vaccines we've had. It literally took two days to create, and we'll get to the point where we can test it out quickly as well.

LF – Are you impressed by {Deep Mind} Alpha Fold and the solution to the protein folding which essentially is simulating modeling this primitive building block of life which is a protein and its 3D shape?

RK – It's pretty remarkable that they can actually predict what the 3D shapes of these things are. But they did it with the same type of neural net that, for example, won at Go {DeepMind's AlphaGo}.

LF – So, it’s all the same.

RK – It's all the same. They took that same thing and just changed the rules to chess, and within a couple of days it played chess at a master level, greater than any human being. And the same thing then worked for {DeepMind} AlphaFold, which no human had done. I mean, the best humans could maybe figure out 15-20% of what the shape would be, and after a few takes, it ultimately got to just about 100%.

Singularity

LF – Do you still think the singularity will happen in 2045? And what does that look like?

RK – You know, once we can amplify our brains with computers directly, which will happen in the 2030s, that’s going to keep growing. And that’s another whole theme, which is the exponential growth of computing power.

LF – So looking at price performance of computation from 1939 to 2021.

Chart 1: Price-Performance of Computation, 1939-2021

RK – Right. So that starts with the very first computer, actually created by a German {Konrad Zuse} during World War Two. And you might have thought that would be significant, but actually the Germans didn’t think computers were significant, and they completely rejected it. And the second one on the chart is also a Zuse machine, the Zuse Z2.

LF – And by the way, we’re looking at a plot with the X-axis being the year, from 1935 to 2025, and the Y-axis, in log scale, being computations per second per constant dollar. So, dollars normalized for inflation. And it’s growing linearly on the log scale, which means it’s growing exponentially.
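{A quick sketch of the log-scale point being made here: the logarithm of an exponential series is a straight line, so equal vertical steps on the chart mean equal multiplicative factors. The doubling period and values below are illustrative assumptions, not the chart’s actual data.}

```python
import math

# Illustrative price-performance series: doubling every step
# (assumed numbers only, not the chart's real data points).
years = list(range(1939, 2022, 2))
perf = [2 ** i for i in range(len(years))]  # exponential growth

# On a log scale an exponential becomes a straight line:
# log10(2**i) = i * log10(2), so successive differences are constant.
log_perf = [math.log10(p) for p in perf]
diffs = [round(b - a, 6) for a, b in zip(log_perf, log_perf[1:])]
print(all(d == diffs[0] for d in diffs))  # constant slope, i.e. a straight line
```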

RK – The third one was the British computer, which the Allies did take very seriously. It cracked the German code and enabled the British to win the Battle of Britain, which absolutely would not have happened if they hadn’t cracked the code using that computer.

But that’s an exponential graph. A straight line on that graph is exponential growth, and you see 80 years of exponential growth. And about every five years, and this happened shortly before the pandemic, people say it’s over. Well, they call it Moore’s Law, which is not correct, because it’s not all Intel. In fact, it started decades before Intel was even created, and it didn’t start with transistors formed into a grid.

LF – So it’s not just transistor count or transistor size?

RK – Right. It started with relays, then went to vacuum tubes, then to individual transistors, and then to integrated circuits. Integrated circuits actually start around the middle of this graph, and it has nothing to do with Intel. Intel actually was a key part of this, but a few years ago they stopped making the fastest chips. If you take the fastest chip of any technology in each year, you get this kind of graph, and it has definitely continued for 80 years.

LF – So you don’t think Moore’s law broadly defined is dead? It has been declared dead multiple times.

RK – I don’t like the term Moore’s law because it has nothing to do with Moore or with Intel. But yes,  the exponential growth of computing is continuing, it has never stopped. 

LF – From various sources

RK – I mean, it went through World War Two, it went through global recessions, it’s just continuing. And if you continue that out, along with software gains, which are a whole other issue, they really multiply: whatever you get from software gains, you multiply by the computing gains, and you get faster and faster speed.

Chart 2: Training Compute (FLOPS) of milestone Machine Learning systems over time

RK – This is actually the compute used to train the milestone computer models that have been created. And that actually expands by a factor of two roughly every six months.

LF – So we’re looking at a plot from 2010 to 2022. On the X-axis is the publication date of the model, and perhaps sometimes the actual paper associated with it, and on the Y-axis is training compute in FLOPS. So basically this is looking at the increase, not in transistors, but in the computational power used to train neural networks.

RK – Yes. The computational power that created these models and that’s doubled every six months.

LF – Which is even faster than the transistor doubling.

RK – Yeah. Actually, since it grows faster than the cost comes down, it has become a greater and greater investment to create these. But at any rate, by the time you get to 2045 we’ll be able to multiply our intelligence many millions-fold. And it’s very hard to imagine what that would be like.
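{A sanity check on the arithmetic behind that claim: doubling every six months is two doublings per year, so over n years the factor is 2^(2n). The 2022 and 2045 endpoints below are my assumptions for illustration, not figures from the chart.}

```python
# If compute doubles every six months (two doublings per year),
# the cumulative factor over a span of years is 2 ** (2 * years).
def growth_factor(years: float, months_per_doubling: float = 6.0) -> float:
    doublings = years * 12.0 / months_per_doubling
    return 2.0 ** doublings

# From an assumed 2022 to 2045 is 23 years: 46 doublings, about 7e13-fold,
# so "many millions-fold" is, if anything, a conservative way to put it.
print(f"{growth_factor(2045 - 2022):.2e}")
```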

LF – And that’s the singularity where we can’t even imagine?

RK – Right. That’s why we call it the singularity. In physics, something gets sucked into a singularity and you can’t tell what’s going on in there, because no information can get out of it. There are various problems with that analogy, but that’s the idea. It’s too much beyond what we can imagine.

LF – Do you think it’s possible we don’t notice it? That what the singularity actually feels like is we just live through it with exponentially increasing cognitive capabilities, and because everything is moving so quickly, we aren’t really able to introspect that our life has changed?

RK – Yeah, but I mean, we will have that much greater capacity to understand things, so we should be able to look back…

LF –  Looking at history, understand history… 

RK –   But we will need people basically like you and me to actually think about it.

LF – Think about it, but we might be distracted by all the other sources of entertainment and fun because the exponential power of intellect is growing but also…

RK – There  will be a lot of fun.

LF – The number of ways you can have fun, you know…

RK – We already have a lot of fun with computer games and so on that are really quite remarkable.

LF – What do you think about the digital world,  the metaverse? Virtual reality? Will that have a component in this or will most of our advancement be in the physical realm? 

RK – That’s a little bit like Second Life, although Second Life actually didn’t work very well because it couldn’t handle too many people. And I don’t think the metaverse has come into being yet. I think there will be something like that. It won’t necessarily be from that one company; I mean, there are going to be competitors. But yes, we’re going to live increasingly online, and particularly if our brains are online. I mean, how could we not be online?

LF – Do you think it’s possible that given this merger with AI most of our meaningful interactions will be in this virtual world? Most of our life, we fall in love, we make friends, we come up with ideas. We do collaborations, we have fun…

RK – Actually, I know somebody who’s marrying somebody they never met. I think they just met briefly before the wedding, but she actually fell in love with this other person without ever having met them. And I think the love is real.

LF – That’s a beautiful story, but do you think that story is one that might be experienced not just by hundreds of thousands of people, but by hundreds of millions of people?

RK – I mean, it really gives you an appreciation for these virtual ways of communicating, and if anybody can do it, then it’s really not such a freak story. So I think more and more people will do that.

LF – But that’s turning our back on our entire history of evolution. In the old days, we used to fall in love by holding hands and sitting by the fire, that kind of stuff. Here, you’re…

RK – I actually have five patents on how you can hold hands even if you’re separated.

LF – Great. So the touch, the sense, it’s all just senses, it can all just be replicated.

RK – Yeah, I mean it is, it’s not just that you’re touching someone or not, there’s a whole way of doing it and it’s very subtle but ultimately we can emulate all of that.

LF – Are you excited by that future? Do you worry about that future? 

RK – I have certain worries about the future but not virtual touch.

Evolution of information processing

LF – Well, I agree with you. You describe six stages in the evolution of information processing in the universe, as you started to describe. Can you maybe talk through some of those stages, from the physics and chemistry, to DNA and brains, and then to the very end, the very beautiful end, of this process?

RK – It actually gets more rapid. So physics and chemistry, that’s how we started.

LF – From the beginning of the universe.

RK – We have lots of electrons and various things traveling around, and that took many billions of years. I’m kind of jumping ahead here to some of the last stages, where we have things like love and creativity; it’s really quite remarkable that that happens. But finally, physics and chemistry created biology and DNA, and now you actually had one type of molecule that described the cutting edge of this process. So we go from physics and chemistry to biology. And finally, biology created brains. I mean, not everything that’s created by biology has a brain, but eventually brains came along…

LF – And all of this is happening faster and faster.

RK – Yeah, it created increasingly complex organisms. Another key thing is actually not just brains but our thumb, because there are a lot of animals with brains, even bigger than humans’. Elephants have a bigger brain, whales have a bigger brain, but they have not created technology, because they don’t have a thumb. So that’s one of the really key elements in the evolution of humans.

LF – This  physical manipulator device that’s useful for puzzle solving in physical reality.

RK – So I could look at a tree and go, oh, I could actually strip that branch down, eliminate the leaves, and carve a tip on it, and I would have created technology. And you can’t do that if you don’t have a thumb.

So thumbs created technology, and technology also had a memory. Now those memories are competing with the scale and scope of human memory, and ultimately will go beyond it. And then we’re going to merge human technology with human intelligence: understand how human intelligence works, which I think we already do, and put that into our technology.

LF – So, create the technology inspired by our own intelligence, and then that technology supersedes us in terms of its capabilities and we ride along? Or do you ultimately see…

RK – We ride along, but a lot of people don’t see that. They say, well, you’ve got humans and you’ve got machines, and there’s no way humans can ultimately compete with machines. And you can already see that: Lee Sedol, who’s like the best Go player in the world, says he’s not going to play Go anymore. Because playing Go, for a human, that was like the ultimate in intelligence; no one else could do that. But now a machine can go way beyond him, and so he says, well, there’s no point playing it anymore.

LF – That may be more true for games than it is for life. I think there’s a lot of benefit to working together with AI in regular life. So if you were to put a probability on it, is it more likely that we merge with AI or AI replaces us?

RK – A lot of people just think computers come along and compete with them; we can’t really compete, and that’s the end of it, as opposed to them increasing our abilities. And if you look at most technology, it increases our abilities. I mean, look at the history of work. Look at what people did 100 years ago. Does any of that exist anymore? If you were to predict that all of those jobs would go away and be done by machines, people would say, well, no one’s going to have jobs and it’s going to be massive unemployment. But I show in this book that’s coming out {“The Singularity is Nearer”} that the number of people working, even as a percentage of the population, has gone way up.

Chart 3 U.S. Personal Income Per Capita (2021 Constant Dollars)

LF – We’re looking at the X-axis, year, from 1774 to 2024, and on the Y-axis is personal income per capita in constant dollars, and it’s growing super linearly. I mean…

RK – It’s 2021 constant dollars, and it’s gone way up. That’s not what you would expect, given the prediction that all these jobs would go away.

LF – Yeah.

RK – But the reason it’s gone up is because we basically enhanced our own capabilities by using these machines as opposed to them just competing with us. That’s a key way in which we’re going to be able to become far smarter than we are now by increasing the number of different parameters we can consider in making a decision.

Automation

LF – I am very fortunate to be able to get a glimpse preview of your upcoming book, “The Singularity is Nearer”. 

LF – And one of the themes, outside of just discussing the increasing exponential growth of technology, is that things are getting better in all aspects of life. And you talk about just this. So one of the things you’re saying is with jobs. Let me just ask about that. There is a big concern that automation, especially powerful AI, will get rid of jobs. There are people who will lose jobs. And as you were saying, the sense is, throughout the history of the 20th century, automation did not ultimately do that. And so the question is, will this time be different?

RK – Right. That is the question: will this time be different? And it really has to do with how quickly we can merge with this type of intelligence. If a LaMDA or a GPT-3 is out there, and maybe it’s overcome some of its key problems, and we really haven’t enhanced human intelligence yet, that might be a negative scenario. But that’s why we create technologies: to enhance ourselves. And I believe we will be enhanced. We’re not just going to sit here with 300 million modules in our neocortex; we’re going to be able to go beyond that, because that’s useful, but we can multiply it by ten, a hundred, a thousand, a million. And you might think, well, what’s the point of doing that? It’s like asking somebody that’s never heard music, well, what’s the value of music? You can’t appreciate it until you’ve created it.

LF – There’s some worry that there will be a class or wealth disparity: only the rich people will have access to this kind of thing at first, and then, because of the ability to merge, they will get richer exponentially faster.

RK – And I say that’s just like cell phones. I mean, there are like four billion cell phones in the world today. In fact, when cell phones first came out, you had to be fairly wealthy; they weren’t inexpensive, so you had to have some wealth in order to afford them.

LF -There were these big sexy phones…

RK – And they didn’t work very well; they did almost nothing. So you could only afford these things when you were wealthy, at a point where they really didn’t work very well.

LF – So achieving scale and making it inexpensive is part of making the thing work well.

RK – Exactly. So these are not totally cheap, but they’re pretty, pretty cheap. 

LF – Yeah.

RK – I mean, you can get them for a few hundred dollars.

LF –  Especially given the kind of things that it provides for you. There’s a lot of people in the third world that have very little, but they have a smartphone. 

RK – Yeah,absolutely.

LF – And the same will be true with AI.

RK – I mean I see homeless people have their own cell phones and … 

LF – Yeah, so your sense is any kind of advanced technology will take the same trajectory?

RK – Right, it ultimately becomes cheap and will be affordable. I probably would not be the first person to put something in my brain to connect to computers, because I think it will have limitations at first. But once it’s really perfected, at that point it’ll be pretty inexpensive, pretty affordable.

LF – So in which other ways, as you outline in your book, is life getting better? Because I think…

RK – Well, I mean, I have 50 charts in there.

LF – Yeah.

RK  – Where everything is getting better.

LF – I think there’s a kind of cynicism about this. Even if you look at extreme poverty, for example…

RK – For example, this is actually a poll taken on extreme poverty, and people were asked: has poverty gotten better or worse? And…

Chart 4: Inaccurate Perception of Extreme Poverty

LF – And the options are: increased by 50%, increased by 25%, remained the same, decreased by 25%, decreased by 50%. If you’re watching this or listening to this, try to vote for yourself.

RK  – 70% thought it had gotten worse and that’s the general impression, 88% thought it had gotten worse or remained the same. Only 1% thought it decreased by 50% and that is the answer. It actually decreased by 50%.

LF – So only 1% of people got the right, optimistic estimate of how extreme poverty has changed.

RK – Right, and this is the reality. And it’s true of almost everything you look at. You don’t want to go back 100 years or 50 years; things were quite miserable then, but we tend not to remember that.

LF – So, literacy rates have been increasing over the past few centuries, approaching 100% across many of the nations in the world.

Chart 5: Literacy Rates by Country

RK  –   It’s gone way up, average years of education have gone way up. Life expectancy is also increasing. Life expectancy was 48 in 1900

Chart 6: UK Life Expectancy – At birth, ages 1, 5, and 10

LF –   It’s over 80 now

RK  –   And it’s going to continue to go up, particularly as we get into more advanced stages of simulated biology

LF – For life expectancy, these trends are the same at birth, age one, age five, age ten. So it’s not just infant mortality.

RK – And I have 50 more graphs in the book about all kinds of things, even the spread of democracy, which brings up some controversial issues. It still has gone way up.

Chart 7: Spread of Democracy Since 1816

LF –   Well that one,  it’s gone way up, but that one is a bumpy road, right?

RK – Exactly. And somebody might claim to represent democracy and go backwards, but we basically had no democracies before the creation of the United States, which was over two centuries ago, which on the scale of human history isn’t that long.

LF –   Do you think superintelligence systems will help with democracy? So what is democracy? Democracy is giving a voice to the populace and having their ideas, having their beliefs, having their views represented?

RK – Well, I hope so. I mean, we’ve seen social networks can spread conspiracy theories, which have been quite negative. For example, being against any kind of thing that would help your health.

LF – So those kinds of ideas on social media, what you notice is they increase engagement. Dramatic division increases engagement. Do you worry about AI systems that will learn to maximize that division?

RK – I mean, I do have some concerns about this, and I have a chapter in the book about the perils of advanced AI. Spreading misinformation on social networks is one of them, but there are many others.

LF – What’s the one that worries you the most, that we should think about to try to avoid?

RK – Well, it’s hard to choose. We do have the nuclear threat, which evolved when I was a child. I remember we would actually do these drills against nuclear war: we’d get under our desks and put our hands behind our heads to protect ourselves from a nuclear war. Seemed to work, we’re still around. So…

LF –  You’re protected.

RK – But that’s still a concern. And there are key dangerous situations that can take place in biology. Someone could create a virus. I mean, we have viruses that are hard to spread but can be very dangerous, and we have viruses that are easy to spread but not so dangerous. Somebody could create something that would be very easy to spread, very dangerous, and very hard to stop. It could be something that would spread without people noticing, because people could get it and have no symptoms, and then everybody would get it, and symptoms would occur maybe a month later. That actually doesn’t occur naturally, because if we had a problem like that, we wouldn’t exist. The fact that humans exist means that we don’t have viruses that can spread easily and kill us, because otherwise we wouldn’t exist.

LF –   Yeah, viruses don’t want to do that. They want to spread and keep the host alive somewhat.

RK – So you can describe various dangers with biology. Also nanotechnology, which we actually haven’t experienced yet, but there are people that are creating nanotechnology. I describe that in the book.

Nanotechnology

LF –  Now you’re excited by the possibilities of nanotechnology, of nanobots, of being able to do things inside our body, inside our mind. That’s going to help. What’s exciting, what’s terrifying about nanobots?

RK  –  What’s exciting is that that’s a way to communicate with our neocortex because each neocortex {neuron} is pretty small and you need a small entity that can actually get in there and establish a communication channel and that’s going to really be necessary to connect our brains to AI within ourselves because otherwise it would be hard for us to compete with it. 

LF – In a high bandwidth way?

RK – Yeah. And that’s key, actually, because a lot of the things like Neuralink are really not high bandwidth yet.

LF – So nanobots are the way you achieve high bandwidth. How much intelligence would those nanobots have?

RK  –  Yeah, they don’t need a lot, just enough to basically establish a communication channel to one nanobot, so .. 

LF –   So, just primarily about the communication….

RK  –  Yeah.

LF – …between external computing devices and our biological thinking machine. What worries you about nanobots? Is it similar to the viruses?

RK – Well, I mean, it’s the gray goo challenge. Yes. If you had a nanobot that wanted to create any kind of entity, and repeat itself, and was able to operate in a natural environment, it could turn everything into that entity and basically destroy all biological life.

Nuclear war

LF –  So you mentioned nuclear weapons

RK  –  Yeah.

LF – I’d love to hear your opinion about the 21st century, and whether you think we might destroy ourselves, and maybe whether your opinion has changed by looking at what’s going on in Ukraine: that we could have a hot war with nuclear powers involved, and the tensions building, and the seeming forgetting of how terrifying and destructive nuclear weapons are. Do you think humans might destroy ourselves in the 21st century? And if we do, how? And how do we avoid it?

RK – I don’t think that’s going to happen, despite the terrors of that war. It is a possibility, but I mean, I don’t…

LF –   It’s unlikely in your mind.

RK – Yeah. Even with the tensions we’ve had with this one nuclear power plant that’s been taken over, it’s very tense, but I don’t actually see a lot of people worrying that that’s going to happen. I think we’ll avoid that. We had two nuclear bombs go off in ’45, so now we’re 77 years later.

LF –  Yeah, we’re doing pretty good.

RK – We’ve never had another one go off in anger.

LF –   People forget. People forget the lessons of history. Well,

RK  –   Yeah, I mean, I am worried about it. I mean, that that is definitely a challenge.

LF –    But you believe that we’ll make it out and ultimately superintelligent AI will help us make it out as opposed to destroy us.

RK  – I think so, but we do have to be mindful of these dangers and there are other dangers besides nuclear weapons.

Uploading minds

LF – So to get back to merging with AI: would we be able to upload our minds to a computer, in a way where we might even transcend the constraints of our bodies? So copy our mind into a computer and leave the body behind?

RK  –   Let me describe one thing I’ve already done with my father. 

LF – Yeah, it’s a great story.

RK – So we created technology, and this is public; it came out, I think, six years ago. The released product, which I think is still on the market, would read 200,000 books and then find the one sentence in those 200,000 books that best answered your question. It’s actually quite interesting: you can ask all kinds of questions, and you get the best answer in 200,000 books. But I was also able to take it and, instead of going through 200,000 books, go through a book that I put together, which is basically everything my father had written. So we gathered everything he had written and created a book: everything that Frederick Kurzweil had written. Now, I didn’t think this would actually work that well, because the stuff he’d written was about how to lay things out. I mean, he directed choral groups and music groups, and he would be laying out where people should sit, and how to fund this, and all kinds of things that didn’t seem that interesting. And yet when you asked a question, it would go through it and actually give you a very good answer. So I said, well, you know, who’s the most interesting composer? And he said, well, definitely Brahms. And he would go on about how Brahms was fabulous and talk about the importance of music education, and…

LF –  So, you can have essentially a conversation with him?

RK – I can have a conversation with him, which was actually more interesting than talking to him, because if you talked to him, he’d be concerned about how they were going to lay out this property for a choral group.

LF –   He’d be concerned about the day to day versus the big question?

RK – Exactly, yeah.

LF – And you did ask about the meaning of life and he answered love.

RK – Yeah. 

LF – Do you miss him?

RK – Yes, I do. You get used to missing somebody after 52 years. And I didn’t really have intelligent conversations with him until later in life. In the last few years, he was sick, which meant he was home a lot, and I was actually able to talk to him about different things, like music and other things. And so I miss that very much.

LF –   What did you learn about life from your father? What part of him is with you now?

RK – He was devoted to music, and when he would create music, it put him in a different world. Otherwise he was very shy, and if people got together, he tended not to interact with them, just because of his shyness. But when he created music, he was like a different person.

LF –   Do you have that in you?

RK – Yeah, yeah.

LF – …  that kind of light that shines? 

RK  –   I mean, I got involved with technology, at like age five.

LF –   And you fell in love with it in the same way he did with music?

RK – Yeah, I remember this actually happened with my grandmother. She had a manual typewriter, and she wrote a book, One Life Is Not Enough, which is actually a good title for a book I might write. It was about a school she had created. Well, actually, her mother created it; my mother’s mother’s mother created the school, in 1868. And it was the first school in Europe that provided higher education for girls; it went through 14th grade. If you were a girl and you were lucky enough to get an education at all, it would go through like ninth grade, and many girls didn’t have any education. This went through 14th grade. Her mother created it, she took it over, and the book was about the history of the school and her involvement with it. When she presented it to me, I was not so interested in the story of the school, but I was totally amazed by this manual typewriter. I mean, here was something you could put a blank piece of paper into, and you could turn it into something that looked like it came from a book. It was just amazing to me, and I could actually see how it worked. I was also interested in magic, but in magic, if somebody actually knows how it works, the magic goes away; the magic doesn’t stay there if you understand how it works. But here was technology. I didn’t have that word when I was five or six.

LF –   And the magic was still there for you?

RK – The magic was still there, even if you knew how it worked. So I became totally interested in this, and then I went around and collected little pieces of mechanical objects from bicycles, from broken radios. I would go through the neighborhood. This was an era when you would allow a five- or six-year-old to run through the neighborhood and do this; we don’t do that anymore. But I didn’t know how to put them together, and I said, if I could just figure out how to put these things together, I could solve any problem. I actually remember talking to these older girls, I think they were 10, and telling them, if I could just figure this out, we could fly, we could do anything. And they said, well, you have quite an imagination. And then when I was in third grade, so I was like eight, I created a virtual reality theater where people could come on stage and move their arms, and all of it was controlled through one control box. It was all done with mechanical technology, and it was a big hit in my third grade class. Then I went on to do things in junior high school science fairs and high school science fairs; I won the Westinghouse Science Talent Search. So I became committed to technology when I was five or six years old.

How to think

LF – You’ve talked about how you use lucid dreaming to think, to come up with ideas, as a source of creativity. Could you maybe talk through that process? You’ve invented a lot of things; you’ve come up with some very interesting ideas. What advice would you give, or can you speak to the process of how to think, how to think creatively?

RK – Well, I mean, sometimes I will think something through in a dream and try to interpret that. But I think the key advice I would give younger people is to put yourself in the position that what you’re trying to create already exists. And then you’re explaining, like…

LF –   How it works

RK  –  Exactly.

LF – That’s really interesting: you paint the world that you would like to exist, you assume it exists, and you reverse engineer it.

RK  –  And then you actually imagine you’re giving a speech about how you created this. Well you’d have to then work backwards as to how you would create it in order to make it work.

LF – That’s brilliant. And that requires some imagination, some first-principles thinking. You have to visualize that world. That’s really interesting.

RK – And generally, when I talk about things we’re trying to invent, I use the present tense, as if it already exists, not just to give myself that confidence, but everybody else who is working on it. We just have to do all the steps in order to make it actual.

LF – How much of a good idea is about timing? How much is it about your genius versus the fact that its time has come?

RK – Timing is very important. I mean, that’s really why I got into futurism. I wasn’t inherently a futurist; that wasn’t really my goal. It’s really to figure out when things are feasible. We see that now with the large-scale models. The very large-scale models like GPT-3 started two years ago. Four years ago, it wasn’t feasible; in fact, they did create GPT-2, which didn’t work. So it required a certain amount of timing, having to do with this exponential growth of computing power.

LF – So futurism, in some sense, is a study of timing: trying to understand how the world will evolve and when the capacity for certain ideas will arrive.

RK – And that’s become a thing in itself, trying to time things in the future, but really its original purpose was to time my products. I mean, I did OCR in the 1970s because OCR doesn’t require a lot of computation.

LF – Optical character recognition?

RK – Yeah. So we were able to do that in the seventies and I waited till the eighties to address speech recognition since that requires more computation.

LF –   You were thinking through timing when you’re developing those things. 

RK  –  Yeah.

LF – Has its time come?

RK  –  Yeah.

LF – And that’s how you’ve developed that brain power: to start to think, in a futurist sense, what will the world look like in 2045, and work backwards.

RK  –  Yeah.

LF –  And how it gets there.

RK  –   But that has become a thing in itself because looking at what things will be like in the future reflects such dramatic changes in how humans will live. That was worth communicating also.

LF –  So, you developed that muscle of predicting the future and then applied it broadly, and started to discuss how it changes the world of technology, how it changes human life on Earth. In Danielle, one of your books, you write about someone who has the courage to question assumptions that limit human imagination to solve problems. And you also give advice on how each of us can have this kind of courage.

RK  –  It’s good that you picked that quote, because I think that symbolizes what Danielle is about.

LF –  Courage. So how can each of us have that courage to question assumptions?

RK  –    I mean we see that when people can go beyond the current realm and create something that’s new. I mean take Uber for example, before that existed, you never thought that that would be feasible and it did require changes in the way people work.

LF –   Is there practical advice you give in the book about what each of us can do to be a Danielle?

RK  –  Well, she looks at the situation and tries to imagine  how she can overcome various obstacles and then she goes for it and she’s a very good communicator. So she can communicate these ideas to other people.

LF –  And there’s practical advice of learning to program and recording your life and things of this nature, become a physicist. So you list a bunch of different suggestions of how to throw yourself into this world?

RK  –  Yeah, I mean, it’s kind of the idea of how young people can actually change the world by learning all of these different skills.

LF –   And at the core of that is the belief that you can change the world, that your mind, your body can change the world.

RK – Yeah, yeah, that’s right. 

LF – And not letting anyone else tell you otherwise.

RK  –  That’s very good. Exactly.

Digital afterlife 

LF –  When we upload… The story you told about your dad and having a conversation with him, we’re talking about uploading your mind to the computer. Do you think we’ll have a future with something you call the afterlife? We’ll have avatars that mimic our behavior, our appearance, all that kind of stuff, increasingly better and better? Even for those who are perhaps no longer with us?

RK  –  Yes. I mean, we need some information about them. Think about my father: I have what he wrote. He didn’t have a word processor, so he didn’t actually write that much, and our memories of him aren’t perfect. So how do you even know if you’ve created something that’s satisfactory? Now, you could do a Frederick Kurzweil Turing test: it seems like Frederick Kurzweil to me. But the people who remember him, like me, don’t have a perfect memory.

LF –    Is there such a thing as a perfect memory? Maybe the whole point is for him to make you feel a certain way?

RK  –  Yeah. Well, I think that would be the goal

LF –   And that’s the connection we have with loved ones. It’s not really based on a very strict definition of truth. It’s more about the experiences we share. 

RK – Yeah.

LF –  And they get morphed through memory. But ultimately they make us smile.

RK  –   I think we definitely can do that. And that would be very worthwhile. 

LF –   So do you think we’ll have a world of replicants of copies? There’ll be a bunch of Ray Kurzweils,  like I could hang out with one. I can download it for five bucks and have a best friend, Ray and you, the original copy wouldn’t even know about it?

RK  –  Umm…

LF –  Is that, do you think that world is… First of all, do you think that world is feasible? And do you think there’s ethical challenges there? Like, how would you feel about me hanging out with Ray Kurzweil and you not knowing about it?

RK  –  Doesn’t strike me as a problem.

LF –  Which you, the original?

RK  –  Would that cause a problem for you?

LF –   No, I would really very much enjoy it.

RK  –   No, not just hang out with me, but if somebody hung out with you, a replicant of you?

LF –  Well, I think I would start… It sounds exciting, but then what if they start doing better than me and take over my friend group? Because they may be an imperfect copy, or they may be more social, all these kinds of things, and then I become like the old version that’s not nearly as exciting. Maybe they’re a copy of the best version of me on a good day.

RK  –  Yeah. But if you hang out with a replicant of me and that turned out to be successful, I’d feel proud of that person because it was based on me.

LF –    So, but it is a kind of death of this version of you? 

RK  –   Well, not necessarily. I mean, you can still be alive right?

LF –   But … and you would be proud. Okay, so it’s like having kids and you’re proud that they’ve done even more than you were able to do.

RK  –  Yeah. Exactly.

LF –   Hmm.

RK  –   It does bring up new issues, but  it seems like an opportunity.

LF –  Well, the replicant should probably have the same rights as you do.

RK  –   That gets into a whole issue,  because when a replicant occurs, they’re not necessarily going to have your rights. And if a replicant occurs to somebody who’s already dead, do they have all the obligations that the original person had? Do they have all the agreements that they had? 

LF –  So, I think you’re going to have to have laws that say: yes, if you want to create a replicant, they have to have all the same rights, human rights.

RK  –  Well, you don’t know. Someone could create a replicant and say, well, it’s a replicant, but I didn’t bother getting their rights. And so…

LF –  But that would be illegal. I mean, if you do that, you’d have to do it on the black market. If you want to get an official replicant…

RK  –  Okay, it’s not so easy. Suppose you create multiple replicants. The original rights are maybe for one person, and not for a whole group of people?

LF –  Sure. So there has to be at least one, and then all the other ones kind of share the rights. Yeah, I just don’t… I don’t think that… that’s very difficult for us humans to conceive, the idea that…

RK  –  We don’t create a replicant that has certain… I mean, I’ve talked to people about this, including my wife, who would like to get back her father. And she doesn’t worry about who has rights to what. She would have somebody that she could visit with, and it might give her some satisfaction, and she wouldn’t care about any of these other rights.

LF –   What does your wife think about multiple Ray Kurzweils, have you had that discussion?

RK – I haven’t addressed that with her.

LF –  I think ultimately that’s an important question. Loved ones, how they feel about… There’s something about love,

RK  –  That’s the key thing, right? If the loved ones reject it, it’s not going to work very well. So the loved ones really are the key determinant of whether or not this works.

LF –  But there are also ethical rules we have to contend with, and we have to contend with that idea with AI.

RK  –  But what’s going to motivate it is, I mean, I talk to people who really miss people who are gone, and they would love to get something back even if it isn’t perfect. And that’s what’s going to motivate this.

LF –   (Sigh) And that person lives on in some form. And the more data we have, the more we’re able to reconstruct that person and allow them to live on.

RK  –  Right, right. And eventually, as we go forward, we’re going to have more and more of this data, because we’re going to have nanobots inside our neocortex and we’re going to collect a lot of data. In fact, anything that’s data is always collected.

LF –  There’s something a little bit sad, or maybe it’s hopeful, which is becoming more and more common these days: when a person passes away, you have their Twitter account. You know, you have the last tweet they tweeted, like, something…

RK  –   You know you can recreate them now with large language models and so on. I mean you can create somebody that’s just like them and can actually continue to  communicate.

LF –  I think that’s really exciting because I think in some sense like if I were to die today in some sense I would continue on if I continue tweeting. I tweet therefore I am.

RK  –   Yeah. Well I mean that’s one of the advantages of a replicant,  that it can recreate the communications of that person.

Intelligent alien life

LF –  Do you hope, do you think… do you hope humans will become a multi-planetary species? You’ve talked about the phases, the six epochs, and one of them is reaching out into the stars, in part…

RK  –   Yes, but the kind of attempts we’re making now to go to other planetary objects doesn’t excite me that much because it’s not really advancing anything.

LF –    It’s not efficient enough.

RK  –  Yeah. We’re also sending out human beings, which is a very inefficient way to explore these other objects. What I’m really talking about is the sixth epoch, when the universe wakes up. It’s where we can spread our superintelligence throughout the universe, and that doesn’t mean sending very soft, squishy creatures like humans.

LF –   Yeah. The universe wakes up.

RK  –  I mean we would send intelligent masses of nanobots, which can then go out and  colonize these other parts of the universe.

LF –   Do you think there’s intelligent alien civilizations out there that our bots might meet?

RK  –  My hunch is no. Most people say yes, absolutely. I mean…

LF – It’s too big. 

RK – … and they’ll cite the Drake equation. I think in The Singularity Is Near I have two analyses of the Drake equation, both with very reasonable assumptions, and one gives you thousands of advanced civilizations in each galaxy and another one gives you one civilization, and we know of one.

A lot of the analyses are forgetting the exponential growth of computation, because we’ve gone from when the fastest way I could send a message to somebody was with a pony, which was, what, like a century and a half ago…

LF –  Yeah.

RK  –  …. to the advanced civilization we have today. And if you accept what I’ve said, go forward a few decades, you can have an absolutely fantastic amount of civilization compared to a pony. And that’s in a couple of hundred years.

LF –   Yeah. The speed and the scale of information transfer is just growing exponentially in the blink of an eye.

RK  –  Now, think about these other civilizations. They’re going to be spread out across cosmic time. So if something is ahead of us or behind us, it could be ahead of us or behind us by maybe millions of years, which isn’t that much. I mean, the universe is billions of years old, 14 billion or something. So even a thousand years… if two or three hundred years is enough to go from a pony to a fantastic amount of civilization, we would see that. So, of the other civilizations that have occurred, okay, some might be behind us, but some might be ahead of us. If they’re ahead of us, they’re ahead of us by thousands or millions of years, and they would be so far beyond us they would be doing galaxy-wide engineering. But we don’t see anything doing galaxy-wide engineering.
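{Transcriber’s note: the Drake equation RK refers to above multiplies seven factors to estimate the number of communicating civilizations per galaxy. The sketch below shows how two sets of seemingly reasonable assumptions can span thousands of civilizations down to just one. The parameter values are my own illustrative choices, not the ones from The Singularity Is Near.}

```python
from math import prod

def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Expected number of communicating civilizations in a galaxy:
    star-formation rate x fraction of stars with planets x habitable
    planets per system x fraction developing life x fraction developing
    intelligence x fraction with detectable technology x average
    civilization lifetime in years."""
    return prod([r_star, f_p, n_e, f_l, f_i, f_c, lifetime])

# Optimistic parameter choices: common life, long-lived civilizations.
optimistic = drake(7, 0.5, 2, 0.5, 0.5, 0.2, 10_000)

# Pessimistic parameter choices: rare abiogenesis, rare intelligence.
pessimistic = drake(1, 0.2, 1, 0.05, 0.05, 0.1, 20_000)

print(round(optimistic))   # thousands of civilizations per galaxy
print(round(pessimistic))  # about one civilization per galaxy
```

The point is not the specific numbers; it is that plausible inputs to the same formula differ by orders of magnitude, which is why RK treats the equation as inconclusive.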

LF –   So either they don’t exist or this very universe is a construction of an alien species. We’re living inside a video game.

RK  –   Well, that’s another explanation that yes, you’ve got some teenage kids in  another civilization.

Simulation hypothesis

LF –  Do you find the simulation hypothesis compelling, as a thought experiment, that we’re living in a simulation?

RK  –  The universe is computational, so we are an example in a computational world; therefore it is a simulation. It doesn’t necessarily mean an experiment by some high-school kid in another world, but it’s nonetheless taking place in a computational world, and everything that’s going on is basically a form of computation. So you really have to define what you mean by this whole world being a simulation.

LF –  Well, then it’s the teenager that makes the video game. You know, us humans, with our current limited cognitive capability, have striven to understand ourselves, and we have created religions. We think of God, whatever that is. Do you think God exists? And if so, who is God?

RK  –  I alluded to this before. We started out with lots of particles going around, and there’s nothing that represents love and creativity. And somehow we’ve gotten into a world where love actually exists, and that has to do with consciousness, because you can’t have love without consciousness. So to me, that’s God: the fact that we have something where love, where you can be devoted to someone else and really feel that love, that’s God. And if you look at the Old Testament, it was actually created by several different rabbinates, and I think they’ve identified three of them. One of them dealt with God as a person that you can make deals with, and he gets angry and he wreaks vengeance on various people. But two of them actually talk about God as a symbol of love and peace and harmony and so forth. That’s how they describe God. So that’s my view of God: not as a person in the sky that you can make deals with.

LF –  It’s whatever the magic is that goes from basic elements to things like consciousness and love. Do you think… One of the things I find extremely beautiful and powerful is cellular automata, which you also touch on. Do you think that whatever the heck happens in cellular automata, where interesting, complicated objects emerge, God is in there too? The emergence of love in this seemingly privileged universe?

RK  –  Well, that’s the goal of creating a replicant is that they would love you and you would love them. There wouldn’t be much point of doing it if that didn’t happen.

LF –  But all of it… I guess what I’m saying about cellular automata is, it’s primitive building blocks, and they somehow create beautiful things. Is there some deep truth to that about how our universe works, that from simple rules beautiful, complex objects can emerge? Is that the thing that made us…

RK – Yeah.

LF – … as we went through all the six phases of reality?

RK  –  That’s a good way to look at it. It just makes the point about the whole value of having a universe.

Mortality

LF –   Do you think about your own mortality? Are you afraid of it?

RK  –  Yes, but I keep going back to my idea of being able to extend human life quickly enough, in advance of our getting there: longevity escape velocity, which we’re not quite at yet, but I think we’re actually pretty close, particularly with, for example, simulated biology. I think we can probably get there by the end of this decade, and that’s my goal.

LF –  You hope to achieve the longevity escape velocity. You hope to achieve immortality?

RK  –  Well, immortality is hard to say. I can’t really come on your program saying, “I’ve done it, I’ve achieved immortality,” because it’s never forever.

LF –   A long time, a long time of living well.

RK  –  But we’d like to actually advance human life expectancy, advance my life expectancy, by more than a year every year. And I think we can get there by the end of this decade.
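{Transcriber’s note: “longevity escape velocity” means remaining life expectancy grows by at least one year per calendar year, so it never runs out. A toy model, with purely illustrative numbers:}

```python
def years_remaining(current_age, expectancy, gain_per_year):
    """Toy model: each calendar year consumes one year of remaining
    life expectancy, but medical progress adds `gain_per_year` back.
    At gain_per_year >= 1, expectancy never runs out (escape velocity)."""
    remaining = expectancy - current_age
    years_lived = 0
    while remaining > 0:
        remaining -= 1.0            # one calendar year passes
        remaining += gain_per_year  # progress extends expectancy
        years_lived += 1
        if years_lived > 1000:      # expectancy never runs out
            return float('inf')
    return years_lived

print(years_remaining(60, 85, 0.0))  # no progress: 25 years left
print(years_remaining(60, 85, 0.5))  # half a year gained per year: 50
print(years_remaining(60, 85, 1.0))  # escape velocity: unbounded
```

Even partial progress compounds: gaining half a year of expectancy per year doubles the remaining lifespan in this sketch, and at one year per year the horizon recedes indefinitely.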

LF –  How do you think we do it? So there are practical things. In Transcend: Nine Steps to Living Well Forever, your book, you describe just that. There are practical things like health, exercise, all those things, and then there’s engineering…

RK  –  I mean, we live in a body that doesn’t last forever. There’s no reason why it can’t, though, and we’re discovering things that I think will extend it. But you do have to deal with… I mean, I’ve got various issues. I went to Mexico 40 years ago and developed salmonella, and that created pancreatitis, which gave me a strange form of diabetes. It’s not Type 1 diabetes, because that’s an autoimmune disorder that destroys your pancreas, and I don’t have that. But it’s also not Type 2 diabetes, because with Type 2 your pancreas works fine but your cells don’t absorb the insulin well, and I don’t have that either. The pancreatitis I had partially damaged my pancreas, but it was a one-time thing; it didn’t continue, and I’ve learned now how to control it. So that’s just something I had to do in order to continue to exist…

LF –   Since it’s your particular biological system, you have to figure out a few hacks. And the idea is that science would …

RK  –  Yeah, exactly

LF –   … do that much better actually.

RK  –   Yeah. So I mean I do spend a lot of time just tinkering with my own body to keep it going.  So I do think I’ll last till the end of this decade and I think we’ll achieve longevity escape velocity. I think that will start with people who are very diligent about this. Eventually it will become sort of routine that people will be able to do it. So if you’re talking about kids today or even people in their twenties and thirties, that’s really not a very serious problem. I have had some discussions with relatives who are like almost 100 and saying, well we’re working on it as quickly as possible, but I don’t know if that’s going to work. 

LF –  Is there a case… This is a difficult question, but is there a case to be made against living forever? That a finite life, that mortality, is a feature, not a bug? That living a shorter… that dying makes ice cream taste delicious, makes life intensely beautiful, more than…

RK  –  Most people believe that way, except that when you present the death of anybody they care about or love, they find that extremely depressing. And I know people who feel that way 20, 30, 40 years later; they still want them back. So death is not something to celebrate, but we’ve lived in a world where people just accept it. Life is short, you see it all the time on TV: life is short, you have to take advantage of it. And nobody accepts the fact that you could actually go beyond normal lifetimes. But any time we talk about death, or the death of a person, even one death is a terrible tragedy. If you have somebody that lives to a hundred years old, we still love them in return, and there’s no limitation to that. In fact, these kinds of trends are going to provide greater and greater opportunities for everybody, even if we have more people.

LF –  So let me ask about an alien species, or a superintelligent AI, 500 years from now, that will look back and remember Ray Kurzweil version zero, before the replicants spread. How do you hope they remember you, in a Hitchhiker’s Guide to the Galaxy summary of Ray Kurzweil? What do you hope your legacy is?

RK  –  Well, I mean, I do hope to be around, so that’s…

LF –   So that’s some version of you. Yes.

RK  –   So… 

LF –   So, do you think you’ll be the same person around?

RK  –    I mean, am I the same person I was when I was 20 or 10?

LF –   That’s true. You would be the same person in that same way. But yes, we’re different.

RK – Umm…. 

LF –  All we have of that… all you have of that person is your memories, which are probably distorted in some way. Maybe you just remember the good parts. Depending on your psyche, you might focus on the bad parts, you might focus on the good parts.

RK  –   Right.  But I mean, I’d still have a relationship to the way I was when I was earlier, when I was younger.

LF –  How will you, and the other superintelligent AIs, remember the you of today, 500 years from now? What do you hope to be remembered by, this version of you before the singularity?

RK  –  Well, I think it’s expressed well in my books: trying to create some new realities that people will accept. I mean, that’s something that gives me great pleasure, and greater insight into what makes humans valuable. I’m not the only person who’s attempted to comment on that, but…

LF –  And the optimism that permeates your work…

RK  –   Mmhmm.

LF –  Optimism about the future. It’s ultimately that optimism that paves the way for building a better future.

RK  –   Yeah, I agree with that.

Meaning of life

LF –  So you asked your dad about the meaning of life, and he said love. Let me ask you the same question: what’s the meaning of life? Why are we here, on this beautiful journey that we are on, in phase four, reaching for phase five of this evolution of information processing? Why?

RK  – I think I’d give the same answer as my father. Because if there were no love and we didn’t care about anybody, there’d be no point existing.

LF –  Love is the meaning of life. The AI version of your dad had a good point. Well, I think that’s a beautiful way to end it, right? Thank you for your work. Thank you for being who you are. Thank you for dreaming about a beautiful future and creating it along the way. And thank you so much for spending your really valuable time with me today. This was awesome.

RK  –  It was my pleasure. And you have some great insights both into me and into humanity as well. So I appreciate that.

LF –   Thanks for listening to this conversation with Ray Kurzweil. To support this podcast, please check out our sponsors in the description. 

And now let me leave you with some words from Isaac Asimov: “It is change, continuous change, inevitable change that is the dominant factor in society today. No sensible decision could be made any longer without taking into account not only the world as it is, but the world as it will be. This in turn means that our statesmen, our businessmen, our everyman, must take on a science fictional way of thinking.” 

Thank you for listening and hope to see you next time.