Don't hate the players, change the game...
Is complexity the new frontier of physics? How should we approach metaphysical uncertainty?
Today's Win-Win episode picks the brains of the incredible Sean Carroll. Sean is a theoretical physicist and philosopher who specializes in quantum mechanics, cosmology, and philosophy of science.
Chapters:
00:00:00 - Intro
00:01:59 - New Discoveries of the Webb Telescope
00:11:05 - Complexity: The New Frontier in Physics
00:16:42 - How Physicists See The World
00:20:14 - Observing Trends in Complexity
00:26:21 - How To Approach Metaphysical (Un)certainty
00:30:41 - Quantum Measurement Problem
00:33:49 - Many Worlds vs Copenhagen Interpretation
00:40:35 - Emergence
00:44:40 - Information under Classical and Quantum Mechanics
00:50:20 - Democracy As A Physical System
00:56:35 - Decision Theory In Many Worlds
01:02:03 - Competition in Academia
01:07:20 - Academic Publishing
01:12:57 - The Problem with Superintelligence
01:23:59 - Computing and Theories of Mind
01:31:26 - Poetic Naturalism
01:41:58 - What Makes A Great Physicist?
01:44:29 - Are There Any Breakthroughs Left?
01:52:08 - Credences of Hypotheses
Credits:
♾️ Hosted by Liv Boeree
♾️ Edited and Mixed by Ryan Kessler
Transcript:
Liv: The difficulty with complexity is that we don't even have a firmly agreed upon definition.
Sean: Oh, a hundred percent. Yeah. There's different aspects to complexity, just as the same issue happens when you try to study the origin of life and no one in the field agrees on what life is. One aspect of it though, is that it doesn't algorithmically reduce to a small number of bits.
Take a glass and put coffee on the bottom, cream on the top. Take a picture of it. And then take a picture after you've put a spoon in and stirred a little bit, so there's now tendrils in there. The file size of the second picture will be bigger. Yes, because it's not as easy to compress it.
Complexity is ephemeral. It comes and goes. We, right now in this room, are in one of the most complex places and times in the history of the universe.
Liv: Hello my friends and welcome to the Win-Win Podcast. Today is an extra fun one for me because it's all about physics. Which, for those of you who know me, was where I was originally headed with my life before poker came along.
Because I'm speaking to Professor Sean Carroll. Sean is a leading theoretical physicist and philosopher at Johns Hopkins University, specializing in quantum mechanics, spacetime, complexity theory, entropy, all the coolest stuff. But on top of that, he's also one of physics' most popular communicators, authoring numerous popular books, as well as hosting a very popular podcast called Mindscape, and giving loads of cool lectures on his YouTube. And today we dig into all sorts of topics, from the game theoretic considerations of the many worlds interpretation, to the relationship between complexity, information, and emergence, to the recent findings of the James Webb telescope.
Competitiveness within academia, so much good stuff. So on that note, here is my conversation with Sean Carroll.
Sean, thank you so much for joining us. To dig in, I would love to hear your thoughts on the recent findings of the James Webb Telescope, because from what I can see, it sounds like the telescope has detected a couple of things which seem somewhat erroneous, or at least are unsettling our current understanding of cosmology.
The main one being these very large, very well developed galaxies that seem to have appeared far earlier in the universe's evolution than we would expect. They're these big developed boys, and yet they seem to have emerged after only about 500 million years. What's going on there?
Sean: So the first thing to keep in mind, do not panic. The Big Bang is fine, the general picture that we have is that the universe is about 14 billion years old. It started from a hot, dense state, gravitational instability brought things together, 100 percent locked in, we're confident of that.
Of course, all of the details are up for grabs. The nature of the dark matter, where the initial perturbations came from, the specific processes of galaxy formation: these are all things we're interested in. Okay. So we're filling in the details of the broad picture. The second thing is that there's always a worry when you discover something that doesn't fit in.
Does it actually tell you that something isn't fitting in and we have to change our knowledge and update? Or did we make a mistake in the observations? So a lot, not all, but a lot of those claims that you have heard in the popular media about giant inexplicable galaxies in the JWST data are just wrong.
They have gone away, for a couple of reasons. Number one, the telescope wasn't calibrated yet. And number two, of course, you need to know how young the galaxy is, how far away it is, right? And the way we do that is measuring its redshift. If you have some atoms that are giving off very specific spectral lines, that means that they're emitting radiation whose wavelength, when it left the source, you know exactly.
The universe expands between you and there, and so you know, by observing those atomic lines, how much the universe has expanded by. The problem is that to get the spectrum, that is to say, the luminosity as a function of wavelength that this thing is emitting, takes a lot of photons, especially when it is literally the dimmest thing that you can see, right?
So astronomers have a cheat called photometric redshifts. They know that an average galaxy emits a certain amount in the blue light, a certain amount in the yellow, a certain amount in the red, et cetera. And so they don't look at individual spectral lines, they just look at the gross features. And that way they can get a photometric redshift, which is pretty good, especially statistically.
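(For anyone who wants to see the bookkeeping behind "measuring its redshift", here is a minimal Python sketch. The Lyman-alpha rest wavelength is real; the observed wavelength is made up for illustration, not an actual JWST measurement.)

# Toy spectroscopic redshift: a known line arrives at a longer wavelength,
# and the ratio tells you how much the universe has stretched since emission.

LYMAN_ALPHA_REST_NM = 121.567          # rest-frame wavelength of the Lyman-alpha line, in nanometers

def redshift(observed_nm, rest_nm=LYMAN_ALPHA_REST_NM):
    """z = (lambda_observed - lambda_rest) / lambda_rest."""
    return observed_nm / rest_nm - 1.0

observed_nm = 1580.0                   # pretend the line shows up deep in the infrared
z = redshift(observed_nm)
stretch = 1.0 + z                      # factor by which wavelengths (and the universe) have stretched
print(f"z = {z:.2f}; the universe has expanded by a factor of {stretch:.2f} since the light left")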
It is not completely reliable for individual weird galaxies. I don't know exactly what claims you're talking about, but many of these claims have gone away. Not all of them have gone away, though. And I think that the most recent, most reliable information we have right now that does require some rethinking is that not galaxies, but massive black holes existed earlier in the universe than we thought they should. Our galaxy, the Milky Way, has a black hole at the center, three million times the mass of the sun or something like that. That's
Liv: Sagittarius A, right?
Sean: Sagittarius A star, yep. And that's actually not that massive as these things go.
The Milky Way is a pretty big galaxy, but its black hole is relatively small compared to its size. Some supermassive black holes are billions of times the mass of the sun. And I'm not actually sure how they do it, but my Johns Hopkins colleague Joe Silk, who is one of the world's leading theoretical cosmologists, he and his friends have determined pretty reliably that these ultramassive black holes have apparently come into existence earlier than we would have thought.
Liv: Because in order for a black hole like that to form, the matter must have been around for a long time to reach a sufficient density to collapse, and then the thing has to suck up all of these other stars.
So, yeah, that process usually takes a long-ass time.
Sean: That's the expectation and that's what is being challenged by the data. So it might be that the formation of black holes, just like what you said, is somehow much quicker. But it also might be that the way they formed is radically different than that.
Maybe they were formed in the early universe and got left over and served as seeds for galaxies, or maybe dark matter contributed to the formation of the black holes in some clever way. So that's what's exciting. We honestly don't know about that. It doesn't put the big bang model in jeopardy, but it is truly interesting astrophysics.
Liv: There's some evidence that it… so there's this thing called the Hubble tension, right? Which is that we know that space is expanding, and from what physics shows, it's expanding linearly. And so you have this thing called the Hubble constant, which is basically the rate of that expansion.
However, depending on how you try to measure it, those measurements don't line up, and you would expect them to agree. And apparently the findings of the Webb telescope certainly did not resolve that tension. If anything, they've made it worse.
And I even heard some rumors and maybe then maybe you can correct this, that there's evidence it's not even linear.
Sean: It was absolutely higher in the early universe. And exactly like you said, the Hubble tension is this apparent disagreement between two different ways of measuring the value today of the Hubble constant.
One is directly. In other words, you look at galaxies relatively nearby and you compare their distance to their apparent velocity, and you divide. And you get a value, and that's more or less the Hubble constant. Not exactly, but it's related to it. And the other way is more global.
So you say, look, we have a cosmological model. That model includes a set of parameters. It's not that large. It's 12 numbers or something like that. The density of matter, the density of dark matter, the amplitude of fluctuations, and so on. And then you globally fit. So you fit the temperature differences in the cosmic microwave background.
You fit the pattern of large scale structure, the whole bit. And that is enough data to quite closely constrain all of the parameters that you put in, one of which is the Hubble constant. And those two methods don't agree: the direct nearby measurements do not agree with this sort of global extraction of the Hubble constant.
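(As a toy illustration of the "direct" method, with invented numbers rather than real survey data: divide each nearby galaxy's recession velocity by its distance and average the results.)

# Crude direct estimate of the Hubble constant from made-up nearby galaxies.
galaxies = [
    # (distance in megaparsecs, recession velocity in km/s)
    (20.0, 1480.0),
    (50.0, 3620.0),
    (90.0, 6300.0),
]

estimates = [velocity / distance for distance, velocity in galaxies]   # each ratio is km/s per Mpc
hubble_constant = sum(estimates) / len(estimates)
print(f"H0 ~ {hubble_constant:.1f} km/s/Mpc")   # real local measurements come out near 73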
And in fact, again, my other Johns Hopkins colleague, Adam Riess, is the boss of this. He's the guy who found the Hubble tension and has been pushing it, a Nobel Prize winner for helping discover the accelerating universe. Former guest on my own podcast, Mindscape, as were you. So you have that in common with Nobel Prize winners.
Liv: I love that. Yeah. That's where it ends.
Sean: And Adam has done a pretty good job of convincing me that they haven't made any obvious mistakes, right? Because all of these measurements are super hard. Go out on a dark night and look at the night sky and look at the stars. How far away are they? It's, you can't tell, right?
It's very hard. So astronomers work super hard and they build up these distance measures. And if they do make a mistake at any one stage of that building up, the whole thing can fall apart. But they're super careful. They've been doing it for a century now. And yeah, it's not clear. I don't know.
So the other thing I should say about the Hubble tension is: in 1998, we discovered that the universe is accelerating. Okay. That's what's given rise to dark energy, the cosmological constant, that's the explanation for it. And Adam and others were involved with that, and won the Nobel Prize. So that was a shocking observational result.
But instantly, not only did we have a good solution to it, the cosmological constant, but it actually helped other puzzles go away. Once you say, okay, there is dark energy, that actually solved a bunch of problems all at once. So even though it was a shocking result, the scientific community very quickly said, oh, okay, that's it.
The Hubble tension is not like that. It's one of these things where, I don't know, we have no idea what the answer would be, right? So the joke is, it's an experiment that has not yet been confirmed by theory. We don't have the explanation, and it's perfectly legit to say, if we don't have a good explanation for it, let's be skeptical until we're absolutely sure.
Of course the experiments ultimately have to win, but experimenters also make mistakes, and theorists make mistakes in interpreting experiments.
Liv: And you're a theorist.
Sean: 100 percent theorist. Yes.
Liv: So what are you currently working on now? Because you've recently relocated, you were at Caltech.
You are now a joint professor of philosophy and physics at Johns Hopkins. What is making you scratch your head the most at the moment?
Sean: I think two answers that are a little bit different, but in my mind they're connected. One is quantum mechanics and the emergence of spacetime. So I have my own view on how quantum mechanics works.
It's based on the many worlds interpretation of quantum mechanics. I wrote a book about it, Something Deeply Hidden, which everyone can read. It's certainly not my invention, many other people also believe it, but I think that people haven't quite faced up to it in a down to earth way. I think they're still cheating.
Many times in physics, what I say about physicists is that they are absolutely 100 percent focused on getting the right answer, even if they do so for the wrong reasons, right? And that works fine if you think you know the right answer, but sometimes you don't. Like how do we quantize gravity? We don't know the right answer.
And I think that going back and understanding quantum mechanics better will help us with that. So I'm trying to sketch out that kind of connection between the foundations of quantum mechanics and quantum gravity, emergent spacetime, things like that, possibly with some cosmological implications.
And then the other thing that I'm very interested in is complexity. 24 hours ago, as we're doing this interview, I was at the Santa Fe Institute, where I go all the time. It's the world's leading research institute in complex systems. And that, to me, is the frontier for physics beyond the simple things, because we have figured out the simple things remarkably well. By the simple things, I mean the theories that explain what is going on in the atoms and particles of you and me. We understand gravity and particle physics, et cetera, really well in the regime where you and I live our everyday lives. We don't understand dark matter, the big bang, or whatever, but we're not there either.
So it's a little frustrating right now in physics. How do we make progress? We have these theories that fit the data super well, and we know they're not right, because they don't fit dark matter and dark energy and things like that. We need to go beyond them, and that's a great thing to do.
My colleagues are trying very hard. I have, throughout my career, tried to do that, but it's too hard. There's a whole nother frontier about taking all of these ingredients, the atoms and molecules and whatever, and putting them together to make systems that are themselves not that simple. You can put atoms together to make water or air, and both the ingredients and the collective are simple, but a bacterium or a society is not simple. And the people at Santa Fe are trying to understand the general principles of complexity and how they fit in with everything else. And my sort of job, my self-appointed task, is specifically to reconcile the existence of complexity with the fact that we know pretty well what the underlying laws are, and they're pretty simple.
Liv: So to dig into complexity: I think, maybe just by its name, the difficulty with complexity is that we don't even have a firmly agreed-upon definition.
Sean: Oh, a hundred percent. Yeah. It's hilarious.
Liv: I think my favorite definition I've seen is that a highly complex system is one that displays a lot of variation without that variation being random, which seems to imply that what complexity is pertaining to is actually information. But it's not just any old kind of information, right?
It's meaningful information that we can actually do something with. Because technically, if you take a white noise situation, so a very high entropy state, then according to Shannon entropy, which is one definition of entropy related to complexity, it's a highly informational state, but at the same time it's pretty useless.
Sean: Exactly. Yes.
Liv: So it seems like complexity is almost like this middle zone of patternicity, which contains all this information. And then it also has, there's like a sort of cluster of adjectives that seem to come with the complexity, like it's evolving, it's emergent, its whole is greater than the sum of its parts.
But I'd love to hear your personal definitions of complexity, or at least how you go about thinking, trying to define it in your work.
Sean: Yeah, look, you're completely correct. There's lots of different definitions and there's no one right one. There's different aspects to complexity, just as the same issue happens when you try to study the origin of life and no one in the field agrees on what life is. But living creatures have certain characteristics in common, and you can study those. And I think that's what's going on with complexity. There are just different definitions. We're, in our informal, natural language way, attaching this word to a variety of different things. So one aspect of it, though, is that it doesn't algorithmically reduce to a small number of bits, right?
There is a lot going on. So pi, the number pi, is that complex or not? If you just go out there in the digits, it looks completely random. It satisfies every statistical test for being random, but it's not random. It's pi, right? And also what that means is that this distinction becomes very hard to see.
Because if I say the digits of pi, but I start with digit number 518 and just give you the digits of pi, you would never know it was the digits of pi. You would never know there was that hidden simplicity under there, right? So this is just one of the challenges. I'm not actually answering your question yet.
The way I think about it is, again, I'm trying to connect complexity to the laws of physics as we understand them, in particular the evolution of the universe over time from a very simple initial condition near the big bang. The future is also going to be very simple: all the galaxies are going to evaporate.
We'll be left with nothing but empty space, the heat death of the universe. Complexity is ephemeral. It comes and goes. We, right now, in this room, are in one of the most complex places and times in the history of the universe.
Liv: Arguably from what we can tell, unless we discover another alien civilization, this is presumably the most complex thing.
So why is it complex? Is it because it's hard to describe or is it?
Sean: I think that the basic thing is exactly that: it is hard to describe. So Scott Aaronson, who we both know is another Texan, and I wrote a paper together with some students where we did the dumbest possible thing. We modeled cream mixing into coffee. When the cream and coffee are all separate from each other, that's both simple but also low entropy, right?
Everything is organized. When they're all mixed together, it is high entropy, but still simple. And it's in between that you see tendrils of cream and coffee mixing into each other. Okay. So in fundamental physics, there's a conservation of information. The information you need to specify a closed system is exactly the same at every moment in time.
Laplace's demon says that if you knew exactly the state of the world at any one moment in time, you could figure out the future and the past, but our observations don't give us access to that. You can't see the molecules of cream and coffee when you look at the cup, right? So you coarse grain: you say, okay, in this little region there's approximately this much coffee, this much cream, or whatever.
And that sort of papers over a lot of the microscopic distinctions. If you took pi and took chunks of a hundred digits at a time and took the average, it would all be five, right? It would just be 5, 5, 5, 5, 5. You've made it very compressible. And so Scott and I and our collaborators defined complexity, in our sense, as
what we call apparent complexity. First coarse grain, so you just take some average overall view, and then algorithmically compress: how short is the minimal way of describing this system algorithmically?
Igor: That's usually Kolmogorov complexity, but it sounds like it's a bit different.
Sean: Yes.
The reason why I'm not quite saying that is, there is this wonderful idea of Kolmogorov complexity, which is: if I take a string of numbers, what is the shortest possible computer program that will output that number? If it's truly random, the shortest possible program is print, quote, and then the number.
If it's not random, if there is some structure in there, then maybe you can, for pi, construct a very short little computer program. The problem with that: it's a wonderful definition, and you can even show it doesn't really matter what computer language you use, et cetera, but it is uncomputable. There is a theorem that says you cannot actually compute the Kolmogorov complexity.
Why not? Why can't you just try every computer program until it prints it out? Because there's something called the halting problem. You never know when you write a computer program, whether it will ever halt. So to just write every single simple computer program and ask if it prints out your number takes more than an infinite length of time.
But you can approximate it. And indeed, when you have a JPEG file on your computer, there are very clever compression algorithms that do exactly this. So here's a fun activity for the kids at home. Take a glass and put coffee on the bottom, cream on the top. Take a picture of it. And then take a picture after you've put a spoon in and stirred a little bit, so there's now tendrils in there.
The file size of the second picture will be bigger. Yes, because it's not as easy to compress it.
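(Here is a rough Python sketch of that coarse-grain-then-compress recipe, using zlib as a stand-in for an image compressor. It is an illustration of the idea, not the actual code from the Aaronson-Carroll paper.)

import random
import zlib

# A "cup" is a long string of 0s (coffee) and 1s (cream).
random.seed(1)
N, BLOCK = 200_000, 2_000              # molecules per cup, molecules per coarse-grained cell

unmixed = [0] * (N // 2) + [1] * (N // 2)                   # cream sitting on top of the coffee
half_stirred = [1 if random.random() < i / N else 0         # a gradient standing in for tendrils
                for i in range(N)]
fully_stirred = [random.randint(0, 1) for _ in range(N)]    # completely mixed

def coarse_grain(cells):
    """Keep only the approximate cream fraction (in tenths) of each block."""
    return bytes(round(10 * sum(cells[i:i + BLOCK]) / BLOCK)
                 for i in range(0, len(cells), BLOCK))

def apparent_complexity(cells):
    """Length in bytes of the compressed, coarse-grained description."""
    return len(zlib.compress(coarse_grain(cells), 9))

for name, cup in [("unmixed", unmixed),
                  ("half stirred", half_stirred),
                  ("fully stirred", fully_stirred)]:
    print(f"{name:>13}: {apparent_complexity(cup)} bytes")
# The half-stirred cup needs the longest description; the unmixed and
# fully stirred cups both compress down to almost nothing.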
Liv: That's so cool. Yeah.
Sean: So that's, so of course that's like the dumbest, most naive, most straightforward definition of complexity. And one of the projects I'm working on right now is tracking through the stages of development of complexity. Because what happens is systems learn to take advantage of information about their environment. And that's like a whole nother level of complexity. Even something as simple as a bacterium knows the difference between where there's more food and less food, right?
But of course, you and I have this whole apparatus where we can do calculations, run scenarios, run simulations. That's a level of complexity that the bacterium cannot even dream of.
Igor: To connect that to the universe: basically, we would expect that the most complex era of the universe would also be one that is in between, while we still have new stars being formed, while some of them become black holes.
And we have each of those different phases that matter goes through, how it combines itself with other matter, while all of it still exists and is active.
Sean: The way I think about it is the early universe had a very low entropy and we can talk about the details of that.
That's a whole thing, but it's true. The early universe had low entropy. To physicists, low entropy means high information. That's the opposite of what it means to
Liv: computer scientists
Sean: or communication theorists. Because the question they're asking is, how do you convey information in a code or an alphabet or some symbolic system?
And of course, when you're conveying information, you don't want to use a code where 99 percent of your symbols are the same one, because then you're not conveying information very efficiently. But physicists want to know, what do I know about the system if you tell me the macroscopically available information?
So if I tell you the cream is on the top and the coffee is on the bottom, that's low entropy. It's low entropy because there's relatively few arrangements of the atoms that look that way, okay? It's simple because it's so easy to describe. All the cream's on top, all the coffee's on the bottom. So these are two different things.
Just like when you mix them all together: high entropy, because there are many ways to mix them all together, but still simple at the macroscopic level. So to the physicist, if I give you a specific low entropy state, like all the cream on top, all the coffee on the bottom, it conveys an enormous amount of information,
because I have a pretty good idea of what the allowed configurations are. There aren't that many of them, right? And anyway, the way that I think about the cosmological question is: the early universe had low entropy. That means we know a lot; we have a lot of information about the microstate of the universe, and that gradually degrades over time. We lose it as entropy increases. So that low entropy of the early universe is a resource, a resource of information. And what happens is, as the universe grows and you make stars and planets and life and ecosystems and internets, you're taking advantage of that information resource in more and more clever ways.
Igor: So the highest complexity state of the universe is at the point of compromise between the definitions of computer scientists and physicists around how entropy relates to complexity.
Sean: Yeah, roughly, I want to say yes, we don't have a great way of quantifying that right now. It's one of the things I'm trying to understand better.
But let me point out one thing just so people don't get too pleased with listening to this podcast episode. Most of the stars that will ever be made in the history of the universe have already been made.
Liv: Are there any counterintuitive things relating to complexity that your work has uncovered?
Sean: Just the fact that it goes up and then it goes down. I think that there are a lot of people, especially on the biology side of things, who want to say that complexity always goes up. And not necessarily the complexity of individual organisms, because it doesn't, right? There are individual species that have lost complexity over time.
But the complexity of the biosphere tends to go up. And so they look for laws of nature that try to describe this. And my attitude is, it goes up because there hasn't been enough time yet. It will go down if you wait long enough. Eventually, the heat death does win in the end. And so I think that it's important to really fit any story that you have of complexity into a sensible additional story about the laws of nature more generally, and take that seriously. We're all going to die. As I've said previously, it's all about the journey. It has to be, because that's all there is. A previous Santa Fe workshop was on immortality, and they talked about immortality.
Of course, they don't really mean immortality. They mean living a long time or something like that. And so I just gave the wet blanket talk saying, if you really care about immortality, the universe has a harsh lesson for you, because being a living being requires fuel, requires low-entropy free energy.
We can be specific about what that means, but it is a finite resource that the universe is going to run out of. There is no way, even in principle, to imagine an immortal being in the universe as we know it right now.
Liv: That said, and I accept that, but I'm going to ask this question anyway. Presuming AI, et cetera, goes well, we are on this exponential curve of intelligence, seemingly, whether it ends up being digital or silicon-based, whatever.
It does feel almost hubristic to say that we, and by we I mean agentic intelligent life forms of whatever substrate, won't figure out ways to…
So I guess my question is: is it completely out of the realm of possibility that there might be some way to overcome the second law in the long run, through agency and problem solving? Because it almost feels like, as biology and life get increasingly complex, that seems to be their curve.
I don't know, it feels like there may be some step change that could happen, or some key that gets unlocked.
Sean: Every bell curve starts out as an exponential; the fact that you're on an exponential right now means almost nothing, because extrapolation is super duper hard. But for the more general question, you have to separate it into metaphysical-certainty kinds of questions.
Are we absolutely certain of this or that? And the answer is always no. We're not certain. We can't. In the world of physics, in the world of empirically understanding the world, there could always be a surprising result tomorrow that changes our minds. But then there is a slightly easier question about, given what we know about the laws of physics, can you do this, right?
If you're having a discussion of high speed rail, you're not worried about the speed of light being a limit. But it is there, and you can't say: we're getting better and better at building high speed rail, so if we extrapolate a million years into the future, we're going to beat the speed of light. That's just not right.
Same thing is true with the entropy of the universe. It doesn't matter what configuration of stuff you have, it's still relying on free energy in the universe, and that amount is finite. So there's no way of being clever to get around it. The laws of physics are involved.
Igor: So, in your view, are we as certain about the heat death being a good model of what's happening in the future as we are about the Big Bang as the beginning?
Or how do they stack up against each other? Where would you put the certainty level of these theories standing the test of time?
Sean: That's a very good point, because of course we don't know; the future is hard to predict. When I'm talking about the heat death of the universe, et cetera, I'm presuming a certain very natural set of assumptions about the future evolution of the universe that might not be right. So I put much less credence on that than on the statement that you will never travel backward in time or faster than the speed of light. Those are much more solid statements. The statements I'm making are conditional on our best current theory of the future.
I think that's the way to plan, until something better comes along. So when I talk about the Big Bang, the speed of light, traveling backward in time, my credences are 99.999 percent, et cetera. When I talk about the future heat death of the universe, maybe 90 percent is a better credence. When we're talking about spacetime emerging from quantum entanglement, who knows, but probably, if I'm careful, it'd be less than 50 percent.
I think it's a very promising way forward, but we just don't know.
Liv: 10%! Yeah that's not,
Sean: As an order of magnitude guess, I would imagine. But as poker players, you're like, that
Igor: happens all the time. Yeah.
Liv: That's the most hopeful thing I've heard in a long time. I'm thrilled.
Igor: Then also one more thing on the heat death though, because now let's dig into those 10%.
So how should I think about it? Because quantum fluctuations would presumably still occur after the heat death. And the heat death lasts for a very long time; I don't know if time actually still makes sense as a concept at that point. But yeah.
Can a sufficient number of quantum particles appear to form something new? It's a question, right? It might not be something we can interact with, because at that point, sorry, you're gone, it was the heat death, by definition.
Sean: Yeah. So we don't know for sure. I suspect no. Even if yes, it'll be super duper far in the future.
But this is worth saying something about, we haven't talked too much directly about quantum mechanics yet. I don't know. We're going to get into it. Let me say one particular thing about quantum mechanics: the phrase quantum fluctuations is fraught. It's a dangerous one. It's not that it's wrong, but it gives you exactly the wrong impression because it gives you the impression that you close your eyes and you see all these things popping out of existence.
That's exactly wrong.
If you have a quantum mechanical system and you look at it and you measure it, then you will get measurement outcomes that seem to jitter around, okay? Which makes you think that when you're not looking at it, it's jittering around. But all the rules of quantum mechanics say no.
There's a fundamental difference between what the system does when you're not looking at it and what it does when you measure it. So something like empty space, as far as we understand it, is stationary, is static. There's nothing happening. So there's no probability per unit time that something wild is going to occur.
If you were to probe it over and over again, you would get different measurement outcomes, but there's no one around to probe it. There's no measurements going on.
Igor: So fluctuations are a result of us interacting with it, or anything interacting with it?
Sean: The appearance of fluctuations is exactly that.
Igor: Interesting. And it makes sense that it would change. We'll probably also talk about what an observer is, but it makes sense that if you interact with it, if you poke it, something should occur with the system.
Sean: It does make sense, but for the wrong reasons. Because in classical mechanics, if quantum mechanics had not been right, it would still make sense that if you poke things, you might get different answers.
But in that theory, you can poke things gently. You can poke things as gently as you want and disturb them as little as you want. And quantum mechanics is fundamentally different. Even the gentlest of pokes can really give you a very different measurement outcome than you expected in quantum mechanics.
So there is something fundamentally new there.
Liv: So why is that? Why is it always going to create a fundamental difference?
Sean: This is the quantum measurement problem. This is what is very hard. So different people will give you different answers to that question, depending on their attitude towards what quantum mechanics really is.
If you're a many worlds person, such as myself, this conversation we've already had about entropy increasing, et cetera, is super relevant here, because what happens is you, the observer, and the system you're looking at, what you call a measurement, involves you becoming entangled with that system. So if the system is just an electron that could be spinning clockwise or counterclockwise, or, according to the rules of quantum mechanics, it can be in a superposition.
So it's not fluctuating back and forth, it really is both, spinning clockwise and counterclockwise. But when you and I measure it, we only ever see spinning clockwise or counterclockwise, full stop. We never see that superposition. And the many worlds story is that you have become entangled with the spin, and there is now part of the wave function of the universe saying the spin was clockwise and you saw it be clockwise.
And there's a whole nother part of the wave function of the universe saying the spin was counterclockwise, and that's also what you saw. And these two parts are never going to communicate with each other, never going to interact, never going to influence each other. Those are the separate worlds that we talk about in many worlds.
Liv: Why are you more subscribed to the many worlds interpretation as opposed to the perhaps more popular, or at least more common, interpretation, which is the Copenhagen: that when an observer interacts with a quantum system, the wave function essentially collapses, and the superposition chooses a path and that's it.
And the other potential literally ceases to exist. Why do you fall on many worlds?
Sean: That's a great question. Perfectly fair. And two things are going on. One is that the Copenhagen interpretation, which you just described very well, is not a good theory. By not a good theory, I don't just mean that I think it's unlikely to be right.
It just doesn't make sense. It's not written down, it's not full and complete, there are questions you can ask that it doesn't give an answer to. When does a measurement happen? What counts as a measurement? Not to mention that it's grossly ugly, right? It's literally saying: quantum systems behave in one way when you don't measure them, and they behave in a very different way when you do measure them, and I'm never going to tell you what a measurement is.
That's not okay. That's not fun, that's not a viable thing to be the most fundamental theory of nature. So I'm not really even worried about Copenhagen being right. It just doesn't have a chance of being right. But there are many alternatives. There's not just many worlds. There's hidden variable theories.
There's epistemic theories. There's objective collapse theories. So there's a good set of perfectly viable alternatives. The reason why many worlds is my favorite is because it's the simplest. Honestly, it just says that thing that you thought was true about quantum systems evolving differently when you're looking at them and not looking at them, that was a mistake.
All quantum systems always evolve the same way, according to the Schrodinger equation. What you left out, the mistake you made, not you, Liv, personally, but Niels Bohr, is that you forgot that you are a quantum system, that you have a wave function, that you can become entangled with other things.
Liv: So, putting my computer scientist hat on, I feel like they would say that many worlds is actually less simple from a computational standpoint. Because especially if you think that the universe is in some ways doing computation, and a lot of people do subscribe to the idea of a computational universe, then presumably a computational multiverse would require hella data centers; that's going to be incredibly computationally expensive.
I don't think even exponential describes it, it would be so absurd. So they would argue that it's actually less simple from that standpoint. And so that's, I think a lot of them would throw out the many worlds interpretation.
Sean: Unless you are really deep into the simulation argument, there are no data centers that are running the computation of the universe.
The universe computes itself, okay? And the way to measure the complexity of that computation is not to say how much data is there in the universe, but what is the algorithm? That's what counts. And Many Worlds has by far the simplest algorithm. It is only the Schrodinger equation. That is the only thing that ever happens.
So the difficult thing to believe about many worlds, let me rephrase what you're getting at here, is that that's a lot of worlds, that's a lot of stuff going on, right? Okay, but that stuff is potentially there in any version of quantum mechanics. If you believe that an electron can be in a superposition of spin up and spin down, and you believe that you're made of electrons and protons and neutrons, then you should be able to believe, stretch your imagination, that you can be in a superposition of different states, and indeed, the universe can be in a superposition of different states.
That's just part of the buy in. It doesn't get heavier and heavier. You're not carrying that.
Liv: It's not new information getting created every time it branches, because that same information was contained within the superposition.
Sean: That's right. It's one quantum state. It's the, it's an element of the same mathematical space.
It's only obeying one equation.
Igor: Forever. But computationally, it would still require more computation to render, I would imagine, if we stick with the simulation hypothesis. A gazillion worlds versus one, even when there are some superpositions, because sometimes they're just like a probability function, without yet having run a Monte Carlo simulation over each of those superposition states.
You can probably keep them computationally fairly compressed in a superposition, and only later, when they interact, it's, ah, now in many worlds, now it's like a Monte Carlo simulation over Igor interacting here and all the different outcomes.
Sean: So there's two different answers I can give to this: the glib one and the more relevant one.
The glib one is this: the universe starts out in a simple state, and the quantum mechanical state of the universe is a vector, not in a three dimensional space, but in a very large dimensional space. Hilbert
Liv: space, right?
Sean: Hilbert space is what it's called. The length of the vector doesn't change.
All it does is it rotates. Okay? That's all the universe ever does. It's a vector moving around in Hilbert space. The length doesn't change. So the reason why it seems more complicated to you is because you started it lined up in a very simple looking state, and then its components become more and more difficult to prescribe.
But it's still a vector. You just chose a less useful basis to express that vector in. It didn't get any bigger. Okay? So the information that it contains is entirely the same.
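(A small numerical sketch of that picture, for a toy two-state system rather than the whole universe: the state is a vector, the Schrodinger equation only rotates it, and its length never changes. The Hamiltonian below is an arbitrary choice for illustration.)

import numpy as np

H = np.array([[1.0, 0.5],
              [0.5, -1.0]])                      # any Hermitian matrix will do (toy choice)

def evolve(state, t):
    """Apply the unitary U = exp(-i H t) to the state (with hbar = 1)."""
    energies, modes = np.linalg.eigh(H)          # diagonalize the Hamiltonian
    U = modes @ np.diag(np.exp(-1j * energies * t)) @ modes.conj().T
    return U @ state

psi0 = np.array([1.0 + 0j, 0.0])                 # start in a simple-looking state
for t in (0.0, 1.0, 10.0, 100.0):
    psi_t = evolve(psi0, t)
    print(t, np.round(psi_t, 3), np.linalg.norm(psi_t))   # components change, the norm stays 1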
Igor: Oh, just a second longer on this. As you describe that the vector moves, do you mean that it touches upon higher dimensionality?
Sean: I mean that it obeys the Schrodinger equation. I guess the reason why it sounds fuzzy is because we're used to a world full of stuff. We have space, we have things, we have nice chairs in the space and they're spread out. And that's a way of describing the universe, in terms of which, and I guess this is my second answer to the question, you need a lot more information to describe the universe today than you did before, and that's exactly because the complexity is increasing.
But mathematically, it's no harder. It's our choice of variables to use, right? It's our choice of description, which we chose to look very simple at the beginning, and it becomes less and less convenient as time goes on. People really have this feeling that the universe is in a box and it gets heavier and harder to carry, and that's really not a good
Liv: Coming back to emergence, I feel like it's a term that often gets dismissed by scientists or rationalists as being too associated with woo.
And to be fair, it often does get sucked into that world and used to explain things, like quantum mechanics as well, this kind of quantum mysticism where people are like, oh, it's the quantum world that's doing it. And it's used as this almost hand-wavy explainer for anything that we can't quite explain.
But emergence is actually a very real and important field of study. How is emergence featured in your work at the moment?
Sean: Glad you asked that, Liv, because I have a book coming out, Volume 2 of The Biggest Ideas in the Universe. Volume 1 was called Space, Time, and Motion, Classical Mechanics, Relativity.
Volume 2 is called Quanta and Fields: quantum mechanics, particle physics, quantum field theory. And I'm writing Volume 3, the title of which will be Complexity and Emergence. These three books are very much appetizer, main course, dessert. The middle book, the quantum field theory book, is action packed; there's a lot going on in there, but it's super duper nutritious if you go through it. And then you get to have fun and play in book three, where we talk about complexity and emergence. But you're absolutely right that the definitions don't line up. Just like with life or information or entropy or complexity, people argue over what the right definition is.
There's no right definition. There's different aspects, or different things that can happen, that naturally get associated with emergence. One, the one that I like to focus on, is just absolutely indisputably real, which is: there are conditions in which I can take a theory that is somehow more microscopic and comprehensive, like a theory of atoms or quantum mechanics or whatever, and I can throw away almost all the information contained in a state, keep the right information, and still make accurate predictions.
A classic example would be the Earth going around the Sun. I don't need to know what all the atoms in the Earth are doing to tell you how it goes around the Sun. What I need to know is its center of mass location and its center of mass velocity. All I need to know. Six numbers, because we're in three dimensional space, as opposed to ten to the fiftieth numbers that I would need to give you all the atoms, right?
That's miraculous, just the fact that we can make good predictions, throwing away almost all the information about a state. That's a kind of emergence. It also happens when you just take the air in this room and you average over the locations of the molecules, etc. You get things like pressure and temperature and density, and you can make predictions.
Meteorologists do that, right? So that's emergence. It's the existence of a hidden macroscopic pattern within the microscopic laws that allows you to throw away information. But of course, the center of mass of the Earth example is misleading, because the kind of theory you started with is the same before and after emergence.
It's a theory of points obeying Newtonian mechanics, okay? In the air example, the kind of theory you end up with is different. You started with point-like molecules, you end up with a continuum of fluid, right? And that can happen in much more dramatic ways. And so a lot of people will point to emergence as what happens when you are surprised
at the macroscopic behavior. Some people will even invoke emergence to mean you could not possibly have predicted that this would happen. And in both those latter cases, I struggle to understand what they mean. What if I'm not surprised? Did the emergence go away if I choose not to be surprised?
What if I didn't know how to predict it but then I figured out how to predict it? Did the emergence go away? These are all like human judgment words. They should not be part of your definition of emergence.
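(Here is a minimal sketch of the Earth-around-the-Sun coarse-graining, with toy units and far fewer particles than a real planet: replace a cloud of "atoms" by its center of mass position and velocity, and check that the Sun's pull computed on the center of mass alone matches the average pull on all the atoms.)

import numpy as np

rng = np.random.default_rng(0)
G, M_SUN = 1.0, 1.0                               # toy units
N = 10_000                                        # stand-in for the ten-to-the-fiftieth numbers

earth_atoms = np.array([100.0, 0.0, 0.0]) + 1e-3 * rng.standard_normal((N, 3))
atom_velocities = np.array([0.0, 0.1, 0.0]) + 1e-5 * rng.standard_normal((N, 3))

def sun_acceleration(positions):
    """Newtonian acceleration toward a Sun fixed at the origin."""
    r = np.linalg.norm(positions, axis=-1, keepdims=True)
    return -G * M_SUN * positions / r**3

com_position = earth_atoms.mean(axis=0)           # three numbers
com_velocity = atom_velocities.mean(axis=0)       # three more numbers
print("coarse-grained state:", com_position, com_velocity)

exact = sun_acceleration(earth_atoms).mean(axis=0)        # keep track of every atom
coarse = sun_acceleration(com_position[None, :])[0]       # keep only the six numbers
print(exact)
print(coarse)                                             # the two agree to high precision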
Liv: So this perhaps ties back in with the earlier question around information being conserved in the universe. Because if we look at emergent things that are playing out through the evolution of life on Earth:
presumably, if you were observing eukaryotic cells once they emerged, it would have been hard to predict the development of multi-organ animals. That seems like almost new information that has been created. So is that not really what's going on? Or is it a misnomer of the term information?
Because I've always struggled to wrap my head around this idea that, no, all the information in the universe has always existed from the moment of the big bang, it's just that it's different, you just couldn't access it, or something like that. So what?
Igor: Laplace's demon.
Liv: Yeah. Only Laplace's demon.
This hypothetical thing that has perfect information. Or perfect access to information, or is it truly the case that new information is being created in certain situations?
Sean: If we put aside quantum mechanics for a second (that's a more subtle question; we'll get to that), let's imagine first that we're just classical, that Newton had been right.
Okay. You could absolutely describe much of what has happened since the Big Bang, the development of eukaryotic organisms, et cetera, mostly in a classical framework. And there, information is conserved at the microscopic level. If you were Laplace's demon, if you had access to the exact microstate of the universe, the information is the same from moment to moment.
But we're not Laplace's demon. Someone I forget exactly what the formulation was, but someone on my Patreon for my own podcast said they want to start a drinking game to have a drink every time I say you are not Laplace's demon because you're not. And your accessible information certainly changes over time.
Imagine, for example, you were playing poker and in a Newtonian classical world, there is a predetermined fact about the matter, about what your two whole cards are going to be. You can't predict it. You don't have the information. Laplace's demon. And if the deck of cards had 10 to the 100 cards in it, you would be essentially out of luck in predicting what it would be because the probability of getting away would be so small, but it's still there.
So this distinction between what can be done in principle by Laplace's demon and what can be accessed and used in the real world by we bounded macroscopic creatures is super duper important.
Igor: And Laplace's demon would also have the ability to know that pressure will emerge as a macroscopic description of the many atoms?
Sean: Well, Laplace's demon doesn't exist, so you can make up whatever Laplace's demon you want. This is a hilarious question, because there are people who will insist that Laplace's demon only knows the exact microscopic state of the universe, not how to coarse grain it, none of the emergent higher level things.
So Laplace's demon doesn't know about entropy or temperature or love or life or death, right? Of course I can make up a version of Laplace's Demon that is slightly cleverer than that, who knows all those things, yeah. But let me also say, the reason why quantum mechanics has to be put aside and then brought back in, is, and again, it's the complexity evolving over time.
The early universe, from a quantum mechanical point of view, could very well have had its exact quantum mechanical state be super simple. But then this branching happens over time, and the way that I like to think about the branching is this: take a perfectly circular disk. It's very simple, there's not a lot there, but now I divide it in two with some complicated cuts.
Now, both of the pieces are complicated, even though they fit together to form a simple thing. In quantum mechanics, the universe is like that. Our observable branch of the wave function of the universe requires an enormous amount of information to specify what's going on. And there are also many other branches, and they all fit together to make something very simple, which is awesome.
Liv: Some people seem to believe that it's possible to infer the sort of macroscopic behavior we're seeing of humanity and the economy, and how it will progress over time, by reducing it all back down to the underlying physics. It feels to me like that level of reductionism is hubristic. Would you agree?
Sean: I'm super hubristic. Come on. I study the universe. What can I tell you? I think that the laws of physics that govern the universe that we actually experience in our everyday lives are pretty well understood. I think that the macroscopic behavior of the universe, up to and including politics and society, must be compatible with those underlying laws.
So you have to distinguish sort of dependence from usefulness in some way. In principle, everything that's happening in society or biological evolution or whatever follows from the laws of physics initial conditions of the universe. In practice, that's a dumb way to think about society, right? It's much easier.
If you want to make a cup of tea and someone says, okay, start with the standard model of particle physics, that's just of no help, much less complicated things, right? There might, however, be some things we can get out of thinking as physicists. There's a whole field called econophysics, where they try to use principles from physics to understand economics.
I'm working on a long term project, it's going to be years before it comes out, but it'll be a book called The Physics of Democracy. Where we use ideas from physics about phase transitions and emergence and complexity to understand society and voting and emergent choices. There's no better, more vivid example of emergence than taking the preferences of individuals and grouping them together to get the preferences of a society.
And I think that we've been slapdash about choosing how to do that. The interesting new thing is that it goes both ways: there is this emergent thing, but we're also shaping it, through the forms of government, the voting system that we have, the ways we communicate, and things like that.
So there's this feedback loop. It's not like the air in the room, where there's one right way to coarse grain to get an emergent description. We're choosing how to coarse grain ourselves. And I think we should be doing that in a self-aware kind of way.
Liv: Right? Yeah, because historically, well, there have been some instances, such as the founding fathers, who decided: we want to start a new country, and we're actually going to think about this from first principles and design this system in an intelligent way. I'm not saying America is perfect, but it's worked out pretty well.
Sean: Much better than it should have, honestly.
Liv: But that, yeah, that speaks for something, there's clear value in thinking about designing the rules of the game as opposed to letting them just emerge naturally from or evolve from previous, especially the nature, which is often read in tooth and claw.
So are there particular
Sean: I'm still very much in the research stage here. But both voting theory and game theory are very important, right? There are different actors with different interests, and we're trying to figure out how best to accommodate them. And it's not just a descriptive question,
it's a prescriptive question: whose values count, and how much do they count? But I do think that at least we know more about voting theory and game theory than the founding fathers did. So in principle we could do better. Now in practice, the reason why I say that America did much better than it should have is that
the people who were there in the 18th century, designing the constitution, et cetera, were both pretty darn smart and, for the most part, acted in good faith. The people from the South wanted certain aspects of things, people from the North wanted other aspects of things.
I saw Hamilton. It was great. You can actually see that play out in real life. Fine. But overall they were reading their Locke and Rousseau and Plato and Aristotle and thinking hard, the Federalist Papers were a really in depth examination of some of the very difficult questions that would arise in politics.
I doubt we could reproduce that now. The last thing I want to see right now is a constitutional convention, because the right people would not be there. A good constitutional convention right now for the United States could vastly improve our system; the chances that's what we would get are infinitesimally tiny, I think.
Liv: Is that because power maximizers would be attracted to it? Which tends to mean less
Sean: Good faith actors would be chosen. And also, the country's old now; it's settled in its ways. There's a lot of historical baggage that we have to put up with, and much less of a feeling of experimentation. And,
Liv: it's a more complex system.
Sean: It's a more complex system, and certain norms are entrenched and so forth. Yeah, I would not want to see it. I mean, look at who we vote for. Come on. And look at the,
Liv: More specifically, look at the options that are provided by the system. I think that's the most interesting part, yes.
Okay, the voting thing. Not to get too heavily political, but I'm not thrilled about either of the likely options that are going to be presented now. I can't vote; I'm an immigrant, so I just have no involvement anyway. But still, how is it that a system that has in many ways been so successful, beyond a certain level of complexity, has inertia so big that it tends to just put forward the two things that are most likely to continue maintaining its existing structure?
Sean: I don't think it's inertia, but I do think that, in some sense, the country is almost too big to be governed in a fair way.
Igor: It's so difficult for people to figure out new things, they just stick to the old. You mean like: oh, Biden again, Trump again.
Oh, Clinton, a new Clinton. Like, we're okay, just stick with something simple. It's like the desire for simplicity.
Liv: No, it's more that, well, at this point it feels like America is almost split into these two strong power structures: you've got Republicans and Democrats.
And one of the ways for that to continue sustaining itself is to put forth the two candidates who, just looking back at history, have both successfully been voted in before. So let's just put them in again.
Sean: I think logic might not be the right word to talk about this. Because on the one hand, we have primaries.
People voted for Trump and Biden, right? They had alternatives. So why did they vote that way? And the answer is complicated; I don't want to oversimplify it. Part of it is not paying attention, being low-information voters. Part of it is money, getting the word out, different things. Part of it is that the media system chooses to highlight certain things and not others.
There are just a million things that go into it, and full employment for people who study this stuff for a living. But don't oversimplify it; that's the one huge mistake you could make.
Liv: How does game theory, or actually more specifically decision theory, change, or would it change in the many worlds framework?
Sean: So I think it wouldn't, but this is a very good question. I did talk a little bit on my podcast, Mindscape, with Lara Buchak, who is a very elite philosopher of decision theory. And her angle is that standard formulations of rational choice theory, et cetera, don't allow for risk aversion in particular to be taken into consideration.
Standard decision theory will tell you that if there are truly two choices, one an option with utility one, and the other a 50-50 chance of utility zero or utility two, you should be indifferent between them. That's if they're truly measures of utility; obviously there are marginal values, et cetera.
But okay, suppose they're truly utilities. And Lara's point of view is that you can absolutely invent a set of axioms for rational choice that says, no, I don't want to risk getting zero. And it would be completely consistent, right? It's not like you reach some contradiction or anything like that.
She's not saying you have to do that, but that it's just as rational. Okay, so if that's true (I'm giving you a long-winded, backwards answer), in the straightforward way of doing decision theory or rational choice theory, many-worlds has no impact. In other words, it's no different than just random numbers, just expectation values for probabilities, no different than the Copenhagen interpretation.
If you say there's a 50-50 chance of being in one world with a lot of value and another world without, your decision would be no different than if there were just a 50 percent chance that one world would come about, as opposed to two worlds that both exist with a hundred percent probability.
But if you were super risk-averse, that might change things. And honestly, the answer is, I don't know.
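A minimal numerical sketch of the contrast Sean describes (illustrative only; the simple risk-weighting below is a toy stand-in inspired by Buchak-style risk-weighted expected utility, not her actual formalism, and all the numbers are the ones from the example above):

```python
# Toy comparison: a sure utility of 1 versus a 50-50 gamble of utility 0 or utility 2.

def expected_utility(lottery):
    """lottery: list of (probability, utility) pairs."""
    return sum(p * u for p, u in lottery)

def risk_weighted_utility(lottery, r=lambda p: p ** 2):
    """Illustrative risk-weighted evaluation: start from the worst outcome and
    weight each possible improvement by r(probability of doing at least that well).
    A convex r (like p**2) encodes risk aversion."""
    outcomes = sorted(lottery, key=lambda pu: pu[1])   # worst to best by utility
    utils = [u for _, u in outcomes]
    probs = [p for p, _ in outcomes]
    value = utils[0]
    for i in range(1, len(outcomes)):
        prob_at_least = sum(probs[i:])                 # P(utility >= utils[i])
        value += r(prob_at_least) * (utils[i] - utils[i - 1])
    return value

sure_thing = [(1.0, 1.0)]
gamble = [(0.5, 0.0), (0.5, 2.0)]

print(expected_utility(sure_thing), expected_utility(gamble))            # 1.0 1.0 -> indifferent
print(risk_weighted_utility(sure_thing), risk_weighted_utility(gamble))  # 1.0 0.5 -> prefers the sure thing
```

With plain expected utility the two options tie, so whether the 50-50 is read as one chancy world or as two branches that both exist makes no difference to the choice; only with a risk-averse weighting does the gamble get marked down, which is where the many-worlds question could in principle start to matter.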
Liv: So do you think there's any low-hanging fruit, or solutions to these multipolar coordination problems? Because so many of the issues we're now facing are a result of it getting increasingly hard to coordinate in many ways, particularly as the information ecosystem breaks down and so on.
Are there any schools of thought you're going down, or areas that you think hold promise?
Sean: I think I'm more in the stage of diagnosing the problems than in solving them, I have to confess. The problem, as you mentioned, is very much there.
What I think is under-appreciated, at least for now, from my current point of view, is just, again, how big the system is: we're more connected than ever before. So not only is the world literally bigger, but the parts of the world we interact with are bigger, right? Because we have giant corporations.
We have the internet. We can talk to people everywhere. We have fewer and fewer mainstream news outlets; there are no local newspapers anymore, right? And so that gives us a feeling of helplessness, because we can't...
Liv: affect
Sean: ...what is affecting us.
Liv: And yet we can see it all.
Sean: We can see it. Oh, it affects us, right?
But we can't affect it back. There's an asymmetry that is more powerful than ever before. And I think that people respond to that with various versions of tribalism and shortsightedness, right? And resentment. Not in perfectly rational ways. People are hyper-aware of the interests and values of their own community, and they keep bumping into other communities and just not getting them, not understanding them, et cetera.
But they have to bump into them in a way they didn't used to have to, right? And so I don't know how to do better at this. In fact, I worry that it's just going to get worse.
Liv: How does competition manifest within academia and physics?
Sean: That's a great question. And I will try not to wimp out by just saying it's complicated, but it is.
One of the things that strikes me is we are not very well trained as a society to disagree with each other. I think we're better trained in academia. People on the internet accuse me of being afraid of criticism or whatever. No, I'm afraid of idiotic criticism, which is what I get on the internet all the time.
As a professional physicist and philosopher, people disagreeing with me is just my day job, right? It happens all the time, but it's not "oh, you're an idiot"; it's a substantive disagreement. Not very often, but it will get heated and personal and annoying sometimes.
Much more often, I have people on my podcast who I deeply disagree with, and we have a great conversation about what it is we disagree about. I'm just surrounded by people I disagree with, and that capacity, even though it's only at a mediocre level in academia, is almost non-existent outside academia, at least in the general social media landscape.
So there's a hundred percent competition; there are very limited resources. I tell my graduate students: at a typical good physics graduate program, 99 percent of the people there want to become tenured professors of physics one day, and 25 percent will. That is going to make life tough, right?
And I tell them; I don't want to hide it. So I'm like, this is what you're in for. I don't feel bad about letting people into grad school, because getting a PhD in physics leads to good things otherwise. But it's going to be competitive, and it's not necessarily competition with your nearest neighbor, because you're competing with people elsewhere and things like that.
Another thing, and this is all just anecdotes that add up to a picture, is that students sometimes say: I would like to ask more questions in seminars, but I worry that the other professors are judging me. And your instinct is to go, don't worry, they're not judging you. But then you think about it and you're like, yeah, they're judging you all the time.
And then you have to say: they're judging you for asking questions and also for not asking questions. They're judging you in the seminars and in the hallways. Ranking people and saying who's good and who's not so good is a hundred percent always happening. In academia it's a little bit more meritocratic than when it happens in Hollywood or Washington DC or the business world.
But it's absolutely happening, and it's the single worst aspect of academia. I love academia; I'm part of it. I'm old and established enough that I don't need to worry about it much anymore. I know there are people who think super highly of me and people who think super lowly of me. I've learned to deal with that and to get along with my work. But
when you're a student, or you don't even have your job yet, it's just ever-present.
Liv: How much of a problem is the publishing pressure? Because obviously so much of science, the wider metagame, is that everyone's competing for funding or for tenure, and how do you do that? You publish more papers in the right prestigious journals.
The journals have all this control because somehow they've gotten the monopoly on prestige, and it creates this kind of stuck system. At least it appears that way from the outside. Is that actually the case?
Igor: Your own little multipolar trap, basically.
Sean: I was with you up until the end, about the stuck system. I think that academia is actually pretty successful at generating new ideas. And there are bandwagons, absolutely, but the bandwagons come and go, right? That's what you want. What I would point to instead is the fact that there is hyper-specialization, so that I can't
productively judge the work being done by people in my department downstairs, right? Because it's too different. I can say it looks pretty good to me. And that's why people start relying on these simplistic metrics: how many papers have you written? How many times did they get cited? Where did you get it published?
What you want, ideally, is to read the papers and judge whether they're good. But who has time for that, or even the ability to do that, right? So there's a sort of cheapening of the way in which we evaluate people. And honestly, it's harder for smaller, less prestigious places, because they're going to have fewer faculty.
They're going to be less able to judge whether the work is intrinsically good or not, so they're more likely to promote and hire people because of quantity rather than quality. I would encourage them not to do that; I would actually try to engage with the substantive output that people have.
But the other thing to mention, which I say very sincerely, is that for the most part I've found that people within academia try really hard to be fair in judging other academics. I've been on grant review committees; I'm on the committee that gives out Guggenheim fellowships; and the blood, sweat, and tears these people put into reading the applications is real.
I've been on grant review committees where super famous people have applied and not gotten the money, because the super famous people phoned in their grant proposal, and everyone was very comfortable saying, sorry, you didn't deserve it this time. But there are a million problems.
One problem is just that we spend way too much time begging for money in academia, especially in science. We have a lot more money in science than in the humanities, but as a result, some huge fraction of the effort of academics, especially experimenters and observers, the people who need telescope time or equipment money or whatever, goes into generating grant proposals
rather than generating science. And that's no way for the system to run. The simplest thing would be to say: right now you get a grant, the grant lasts for three years, and every year you have to report your progress. Forget that; let the grant last for nine years, and every three years you report your progress, so people can actually spend their time doing science.
But then that's a loss of control for the granting agencies, et cetera. Yeah.
Igor: That said, the current state of scientific journals seems odd to me. As I understand it, the reviewers are unpaid for the most part; then you submit after a lot of work, and you don't get paid by the journal for the submission either.
So you produce the content, they check the content unpaid, and then you even need to pay to read one of the other researchers' content later. So it's not like by submitting you get access to the system. It seems a bit like a racket from the outside.
Sean: There's definitely a racket aspect to it.
Igor: That's what I meant before, or what we meant: it's the academic flavor of the coordination problem that's hard to break out of.
Sean: I think there are a couple of things going on. One is that capitalism exists, and if people find a way to scrape money off of many other people, they're going to do it.
And publishers like Elsevier or whatever absolutely do this, and they're one of the more respectable ones; there are a lot of just predatory publishers.
Igor: They own most of the non-Springer Nature journals, right? Things like that.
Sean: And they charge exorbitant fees, completely out of proportion to the value that they add, which is why I normally tend to publish in places like the Physical Review, which are published by the American Physical Society, not by a commercial publisher, right?
Furthermore, you can always just put your paper on the arXiv and people can read it for free. Yeah, sometimes I don't even publish my papers; it's too much work.
Igor: My perception is that the arXiv got a bit of a boost over the last few years. I see many more links now when I'm looking up someone's paper;
it also exists on the arXiv. I don't know; for me, I traced it back to the COVID times, when it seems like we changed the speed of publishing a little bit because we needed new information, and people were just posting on the arXiv prior to peer review, or with minimal peer review.
Sean: Is there such a trend?
I'm not even going to guess, because it's a very complicated system and there are microclimates within it. In the fields I've traditionally worked in, high energy physics, astrophysics, and so forth, we invented the arXiv and were early adopters of it, and the idea of putting out your paper before it gets peer reviewed and published is just second nature.
Of course everyone does that. Lots of senior people in my field don't publish in journals at all; they only put it on the arXiv, because they have tenure, so what do they need it for? Other fields are radically different from that. And there are ways to game the system vis-a-vis prestige, et cetera.
And for some people, if you want to publish in Nature or Science or whatever, then maybe they don't want you to put it on the arXiv before you do. That just depends on the details. There's a whole field of science studies that tries to think about this, and I don't worry about it, because it doesn't bother me.
I don't bring in millions of dollars, because I don't have any equipment and very few people; my group is a few students and a postdoc or whatever, right?
Liv: Some whiteboards and some brains.
Sean: Yeah, exactly. I don't need to concern myself with that. So I'm not an expert, but the only thing I would say is that it's very hard to get the full picture, exactly because there are so many different styles.
I've often just said: why doesn't everyone make their papers publicly accessible? It's completely insane to me that there is publicly funded research that is not publicly available for free.
Liv: But it's because it's a trap. That's the thing: if they don't publish in the fancy journals that charge a lot of money,
then they don't get the prestige. It's a prestige track.
Sean: But also it's a coordination problem. If all the academics said, let's start a journal where we can just publish, and make that the norm, then it would change tomorrow. Has anyone tried?
Liv: And that's the thing. And then there's some effect, because people...
Sean: People have tried, but they've tried in the fields that were already amenable to it.
So it's like the Journal of High Energy Physics or the Journal of Astrophysics and Cosmology, but those fields were really not in that trap to begin with. It's more the bio fields, where you might actually be making a lot of money, or computer technology kinds of things. I'm free from the danger of making lots of money from my research, so people don't really want to trap me.
Liv: If you could redesign physics academia, what would you change?
Sean: I would make it easier to be interdisciplinary. I have this vision of at least a couple of universities that just don't have departments, that just try to hire the best people to offer classes, let students design their own curricula in consultation with a faculty mentor or whatever.
But it's always very depressing to me when I'm on a search committee or whatever, and we're searching for a biophysicist, and someone says, are they really a physicist? Or an economic historian: are they an economist or a historian? Who cares? I'm a physicist-philosopher, and it was very difficult for me to get a job allowing me to do what I want to do.
If someone had done a job search for a physicist-philosopher, I would have been right near the top, right? Maybe not the top, but I would have been a viable candidate. But no one does that, because you either do physics or philosophy.
Igor: You recently did a solo podcast about your updated beliefs around the advent of superintelligent AI, or highly intelligent AIs.
Obviously you're putting out there: hey, I had a certain set of beliefs and I've updated them in a certain way. Can you talk to that a little bit first, and maybe about, not the whole journey, but the point that made you change your opinion?
Sean: I've changed my opinion in various different aspects.
I still believe that I don't want to use the word intelligent, much less superintelligent, when talking about AIs. Because of what they're currently doing: I'm not saying what is allowed or disallowed by the laws of physics, but looking at what is actually going on, I think it's just a very natural human mistake to treat them as the same kind of reasoning creatures that we are.
They are something, they're doing something, and they are absolutely optimized to sound like they're doing the same things that you and I are, but they are clearly not doing the same things that you and I are.
Igor: I agree with that, but their doing something else does not imply that they are not intelligent in another way.
I agree that we anthropomorphize, absolutely. And then we're getting into the definition of intelligence, but is the chess engine more intelligent at chess, or smarter at chess? From one angle you could argue maybe not, but in the end, the thing that matters is producing better results within chess.
So even if we take the word intelligence away, I agree that we anthropomorphize it, but I can see it producing very valuable results.
Sean: And I do too, yeah. But that's the reason why I don't want to overuse those words: until we had these AIs, the only other actually intelligent creatures we knew were ourselves.
So it's almost irresistible, if we attach that word to something else, to attach all the connotations of that word. And it's not just AI where I don't like this. Do you guys know the balloon analogy for the expanding universe? People try to explain the expanding universe by saying: imagine I have a little balloon, I put dots on it, and I blow it up, and all the dots move away from each other because the balloon is getting bigger, and they say the expanding universe is like that.
And I hate this analogy. Why? Because a person who doesn't already know the answer says, okay, so what's inside the universe? What is it expanding into? And you have to say, no, I didn't mean that; that's just a part of the analogy that doesn't carry over, right? Some aspects should be taken seriously and not others.
It's super hard for people to actually do that. I would rather just not give them the misimpression in the first place. So I don't have the right vocabulary, but I would rather try to accurately capture what the AIs are doing than use phrases like superintelligence.
Igor: Cool. Yeah, I agree with the phrasing. But I wonder, as you've updated after listening to Geoffrey West, and it takes time, as you correctly pointed out: sometimes you hear something and it doesn't immediately change you, but rather
one sits there and digests it, et cetera. So I wonder if we'll get to another degree of update, potentially, through this conversation now. The thing that I see differently: you talked about how it's anthropomorphic to assume that they are intelligent in the same way, or agentic. Agreed with that.
But neither of those implies that they won't be more capable than us across certain domains.
Sean: My phone is way more capable of multiplication than I am. It's impossible to disagree with what you just said. Yes.
Igor: And then there's the idea of artificial general intelligence. There are various definitions for it because, again, it's a broad term.
But one that I like is that it is at least as capable as humans across all economically relevant tasks. And it's not necessarily this current architecture of AI; it could be a different one that does that. And I think your point is relevant there: no, it's a different mind, so it might never actually cover every single economically relevant task we do,
because it's a different structure of mind. But maybe that doesn't even matter, is how I feel about it.
Sean: Exactly. I completely agree.
Igor: The thing that matters to me is that it will be more capable at a number of them, probably also very capable at a number of things we don't even consider ourselves intelligent on,
and many we don't know about. And through that capability, we will integrate it into our economy and into various aspects of life very deeply. That's where I think the point about it not necessarily being agentic stops mattering as much, because at that point one relinquishes a lot of control, which is what you do with a more capable system or person.
Does it still matter then whether it's agentic or not? Yeah, ultimately it doesn't have a goal, but you gave it so many steps to do that you can't monitor, and it can just drift off in some way.
Sean: I don't know whether I'm hitting my head against a brick wall because people don't agree with me, but I agree with all the substance of what you just said.
I think AI, and other technologies that we might not attach that label to, will get much better than human beings at many things, maybe not all the things, but so many things. And we will absolutely be turning over a lot of work to these computers and algorithms, et cetera. I nevertheless think we make ourselves dumber
by applying anthropomorphic language to them, whether it's general intelligence or agents or values or any one of these things. I think we're papering over the very real differences and therefore not addressing the more realistic worries. And there are good parts and bad parts to all of these things, right?
And by forcing them into a box that was created by biological evolution for entirely different purposes, we're going to mischaracterize them. So I want to take them seriously as what they are, which is a little bit different.
Igor: The thing that people often hear when one says, are they intelligent,
and one is really talking about the different structure of mind, is, I believe: oh, they will never even reach our capability. So the thing you're saying is that you can very well imagine it surpassing human results across many domains. Do you think that we will be able to control what it does well?
Sean: I don't know. Do we control the atomic bomb?
Igor: We control whether we turn it on or off.
Sean: Not the people who got hit by it.
Liv: Right.
Sean: The control is diffused.
Liv: Do we control the internet?
Sean: Yeah, exactly: no. Can we turn off the internet? Yeah.
Liv: And I think that's an example.
People often don't view it that way, but I see AI becoming increasingly like a sort of distributed, more agentic form of the internet, in that it will become just so ingrained into our economy and our society. That might be a great thing, but it might also further speed up the things that are already misaligned in our system.
That's a big concern I have. Right now our economy has been great for us: we're here in this wonderful house, there's all this technology and so on, love that. At the same time, we are externalizing so many costs. I'm not saying we're there yet, but if trends continue, we're approaching some kind of planetary boundaries in terms of resource extraction.
And if we just speed up that process even faster, then that existing misalignment is going to increase. So that's an area I'm personally very concerned about. Does that resonate?
Sean: A hundred percent. That's the dark side of efficiency that I was mentioning before, right?
Efficiency as a concept all by itself sounds good, but if it's efficiency of extraction, it could be dangerous.
Liv: If you're going in the wrong direction, the last thing you want to do is go even faster. You could be really efficient, really good at...
Sean: ...doing something bad. And that's why it rubs me the wrong way when people worry about
existential risks from AI. I'm not against worrying about them, but I think the right strategy for ameliorating them is to really worry about the short-term, obvious risks, right? You will get better at dealing with AI and incorporating it in safer ways if we don't
extrapolate out to the extreme, but instead ask: what are all the less sexy, but right-in-your-face, kinds of things that could go wrong right away? Because there are a lot of them, and...
Liv: But it's not an either-or, that's the thing. I don't understand how this concept has come up where it's like, if you worry about existential risk, you aren't worrying about present-day risk. Because the people I know who are worried about existential risk are also very worried about the near-term things, because they're like: look, we've already got misalignments happening; they're already doing things we don't want them to do.
So to me, they're all examples of the same kind of malaise, essentially.
Igor: And by the way, I also want to throw in: people worry about it, and it's such a big thing, because obviously the benefits are going to be so massive and great that ameliorating those risks means being able to capture the benefits.
That's the goal. That'd be fantastic.
Liv: But yeah, I guess it's just a quirk, almost of the media in fact, because it is this sexy, dramatic-sounding thing. What's the biggest story? Oh, Terminator, extinction. It gets the media attention, so people feel like that's the thing where all the funding is going.
But as it stands, all areas of AI risk are drastically underfunded compared to the amount of funding that's going into just speeding everything up, into progress.
Sean: People start out being very idealistic: okay, we're going to work on saving the world. And then someone comes along and says, okay, but if you do this other thing that is not really going to save the world, I'll give you a billion dollars. It's very hard.
I don't know what I would do for a little while, but I'll put a little bit aside. Yeah.
Igor: I wonder, actually, as we've discussed anthropomorphism of AIs, and of concepts like intelligence and other things, I want to point towards consciousness. As I understand the history of thought on the different theories of mind, I'm personally closer to the functionalist theories, like attention schema theory and global workspace theory, et cetera.
I like attention schema theory because it's probably a bit more understandable: as I have a physical body, I also have a representation of my physical body in my mind, which takes up its own space, and that's the body schema. And attention schema theory says that something equivalent happens with attention, with whatever we're putting it on.
Obviously we're processing much more data than we attend to; not everything receives our attention. And the claim is that we have an attention schema just as we have a body schema: the brain's representation of our own attention. And this representation of attention is not equal to attention itself; it's slightly different, which is why consciousness and attention, in that theory, are
so connected. Experientially they're very connected, but they are slightly different.
Liv: So consciousness is like the flashlight of attention, looking at the attention schema that is yourself.
Igor: That would be more like global workspace theory, where you put the flashlight on something you're attending to, and through that it gets more capability to interact with the rest of your brain; it can now access more memories and all sorts of things.
And Michael Graziano recently wrote a paper where he wanted to combine all of those functionalist theories and say: hey, we're using different languages to describe the same thing, let's work together, and then we'll stand on a better footing to defend these theories as a good description of what's going on.
Sean: Whenever I see some famous philosopher, neuroscientist, or academic say, there are these two opposing theories, but I can actually figure out how they're saying the same thing, what always happens is that one of the theories says, oh yes, you're exactly right, and the other one says, no, we didn't mean that. There's a resistance to being unified.
Yeah.
Igor: It's a recent paper, so I haven't seen the backlash from the higher-order or workspace theorists yet. But as I understand it, the rise of functionalist theories happened at the same time as we developed computers. And someone proposed that maybe it was only by designing computers, which are very functionally designed, that we were able to understand our minds in a new, better way.
And people describe similar things where neuroscience has not only been informing AI, but AI development has also recently informed neuroscience. So I wonder if, equivalently to anthropomorphism, there is a kind of computer-morphism or something; you see the inverse of that also.
And maybe we are not that dissimilar in some ways; maybe there are some structural things that are just efficient ways to combine information, right? As we discussed, compression algorithms, for example, are probably something that will exist in anything that tries to be intelligent.
Sean: Yeah, I think that, again, these are complicated issues, but you're certainly right that human beings, as soon as they discover a new technology, say: maybe we're like that. Back when we discovered simple engines, suddenly people were machines, right? Then we discovered steam engines, and okay, people are thermodynamic; we discovered computers, and people are computers. And there's something right in all of these things.
They're not crazy. But I think we have to keep in mind two things. One is that there are certain universal principles that you would not be surprised to see come up both in these artificially constructed things and in ourselves, right? Levers are all over the place;
not really that surprising. The form of the eye is not that different from the form of a camera, okay? But there is a difference, because one is designed and one just came up through biological evolution. And that's a plus or minus. The artificially designed things tend to be much more fragile,
because for the things we intelligently design, we have a purpose; we try to optimize them for that purpose, and they're much less general-purpose. A human being is very inefficient for lifting heavy objects compared to a forklift, but we're much better at other things, right? And when you envision robots and things like that, they're always these metallic things, but the thing about being metallic is that it breaks, whereas a human being can break their leg and it self-repairs.
And that's because we're made of little tiny pieces that can coordinate.
Liv: This is a definition of the difference between complex and complicated.
Sean: It kind of is, yeah.
Liv: Complex systems have this ability to dynamically evolve, and emergence... exactly. And I think that's what makes something alive: it can emerge. Whereas
a forklift, or even a very complicated machine, is not. It is very complicated, but it is not alive, because it will always do the same thing it was designed to do; it's not going to evolve into something else. And I think that's where AI is interesting, particularly in its physical instantiation in robotics: if we're now developing a machine that is able to self-edit its own code, make self-adjustments,
now we're merging the two. We're blurring the distinction between the complicated and the complex, which is very exciting. It's also scary, because where does this go? Yes.
Sean: Yes. I do try to occasionally tell this to my physicist and computer science friends: you can't ignore the biological side of the progress that is going on right now, because
biology has had a four-billion-year head start and it's figured some things out. Frances Arnold, a Nobel prize winner from Caltech who does synthetic biology, just shakes her head at everyone inventing nanomachines and things like that. She goes: it takes them so long, and then they just reinvent the cell; every time, they reinvent
biology. And I do think that the future form of technology will be more hybrid, right? More cyborg, more in between biological and technological. That is a space that hasn't been explored nearly as much as it could have been. And that's the nature of phase transitions: there are different forms that will take over,
so they're hard to predict.
Igor: It seems there are, then, genuinely relevant patterns that you find while looking at one thing that show up in the other thing, right? We're always trying to explain the human through the thing we're working with, and, as you said, sometimes we pick up some kernel that actually is relevant for understanding ourselves better.
You've developed, or furthered, an idea around naturalism: poetic naturalism. It just made a lot of sense to me when I heard you describe it. Maybe do so briefly?
Sean: Sure. Poetic naturalism: the motto is, there's only one world, the natural world, the world that is discovered and described by science.
But there's many ways of talking about the natural world, and that's where emergence comes in. But the slightly longer description says there's not only many ways of talking about the world, but there are different ways that are useful for different purposes. So if you're being purely descriptive about the world, if you're just being a scientist, then the natural world itself tells you what ways to talk about it, right?
There's the Laplace's demon way where you're maximally comprehensive microscopic. There's the various emergent ways, where you have tables and chairs, planets and people and puppies. But then there are judgmental ways, normative ways, aesthetic ways of talking about the world. And those are not fixed by the phenomena.
You can disagree with someone about the aesthetic value of a painting, and there's no experiment you can do to figure out who's right and who's wrong. And poetic naturalism says that's okay; just recognize it. And it goes further than paintings. I would go all the way to morality: something that we make up, something that is not fixed by the data, something where you can't do experiments to distinguish between different theories, but it's still there, it's still legit, it's still important.
Liv: It doesn't make it any less ontologically real.
Sean: No, not at all.
Igor: And you've discussed that, for example, free will is, in that framework, something that, as long as it's useful to explain certain actions, you should use, and it's relevant. And that's a question I've often wondered about: okay, so where is this line in the sand between something being real, like an ontologically accurate description of the world, and just being a good model?
And it seems like the consistent thing to say is just: I don't know what the ultimate, ontologically accurate description of reality is, so everything is a model. That's right. And then the notion of reality disappears, so you start using it again; you redefine it and use it for other things.
Sean: I think you've discovered what is called structural realism. I don't know if you're already a fan of structural realism, but this is an existing perspective in the philosophy of science, one that I actually really like. And the idea is the following: when you go from Newtonian mechanics to general relativity, in Newtonian gravity you say the world is an absolute three-dimensional space, absolute time, and some particles in that space,
instantaneously interacting with each other through an inverse-square law of gravity. General relativity says no, that's not the world: the world is a four-dimensional spacetime, and gravity is a field on that spacetime which obeys some differential equations. All of the words are different, but the predictions are almost the same, right?
Except for a little bit about Mercury, et cetera, you get the same predictions at the end of the day for what you're observing. And there might be another level underneath; we don't know yet, right? It might be some quantum mechanical emergence of spacetime. So there's a worry about the realism question:
how do we know what is real, if we keep throwing away all of the ingredients in our ontology and replacing them with something else? The structural realist would say that the fact that the empirical predictions agree in a certain regime reflects the fact that, even though you're changing the ontology, you're keeping some structure.
You're keeping some relationship between the different parts of the ontology that is actually not going to get thrown away, right? We're not going to invent a new theory that we say is better, for which Newton's inverse-square law doesn't work anymore. So we don't need to know the stuff. It would be hubristic to say we know the fundamental stuff of reality, but we know some aspects of the behavior and the patterns and the structure within reality.
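As a concrete illustration of the structure that survives (a standard textbook limit, added here for clarity rather than quoted from the conversation): in the weak-field, slow-motion regime, general relativity reproduces the Newtonian inverse-square behavior, even though the ontology has changed from forces in absolute space to spacetime geometry.

```latex
% Newtonian gravity: inverse-square force, or equivalently a potential equation
F = \frac{G M m}{r^{2}}, \qquad \nabla^{2}\Phi = 4\pi G \rho

% General relativity: Einstein's field equations for the spacetime metric
G_{\mu\nu} = \frac{8\pi G}{c^{4}} T_{\mu\nu}

% Weak-field, slow-motion limit: take the metric to be nearly flat,
% g_{00} \approx -\left(1 + \frac{2\Phi}{c^{2}}\right),
% and the 00-component of the field equations reduces to
\nabla^{2}\Phi = 4\pi G \rho
% i.e. the same inverse-square structure, now read off from spacetime geometry
% rather than from an instantaneous force in absolute space.
```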
Igor: Yeah. Similarly, in poetic naturalism you're also not assuming that we know the stuff. But then the question of what's real and what's not moves; it shifts the line in the sand a little bit as well. So where does it go? Quantum mechanics is potentially somewhat more real than Newtonian mechanics, in the sense that it is more useful in more situations, or something like that,
I could probably say. But Newtonian mechanics is also quite useful; maybe it's also somewhat real. Then things like free will are also useful; they're also real in some sense. Where do you put the line in the sand? What about a girl and her relationship with a unicorn she imagines?
This is going quite a bit further. Is that real in a relevant sense to you as well?
Sean: Yeah, I want to, in all super duper honesty, say I don't care. I want to know what helps you get through the world. What has causal efficacy? What makes good predictions? What can you share and convey information with?
I want to count the unicorn as not existing, because I think it doesn't really have those properties. If the people who are not the girl don't believe in it, their lives are not affected in any way, right? You can't use the knowledge that she sees the unicorn there to make predictions about things that are not related to her, et cetera.
So what are the pieces of the various causally efficacious pictures of the world at different levels? The chair is real, I would say, because when you say, please sit on the chair, I instantly conjure up various chair properties that are pretty reliable: it's this thing I can sit on, I know approximately its size and its purpose and things like that, right?
But if you get into the nitty-gritty, which my philosopher colleagues love to do, I lose interest. I think this is an effective, get-through-the-day kind of thing, and I'm happy just to get through the day and not pretend to draw bright, sharp lines where they don't exist.
Liv: I mean, it sounds to me like poetic naturalism is actually allowing for a superposition of
theories to exist. That's really what it is. It's like quantum mechanics.
Sean: The chair does not stop existing when we learn that it's made of molecules.
Liv: I think that's just very beautiful.
Sean: One of the people at Santa Fe is Ted Chiang, the science fiction author. We were talking about almost exactly this question. He was raising the question: if you saw something in the world that was contrary to everything you thought you knew about the laws of physics, et cetera,
but it was really very vivid, maybe you first say, oh, I'm hallucinating, right? Or I'm being tricked. But what does it take? When do you cross the threshold and truly throw out your worldview and replace it with something new, versus just saying it's a hallucination? And I think, this was the point Ted was trying to make, most people are reluctant to think that they're seeing a hallucination, even if it violates everything.
They're more likely to throw away everything they knew than to deny the evidence of their immediate senses. Maybe too much so, because our senses are very trickable.
Igor: Maybe that's a function of being around scientists. I wonder whether in the general populace the inverse occurs.
Liv: Also, have you?
Igor: The theory would be: the stronger your world models are, the more it takes for your senses to overpower how you construct reality in the end.
Sean: But you're right, the early galaxies are a perfect example of this idea that it has to fit in. I talk about this in The Big Picture a lot:
we don't judge phenomena in isolation, one by one. We always ask how they fit into the bigger picture. So you see some galaxies earlier than you expected. It's hard to convey to non-cosmologists, but that's not going to cause scientists to doubt the Big Bang model, because the other pieces of evidence, unrelated to that particular phenomenon, are just so overwhelming.
We might work hard to change our theories of early galaxy formation; that makes perfect sense. And there are many examples of this. When we discovered the accelerating universe, like I said, the reason it was so easy to accept is that you didn't have to change your whole world model;
you just had to fit things together in a different way. That helped. And it's why Karl Popper is not right about the philosophy of science: science does not proceed by one falsification after another. Sometimes you get an experiment: oh, look, neutrinos are moving faster than the speed of light.
And everyone's like, yeah, I doubt it. And then a couple of months pass: oh, we didn't plug in the cable properly. Okay, that's much easier to swallow.
Liv: It's really the question of: are you a good Bayesian or not? And are your priors sufficiently...
Sean: But specifically, your priors are interconnected, right?
So yes, you should be a good Bayesian, but the set of propositions for which you have priors are not isolated from each other.
Igor: Yeah, and sometimes the better theory is one that has some weird effects here, while the alternative theory has weird effects there. It's about which version is actually the less confusing one for many other areas.
Sean: People have often wondered about the fundamental discreteness versus smoothness and continuity of the universe. Sometimes people will tell you that quantum mechanics points towards fundamental discreteness, which is just wrong; that is just incorrect. Other times people will say Stephen Wolfram has a theory where it's just graphs, and that's true, he does have a theory, but it's not nearly as successful as the other theories we have.
In the battle of discreteness versus continuity, right now all of the victories belong to continuity, okay? So the reason I wrote that paper, called Completely Discretized Finite Quantum Mechanics or something like that, was just to point out a little loophole that says: okay, here is how I think you could make a discrete universe without completely overthrowing the laws of physics as we currently understand them.
So I think it's possible, but I'm not necessarily advocating for that position. I'm just letting the world know that you don't have to throw the baby out with the bathwater, if that's what you want.
Liv: What is it that makes a great physicist?
Sean: I'm a pluralist at the end of the day. I think that there's different ways to be a great physicist.
As there are different ways to be a great basketball player, or a poker player, or a podcaster, or whatever. I'll give you an example. Richard Feynman and Murray Gell-Mann were both great physicists, not exactly the same age but pretty close, and they were contemporaries at Caltech. I used to sit at Richard Feynman's old desk at Caltech.
That was my claim to fame. They had completely orthogonal personality types and completely orthogonal styles as physicists. Both of them were amazingly smart; they could do calculations and they made enormous contributions. But, and people don't quite realize this, Feynman almost never proposed a new law of physics.
His thing was taking the existing structures and understanding them better than anybody else. Maybe reformulating them to be even better. He didn't invent quantum mechanics, but he invented the path integral formulation of quantum mechanics, right? He didn't invent quantum field theory or renormalization, but he drew those diagrams and he made it much clearer and accessible to the masses, et cetera, et cetera.
Gell-Mann was like, every Tuesday he would invent a new model of particle physics: particles, symmetries, and so forth. He predicted new particles, and they were found. Feynman was a self-conscious shaper of his own image; he worked really hard to be thought of in a certain way.
And that way was almost self-consciously aw-shucks, right? Ah, I'm not so smart, I'm just smarter than everybody else, right? He had a thick Brooklyn accent, and that thick Brooklyn accent came and went depending on who he was talking to. Gell-Mann, on the other hand, was super uptight, dressed very nattily, the kind of person who would correct you on the pronunciation of your own name because he knew the etymology of it, right?
And he wanted everyone to think that he was the most sophisticated person in the world. Feynman wanted everyone to think he was the least sophisticated person in the world. They both wanted everyone to think they were the smartest person in the world, right? But they were both great at doing physics.
Paul Dirac and Albert Einstein could not be more different. Galileo and Newton were very different from each other. Aside from being smart, working hard, being creative, I don't know what makes a great physicist. I think that doing great physics is the answer.
Igor: There's this famous picture of the 1927 conference where so many of the...
Sean: The Solvay Conference.
Igor: Yeah. The greats were all there at the same time, and looking back it seems in a way like the golden age of physics. But maybe it was just a particularly great time to make discoveries. What do you think? Is it that we have those great physicists today and it's just so much harder?
Sean: Nature is not going to have any principle of handing out great scientific discoveries on a regular interval. The first half of the 20th century will go down in history as absolutely unique in the history of physics. And we're close enough to it that we still feel bad about not being quite as cool, but we invented special relativity and general relativity, quantum mechanics, quantum field theory, the Big Bang, the expanding universe, nuclear physics, radioactivity, particle physics, et cetera, all in the first half of the 20th century.
The way that I say it is, we discovered a whole new set of things in the first half of the 20th century, so much so that the second half of the 20th century in physics was basically just tidying up the first half, right? Okay, so what particles are there? How do we get rid of the infinities? How do we break symmetries?
But it was all within the framework that was set up. And now the 21st century is here, and yeah, progress is not quite as fast as it used to be. It's not our fault. There is something in science where there are low-hanging fruits to be picked, and those 50 years in the first half of the 20th century were when we finally built a ladder tall enough to pick the juiciest fruits. There could very well be a revolution tomorrow; that would make it just as interesting.
But there's nothing in the data that is forcing us to it, right? That's when the revolution happens: when the data don't quite let you become sanguine. People tell stories about how in the last years of the 1800s people thought, oh, physics is almost done. It's not really true.
Some people thought that, and honestly, they had good reasons to think it, but the super smart people were like, there's a couple of loose threads that are gonna unravel the whole picture. And then it happened.
Liv: Is it related to the ease of experimentation? Because it seems like now, in order to do experimental verification of fundamental physics, you need ever larger particle accelerators, and it feels like it gets asymptotically more expensive, at least.
Sean: I think it's partly that, but it's worse, because there are two things going on. One is that as we understand more and more things, yes, going to the extremes, to the places where we don't understand something, becomes more and more expensive: bigger telescopes, bigger particle accelerators, whatever. But the other thing, the thing that is surprising and that we didn't have any reason to expect, is that the theories we already have extrapolate really well.
So not only do you have to go further, but you go further and you don't find anything, because your theories already work. That didn't used to be the case. And think of it this way: all these wonderful discoveries of pions or cosmic rays or radioactivity, even radio waves and things like that, these were discovered at some point.
We didn't always know they were there, but they were there. They were just invisible to us, okay? And now we've found all the stuff that's there lying around, except for arguably dark matter, which is different. But dark matter doesn't bump into us, otherwise we would have discovered it already.
It's there, but yeah, the low-hanging fruit has been picked, and our theories are really good. That's no reason to stop or give up, et cetera, but it is a reason not to be too hard on ourselves. We're trying our best, and some of us individually, like me, are changing our research focus because it's hard to make progress in those areas.
Igor: Do you think the frequency of such big discoveries is basically just going to keep going down? And do you think there is, yeah, some limit to it eventually?
Sean: I think that flashlight of attention moves. I think that fundamental physics was ripe for revolutionary discovery in the first half of the 20th century.
And we did it; we're still living in the ramifications of that. Now it is harder. Again, the people and the experiments are just as good or better than they ever were; it's just harder to make something that is truly revolutionary. Like I said, it could change tomorrow, but I wouldn't place a lot of credence on that.
Complexity, biology, computer science, economics for that matter: there are plenty of other places to make hugely important, earth-shattering discoveries. So the total rate of discoveries is going to go up, I'm pretty comfortable predicting, but the core theory of particle physics and gravity is going to remain for a while.
Liv: Is physics still a clear win, given how hard it is? Obviously it benefits the world if we have a greater understanding of how physics works, but given what we hear about how life can be in academia, how competitive it is and so on, and the personal cost that comes with dedicating your life to this, has it in your experience been a win, and do you think it can continue to be that for new people coming into the field?
Sean: Yeah, it's a win for the winners, so it's not quite fair to ask me, in some sense. But let's point out two things. One is, most physics is not fundamental physics, and most physics is doing great. Condensed matter physics, plasma physics, astrophysics, biophysics: many areas of physics are still growing like gangbusters.
Within the specific field of taking the standard model of particle physics plus general relativity and trying to improve on it, that's a really tough game to play right now. And the students are not dummies; they see that, right? There's also a time lag: you grow up reading books written 20 years before the research you do, right?
So I grew up reading books that were still in the heyday of discovering new particles and particle theory and things in cosmology and so forth, and those are the things I wanted to do. I think I've done a little bit in them, but it's just hard to do really earth-shattering things. We'll have to see.
If someone is dedicated to understanding this, I want them to know how hard the route is going to be. A tough row to hoe, I think, is the correct expression. But if they're nevertheless into it, I want to encourage them as much as I possibly can. I also want to make sure that if they're thinking about that, they're making an educated choice, that they know there are other exciting things out there, right?
And if they do that and still, at the end of the day, say: nope, I want to get the grand unified theory or the theory of everything or whatever, I know it's going to be tough, I know the experimental input is not guiding us very much, but it's the thing I want to do, then good. There's no other reason to ever become an intellectual or a scholar than that.
Igor: I'm curious about your ranking of theories that maybe Liv or I would know of: where you think, okay, here I put the most nines; and which ones are currently still accepted but you would be very unsurprised if, in 20 years, they're put by the wayside or explained better through something else.
So I'm curious about some of the tops and some of the bottoms.
Sean: Great. Yeah, that's a great set of questions. As a structural realist, I do think that we can overthrow the underlying ontology of a theory while keeping its predictions and structures. So the standard model of particle physics plus general relativity, what is called the core theory in The Big Picture and elsewhere,
is not going away. That's many nines. It's not complete, so it will be added to, a hundred percent. And it might even be tweaked around the edges, because it's based on quantum mechanics itself, and I'm open to the possibility that quantum mechanics is not exactly right, but I would put low credence on that.
I think that quantum mechanics and quantum field theory, within the energy and scale regimes where they're supposed to apply, are going to keep applying.
Igor: Those are S-tier theories.
Sean: That's right. The general framework of the Big Bang model, from one second after the Big Bang to today, is going to stay there.
That's not going away. The general framework of relativity is not going away. The claim that there is dark matter in the universe I would put at well above 90 percent, but not 100 percent. People try to replace it; I've tried to replace it; it's a game people play. But the amount of evidence
that has accumulated in favor of dark matter is overwhelming. In fact, I will give advice to listeners: if anyone says, I have a theory that will replace dark matter, and the next thing they start talking about is the rotation curves of spiral galaxies, you don't have to listen to them, because that was the best evidence for dark matter 40 years ago.
Now you have to talk about the cosmic microwave background, large-scale structure, and things like that. So if they're not talking about that, they're not really worth taking seriously. Dark matter: 98, 99 percent. Dark energy: there are two separate questions. The fact that the universe is accelerating is many nines; that's been established.
Is it dark energy or is it modified gravity? It's almost certainly some kind of energy, some I've tried to modify gravity again, and it just makes things worse. But it's still we have that's relatively new, relatively young. Let's put it at 97 percent for dark energy. Is the dark energy Strictly constant Einstein's cosmological constant or is it slowly changing over time again?
Even just 20 years ago I would have put a fair amount of credence on it changing with time, but those theories have not panned out and the data are still happy with a constant. So I'm going to go at least 93 percent believing that it is a cosmological constant.
Igor: And at the bottom end of the spectrum?
Sean: Yeah. So I'm working my way down to the more speculative things. Cosmic inflation is a very popular idea for what happened in the first fraction of a second in the history of the universe. I think it is actually quite speculative. It works quite well and it's very attractive for various reasons.
It also has problems. The real issue is that we don't have any alternatives: as a good Bayesian, the propositions to distribute credences over are inflation versus something we haven't invented yet, plus another 2 percent or so for the actual bad theories that people have invented. But I would give inflation like 50 percent.
Many of my colleagues would be at the 90 percent level, but I'd be 50%.
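To make the bookkeeping explicit: a Bayesian spreads credence over mutually exclusive possibilities, including a catch-all for ideas nobody has invented yet, and the numbers have to sum to one. A minimal sketch using the rough figures mentioned above; the exact values are illustrative, not anything stated precisely in the conversation:

```python
import math

# Rough credences for the early universe, following the numbers mentioned above
# (illustrative only): inflation ~50%, known-but-unconvincing alternatives ~2%,
# and whatever is left over assigned to "something we haven't invented yet".
credences = {
    "cosmic inflation": 0.50,
    "existing alternative theories": 0.02,
}
credences["something not yet invented"] = 1.0 - sum(credences.values())

# A proper credence distribution over exhaustive, mutually exclusive hypotheses sums to 1.
assert math.isclose(sum(credences.values()), 1.0)

for hypothesis, p in credences.items():
    print(f"{hypothesis}: {p:.2f}")
```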
Igor: What would you give to string theory as currently understood? Or, as I understand it, it might not be testable; we're struggling to find a way to ever make a very interesting case.
Sean: I think it is certainly the leading candidate for quantum gravity right now.
That might not be saying a lot; maybe we just haven't thought of a better candidate yet. But the other reason it's hard to attach a credence to is that the theory keeps changing as we learn more about it. So it might be that, ultimately, it was the correct idea, but it manifests in a completely different way than we expected.
So yeah, I'm just very reluctant. It'd be between 50 percent and 90 percent for me on string theory. And again, I know colleagues who put it at 99 percent.
Igor: What I'm noticing as well is that the theories you described as being in the S-tier category are also the ones you're actually writing about to a significant degree, which is very fortunate.
Because I think one of the potential win-wins for people learning physics is that you will be less confused about everyday things. So learning the things that are very likely here to stay gives you world models that are going to be worth their salt for a longer amount of time than something else you might try to build on.
Sean: No, that's absolutely true. I will say that one of my more idiosyncratic credences is in many worlds, where I would put it at more than 95 percent. Few people would do that, but just compared to the alternatives, that's where I would put it. And I've written a whole book about many worlds, right? Something Deeply Hidden. But the new books I'm writing, the Biggest Ideas in the Universe books, the whole point of those is to write things that will still be true 500 years from now. So I'm sticking, almost exclusively, to things that are at 99 percent and above. I'll hint at other things. It's going to be harder to do in the complexity book, because that's less well understood.
But I'm gonna try my best to stick to things that are not gonna go away.
Liv: Thank you.
Sean: My pleasure. Thank you for having me in the Purple Palace.
Liv: So there we go. Thank you so much for tuning in, and huge thanks to Sean for this conversation, and to my occasional co-host Igor. Do check out the notes below for further reading, including links to some of Sean's more recent papers, his research, and also some of his books.
I highly recommend his book The Big Picture. It's just one of the most beautiful and comprehensive explorations of the intersection of physics and philosophy. Also check out his podcast, Mindscape. I was actually one of his early guests on it years ago, and it was an amazing conversation.
Unsurprisingly, he is a fantastic interviewer. So check that out. It's over on YouTube again, linked below. And as always, if you enjoyed this, please tell your friends. Thank you. And I'll see you next week.
The Win-Win Podcast is an exploration of the games that drive our world. Created by poker champion and philanthropist Liv Boeree, it explores solutions to humanity's biggest issues through conversations with leading thinkers.
Incentives make the world go round. But as the stakes get higher, are the games we're playing taking us where we really want to go?
We can never know for certain which path the future will take, but we can change the likelihoods of the paths we want. How can humanity harness the power of both competition and collaboration to unlock truly abundant and sustainable futures?
Liv is joined by top philosophers, scientists, gamers, artists, technologists and athletes to understand how competition manifests in their world, and find solutions to some of the hardest problems faced by humanity.
Win-Win doesn't just talk abstract concepts and theories; it also digs into the guest's personal experiences and views, no matter how unusual.
This isn't just another culture war podcast. If anything, it is the opposite: seeking better coordination mechanisms through a synthesis of perspectives.