Don't hate the players, change the game...
Open-source movements have historically been a great source of collaboration. Can we harness the power of open source software, science or AI to help solve the world's coordination problems? Or could it just make everything worse?
In this episode of the Win-Win Podcast, Liv and Igor are joined by Peter Wang, a physicist, computer scientist, and founder of Anaconda, one of the most widely used open source platforms for Python development. Peter leads Anaconda's AI Incubator, which focuses on advancing core Python technologies and developing new frontiers in open-source AI and machine learning, especially in the areas of edge computing, data privacy, and decentralized computing. In this wide-ranging conversation, we explore how these technologies and their philosophies can potentially help us navigate the complexities of our informationally overloaded environment.
Chapters:
01:50 - What is Open Source Software?
10:29 - The History of the Open Source Movement
35:06 - Security and State Interests in Open Source Software
37:16 - Open Science and The Knowledge Commons
39:40 - The Central Problem of Coordination
43:46 - The Problems Markets Solve and Create
1:04:40 - Synchronous Attention As A Scarce Resource
1:09:23 - The Sensemaking Crisis
1:17:09 - Avoiding "Virtual" Dystopias & Escaping the Matrix
1:22:03 - Is Technology Values-Neutral?
1:32:30 - Is Moloch Controlling our Tech Stack?
1:35:57 - Psychosecurity and The Dangers of Attention-Renting Software
1:41:55 - Is Science Stuck?
1:43:51 - Understanding the Cosmos. Also: Aliens?
1:53:18 - The Stagnation of Physics
1:56:06 - AI from a civilizational perspective
2:05:23 - The Benefits of Open Source AI
2:27:27 - How to Minimize Risks of Open Source AI
Links:
♾️ Peter’s Twitter: https://x.com/pwang?lang=en
♾️ Anaconda: https://www.anaconda.com/
♾️ Peter’s Blog: https://medium.com/@pwang
Transcript:
Peter: Virtuality - a world of the virtual - is a manufactured world. It's a world where we allow other agents to intermediate between our direct experience, our relating, and our sensemaking. It's when you watch The Matrix and the guy's eating the steak: "the steak tastes good to me." So many aspects of our world, when you learn to look for them, you realize: oh wait, that's made up. It's Disney World. It's Vegas. It's designed to make me feel a particular way. We have more and more things that are made and presented to us ready-made, ready-digested, for us to consume in a mimetic sense or even an informational sense.
Liv: Hello friends, and welcome to the Win-Win Podcast. I hope you are ready to have your brain stimulated, because today me and my occasional co-host Igor are speaking to the one and only Peter Wang. Peter is a physicist turned computer scientist turned entrepreneur who co-founded Anaconda, one of the most transformative open source platforms for the programming language Python, with over 40 million users worldwide since its founding in 2014.
But don't worry if you are not a techie. I certainly am not familiar with a lot of this programming stuff. The reason I love this conversation is because it's an introduction to, and then a deep dive into, the open sourcing movement and the philosophies behind it: not just open source software like Python, but also the concept of open sourcing science, and of course the hotly contested debate right now over whether or not to open source AI. We also hear his views on what he calls virtualities and colorful dystopias. We even talk about aliens. So there's really something for everyone. On that note, here is my conversation with Peter Wang.
Peter, thank you so much for joining us. I think a large bulk of this conversation is going to be about open sourcing, and to start with, I want to talk about open source software. For the non-technical people in the audience, of which I am partially one, start us off by defining what exactly open source software is and why it's different to conventional software.
Peter: To get started, there's the concept of source code. That's a term that I think is in the general parlance of the world, but I think people sometimes are not quite aware of exactly how software comes into being. Most of the code that you use - if you use an app on your phone, or if you double-click a program on your computer - is running as some kind of a program. Okay, so people are familiar with this concept. When you go to a website, your web browser is a program that talks to a server, and there's some program running on the server that produces the content, which then goes into your web browser. Great. But most of the software that runs in the world actually has what's called source code behind it.
And that is the actual program text that people write. This is a really important thing, which we'll probably keep coming back to in today's conversation: what the author, the software developer, writes and what actually runs on the computer are oftentimes two different things. When open source first started as a concept, it was in the seventies, around that timeframe. Before personal computers, computers were very expensive things that labs, universities, or big companies would buy, and people would share time on the computer. You would write the software and then you would do what's called compiling: it compiles down into a runnable, or what we call executable, program. That process of going from source code you can read into a binary-encoded executable program that you can't read is called compiling. And what people discovered - not discovered, but there was this concept - was that we should actually be able to read the source code behind the programs we're running, for a variety of reasons. It might be for security. But the open source movement itself actually started from a perspective of commons and sharing, and actually win-win: this idea that, hey, if I'm a researcher and you're a researcher and we share our stuff, if you show me the source code for the program that you gave me, then I can make improvements.
I found a bug, and vice versa. It's that very simple concept, right? Rather than everyone just shipping around the finished product: if you think about cooking, there are recipes and there are cakes. Everyone eats the cakes. You can't eat the recipe, but without the recipe, you don't know what's actually in the cake that you ate.
And maybe you're allergic to something. Maybe you could tweak it and make it a little better for you and your family. So open source was a movement that actually started in response to computer manufacturers trying to close all this down, lock all these things down, and say: look, if you buy my computer - which was a massive, maybe multimillion-dollar thing that you installed in a big space inside your lab - then you can only get programs from me.
And people were like: no, we can write our own programs. We can share them with each other. We can email them to each other. And that's how the open source movement started. It really is an aspect of: if I paid for the hardware, I have the freedom to share and collaborate and develop software for this computer with other people.
That's really the origin of it.
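To make the source-versus-binary distinction concrete, here is a minimal illustrative sketch (not from the conversation; the file names are hypothetical) that uses Python's own bytecode compiler as a stand-in for the classic compile step Peter describes:

    # Source code is human-readable text; compiling turns it into a
    # binary-encoded artifact the machine runs but humans can't easily read.
    import py_compile

    # Write a tiny piece of "source code"
    with open("hello.py", "w") as f:
        f.write('print("Hello, world")\n')

    # Compile it down to bytecode (hello.pyc), the machine-oriented form
    compiled_path = py_compile.compile("hello.py", cfile="hello.pyc")
    print(compiled_path)  # -> hello.pyc

    # Peek at the first bytes: a magic number and metadata, not readable source
    with open("hello.pyc", "rb") as f:
        print(f.read(16))

Opening hello.py shows you exactly what the program does; opening hello.pyc does not. That gap is what the open source movement insists on closing.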
Liv: I see. So it came as a response to the attempts to commercialize software.
Peter: Yes - attempts to lock down what people had already paid for. They felt: I already paid for the computer. The computer's programmable; I can write programs. Who are you to tell me that I can't share that program with somebody else?
And this happened even before the personal computing revolution. When that came around, people started trading all these source code programs for personal computers. Bill Gates got pretty angry about this, because he wanted to sell commercial software. He wrote a very famous letter saying: look, if you guys just keep copying and pirating our software so that we can't sell it anymore, then there will be no software ecosystem at all, and we can't make money as software professionals. There's actually a Wikipedia entry about it, Bill Gates' famous "Open Letter to Hobbyists" from the 70s. And is that true? The world has changed quite a bit since then, and what we found out is that both things are somewhat true. If you provide an economy, a way for people to make money off what they do and what they love, then they're more likely to keep doing it.
But at the same time, the PC era and most of the software ecosystem that developed was around proprietary, closed-source software. What ends up happening, though, is that the knowledge captured inside proprietary software frequently becomes commoditized, because the open source community shares with each other, and all of the people in the world collaborating together out-compete the few people inside even the wealthiest and largest companies.
And so the most popular software systems today are all powered by open source software.
Igor: And in the nineties, when the personal computer started coming around and people started using it: Windows, for example, which is a closed-source operating system, still allowed other software creators to build for Windows. So where you previously talked about "hey, I built the hardware, I built the operating system, I want to sell you software" - was that later merged with other people producing software, in part in response to open source?

Peter: That's a really good point. So before PCs, which were technically called microcomputers, there was an era called minicomputers. Those were large, refrigerator-cabinet-size things that people would install. They were still cheaper than mainframes, which would take an entire floor of a building.

Igor: Is that how the name Microsoft came about?

Peter: Yeah, microcomputer and software put together.

Igor: I see. I didn't know the "micro" part.

Peter: Yeah. PCs were really first known as microcomputers because they would fit on a desk. When the minicomputer first came out is when you started having more people adopting this technology, and there became a user community that wanted to program these things. Even at that point, I think, the manufacturers were still very much hardware-oriented.
Like: we sell something very expensive, computers that ship on pallets, and that's where the value is. It didn't even occur to them that a software ecosystem would emerge on top of this. And when that emerged - this is where I thought Bill Gates was a genius. His big vision was when he licensed the MS-DOS operating system to IBM to run on the IBM personal computer.
So IBM came out with this personal computer, the IBM PC, and they needed an operating system for it, some way to run basic programs. I think if they had understood how big an opportunity it was, they would have done it internally. But the early PC era was a lot of hobbyists. In any case, it was a category they wanted to get into, and lots of people were getting into it. And people were looking at IBM - IBM then was like Apple, Google, and Microsoft today, all put together. That's what IBM was in the seventies. They were simply the behemoth. So when Jobs ran that famous ad, the one where the girl runs through and throws a sledgehammer into the screen, that was Apple as the revolutionary PC counterpoint to the monolithic, giant, gray IBM. I think the cultural moment of what the PC was - a bicycle for the mind, empowering the everyman and everywoman, all of that - was really a cultural movement at the time, as well as a technological one.
So Bill Gates had the vision that you would have one of these personal computers on every desktop - hence they were called desktops, right, these desktop computers. He saw that you'd need an operating system, that people would run programs, and he saw a platform opportunity. When you release an operating system, you make it an open platform in the sense that anyone can go and program for it. Now, with most of those early operating systems, you still had to pay for the operating system - of course, you still do pay for operating systems - and oftentimes people charged for the software development toolkits you needed to even build something. Having open and relatively cheap software development tools for a platform was a new concept that came around.
It's really weird to talk about this stuff, actually - I'm glad you asked about all this nerdy tech history - but it's in such contrast to today, when open source is so standard. A kid can pick up a Raspberry Pi or a Chromebook, go online, and find all the source code. They expect to develop things for free. Software development libraries are free. That was not the case back then; people charged thousands of dollars for software development libraries.
Liv: And when you say back in the day - what, 80s and 90s?

Igor: Yeah. Is that when you first got involved with the open source community?
Peter: I started using open source - proper open source, I would say - in the early nineties, with Linux. The Linux open source operating system came around, and my parents finally bought me a computer powerful enough to run it right before I got to college. So, '94 timeframe, I started using it as a user. I didn't really start contributing seriously to the open source development ecosystem and community until the early two-thousands, with Python and scientific Python; I got more involved in that community around then.
What was Linux like back then? Really primitive. I went to college in the fall of '95, when Windows 95 came out - August or so. I was selling Windows 95 as a Walmart retail clerk while installing Linux off floppy disks at home for all my friends. It was exceptionally primitive compared to Windows, which was a much more polished user experience, but I loved it. I showed up at Cornell, and Cornell was great because they had really fast internet to every dorm room. So I could get on the internet, I had a Linux server, I was doing all these things and learning so much. Windows was much more locked down back in those days: to change your IP address, you had to restart the computer. Windows didn't even have a native TCP/IP stack; you had to install this Trumpet Winsock stuff. It was very early days for Windows even at that time.
Liv: So you got into it in the nineties, and then you developed something novel for Python, which is a programming language. Can you explain what it was specifically that you developed, in as layman terms as possible?
Peter: For most of the two-thousands, I was involved in a sub-community of the Python language called scientific Python. What we found was that Python was being adopted more and more by scientists, engineers, and people who traditionally would have used tools like MATLAB or Mathematica. It's a niche area of computing, so to speak; not a traditional programming-language thing. And then I realized that the scientific Python ecosystem was building libraries and tools that were being used in businesses more and more.
Towards the end of the two-thousands, my co-founder and I were being called in as consultants to places like JP Morgan and various hedge funds. So we had a thesis that Python was ready for primetime: that this niche scientific Python stack of tools was ready to do business data processing, business data analytics, even machine learning, things like that, at scale for businesses.
So what I actually created was essentially a marketing pivot for an entire open source community, as well as some technical tools. We took the stuff that was developed in the scientific niche, somewhat disregarded by most mainstream computing and CS people, and we pushed it hard as a disruptive technology into business computing, at just the same time that big data - if you remember that term - was blowing up in business, along with cloud computing.
So we realized people would gather more and more data, and they could rent supercomputers in the cloud to crunch that data in novel ways, but they needed a software tool to do all that. And traditional business analytics tools just fell over when you got past a few million rows of data.
Or they got very expensive. So we used Python and the scientific numerical Python stack as a bunch of disruptive, low-cost or free open source tools to disrupt this entire category of business data analytics software. I created a user community called PyData, and a company, now called Anaconda, that we founded as Continuum Analytics.
Inside Anaconda, we created a number of different tools for making Python better and easier to use - most prominently our package management and installer software. Because it's an open ecosystem, it's all these little pieces and tools written by thousands and thousands of different people, and for any regular person, installing them is very difficult. If you buy Microsoft Windows, you get a CD or DVD, you install it, and you have Windows. But if you want to use the Python scientific or data analysis stack, everyone's making all these separate things. How do you get it all?
So we made a kind of all-in-one distribution, so people could install it, have everything ready to go, and then keep up with updates and install new things. The big innovation I delivered there was the Lego baseplate with a set of regular stud spacings, so all these pieces could snap together.
Igor: And the updates happen because people constantly contribute to the libraries and to the entire stack, and then the thing that you sold will notice those updates, and users choose when they want to do an update? And who maintains which updates are good and which are not?

Peter: That's a really good question. In general, in an open innovation ecosystem, there is no gatekeeper, right? It's a commons innovation model. If I'm the maintainer of one of these libraries and I say, oh, I added this new feature, I think it's great -
who are you to tell me that it's not great? I'm going to go add it and say: here's a new version of this library. Now, as a user, you might say, I don't need that, and all my other code depends on this other version, so I'm going to stay on this version. So we provide a system where people can manage which updates they want to take, and developers can push updates out. It's a low-level detail, but it's a really important thing in order to give end users and the innovator-creators a kind of ground plane, a common playing field.
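That constraint-driven update model is easy to see in code. Here is a minimal sketch (not from the conversation; the package name, versions, and pin are illustrative) using the real packaging library, the same machinery pip builds on, to show how a user's version pin decides which published updates actually get taken:

    # The user declares which versions they will accept; the installer only
    # takes updates that satisfy those constraints.
    from packaging.specifiers import SpecifierSet
    from packaging.version import Version

    # The user's pin: stay on the 1.x series, at least 1.24
    accepted = SpecifierSet(">=1.24,<2.0")

    # Versions the maintainer has published upstream
    published = ["1.23.5", "1.26.4", "2.0.1"]

    # Take the newest version the user's constraints allow
    candidates = [Version(v) for v in published if Version(v) in accepted]
    print(max(candidates))  # -> 1.26.4 (2.0.1 is published but not taken)

The maintainer is free to publish 2.0.1, and the user is free to ignore it until they are ready: that is the ground plane Peter describes.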
Liv: So Anaconda has been successful in the open source community, but also in the commercial enterprise sector. Explain to me how you balance those two things, because at first glance they seem at odds.

Peter: They have divergent values, but they're not necessarily at odds. Actually, they're fundamentally not at odds, and I think this is going to be a theme as we talk today about win-win; it's what I realized. I got into this stuff just because I was following technical values and technical merits. I liked what I was seeing in the Python world. I took up Python as a language - I was a C++ programmer - because it was easy to use. It was nice, it was pleasant. Then I started getting into scientific Python, I liked what was being built there, and I did my own innovation work there.
Then over time, as I created the company, I realized: oh, here's why businesses are using this kind of stuff. And I realized that there is, to your point, a fundamental tension between these things - though they don't have to be in tension. Being in the open source community and building a successful open source business, I have a unique vantage point on this generative gift economy, this kind of human ecology that produces innovation at massively parallel scale in the open.
The open innovation community in the open source world has almost no overlap in values with the industrial labor economics of proprietary software development: all of the VC-funded, exponential-growth-at-all-costs, finite-game, zero-sum mentality. They have so little overlap as to be almost tangent to each other, those circles. But they don't have to be; there are some deeper values that they actually both could share.
And there's a common innovation model that could be developed if they both could become aware of what is actually involved here. Because all these people doing open source innovation are also paying rent. They have to feed their families. They need money from the actual economy, which is driven by, and derives a lot of value from, what they build.
So my hope, my goal in being more articulate about this stuff, comes from recognizing that I have this unique vantage point. I respect the value systems of both of these things, I see some of their limitations, and I try to bring them closer together to have more overlap. What are some examples of those shared values? Well, the business users want high-quality software, very simple. They want software that is correct, that is performant, that is secure, that is developed by people who actually give a crap. And over here on the open source developer side, they want the same thing. If you are making some library for doing visualization, or some library for doing really deep, gnarly numerical simulation of molecule binding - you have to know a lot to be a scientific software developer for anything of value. Civilization needs those people to really care about that, but civilization also needs those people to be able to make money, so they can keep caring about it. So all of these people are aligned on that merit.
The problem, from what I've seen, is that traditional software development - the traditional business way of paying for software - only knows how to pay for things that are proprietary and closed. They don't actually know how to contribute to any kind of commons at all, because that's not a model capitalism has really taught people. In MBA school, you don't learn about commons and what they do for you. You care a lot about moats. You care about the fruit, the apple on the tree. Sometimes you learn about how trees make apples. Very seldom do you learn about soil and water and erosion and the microbiome and the mycelium.
And that's the level I work at. Even as an apple vendor, I work at that level, because I know that's what it takes to make good apples and good trees. But unfortunately, I think, we don't teach enough of that in modern business.
Igor: That said, there have been a few examples where businesses have chosen to contribute to open source as a play against a competitor. That's arguably what Meta is doing right now with Llama: distributing it because they were behind Google and OpenAI, et cetera. Through that they fostered a whole community, which is also great for many other reasons, but I think it also had the effect of slowing down the total amount of effort that would otherwise have been put into OpenAI and Google.
Peter: Yeah, I won't pretend to understand what happens in the mind of Mark Zuckerberg. But I would say that, observationally, phenomenologically, that would not be an inappropriate description of a potential strategy for why they did that.

Igor: And Mark seems to be a business-minded person. I don't think you end up with a hundred billion dollars without being somewhat business-minded.
And he's been described as quite an aggressively business-minded person as well. So anyway - again, I imagine there are also other motivations. If one listens to Yann LeCun, I'm sure he has additional motivations too; he seems, for example, to be very much in favor of open science in general, and therefore I do buy that that's his motivation. But there are other examples from the past: I heard that IBM invested about a billion dollars into Linux in the late nineties, in part to reduce Microsoft's standing and its near-monopoly.
Peter: They ended up buying Red Hat for thirty-something billion dollars, which to this day is, I think, still the largest open source software acquisition. And Red Hat is the most successful open source company; I would say that we are in that mold and that model as well, as an ecosystem open source play. As for that billion-dollar investment in Linux from IBM at the time - I think it was maybe somewhat of a rear-guard action.
Linux was already happening, and I absolutely see it as a competitive action. But the broader question: can you use open source as a forward-looking, competitive, aggressive action? Absolutely you can - and not necessarily because you believe in the win-win infinite game of humans working together. If you have a competitor, an upstart competitor, say, and they have some core value because there's a bunch of smart people coding on something - it's software, so it's smart people coding, and there's this much value - then if you fund the development of a free thing that commoditizes 70 percent of the value, at a minimum you've reduced their upside by two-thirds. I don't want to name any names, but there's definitely this kind of thing, where people can use open source to do that.
Another thing that happens, with big companies in particular, is in the category of using a commons-based mechanism but with a zero-sum scarcity mindset: weaponizing the tools of the commons to serve the ends of the zero-sum finite game.
That's when they basically show up to an open source community that's thriving, and they essentially try to hijack it, right?

Liv: Is there an example of that?

Peter: I don't want to name names, but yes. What I would say is this: one of the interesting things about these commons approaches is that most people there are just trying to get by. They're not luxuriating on yachts and stuff; they're really trying to get by.
So there is this economy of resources, which I think produces higher-quality, more modular, cellular sorts of approaches to innovation. What's nice is that out of this emerges a set of patterns - call them standards or protocols - which are immensely valuable for everyone. Even for end users who say: look, I don't care about all this theory, Peter, I just want high-quality software.
The thing you probably also want is the freedom, in the future, to choose your own destiny: when to get off of something, when to stay on it. And beyond individual end users, business users think: if I use open source, then the vendor can't tell me when I have to upgrade. They can't do a forced upgrade because they need to hit their revenue numbers and I have to buy a new version. I can say: nope, screw you, I can hire contractors to keep me going on the older version of this thing until I'm ready to upgrade on my own time. And I also know that new smart people come along all the time.
When they innovate, they can build on the last rung of the ladder. It's an open ladder, and everyone can do this wonderful branching thing. Those open standards are also really useful, though, if you want to hijack an ecosystem, because you can just say: look, I can build something that's compatible with all of these standards, and they're really nice.
And in fact, if you use these standards on my platform over here, then there are all these bells and whistles and all this other kind of stuff. This kind of hijacking of communities - in some cases people call it strip-mining - is something that has also happened. Some startup team builds a really cool thing, and then a big vendor comes along and says: that's very cute. We'll take that whole piece of open source, because it's open and we can use it too. We'll build all these compatible APIs, and all of your users will just come naturally onto our platform.
Igor: So it's a trade-off then, right? Because while it could be described as hijacking, at the same time they are providing genuine value. The users are coming because they are now getting bells and whistles that they weren't otherwise getting.

Peter: I love this, because we're starting to see the tail - we're starting to catch the scent of Moloch.
When companies get to big scale, they hire more and more people and teams get bigger; things just get bigger with more people at these tech companies. So you start hiring more administrators, more bureaucrats, more politicians - more people who make their way up the ladder by being good at doing the job of business, versus actually making the thing of value. That is a model of institutions, corporate institutions, that comes out of the industrial-era theory of organization and management. But when we apply it to tech businesses, what it means is that you end up with teams where you have maybe ten engineers and a hundred people who have no idea about the tech doing all the other things.
If you think about it from the end-user perspective, that means I'm paying a markup for all of this stuff, and a tiny portion of it actually goes to delivering engineering value for me. In some cases maybe it's useful; in other cases it becomes very much rent-extractive. And you're not wrong, Igor, in the sense that yes, if someone provides higher value, they have a right to charge for it. That's the nature of this thing, right on. But the way this typically goes, that's not really the motivation of the people who do that kind of work.
Liv: Has the community developed any kind of defenses, or better detection methods, for when those hijacky types get involved? Is there a way it can defend itself?
Peter: This is an interesting question, because when we say "the community," there are thousands of open source communities, right? And almost all of them are successful and thriving. The only reason they face this problem is because they were good at solving a technical problem. Oftentimes the people who are good at solving technical problems are the risk-takers, the innovators who go build these new things. They're not also, at the same time, really good at doing governance and thinking about all that stuff; there's just a lot involved, right? You'd have to take a unicorn and find a unicorn of a unicorn - it's very rare. So these communities are generally not set up to even talk to their user community about why they shouldn't go do this thing, or what will happen to them, or whatever. And when the tools they build reach a certain level of scale, there's a reason why the big companies come hunting.
It's because there are a lot of users there. And those users are also not steeped in these values; they don't really care how the stuff was made, they just want something that works. You can't blame them for that either. So this is an emergent thing. There's no single person you can point your finger at, no evil mastermind saying: hey, I want to rip off all these open source communities. It's the emergent effect of how people make short-term trade-offs, and there isn't really the vocabulary or the knowledge or the education about what's lost when a commons gets mined. Because of the lack of these things, there's really not a defense against it.
There's really not a defense against it. You'll find people griping about this stuff sometimes every now and then, but those are all individuals and in the face of Moloch, individuals are relatively powerless.
Igor: Yeah, it comes as a trade-off of not having such a dictatorship or gatekeeper in the first place. You reap benefits in some areas, but it hurts you in others, and you can't necessarily draw a line prohibiting people from coming in and strip-mining the commons. It sounds like that approach is particularly good to solve for increased innovation, more secure products, et cetera.

Peter: I don't know about security, actually; that's one of its weaknesses. But it does allow parallelization of innovation.
Igor: Oh, interesting that you mention that. I would have thought that software which gets many more eyes on it, and whose source code is open, would just naturally have a higher chance of being secure because of that.

Peter: It can be. Transparency is one component of having greater security, but it's not sufficient. This is an argument from one of the early famous people in the open source community, Eric Raymond. He had this line - "given enough eyeballs, all bugs are shallow" - and he wrote The Cathedral and the Bazaar. It was an essay, really, and it was pretty seminal in the nineties as a piece of the open source movement. He made this argument, and unfortunately time has proven that it's not the whole story. When you build something of a certain technical sophistication, relatively few people will really understand how to get into the details of it. So the argument is not wrong, but it's not totally right by itself. I don't want to be all against it, either; I do think transparency in software gives us better hope of having more audits, more verifiability, things like that.
That being said, open source development communities are now being actively attacked by state-level actors. This is something that's come into the foreground just in the last month or two: we've had an attack come to light that specifically preys upon how beleaguered some of these open source devs are, how under-resourced they are. State-level actors are running long-term, multi-year psychological operations against targeted software maintainers, taking advantage of and exploiting the open nature of the innovation commons.
Liv: What's their goal in doing that? What are they trying to do?

Peter: The specific thing I'm referring to is the xz attack, which may not have gotten out of the technical arena into the broader world, but it was a really shocking moment for all of us, because it was a multi-year attack. There's a compression library - xz, or liblzma - that is relied upon by a much more important program: OpenSSH, the one people use to connect to remote servers. That's a foundational piece of technology for human civilization, and it has this upstream dependency, this little library that was under-maintained; the maintainer was burned out and all this stuff.
Then a new person showed up, just submitting helpful fixes here and there, participating a little bit. They had some additional things they were going to add: fix some bugs, write some better tests. And then someone else chimed in and said: hey, I want this feature merged, can you guys merge it? And the maintainer said: no, I'm too busy, I don't have the bandwidth. And this other person said: what about this person who's been helping? Why not let him also be a maintainer and commit the stuff? And the original maintainer said: okay, yeah, that's accurate, this person has been quite helpful. So they merged it in. It turns out both of these people were acting on the same team; they may even have been the same person. And what got merged in was a backdoor exploit. This was a multi-year attack to earn the trust of the one maintainer of this one little critical piece. It's like finding an exhaust port on the Death Star. And it's also like a cockroach: when you see one cockroach in your kitchen, there are a hundred more hiding behind the stove and the fridge. So this was just a really amazing moment for the open source community. I say it's a state-level actor; I think people mostly agree it must be a state-level actor paying someone, in a black-hat-hacker employment scenario, to do this kind of attack. And if they've attacked this one thing, they've attacked who knows how many more.
Igor: So that's what you pointed out: security is not necessarily higher, because with very sophisticated software, it ends up being the case that individual bits of it, which are critical but buried somewhere deep inside, don't actually get that many eyes anymore. They don't get that much maintenance by that many people. In this case, basically one person was responsible for this one critical piece. That's pretty nuts. It's a massive attack vector, of course.
Peter: And when you look across the open source software world, most things are actually held up and propped up by just a few people. You'd be shocked at how few software developers it actually takes to make something, and how few are involved in maintaining some of these really popular, well-known things.
Liv: And presumably that's something commercial software is not so vulnerable to, because they have a greater, more direct incentive to protect against it?

Peter: No, because they starve the development; there's only so many people who can commit to the source code at all. The way commercial software works, the only people who can update the software are the people employed by the company, and there are simply not that many people employed by the company to maintain it. Even some of the most critical pieces of software for really big companies can be maintained by small teams of five, ten, twenty people. I heard anecdotally - this could be apocryphal - that at one point in time, Internet Explorer was being maintained by essentially one dev, with a stack of product managers on top of that poor one dev. It was down to essentially the equivalent of one dev.
Now, that could be untrue; I just heard it. But I do know that the core Oracle database engine team, at that massive company, is not that many people. For how much money that thing makes - maybe dozens, maybe a dozen - it's not that many people on the core engine. Same thing for Linux, and for the Windows NT core kernel, the heart of the Windows operating system, the thing that actually runs all the processes: not that many people. Software is unbelievably productive.
Igor: Yeah. It seems like one of the massive downsides. If you truly have only one person maintaining, say, Internet Explorer, and someone finds a vulnerability and says, hey, you have to update Internet Explorer - that person is looking at the task list given by all those product managers on top, and it's like, okay, I don't know when they'll get to it. So the vulnerability persists. I'd have thought the advantage of open source would be that, once it's public, various people have the chance to propose code to fix that vulnerability. And end users have the option - well, the same option as with closed-source software, I suppose - to decide whether to keep running it with the vulnerability in the meantime. But is it the case, when we look back historically, that such patches for relevant vulnerabilities happen faster with actively used open source software?
Peter: I can't say if that's true or not, because it would be hard to benchmark against closed-source software, where they also have incident response teams that are very skilled and can respond very quickly.
Igor: We could compare like for like - Windows versus Linux, for example.
Peter: Yeah, but is it really an apples-to-apples comparison? For the Windows release process, they have to test for compatibility across all these different things, ship updates for all these languages simultaneously, and so on. The products are just very different, right? So it's not necessarily an apples-to-apples comparison if you're trying to answer the question of development efficacy. What I do know is that for most of the common kinds of exploits you find, they will patch them pretty quickly. But it's also not the case that all exploits are trivial to fix. Some things may require real engineering. Some kinds of bugs may require: oh, we have to rethink how we're doing this whole thing - and when we change it, the thing works somewhat differently, and end users may not want to take that update; they may just want a hotfix for the way it used to work, et cetera. Software is of such incredible complexity that it's hard to say in generality what is and isn't true for these kinds of things.
Liv: What are your views on open science? Because I've had a bee in my bonnet about companies like Elsevier and some of these big publishing companies. They don't pay for the research they receive - the papers - but they then charge the scientists, or anyone who wants to read those papers, something like $32 a paper. That technically includes the person who wrote the paper: if they want to look at their own paper again, they have to download it. I'm sure they keep a copy, but that seems absurd. And their profit margins are some of the highest in any industry, like 37 percent or something like that. Then, quite famously, this girl from Kazakhstan, Alexandra Elbakyan, created something called Sci-Hub, which is basically a Pirate Bay of science papers, letting people around the world access them for free. She then got sued, et cetera, by these big companies. I know where I emotionally stand on this - I think what she's doing is quite awesome - but I'm really curious what your views are.

Peter: We would be remiss not to mention Aaron Swartz, right? The poor kid was hounded by the federal government to the point of suicide because he was trying to basically push for this kind of open access to publications.
Anyway, this is also a very interesting topic for the podcast, because at the end of the day, we actually have to say what an appropriate amount of profit margin is. If we could answer that question, then maybe we could settle this question about Elsevier or Springer or Nature, all these guys: what is the appropriate margin? What is the appropriate tax for scientific publishing to charge civilization for what it does? I don't think it's intellectually honest to say that they do no work; I think there's a lot of value they add. When you have, let's say, very passionate academics fighting with each other about who's right, with their careers on the line,
it's useful to have a third party that everyone trusts. There's value in having a judge, from a point of intersubjectivity.

Liv: They're coordination mechanisms.

Peter: Yeah, it's a coordination mechanism, and we understand we have to pay a tax for that to work. But are we being overcharged on that tax, and is there a way for us to peacefully revise it, or argue for a lower tax? That's a general question - central banking has this issue, all sorts of things do. If you look at the profit margins, and what portion of all profits are captured by the banking and finance sector: is that an appropriate tax to pay for the advantages we get from a market economy? Because that's ultimately the question of capitalism. Is 40 percent of profits too much of a margin? Is it too low? What's appropriate?
Igor: So the thing with appropriateness is that I'm very near to the idea that there shouldn't be someone from on high who says "this is the right margin and that's what's allowed"; you'd want to let the market decide. But then the market needs the ability to adjust those margins, to allow a competitor with a lower margin, depending on what the field is. That's one of the things the good version of antitrust is for: when monopolies emerge, they can push margins too high. And you get that in particular either when you have very strong network effects that are hard to jumpstart, or when you have these inadequate equilibria, so to speak, where the prestige is just so associated with the incumbent - which seems to be the case with Elsevier and Nature. It may be that someone could run a successful competitor with only a 10 or 20 percent margin or something like that. But how do you jumpstart it?
Peter: If you could build a highway system more efficient than the American highway system, how would you even start that competition? You could not, right? Yeah, maybe the Boring Company - but I would say that's probably not more efficient in the near term, because it's quite expensive.
And this is not just us theorizing in the abstract, I think. This is actually the central problem of coordination. We have a few modalities we can imagine. One is top-down coordination: saying, we've solved it, this is how you do it. The second is to allow some kind of emergent thing to happen in a market, or in a network. But networks and markets can ultimately settle into an equilibrium that is not the optimum - a local optimum. It may even be globally optimal for its time, but then as the world evolves, it's no longer the optimum. How do you induce a world settled at that little optimum to jump over its hurdles and settle into an even more efficient state? That is a coordination problem for which, I think, we don't even have the mythic infrastructure to describe to all the participants in the network what that would even look like. So all we know is: when you blast it with commoditization, which drops the cost of everything, then people have no choice but to go and try to re-emerge. In a sense, we kill it with the wildfire of commoditization and see what emerges out of the burnt forest. That's about our best model, and it's possible that it is the only model - I don't know. But I think this is the general problem we have to solve to allow for peaceful transitions and upgrades in the punctuated equilibria, when we do settle into efficient states for a network or a market.
Igor: Yeah. And it's not only that it might be a local optimum - a locally most efficient state while a global one exists somewhere - but that it may be globally optimal only at that point in time, which then changes. That's actually relevant and often forgotten, because markets optimize dynamics and prices, et cetera, given the resources currently available. And sometimes those change quite a bit. I think an example of that is the reason OpenAI - everyone, currently - is developing LLMs that are initially chatbots: it's that we have this large database... well, not a database, but the amount of text that's available on the internet, right? If the goal was initially to say, hey, we're building AI to improve medicine and all of those things, it is in no way the case that to get there you first have to build LLMs on the basis of the internet. It's not the direct path. It just happens that, in the pursuit of revenue, or of uses for AI, you first look at what is freely available somewhere, what you can use very beneficially. And then -
Peter: Yeah, it is an open exploration, and they happened to find that text-to-text translation tasks can be generalized - and, holy crap. GPT-3 came out a year and a half or so before ChatGPT did, and even then people were saying: oh my God, this thing is amazing, what you can do. But you had to build all this infrastructure around it; you had to use it as a programming library, talk to it as an API. Then, when they actually released ChatGPT - with this one tweak, this one weird trick - they were able to produce something that captured everyone's imagination, and people started talking about consciousness and all this other stuff. And it's like: yes, but let's keep in mind, it was all still a trick based on the invention of the transformer model for doing translation between languages. And before that, the vision models: what does a recurrent neural network do, what can it actually distill from images and graphics? We were starting to have really amazing results in the 2016-2017 timeframe, if you remember generative adversarial networks - GANs - and style transfer, where they could make a picture in the style of Picasso. That for me was the moment of: okay, we're off to a different world. We're on a trajectory to a completely different world right now; I'm not sure how we're going to get there or what's going to happen. But that was a vision task, right? Not a text translation task. Backing up to your broader point, though: yes, when the market settles on an equilibrium, it is a point in time.
But the thing I want to say about markets - for us in the West in particular, and I'm a refugee from a communist country, so I very much appreciate markets, capitalism, and a lot of the nice things of the neoliberal order - is that a market lets you diffuse the question of values. You can pretend that you don't actually have to answer the question: this community of people participating in the market, what are their values? If you have a top-down person who says, no, you know what, everyone's going to wear this style or color today, or no, we're not going to build these things because they're bad, they're unholy and against the will of God - then you know what the values are, and you can point to one person at the top and say: that's the guy, that's the reason we can't have nice things, because that guy says this is unholy, right? But when we make a market, you diffuse culpability for making a decision. I don't know if that's good, but it's certainly bad in many ways. It could be good in many ways too, but I see the bad in it. The bad thing is that when many people make little decisions that all have some net external cost they can't easily sum up, then we get this net external cost that no one wants, but you can't find any one person to blame.
Igor: When you say bad, do you mean worse than the top-down version? Or do you mean just that it also has negative effects?

Peter: I'm sorry, I'll be very clear - and starting from being a refugee from a communist country: if any of my words can be interpreted as supporting an authoritarian sort of approach to the world, then please, you have misunderstood me, or I have probably expressed myself poorly, right?
Igor: No, I just wanted to clarify.

Peter: I think the lack of culpability, and the lack of a clear articulation of values in a market, is part of the neoliberal trick. It's: hey, by simply shopping your preferences and sufficing for your own happiness, maybe in the near term or even the long term, you're absolved, because you get to choose. It's multipolar, it's multicultural, it's express yourself, be yourself, love yourself.
It's dah, express yourself, be yourself, love yourself. And then it plays very nicely into the sort of hedonistic trap that then degenerates, of course, nowadays into narcissism. And then when real bad stuff is happening in the world, and we actually have to like all get together and Hey, we got to do something about this.
We've lost the ability to even imagine what collective kind of coordinated value system drives. Action could look like and I think that's this really weird thing, not weird, but it's an unfortunate thing that's been lost here because we have a lot of these external allies. We, I would say this whole, like at the time I thought it was quite brilliant, this articulation of think globally, act locally.
But I think if people are not acting intentionally locally. Somehow the aggregate of our local actions did not equal the output we wanted, the outcome we wanted, the global level, turned out to lead to, giant rubbish heaps in the middle of the ocean and NGOs funded to the tune of billions, achieving absolutely nothing in a lot of developing countries where they're supposed to be, doing things.
It's just this is this somehow, and the Sigma did not yield the integral, somehow,
Liv: Is that just because of the improper transfer of information? Because people can't see the aggregate outcome?
Peter: What is the moral structure of humanity, where there's a spectrum of both information propagation and the ability to act on that information? We would like to think that if people just had the right information, then some aggregate of that information - some kind of longer-horizon rationality - would lead them to not burn up the earth or pollute the ocean or whatever.
Igor: Which is why we believe in markets as well, because they're the best attempt we currently have to aggregate that - if we could price in the externalities in an action-relevant way, carbon capture and other kinds of things. And I say it's the best current attempt because the alternatives also have externalities; the market obviously does too, it's just that it still achieves a lot of things while having them.
Peter: Yeah, the question really, I think, is where we go from here, right? If the paths we see toward better and wiser collective action require everyone to be at this level of information, agency, rationality, et cetera - if your system only works when it's 8 billion high elves, then it's probably not going to work. But at the same time, how do you have a moral theory of governance, and maybe a practical theory, where you say: look, we give everyone the door to becoming better versions of themselves, more agentic, more whatever, but at the same time we recognize when they're not able to make those kinds of decisions - without us being a-holes who tell them what they should think? What is the right scope for them to figure out their piece of things? I think this is where liberal theory runs out a little bit: once people are free to make certain kinds of choices, is there a way to go from the outside in, as a boundary-value problem, and say, hey, you own a portion of this bad thing that happened - here's your portion of the externality, and here's what we're going to do about it? Is the market still the best mechanism for communicating those kinds of things? Is there a version of markets where we can communicate the total cost, where we can actually have people surface their needs, versus just buying once something is priced in after production? Are there things we could imagine that would respect the agency and the inviolate primacy of the individual, while recognizing that we have a spectrum of individuals we're trying to aggregate over - and that for some of them, if we just let them make the decisions they're making right now, with the limited information they have, we end up in really bad places? That's the big looming question.
Igor: That's where the government is usually meant to step in - that's one of the solutions we have, the one we're currently employing, right? That's why someone argues for a carbon tax, and why we have social welfare systems: some people can't actually participate in markets if they don't have capital or labor to offer at present. And yeah, I think that's one solution, but in the end the government solution is still: okay, they have the bigger guns, and that's why they get to impose it.
It would be neat to also have one that makes the market wiser.
Liv: Or we need to find a way to make the commons have its own personhood, its own agency in some way.
Peter: Those are two fantastic different directions to take the conversation, because I agree and disagree with both, right? I think there's something in both of those things. But what I would say about the government thing - maybe as a question - is that I think you can really only have freedom in the classical sense if people have awareness of their environment; otherwise you're just putting different-colored lollipops in front of children. The Simpsons lampooned this, actually - there's an episode where it's like, "Mr. President, you have a decision to make, here are five options," and he reaches for option A, and they're like, "you reach for option B." That's the kind of thing we make fun of: what happens when we have ignorance at scale.
And so a motivating question would be this: could we imagine the nation running well if we only gave people from the ages of five to eight the right to vote? That seems maybe not so great. But check this out - why not? It's because we would say they probably don't have the information to make good choices, and they're easily swayed: you literally throw candy in front of them and they're like, "I'm gonna vote for the guy who gives me more candy." So we'd say that by eight years old you do not have enough information to make high-quality decisions about the world - not that your choices don't matter, but this is not a good arrangement of human affairs. What about by the age of 10, or 13? There was a time we sent 13-year-olds into battle.
What about the age of 18 - why is that sufficient? Why do we believe it is? Think about how much more complexity we've added to the world in the last 50 years, how much more infrastructure - and think about the world that's coming. We have people in Congress who barely know how computers work, or how the internet works. That, I would say, is the equivalent of going back to 1950 and putting an eight-year-old into Congress. I think we have to actually do this - and I'm a physicist, so I would say we have to actually measure the quality of our governing. No matter what your theory is - democratic republic, people vote - great; I don't care about the architecture. What I care about is: for a civilization that exists at a certain level of complexity, is there such a thing as a minimum level of information and education a person has to have to actually achieve personhood, in the sense of being able to make valid decisions about themselves, their families, their communities, and their country? That, I think, is a perfectly valid question, and we might not like the answer we get. The answer could be of the form: honestly, we need to provide free education for everyone to the age of 28. Because we're adding whole new centuries' worth of knowledge - how many 1800-to-1900 equivalents of knowledge are we adding per month? And once we have AI going foom, what is that going to do? So I think this is actually the key question. Anyway, that was a bit of a rant, but that's my framing of it.
Igor: Yeah. It's certainly the case that, in the ideal version, one would like everyone who's making relevant decisions that affect many other people to have the prerequisite knowledge and capability to make those decisions. But as you said, complexity keeps increasing - maybe they need to learn until 28, but then it increases further until it's 38. To which point do you push it? Probably you'd reduce it instead: make it domain-specific, and maybe figure out representative voting as an idea - which is actually neat - where, per issue, you decide that you trust someone to make a decision that you believe affects you positively on that issue. You make a per-issue representation of yourself, and they can potentially pass it further on. But yeah, it's an interesting question what amount of information is necessary to make such decisions. Do you have a guess at how to deal with the truly ever-increasing complexity we will have?
Peter: I think we have to actually create collective sensemaking - like, collectives. When you look at nature - and biomimicry is not necessarily the be-all and end-all of ways of knowing, but nature certainly gives us a hint - at a certain level of complexity, at a certain level of difficulty: hey, if you get together, then you can specialize, but you have to really trust the interior, right? Your boundary surface is smaller, and you can all specialize and do your thing. For any single individual to want to be a polymath - to be expert on law and history and finance and technology and all these things - is asking way too much. We're now maybe faced with this uncomfortable thing where we can't just trust everyone to be a generalist: "Oh, I read the paper, I check the news, now I'm informed enough to go trade some stocks and do some other stuff." It's not going to be enough. So think about what a future society that's resilient to Moloch would look like. Moloch emerges from misplaced institutional trust, ignorance, fear, fog, all these things - and people dissipate energy into what then gets sucked into the engine of Moloch. We're not quite sacrificing our children directly, but we are sacrificing our own potential futures and our agency. But if you were to form resilient collective tribes of sensemaking, where every individual has a financial stake - a real stake - in the decisions and in what we decide to do, now people have some chips on the table. And now you can actually get together and say: look, you're the finance expert, I'll be the software expert, someone else will be the geopolitical expert, and we will form a high-quality tribe and do some of this stuff. This is maybe where we end up going. Because, to your point, the world's only getting more complex as time goes on, and the ability to do modeling and sensemaking is increasing. So we're going to be outcompeted by other groups, other human collectives, that can do better than we can individually.
Liv: How would you recommend - because I think a lot of the viewers listening to this are the type of people who would be interested in building such a community - have you tried doing such a thing? How are you navigating the sensemaking crisis right now?
Peter: Making as much money as possible, at the end of the day. No - that's half tongue-in-cheek. I'm trying to do it by thinking through things like this, having conversations with great folks like you all and others, and really looking at the best discourses and best thinking available on collapse, metamodernism, a lot of these fun topics - but also amassing a certain amount of personal resources, so that when I do make a call, I can put a lot of chips down on it. Or if I see it's down to two or three solutions, I split my chips and put them on those. That's what I'm doing. And part of it is that for the last six or seven years, many of us have been talking about being in this liminal zone, where - maybe even prior to the pandemic - global sensemaking has been actively worked upon. And as an individual trying to do sensemaking, it's very difficult.
Igor: Yeah. When you point at the last six, seven years, what in particular are you pointing at?
Peter: In particular, sensemaking. Trying to figure out what's going on - intellectually, rationally - has been more and more turned into a cultural mode as opposed to an intellectual mode. So it becomes more about: what is your tribe? And that tribe dictates your filter on what comes in and what has happened. And there have been many kinds of events where, when you really get into it and look at the objective - okay, what are the actual facts? - and we can look back on many of these events through the lens of history, when more facts came out, or there was a trial and we actually saw what happened on the ground: should I have interpreted that event the way it was presented at the time? This applies to many of the things around BLM, to things around Trump, to things around COVID - a lot of these kinds of things. I'm someone who's classically left-leaning, progressive, whatnot.
But I've also read a lot of libertarian and conservative, Republican kinds of things in my time. So I really do try to do my own sensemaking around certain kinds of things. When COVID happened, I was very strongly on the lab leak, and I still am - quite vocal about having supported the lab-leak hypothesis. And at a meta level, how quickly the Overton window snapped down on it was extremely suspicious to me. It was interesting, in the fullness of time, for someone who has some visibility in the scientific community - my software is used by many scientists - and someone who's literally from Wuhan as well; I have connections to the Wuhan community through my parents, and WeChat back in the day. Through all of that, it was interesting to see how many people in the center left - the mainstream "trust the science" kind of folks - came down so hard on skeptical inquiry into this particular topic. Because I'm classically trained as a physicist, skeptical inquiry is my calling card. I get to ask questions about whatever, and I'm going to ask those questions until I get a satisfactory answer. And it was interesting to see: "Oh no, you're like an alt-right conspiracy theorist, and here's your Pepe the Frog thing." I'm like - no. I'm literally just asking some questions about how unlikely it is for this thing to have been, whatever, a bat spitting on a pangolin or something. No - really, serious people, let's look at this for a second.
Liv: At the end of 2019 there were around 39,000 wet markets in China. Let's say there are a hundred or so in Wuhan - there are probably fewer than that. What are the odds that the first emergence of this thing is in the wet market just down the road from the BSL-4 virology lab?
Peter: Which was literally doing coronavirus research.
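A naive back-of-the-envelope version of the base-rate point Liv is making, using only her stated numbers. It treats every wet market in China as an equally likely first-detection site, which ignores population density, travel hubs, and surveillance bias, so it only illustrates the shape of the argument, not a real probability.

```python
# Liv's figures: ~39,000 wet markets nationwide, a generous ~100 in Wuhan.
total_markets = 39_000
markets_in_wuhan = 100

p_wuhan = markets_in_wuhan / total_markets
print(f"P(first emergence in a Wuhan wet market) ~ {p_wuhan:.2%}")  # ~0.26%
```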
Peter: I was really trying to do some legwork on my own, gathering some of the papers and looking at some of the stuff. I actually went to the Internet Archive and found a prior version of the virology institute's website where, somewhere around page 13, there was a publication from 2015 where they were talking about work they were doing on human-optimized ACE2 receptors based on the SARS-CoV-1 genome, all these kinds of things. Scroll forward to 2020: that particular publication got yanked. You can actually go look at the Internet Archive and do the delta. That's not a smoking gun by any means, but it's something these people are aware of.
Liv: I could see a world where they take that down and they're still innocent.

Peter: But there's lots of other stuff - we don't have to get into that particular topic. The point is, the question here is: what are some hallmarks of this challenge to sensemaking over the last six or seven years?
And I think that's an instance where some of it is social media, and a lot of it, I think, is people feeling "my tribe is an online tribe, and the way I signal who I am is my online identity": I have a pride flag, or I have a Pepe the Frog thing, or I've got the WallStreetBets thing - when it was GameStop and "to the moon" and all that. People become tribal at an iconic, avatar level in these places. And because it's a soundbite market - who is holding whom responsible? How much accountability is there for someone making a bad call? You guys are poker players; you get held accountable for bad calls instantly, right? Whereas in these areas of discourse - not serious forums, but the performative, as-if discourse places - there is almost no consequence for being wrong.
How high a quality of signal are you going to get from that? And this is one of those points I feel very strongly about: attention is a scarce resource for any individual - that's not particularly controversial - but the synchronous, joint attention of a society is also a scarce resource. If I'm paying attention to phenomenon A, you're paying attention to phenomenon B, and you're paying attention to phenomenon C, we're each doing our own little thing. But if all of us together are getting three different looks at a single phenomenon Z, we're going to have a better perspective on Z - though Z had better be worth our joint attention, right? This joint attention is a commons resource, the way a piece of pasture land in the Swiss Alps is a commons resource, and I think that's underappreciated in all the discourse about social media. Because what social media did is slice up everyone's attention, so everyone gets their own little algorithmically compiled bits and pieces. And when we do have joint attention, it only shows up when something is particularly outrageous: oh, a Black person got shot by a cop; oh my God, can you believe Trump did that? Even these joint, synchronous moments are only convened when there's something outrageous, not when we have a moment to contemplate and go deep. Think about church traditions: you get together on Sunday, and it's not the Two Minutes Hate from Nineteen Eighty-Four - you get together for several hours of contemplation and meditation and reflection. That's very different. So I think sensemaking has gone downhill primarily because of the rise in primacy of social media as a way of signaling: as-if sensemaking, as-if discourse, instead of deep contemplation.
Liv: So is that maybe why the Super Bowl - the market still seems to dictate that an ad in the first break of the Super Bowl is the most prime attention real estate - because it's this synchronous event where the largest percentage of America is watching one particular thing? Possibly more than watch a presidential inauguration at this point, because half the country hates whoever's getting in, whereas everyone seems to love the Super Bowl. And it's an example of a happy thing, not an outrageous thing, that people are tuning in for. So maybe that's where you can do it - maybe we should buy sensemaking ads.
Peter: Yeah, the synchronicity of that. In a world that is more and more online and indoors, with less and less bumping of elbows and rubbing of shoulders in the real world, synchronicity in time dictates more of what our actual shared reality is, right? Society is ultimately fictive - fictive kinship is such a critical concept. There's no way anyone in America knows more than a few thousand people, so the idea that we are part of a country of 300 or 400 million is fictitious. But it's really important that we believe in that fiction, right? It's really important that we believe we are part of a nation of several hundred million, as opposed to a nation split into two camps of 180 million or whatever. And that fictive kinship - and the radical destruction of it in this evaporation of shared synchronous moments - that is a deep rot that's happening to us.
Igor: Yeah, and it's interesting - going back to what we were saying about collective sensemaking - that the reaction of going very tribal around a topic is in part an attempt at exactly that, right? "I'm just sticking with whatever my tribe believes," ahead of what's actually there to be true, is the reaction we just described to a complex world: you have to go into a collective to make sense of it, so you rely on a collective that you imbue with trust. That's what's happening - the world gets more complex, and everyone just sticks with their tribe. It's very understandable as a reaction. But then it becomes a question of how you trade off the tribe against the truth - or not truth exactly, but your interpretation of the credibility of something - versus sticking with the tribe.
Peter: Man, this is all of it. This is the key thing, right? Think about the brain you've got in your head. This is something I realized the day after Trump won the election, because I saw all my Democrat, liberal friends in free fall - the world didn't make sense to them. And I realized: oh, it's actually this very simple thing. The brain is seeking a sane world. The brain is solving for coherence, and it wants things to make sense more than it wants the sense that is made to be true. For months afterwards - even now - there was a lot of talk about disinformation and the truth: "if we just get people to understand the truth..." And I really do understand, deep in my heart, where that comes from. But there's unfortunately a lack of recognition of something that is true about human nature, which is that we really do want the world to make sense, and we will sacrifice a lot to at least have that ground-floor substrate: we, at least, are not insane. We have to have some story of what's happening in the world, and that story has got to be coherent for us. That story is a cultural story; it's a personal story. We have to have it as our firmware, and that firmware is the substrate. It can be load-bearing or not. It can be very shallow - in which case we have to keep going back to our cult leader for more validation while we do our little things - or it can be very self-sustaining, and we can put a lot of weight on it. But it is still the fundamental thing your brain needs, I think. And so when it comes to sensemaking collectively, like you were saying: people are looking for social validation, especially in times of confusion. They're looking around going: wait, am I the only one here? And what social media did was connect people in vastly different regimes and locations into these intellectual or cultural motifs - which is why, all of a sudden, you have lots of literal Nazis in America. It turns out there were a lot of proto-Nazis out there, and once you found a way for them all to find each other, now they're doing peer-based sensemaking: "Whoa - I thought I was some weirdo. All my friends tell me I'm weird; I don't even say these things in public at the bar. But now I'm connected to a bunch of friends online who are my people, and we're all in agreement that being an American Nazi is a good thing to do to save our country." So this culture-based sensemaking is part of the operating system in our brainstems. If you have wrong models of the world, you can either -
Igor: - update those models on the basis of the input you've received, or ignore the input in part. When we describe someone who sticks with the tribe despite new information coming in, that's a reaction of the ignoring type: you simply discard the input so that you don't have to update your model of the world, because it's the lower-stress response, basically.
Liv: The internet has made it easier for you to retreat into that bubble where everyone tells you you're safe.

Igor: Because now the model has become even more enshrined. Beforehand, your model had less certainty. Now, if you go and find little conspiratorial groups - which often turn out to be right, but sometimes not - you calcify that model further. So now the sensory input needs to be all the stronger for you to update the model in any way.
Peter: I do blame the algorithms and some of these things, but at the same time, on the internet you can go and find whatever you're looking for, right? And this is a problem, because there's way more stuff out there. This is the whole point of being in the liminal: who is actually prepared to really encounter the liminal? How do you really know whether this thing is really true? It takes quite a bit - you have to put on an EVA suit, a spacesuit with some oxygen, to get into all that and ask: am I supposed to update my models on this fundamental thing, or not? It's not something you do casually. And this is where I do take issue with some of my - again, I seem to be crapping on my liberal friends a lot, and I'm actually fairly liberal myself - but I feel like they're the ones who are most put out, because they tend to be college-educated, they value intellectualism, all these kinds of things. And yet they talk about updating models as if it's an easy thing to do. Hey, I've got some centrist - maybe slightly alt-right - things to update you on; would you be willing to accept them? Here's some information about Kyle Rittenhouse - did you know this? Or what about this detail of this particular shooting, which we were all very angry about - but when you look at some of the details, wait a second, right? So updating one's models is not a trivial thing. And this is where I want to be empathetic: we have a world where all of us can just hit a button - you click a link and you're teleported to the edge of not just your own liminal boundary, but maybe society's liminal boundary. And you're like: what the hell do I do here? I don't even know trigonometry - how do I know if the moon landings are real? As you were talking, I thought of the guy - you remember the whole pizzeria thing, Pizzagate? - who literally showed up with a semi-automatic rifle. What I wish I could find is a recording of the look on his face when he went there and there was no basement. Because that's when reality comes, and your models are all wrong. And think about how deep a stack of models is wrong there.
Igor: Yeah. You see the pizzaiolo making the pizza, and they look fine, and there are no Democrats there yet, and they're all Italian, and you're like: okay, there's a hidden trap door. Very good - there's a trap door to the basement, and Hillary Clinton's down there literally torturing a small child. At some point the inputs are just too strong to explain away. And I think it's the same thing with some of -
Peter: - the January 6th protesters. They were like: we're part of a huge movement, we're here to save the country, all this stuff. And then when you actually listen to some of their testimony - when they're put on trial and everything - it's this whole collapse of sensemaking, and a cohering back to something like a consensus model of reality. And that is the question, right? Whose consensus?
Liv: You actually talked, in a Medium post of yours, about this idea of colorful dystopias and virtuality. Can you explain what that is and how it feeds into this?
Peter: Yeah, I guess it's related in this sense. I use the word "virtuality" to refer to the phenomenon of us experiencing a world that is more and more manufactured - that is designed, in fact, to deliver a loaded payload into our senses. Maybe that's the best way to encapsulate the concept: we inhabit the virtual. Maybe the best example - the most accessible one - is in The Matrix, when the guy's eating the steak and "the steak tastes good to me," right? So many aspects of our world, when you learn to look for it - you realize: oh, wait, that's made up. It's Disney World. It's Vegas. It's designed to make me feel a particular way. It's just a virtual experience.

Igor: Can you contrast it with a non-virtual experience?
Peter: Yeah. Maybe the most accessible non-virtual experiences are when people are out in nature, or when people have a deep connection with each other absent a lot of the dressing - all the accoutrements of the social environment that tell us how we should think of each other. When you're not an avatar, and I'm not an avatar, we're literally just two people trying to have a conversation - being real. And usually that involves actually exposing part of yourself, being vulnerable in some way - vulnerable not to being attacked, but to updating your own models, to updating your heart, right? To actually being sensitive to something new you hadn't experienced or thought about before. And we get this when we go to the great outdoors. I don't know how much you both like being outside, but if you go to a national park and you're in front of a mountain, the mountain is right there. You feel the breeze, you might smell the grass, you hear things chirping and whatever else is happening - you're there, in it, present. Or when you jump into the water - that's a full-body experience and presence. We can do that in social environments as well, but less and less. So virtuality is the opposite of that. And I'm borrowing this term - many philosophers and social psychologists have written about this.
Igor: Maybe you can contrast it a bit more, because I think you're pointing at something beyond that. Even in a world where we're just hanging out and meeting each other, there are still cultural, memetic layers: we value the things we value not because of some intrinsic quality, but because it was communicated to us through cultural memes. You mean something even further than that.
Peter: Yeah. I think the best way to talk about virtuality is through its opposite. Non-virtual experiences, to me, are when you are experiencing the substance of something - relating to it on a substantive basis, based on your own intrinsic values - and you can update those values. Whereas so much of the world, I feel, is more and more things that are made and presented to us ready-made, pre-digested, for us to consume in a memetic sense or even an informational sense. And this touches on something I wish more people would appreciate - we don't have to talk about it right now - this idea from Gregory Bateson, which is beautiful: information is a verb. It is the act of sensemaking, not the sense that's been made. Real sensemaking is the sense it has made for you. Even at a metaphysical level, when we treat information as an artifact - this golden artifact, free of context, floating out there - "oh, if we could only get the Trump voters the information, then they would vote the right way" - no, that's the wrong way to think about all of this. Because if you had gotten the information the right way, you wouldn't be telling me I was a conspiracy theorist for saying there might have been a lab leak, right? Information is not this golden thing. And scientists know that science is a process for putting a bound on infinite error - it's not mining some golden truth. But a lot of people on the outside, who only know science as the institution that produced the modern world, think of it as an abstract thing that could be brought down from a mountain like fire - an isolated, alienated artifact. It's the same with beauty: we would never say beauty is some abstract, absolute thing; obviously it's something you experience in a relational way. So virtuality - a world of the virtual - is a manufactured world. It's a world where we allow other agents to intermediate between our direct experience and our relating and sensemaking.
Liv: That raises this question: is technology values-neutral? Because a lot of people - particularly technologists - claim that it is.

Peter: Who claims that now?

Liv: I know - but they do. Maybe it's so they can wash their hands of responsibility, but they say: it's not up to the technology; the technology is not good or bad, it's just how people use it. Whereas actually, the medium often is the message, and increasingly so. If you go to a beautiful vista these days, the majority of people will get their phone out and start recording it, taking pictures for Instagram. Twenty years ago that did not happen. Maybe they had a camera, which they would then get out, but the primary thing was to absorb the mountain through your eyes. Now the mountain is a backdrop for a projection of this avatar of myself. So the existence of that technology is increasingly setting the tone of how we view the world, and it seems absurd to say it's values-neutral.
Peter: I try to do my best to steelman arguments in opposition to mine, so I can better refine my own point of view. But I don't see how anyone in this day and age could argue that technology is values-neutral.
Igor: I suppose the point would be to say - let's go through it - that the technology exists, and who are we to say it's a better or worse outcome that, with the phone and Instagram, we've shifted away from experiencing toward showing others what we've seen? The relative values the mountain provided have shifted. Why are we making a judgment in the first place, saying this is now worse and the technology has therefore worsened the values?
Liv: It's shifting the locus of attention away from the present to the future.
Igor: But the strong liberal stance would be to say that doesn't matter - yeah, exactly - that it doesn't matter, or that in the end you have to optimize for freedom.
Liv: But the question is: are they making a choice? Because if you're born now, you grow up in the world of Instagram - you've never seen the former world.
Igor: I think that leads us back to what we were talking about before, with the market often finding a local optimum - or an optimum for the time. And sometimes it's not even an optimum; it's just driven into you, what you want, for some short-term benefit.
Peter: And there's so much wrapped up in just this example of a person taking a picture of a beautiful mountain at some overlook. We build an overlook because we all generally agree this is a gorgeous place to look at these mountains, right? Hey, that's great - at least there's some consensus that it's a beautiful view. And there's an interesting thing about taking pictures - I like photography, and I certainly take a lot of photos with my camera and my phone. Someone pointed out that part of the reason you do this is that it's your own lifestream. You look back and say: this is the point of view I had, this was the shot. Maybe it's got my kids bouncing in the corner, and my wife hunched over digging snacks out of a backpack - but that's my moment. No one else can take a picture of that mountain with my family in it, or with the corner of my van - the first van trip we did with that van. There are aspects of this that are personal to me. But even if all those things were removed, the reason I take a picture is so that when I look through my curated stream I can say: here's what I did. Collecting a memory to aid my own recollection is a different motivation from: let me do my hair up, get the selfie stick out, pose just this way, and do a little Instagram thing. You can tell I don't really do Instagram - I can't do whatever the latest dances are. But that is a performative thing. That's letting the camera push an identity onto me.
Liv: Or the audience, specifically?
Peter: It's my guess at the audience - because if I could actually see how little the audience gives a flip about me, I wouldn't do any of it. This is the arbitrage; this is the crab trap we were talking about earlier. This is what's actually evil and unethical about it: inside it is an exploitation of a loophole in the liberal philosophy. These things give users choices, but we nudge them, manipulate them a little bit. You see who's posted the most; you don't see who hasn't posted. You see how many likes there are, how many hearts. You get a notification when there are likes; you don't get a notification when someone has really engaged with your post and thought about it for a long time. What if it didn't count unless someone dwelled on your post for fifteen seconds? What would that look like?
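A toy sketch of the thought experiment Peter poses here: an engagement signal that only counts when a reader actually dwelled on the post. Everything in it (the field names, the 15-second threshold) is illustrative, not any real platform's API.

```python
from dataclasses import dataclass

DWELL_THRESHOLD_SECONDS = 15.0   # Peter's hypothetical attention gate

@dataclass
class Interaction:
    user: str
    liked: bool
    dwell_seconds: float         # how long the post was actually on screen

def counted_likes(interactions):
    """Count only likes that are backed by real attention."""
    return sum(1 for i in interactions
               if i.liked and i.dwell_seconds >= DWELL_THRESHOLD_SECONDS)

feed = [Interaction("a", liked=True, dwell_seconds=1.2),    # drive-by like: ignored
        Interaction("b", liked=True, dwell_seconds=42.0),   # real engagement: counted
        Interaction("c", liked=False, dwell_seconds=90.0)]  # read, but no like
print(counted_likes(feed))  # 1
```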
Peter: See, here's the thing about technologists who think technology is values-neutral: I don't think people who really work in user-facing areas can say that. Because the one cool trick of most social media, versus prior communication mechanisms, is that it was designed without a "To:" field. It's a messaging system without a "to." When you write an email, there's a "To:" - who are you trying to talk to? On social media, everything is broadcast by default. In fact, it's sometimes not even possible to limit who sees a thing; in other places it's possible, but it's a massive pain in the ass to configure. So imagine you stumble upon a medieval town where the only volume is 11, and literally everyone is screaming at everyone else all the time. That's insane - that is not a human way to interact with anybody. Why is everyone shouting, literally from the pub down to the mill? "In case we have something funny to say - everyone will like what we said!" You're crazy; what are you doing? But that's the one trick we learned from email lists - there were chain emails, people trying to go viral with massive forward chains, and some of them were funny, there was some good stuff, haha. But when we flipped to this broadcast-by-default mode, that user-interface choice fundamentally changed the nature of human communications. Imagine the next version of the Tesla comes out and it only goes 65 miles per hour, minimum. How would we design roads? If your car, as soon as it leaves your driveway, can only go 65 miles an hour - holy shit, wait, can we slow this down a little bit? No - because they get compensated based on how many miles the vehicle travels.
And that's what we have with these communications-technology companies. At a regulatory level, we don't have policymakers who even have the language or the philosophy to say: number one, the joint attention of a society is a commons, and it must be defended as one of our most sacred artifacts; number two, communication technologies are part of the infrastructure of a country, the way the nervous system is a deeply important part of a body. You can lose toes, you can lose chunks of flesh, but when you lose nerves - part of your spinal column - holy crap. We need to defend our communications infrastructure; every society needs to defend that. And it used to, right? You couldn't just put a pamphlet up. We know Martin Luther because he nailed a pamphlet to a church door. The ability to disseminate communication at scale was a sacred thing, and it was heavily policed, because of the impact it would have on human minds. But now we have this laissez-faire thing: everyone does what they want, says what they want, people are making choices - and who are you to say someone's making an invalid choice? The only way to push back on the liberal perspective there is to speak about the commons: hey, this is hurting us collectively, so this choice has to be mitigated in this way. But even then - people making choices, young girls deciding to spend a lot of time on Instagram and developing body-dysmorphia issues: is that a valid choice? We have to have some philosophical scheme for talking about this - when we can build technologies that outrun people's choices, are they really making that choice?
Igor: A new bad version of that which might happen: will the internet get polluted by so much LLM-generated text that you can no longer tell what's real text versus what was generated just to get your attention onto a website? It's already happening. Do we have any infrastructure in place to reduce that?
Peter: We don't, but I don't know if that's the framing of the problem I would choose. In a sense - Google being the easy example, as an index - Google took over search by just completely dominating it. I remember the earliest internet; there were many choices for search, and Google just cleaned house. And because there was that one control point, it became a scarce resource: the listing at the top - not even the front page, the top half of the front page - of Google became an extremely scarce resource for society. When we talk about the internet, look at how people experience it today: mostly through apps, through aggregators. It started with things like Digg and Slashdot and Reddit. How many de novo websites have you typed in? Maybe you two type in a few more than the average person. But the internet in the early days was not actually that anonymous. An email address was a thing - you had an identity. If you posted on Usenet, sure, there were some garbage posts, but Usenet was a global forum for people to talk about all sorts of things; that's how I got into both Linux and the Python community. People had an identity. You could be anonymous if you wanted, but for the most part you kept posting to the same places and people knew who you were. So speech mattered. But then it became a cesspool of anybody setting up a webpage for whatever - hopefully the Google SEO engine indexes you, hopefully you make a few bucks before your site gets taken down. And that's not what a commons of discourse looks like, where there's accountable speech.
Liv: Yeah - I think it was Daniel Schmachtenberger who talked about this, with this idea of technology not being values-neutral. If you order your stack with the technology first, which then drives the social structures, which then drive the memes, you end up with destruction - that's Moloch, basically. Whereas if you flip that - the memes or philosophies drive the social structures, through which you then build the technologies in service of those - that's the inverse; that's the win-win solution. So I'm curious: if you could design such a stack from scratch, what would be the philosophies or memes you'd want to imbue in it?
Peter: I should have expected that question, and yet I don't have a ready-made answer. I'm most in agreement with Daniel on the idea that we should use the hardest technology in service of what we actually want to achieve, and recognize that the values come first - we can build technology however we want in order to satisfy those values. In some cases we do that, but for the most part we are led by technologists building shiny new toys, and we just move in. I don't have ready-made solutions, but there are things we need to explore. What does it mean to be a digital human - a human with digital technologies that can enhance our memory, enhance our perception? If we were all billionaires who could have an army of software developers - hardware developers too - build us the perfect thing to help us be the most ourselves, what would that look like? Start with that for each person. Then part two of the question: for us together as a group of friends, as a family, as a community - what would we want the technology to do for us? I think you have to start with those first principles. A lot of times when people ask these questions, they come from a Luddite perspective: let's block out the technology; we want a school where no tablets are allowed. You know what? For all the hate I give a lot of this stuff, my kids learn a lot through the great content that's available online today, and engaging with it accelerates their learning. Of course, I don't just wire them up to a tablet, set it and forget it - there's intentionality around it. But right now it's very much an either-or thing: let's do all the human stuff, and then we'll figure out the tech on top of it. I think they should work together. Otherwise, you come in with an impoverished view of the human, philosophical, and social layers, and the technology just blows right past all of it. You have to infuse the technology into each of those layers and build them holistically, with technology in them.
Liv: Are there any technologies that you think have the potential to help people develop better psychosecurity - psychological security against manipulation? Because we're in this sensemaking crisis, an attention crisis, essentially; it's so chaotic that it's making people more vulnerable. We need to bolster people's psychological defenses against manipulation.
Peter: For some folks who are, let's say, older, and who find themselves going, "I don't want to be doomscrolling, but what else am I to do? I'm in it" - there are apps emerging that fight fire with fire: apps that remind you, hey, you've spent this much time doing this thing; or that let you tee up beforehand how much time you want to spend each day on these things, and show you how well you're tracking against it. Simple things like that can nudge behavior - we are talking about psychological technology and behavioral things. Obviously there are little tricks like setting your screen to monochrome, or setting hard rules for yourself: hey, no screens in the bedroom, I'm just reading a book. I'm not very good at that rule myself, but I can appreciate how wise a choice it is.
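A minimal sketch of the "tee it up beforehand" pattern Peter describes: declare a daily time budget per app, then check how you're tracking against it. The app names and minutes are made up for illustration.

```python
daily_budget = {"social": 30, "news": 20}   # minutes you intend to spend
actual_usage = {"social": 55, "news": 10}   # minutes actually spent today

for app, budget in daily_budget.items():
    used = actual_usage.get(app, 0)
    status = "over" if used > budget else "within"
    print(f"{app}: {used}/{budget} min ({status} budget)")
```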
Liv: We try to do the whole no-phones-in-the-bedroom thing. I call his phone "that bitch," because I'll sometimes turn over in the morning and -

Peter: - he's snuggled up with it.

Liv: No, that bitch is lying in between us. I'm like: oh, she's in our bed again, huh?

Igor: It's a rule that you agreed to, rather than one we both agreed to.

Liv: I didn't sign up for a three-way relationship.
Peter: There are technology tools - the apps and things like that - for people who really do want to mitigate this but find themselves sucked into that dopamine cycle. More broadly, for younger people it's harder, right? Because they have a bit less control over this stuff.
Igor: Their experience with all of these apps leads to many more connections that they define as meaningful in that part of their life - these things have really ingrained themselves as the substrate of their social identity. Again, going back to the market mechanism - which isn't only a financial market but also operates in life - you're more likely to be in a local optimum, and it's harder to get out of, because the trap is much larger.
Peter: It's a tech cult. It's hard to leave for the same reason it's hard to leave a cult when all your friends are in the cult, right? It's hard to leave an app when all your friends and the jokes and the people you genuinely like are there. So for younger people - and I'm a parent of a young teenager and a 10-year-old daughter who's just coming up to the age where technology gets very dangerous for girls - I think about how important it is for us to have good connections with other parents, the parents of her friends, so that we're all on the same page about how they, as a friend group, interact with technology. This is, to some extent, a little worked example of collective action against Moloch. If only one parent says "my kid doesn't use tech" - guess what: first sleepover, your kid is now addicted to really cool apps. They're going to come back and say, I want that app on my phone, I want that app on my tablet. You actually have to get together as a group: we're going to parent our kids in a way that encourages them to have experiences with each other. They can play board games, do art together, play instruments, run around outside, do sports, swim together - and we're going to limit the screen time in their group activities. That's the way for younger kids, before they get really sucked in. For older kids, once they're sucked in - man, that's hard. That is really hard, because as your kids become teenagers, you want them to make good choices on their own; you want to respect their privacy and their agency. But there are also a lot of hazards you want to be able to police as a parent in a digital world, and I would say there are not a lot of great solutions out there, unfortunately. I think you have to sit down and talk with them at this level: hey, for all the talk about extended adolescence, you're unfortunately also hitting early-onset adulthood through a digital mechanism. There's a lot you're going to see on the internet that you may not really understand or process very well - tell us when you're having those kinds of interactions. But yeah, it's hard, because parents themselves get sucked into this, right?
Liv: Yeah - and you've got to teach them about dopamine, essentially, and regulation.
Igor: It's especially hard, I would imagine, if they're at the phase of wanting to find things out for themselves rather than listen to you about everything. Without internal motivation, it's very hard to build a habit - they can defect from it all the time by themselves, because they're constantly alone with the phone. So the motivation needs to be internal.
Peter: For those teenagers, though, I think it's about value systems. Find who they look up to - whether it's a pop icon or whoever. Most famous people hate social media as much as they have to use it. So if you can curate those people going off about this stuff, that might be a helpful thing to talk about. In one of my blog posts I cited some fairly famous people just signing off Twitter. Cardi B was like: you're trying to make me into some Disney person - I'm not some Disney star, F you, I'm off this thing. There are others as well. This is the dark side of fame, micro-sliced like this and pushed onto people - and you do have famous people who talk about the dangers, about what bad stuff these habits have been for them. So you might find NBA stars or others saying: look, social media creates a whole dynamic, and I don't want any part of that dynamic - "it's so over," or something. That might be a way to connect with a more teenage crowd.
Igor: crowd. I don't think we've talked about how society grows or innovation improves if you publish and you talk about stuff and people see things, et cetera.
Peter: We think we're okay being bad at science right now. But what if we're not? What if we have to really get our shit together in the next 50 years, before the alien invasion fleet shows up, or another ʻOumuamua, or whatever? - You mean AGI? - No, not even AGI. Although, yeah, AGI could be the thing that actually reaches out on the electromagnetic spectrum and then gets influenced. Maybe our AGI is actually really bad - it's like the Wright Flyer - and the aliens have much better AGI out in the ether that then hits our shit. We have to really get our stuff together on science before our AGIs make first contact.
Liv: So do you think having fully open science would facilitate that?
Peter: I think so - if we actually did scientific collaboration at a global scale, as if it mattered. Right now all we can think about is, once we lose scarcity, who makes a better EV, blah blah blah - it's happy horseshit. But no: what if it's an existential thing for humanity to get really good at science, like, now? Because part of the dark forest is that when you first open your eyes and realize it's a dark forest, you go: wait, that means everything else is hiding. What are they hiding from? Oh shit, I'd better get smart really fast, right? There's some of that. And then there's the decline of the West - we think we've peaked. And it's the moment you think you've peaked, right? The fact that we don't see ourselves in competition: oh no, Putin invaded Ukraine, that's all bad and everything - but your competitor isn't Putin. Your competitor is some alien asshole a thousand times worse. What about that? These are the kinds of things we should be challenging ourselves with, but we don't - because in the West everyone gets, whatever, a chicken in every pot and an iPhone in every hand, so maybe we think we're good. And we're not good.
Liv: What is your epistemic status on aliens?
Peter: I'm pretty sure there are aliens out there. I think it's quite naive to think that we're alone. Now, I have no evidence for that, so I'm not going to say people who think we're alone are idiots - they have every reason to rationally believe we're alone. My feeling is basically this: how long have we known about neutrons and electrons? Neutrons - about a hundred years. Up until the 1920s, we thought galaxies were spiral nebulae inside the Milky Way. We don't know anything about anything. We don't have a working theory of gravity right now - why do galaxies not fly apart? "Oh, dark matter." Okay, but seriously: do we have a working theory of gravity? We do not. And if we don't have a working theory of gravity, how do we think we can even see what's out there? We don't really know anything about the cosmos. We've only known about other galaxies for about a hundred years. So I think we have a long way to go before we can declare whether or not our local neighborhood has stuff in it. We look in a very narrow slice of the spectrum. We finally have a telescope far away from the earth - James Webb, a nice big scope - but it's really small compared to the orbit of the earth around the sun. We could totally set up more advanced instruments with much longer baselines, with incredible aperture. What could we see then? James Webb just imaged a protoplanetary disk - beautiful, but very blurry. If we actually had long-baseline, advanced instruments...
Liv: Wait, what do you mean by long baseline?
Peter: So basically, if you're able to have multiple instruments, then you can actually do interferometry between them, and so you have a synthetic aperture.

Liv: I see. Like the large-array radio telescopes? The VLA?

Peter: Yes - the VLA, or not the Square Kilometre Array, the Very Long Baseline Array, whatever it's called. Because you have one on this side -

Liv: One on this side of the earth and one half the earth away, and they effectively work as one giant radio telescope.
Peter: Exactly. If we were to do multiple optical telescopes, having interferometry across multiple telescopes in space is very hard for radio.
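(The arithmetic behind "synthetic aperture," as a minimal Python sketch; the 1,000 km baseline is an assumed example, not a real mission.)

import math

def resolution_arcsec(wavelength_m: float, aperture_m: float) -> float:
    """Diffraction-limited angular resolution, theta ~ lambda / D, in arcseconds."""
    return math.degrees(wavelength_m / aperture_m) * 3600

# A JWST-class 6.5 m mirror observing at 2 microns:
print(resolution_arcsec(2e-6, 6.5))           # ~0.06 arcsec

# A hypothetical space interferometer whose 1,000 km baseline acts as the
# synthetic aperture, at the same wavelength:
print(resolution_arcsec(2e-6, 1_000_000.0))   # ~4e-7 arcsec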
Peter: Anyway, those are the kinds of things we can't even begin to do right now, because we don't have the tech, or they're just too expensive. But I have a fun little exercise. You're obviously very knowledgeable about astronomy, but just so people get a sense of how small we are and how little we've seen: suppose the Milky Way were the size of the continental United States, coast to coast, California to Maine, something like that. If the Milky Way were that big, how big across would the solar system be?
Igor: It's less than one in a hundred thousand of it. Actually, I'd guess the solar system is less than one ten-millionth the size of the Milky Way.
Peter: It's more like a hundred-millionth. Depending on how you define the size of the solar system, the orbit of Pluto would be about an inch and a half in diameter: from the edge of Pluto's orbit in to the Sun, and the same distance out the other side. Our solar system, out to Pluto at least, is an inch and a half out of the entire United States. And that's within just one galaxy, right? If you go out to the heliopause, maybe it's seven inches or so, but it's still not very big compared to the United States.
Liv: One of my favorite analogies: if you shrink the Sun down to the size of a tennis ball here in Austin, the next nearest star, Alpha Centauri, a mere four light years away, would be another tennis ball in Los Angeles.
Igor: That's fucked up.
Peter: It's fucked up.
Igor: And the Milky Way is 25,000 times that.
Peter: Yes, that's right.
Peter: Back to the Milky Way being the size of the United States: if our solar system is an inch and a half in diameter, Alpha Centauri is about two football fields away, right? And the Sun itself is, I forget exactly, some vanishingly tiny fraction of an inch. It's beyond microscopic, right?
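(The arithmetic behind these scale models, as a quick Python sketch with rough round numbers; at this scale the Sun works out to roughly a quarter of a thousandth of an inch.)

# Rough check of the scale analogies above.
AU_KM = 1.496e8
LY_KM = 9.461e12

milky_way_km = 100_000 * LY_KM        # ~100,000 light years across
usa_km = 4_500                        # continental US, coast to coast (rough)
scale = usa_km / milky_way_km         # model km per real km

pluto_orbit_km = 2 * 39.5 * AU_KM     # diameter of Pluto's orbit
alpha_cen_km = 4.37 * LY_KM           # distance to Alpha Centauri
sun_km = 1.39e6                       # diameter of the Sun

KM_PER_INCH = 2.54e-5
print(pluto_orbit_km * scale / KM_PER_INCH)  # ~2 inches, Peter's inch and a half
print(alpha_cen_km * scale * 1000)           # ~200 m, about two football fields
print(sun_km * scale / KM_PER_INCH)          # ~3e-4 inch, beyond microscopic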
Peter: That's how small we are. We can't see out more than a few hundred parsecs with parallax distances; beyond that it's all astrophysics and the cosmic distance ladder just to tell how far apart things are. And that's just our little star. We're not even in one of the dense arms; we're out on the Orion Spur. If you think about the kinds of energies, the different chemistries and electromagnetics available inside those denser environments... again, we're living in a very cold, small little place right now, looking out at a very big world.
Igor: The idea is, though, that maybe we would have been visited, right? And then we would have noticed the aliens. Or, looking out, we should be able to identify a certain set of signatures. For example, we can look at another planet like Mars, which has a much lower habitability rating than Earth, at present at least.
But if we look at a high-habitability planet and see no life there, and then we look at another thousand and see no life there either, then okay, a case starts to build. But we actually haven't looked at that many in much detail, right? And we...
Peter: We don't have that much information. We don't even really know how to look for evidence of civilization, or for what we'd consider conditions that could give rise to life.
Igor: Yeah. And what other types of signatures are we actually looking for to identify whether something has life there? One was, what was it: less of the energy that falls on the planet is reflected than you would expect.
Peter: Yeah, you could look for things like differences in albedo. But one of the main things people look at is chemistry. For an atmosphere of a given composition receiving a given amount of energy, you'd expect certain kinds of processes to be available. And there are certain chemical species, short-lived things, that are only actively produced by an ongoing life process. Those are the things you look for in the chemical signature. Actually looking for structures optically, or maybe in the radio frequencies, I think that's really hard, right?
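(A toy Python sketch of that chemical-disequilibrium idea: certain gas pairs, such as oxygen together with methane, destroy each other quickly, so seeing both at once suggests something is replenishing them. The pairs, threshold, and composition here are illustrative placeholders, not a real detection pipeline.)

DISEQUILIBRIUM_PAIRS = [("O2", "CH4"), ("O3", "CH4")]

def flag_biosignatures(atmosphere: dict[str, float], threshold: float = 1e-6) -> list[tuple[str, str]]:
    """Return gas pairs that coexist above a mixing-ratio threshold."""
    return [(a, b) for a, b in DISEQUILIBRIUM_PAIRS
            if atmosphere.get(a, 0.0) > threshold and atmosphere.get(b, 0.0) > threshold]

# An Earth-like toy composition (mixing ratios by volume):
print(flag_biosignatures({"N2": 0.78, "O2": 0.21, "CH4": 1.8e-6}))  # [('O2', 'CH4')]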
That's just really hard. Even we, a nuclear-capable civilization, would find it quite a bit of doing to put out a strong signal even 50 light years away.
Liv: Just because radio waves attenuate?
Peter: Right, the signal gets lost against the background, all that kind of stuff. And the sheer number of stars and possible planets to look at is unbelievable. When you guys come out observing with me next time, I'll show you with my night vision: when you look up at the stars, and that's just the local neighborhood of a few thousand light years, the sky is completely messy with stars.
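(Why a beacon fades, as a minimal Python sketch: received flux falls off with the square of distance. The transmitter power is an assumed, roughly planetary-radar-class figure, not a number from the conversation.)

import math

LY_M = 9.461e15

def flux_w_per_m2(eirp_watts: float, distance_ly: float) -> float:
    """Inverse-square flux at the receiver."""
    d = distance_ly * LY_M
    return eirp_watts / (4 * math.pi * d ** 2)

# A megawatt transmitter focused into a narrow beam (~2e13 W effective
# radiated power, assumed), heard from 50 light years away:
print(flux_w_per_m2(2e13, 50))  # ~7e-24 W/m^2: vanishingly faint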
Peter: So, back on the topic of whether aliens exist: in my flights of fancy, I wonder whether there could be sustainable energetic patterns in a place like the surface of a star. Could you have semi-stable patterns there that give rise to higher dynamical patterns? Could those exist at the energy scales available in a place like the Sun, maybe even inside the Sun? I don't know, but it would be an interesting thing to model, right?
Because of the sheer amount of energy available, that kind of environment makes this place look like a freezer. We're basically epsilon above Pluto in terms of the energy available on Earth to do anything. So sometimes, in my flights of fancy, I think about the stars really just singing to each other, right?
Just beaming out music and energy, as if solar civilizations were actually what matter in the universe.
Liv: A friend of ours, Bill Perkins, who was a previous guest, has a theory that what comes out of stars is something like a programming language: that their output is the programming that then hits the Earth in certain ways.
Igor: Used by the great simulator, basically, as the distributor of the updates.
Peter: Well, an interesting thing people don't talk about very much is that there's a strong correlation between the solar sunspot cycle and life expectancy at birth. Do you know about this? If you chart human life expectancy against the eleven-year sunspot cycle, there's a direct correlation. And it's a lot; it's not...
Liv: In what direction?
Peter: It depends on the year you were born, basically, when you were gestating in the womb, on whether there were a lot of sunspots or few. I think it's a direct relationship.
Liv: So if you're born at the peak of a solar cycle, when there's more activity...
Peter: I forget now which way it goes, but it's serious. You can look it up. Not that you can do anything about it now, right?
Liv: But we might be having children soon, and we're in a pretty hectic cycle.
Peter: I think less sunspot activity is when you get a longer life expectancy. But don't take my word for it.
You have to look it up, but the periodicity is amazing, across the whole globe, so there's definitely a thing there. I don't know if it's a simulator programming us, or programming the simulation, but there's definitely a lot going on, in terms of the patterns of energy coming from the sun, that we don't talk about very much.
We're really early in our understanding of solar processes, because again, we've only known about neutrons for under a hundred years. So come on, we're really new to all of this, right?
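(For anyone who wants to check the claim rather than take anyone's word for it, a minimal Python sketch; the file names and columns are hypothetical stand-ins for real sunspot and demographic data sets.)

import csv

def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def load_column(path: str, column: str) -> dict[int, float]:
    with open(path) as f:
        return {int(row["year"]): float(row[column]) for row in csv.DictReader(f)}

# Hypothetical files: one value per birth year.
sunspots = load_column("sunspots_by_year.csv", "mean_sunspot_number")
lifespan = load_column("life_expectancy_by_birth_year.csv", "life_expectancy")
years = sorted(set(sunspots) & set(lifespan))
print(pearson([sunspots[y] for y in years], [lifespan[y] for y in years]))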
Liv: At the same time, it feels like physics went through some real glory days in the early 20th century. We had quantum mechanics, we had relativity, all these big breakthroughs, and then things really stagnated. Would you agree with that? And if so, what's the main driver?
Peter: Yeah, we're not getting the kinds of revolutions we once got.
In applied physics, some really amazing and interesting things are happening, and they diffuse into other areas: chemistry, materials science. And as we get better at computer simulation, and with some of the approaches applying AI to these problems, we'll get really interesting stuff coming out.
But of all the physicists who graduated with me in my year, I think maybe only one is still doing physics. Everyone went to different fields: some to software, some to finance, others to biophysics and other areas. I hope I don't make any enemies with this, but I feel physics has to be a very experimental science. You have to be able to ask big, bold questions, and you have to come up with shockingly new models. Physics is just one level above metaphysics; there's not much between it and math, and then you're in the world of pure theory. What happened in the 20th century is that we built off the successes of theoretical physics in the nuclear era, so the pen-and-pencil approach, being a theorist as a full-time job, became part of the institution of physics. And then physics became, I don't know... we could get into the whole Moloch thing, institutional decay and institutional capture and all of that.
I think there's some of that going on. Eric Weinstein, there's nothing I can say that Eric Weinstein has already said about this particular topic, not to say I particularly buy into his theory of geometric unity or any of that, but just to say that the critique of The institutionalization and the sort of the theorization of modern physics, it becomes this vortex.
A lot of the people I know were caught in the proto stages of that: if you go into academia, you have to go down these prescribed paths. But we make breakthroughs by asking weird, off-the-wall questions and actually supporting people doing those explorations. So someday, assuming I make my millions, I'd like to fund some of that myself: put money behind people doing really interesting, weird work on the experimental edges.
Liv: So obviously there's a big debate playing out right now about whether AI development should be open sourced. I don't think the answer is binary; it's probably a spectrum, where in some cases it's good to do and in others it's probably not. But I'm really curious to hear your thoughts as an expert in open source software: how do the same arguments apply to AI?
Peter: Yeah. We should look at the fundamentals of what makes open source software a good and powerful thing, and at the ways it ceases to manifest those values when certain other dynamics come into play. And in the realm of AI in particular, let's put some context around current implementations: big models, primarily transformers, LLMs, built on lots of GPUs by big companies with lots of funding. That particular model of AI is different from software in that the artifact itself is a combination of things. There's the source code, which is mostly well known and sometimes readily available; the algorithms are well known enough that anyone can implement one. And then there's training data. You run this code over a lot of training data on a large number of machines, so you need a lot of infrastructure to coordinate those machines. You put some parameters into the training run, and out comes this artifact we call a model. Really, the model is the model weights, and that's what takes on the order of a billion dollars to generate from all this training data. So there are multiple different pieces in play when we talk about "open" and when we talk about "source."
It's no longer just "here's a recipe, here's a cake." There's a lot more involved. Is the source code itself available and open? In some cases yes, in some cases no. If yes, what about the training data? Is that available? Sometimes, but most of the time no. What about the scripts, and the parameters you used to invoke them for the training run? Most of the time no, sometimes yes; and sometimes it doesn't make a difference, since even a naive script could probably train something successfully. And then, at the end of the day, the weights that come out, which are literally a multidimensional matrix of numbers: are those freely available? Most of the time no, but in some cases yes. So there are multiple distinct aspects. It's no longer just: can I look at a text file of source code, and can I use commonly available compilers to turn it into an executable I can run? That was old-school open source. AI is new.
So when we talk about open source AI, I think there's effectively no leading, state-of-the-art foundation model that's actually open source by that metric. At most they'll make the weights available, but they won't make the training data available. So it's free to use, and you can look at the weights, but good luck making sense of the weights. It's a pile of numbers, and you don't know what training data they used.
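(To make that taxonomy concrete: a minimal Python sketch of the separate openness dimensions Peter lists. The class and the example values are illustrative, not an official OSI standard.)

from dataclasses import dataclass

@dataclass
class ModelRelease:
    source_code_open: bool    # training/inference code published?
    training_data_open: bool  # full corpus available?
    recipe_open: bool         # scripts, parameters, run configuration?
    weights_open: bool        # the trained weights themselves?

    def fully_open(self) -> bool:
        return all((self.source_code_open, self.training_data_open,
                    self.recipe_open, self.weights_open))

# A typical "open weights" release: free to use, but you can't reproduce
# the cake because the recipe and the ingredients are withheld.
typical = ModelRelease(source_code_open=True, training_data_open=False,
                       recipe_open=False, weights_open=True)
print(typical.fully_open())  # False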
Igor: It's probably helpful to start from the beginning: what were the initial values and benefits that society and individuals got from having open source software rather than closed source software? Then we can ask how to get those for AI, and whether we actually need all of these components or not.
Peter: In a sense, yes. The value to society was that many people could make software, but they were gatekept by the cost of the tools for making it: compilers, SDKs (software development kits), things like that. As the open source community built those tools, more people could make their own. It's a bit like carpentry: before you can have a really good carpenter's workshop, you need jigs, lathes, and saws that are accurate, all of that.
So the very first open source projects were things like this. The compiler itself was open source: GCC, the GNU C Compiler, and other basic tools. Once you have an open source factory floor with all these tools, you can make other open source things, and everyone can build on them and improve them. That's the open source software world. The benefit to society was that many people could come in and improve things. And as an individual, when you buy a computer, you're empowered to do whatever you want on it. If you buy a car, you can drive it anywhere.
Now, of course, that's our expectation as buyers of cars, because we know how to drive. But you could imagine a world where you only lease the car from the dealership, and the basic subscription only lets you drive to certain grocery stores and restaurants, while the premium subscription unlocks certain other restaurants. That would be a crappy way to have a car; it would suck. But if the roads were all tolled, and your car were only licensed for certain roads: that's how the world has closed over on software. To use certain APIs and servers you have to pay a subscription. A rent-capture, network-value-extraction world has evolved, even on top of an open source, openly developed software ecosystem, because the software is no longer where the value is.
It used to be that very few people could write software, so the few who did it in the open added a lot of value for everyone. Once everyone could write software, software ceased to be the place to capture economic value. The users' eyeballs were. The source code for things like Instagram or Reddit isn't really available, but the source code isn't the value. The value is the network of users, all the content that was uploaded, the connection to the phone, everyone having it on their home screen and using it out of habit. So even though open software started as a way of enabling user freedom, the users walked themselves right back into the crab trap.
And now they don't really care. You can run tools to extract all your data from Facebook; roughly zero percent of Facebook users ever do, because they don't care. They're using Facebook. And the way this dovetails with open source AI is the training data that makes these LLMs effective.
A lot of it is scraped from the open internet; some is proprietary data the companies pay for access to. But that data is where the value is, because it's what produces the valuable AI model. The LLM weights are really just a reflection of the training data: a condensation, a compression, a structuring and feature extraction of what's in the training set. If you have crap data, you get a crap model. If you have really high-quality data, you get a high-quality model. It's important for everyone to understand that even though AI is made using software, the value in AI mostly comes from the training data. And most of these companies, even when they release free models, are not really open source, because they're not giving you that training data.
Igor: In some cases, I imagine they also can't give you the training data. If they made a deal with the New York Times to license data, they can't then just publicly post the New York Times data afterwards, right?
Peter: And there's also legal liability if they admit to using some of the scraped data.
Igor: Yeah, exactly. And how you process the data is also very important, right? If you start with the open internet, a lot of crap data, and you want to distill it down to a better data set you'll actually train on, that's a very relevant step. And there's probably some secret sauce there that they don't want to give up in some cases. With open source AI, they're still debating and figuring it out, running workshops to work out what the standard should be.
Peter: How it will be defined, yes. I've been in dialogue with OSI on some of this and sat in some of those workshops, and the unfortunate thing is that when you really look at it, most of the models don't comply with all of those openness criteria. And, I think and hope I'm not speaking out of turn, but there's a little bit of a political problem: if you come out with a definition of open source and literally none of the models out there falls under it,
do you want to be that guy and say, hey, all y'all are actually not doing this? It would be helpful if there were at least one model that did all of these things. So I think there would be tremendous value to society in building one that was fully transparent, with everything available, to show what a fully transparent model could look like. Maybe its performance wouldn't match the vendor-produced, somewhat-open or freely available models, but it matters, because otherwise the definition suffers the critique: you've created an impossible bar that no one can meet.
Igor: But then again, that's why I wonder which benefits we're hoping openness will give society. Maybe it doesn't matter that no model qualifies in the purist sense, if the benefits still arrive. Say Llama still ended up fostering a community of tons of people who could contribute and develop their own AI projects, and jump-started a bunch of developers into the field, which it did, right?
Peter: And I think this gets to the crux of it, which is that openness is more of a consequence. The core value was freedom and enablement: the idea that we have a technology this powerful. A personal computer is an incredible piece of technology, man.
Holy crap, even a five-year-old laptop is an unbelievable piece of technology. And rather than leaning into educating and leveling up a society of individuals who know how to harness it to the maximum of its potential, we end up instead with a captured society of "just users."
So how does this apply to the openness of AI? Again, open source emerged as an enablement argument. As a user, I buy a computer just like I buy a car, and I should be able to drive it anywhere, right? It's like hacking the map system of that captured car to say: no, screw you, I bought the car, I get to drive it wherever, and I don't care what the dealership or the lawyers say. I bought a computer; I get to use it for whatever. There were a lot of arguments in the early internet days, the late nineties and early two-thousands, over DRM, digital rights management, for and against. Streaming video, MP3s: these are technical capabilities that exist on the machines.
And the copyright holders came down hard and got in cahoots with the telcos and ISPs to make sure that, hey, you can use your computer for anything, except pirating content, because we've got to make sure Hollywood gets paid.
So with AI it's the same kind of thing. If it's about enablement, if it's ultimately about: I bought a computer with 32 gigs of RAM and high-end hardware that can do all sorts of machine learning, then I should be able to use it to do whatever I want. And if I can, great.
But if I can't, why not? For instance: can I use machine learning on my machine to automatically navigate my Facebook, extract the posts I actually care about, and offer me a new interface? Because all of these web apps, this web 2.0 software-as-a-service, these B2C consumer apps, are all based on renting eyeballs. But if I use an LLM, whether it's Copilot technology from Microsoft or something in the browser, to go through all these different apps and present me a really cool, wonderfully humane, attention-serving interface to my information environment, completely stripped of the dopamine addictors: that's a very humane technology. And people like me, and a million others like me, want to build that kind of technology. Now here's the question. If I use an open model from a company that makes literally billions of dollars every month renting out those eyeballs, how long do you think I'll be able to keep using that model for that purpose?
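(A hand-wavy Python sketch of the kind of tool Peter is describing. The keyword scorer is a trivial stand-in for a local LLM call, and the export format and field names are assumptions, not any real Facebook API.)

import json

def score(text: str, interests: list[str]) -> int:
    """Placeholder relevance score; a local LLM judgment would go here."""
    return sum(1 for keyword in interests if keyword.lower() in text.lower())

def humane_feed(archive_path: str, interests: list[str], top_n: int = 20) -> list[dict]:
    """Rank posts from a personal data export by the user's own interests."""
    with open(archive_path) as f:
        posts = json.load(f)  # assumed: a list of {"text": ...} records
    # No engagement metrics, no recency pressure, no dopamine loop.
    ranked = sorted(posts, key=lambda p: score(p.get("text", ""), interests), reverse=True)
    return ranked[:top_n]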
Igor: Llama has a restriction on its use. The bar is pretty high, but the license says that if whatever you build on Llama passes 700 million users, Meta gets to choose whether to license it to you or not.
Peter: But I'll bet that by the time you get even 10 million Western users, because Western users of Facebook and Instagram are worth a lot of money, right? By the time you get even 10 million of those users to change their browsing behaviors as a result of this, you're going to be having a very different conversation.
Igor: Presently they're just trying to block the ones that already have the distribution, like Google.
Peter: Exactly.
Peter: And by the way, this is nothing against any of the people who work at Meta or on the Facebook team; they're just doing their jobs too. I get it, and I have good friends who work there; the work by the PyTorch team is fantastic. There are a lot of good people there doing good things for good reasons. I'm just saying we exist in a capitalist world, and Moloch puts its tendrils everywhere. One of Moloch's strongest taproots is the network extraction of the attention economy. If we chop that off with LLMs and AI, you're going to have a lot of people coming down on what you're building. Realistically, you have to model that kind of response.
Igor: So you said one core value open source tried to deliver was the enablement of people, and the freedom that comes with it. I thought there was also a large part, which we touched on before, around collaboration, the collective work together. Is that just a means to enablement, or is there something more to this collaborative way of building a community that ends up contributing to a project and editing and modifying it constantly?
Peter: It doesn't have to be either-or. Many great projects are just one or two people building something very interesting. Clay Shirky, who's very well known for thinking about crowdsourcing, wrote a book about it, and he describes all sorts of crowdsourcing, starting with step one: "me-first" collaboration. You build something for yourself, so you're the best end-user product tester for your own needs. Then, when you use something like the internet to connect with other people who have the same need, you've built a great product for all of them. So there's something real about the network effect of finding lots of other people with your need, who can adopt what you've built and make improvements. That makes it valuable for you to open your thing up rather than keeping it to yourself. But you don't have to do that for the thing you built for yourself to be effective.
Igor: On the topic of open source AI, one point often raised is that historically, open source has helped make systems more secure in some instances; intelligence agencies run part of their tech stack on open source software. The argument goes: if you care about AI security and AI safety, you should want AI open sourced too, because open source historically led to security. But as I thought about open source more, and I wonder if you agree, it seems the security benefit of open source software came from noticing vulnerabilities and reducing them: improving defenses. Whereas the worry with AI safety is more about the AI's ability to attack, or someone using it to attack. So it seems like a category error to apply the same argument. It's not the case that if someone puts out, sorry to keep using the example, Llama 5, and Llama 5 has incredible virology or pandemic-making abilities, we can just notice and patch it by fine-tuning. Sure, you fine-tune it, and the next version Meta releases no longer has that ability. But the attack vector still exists, because someone already downloaded the pre-fine-tuning version of Llama 5, or could even fine-tune the safeguard away.
Liv: So you're not democratizing security like with OSS. There you're democratizing the ability to add security; with AI, you're democratizing the ability to attack.
Igor: At least for attack capabilities. There's also the factor that using AI models in your tech stack creates some vulnerabilities, and others noticing them helps you reduce them. But what people who worry about AI risk mostly care about is its ability to be used as an attack vector. And I don't see how open source helps there. By putting out a piece of software, in this case a model with that attack capability,
now everyone has access to it. I don't see how security has improved.
Peter: There are many intersecting concerns here, but you're absolutely right. When people talk about AI safety, they generally mean the use of even a properly built model, one not specifically designed to be harmful.
If someone with the best intentions builds an AI model, does it still have latent information in it that could be abused for some purpose? A lot of AI safety is about that. But also: could someone sneak a side-channel thing in there so it does something nasty down the road?
Those are the topics of AI safety, and they're a very different set of concerns from what we typically mean by software security in open source. And this gets to a crucial distinction. With software, when you have source code, we talk about "open source" a lot, but we're really just talking about the word "open."
We don't talk about the connotations of the word "source." Why is "source" such an important term? Because the transformation from source to binary is a well-defined procedure: we use a compiler or an interpreter, and we can put tests on it. It's a well-defined, well-characterized transformation. It's one-way, sure; reverse compilation only sort of works, and you often can't tell what's going to happen. But it's still well defined. So if you can see the source, you can be somewhat guaranteed that the binary that comes out is well characterized. In the software realm, and this is the term to key in on, "correctness" is a sane, reasonable, coherent concept. With LLMs, that goes out the window. You can do all the correct things, but if your training data has something in there about putting glue on pizza, your model is going to tell people to put glue on pizza. That's just what you get.
Even more directly: these models, and I'll go out on a limb here, are not intelligent in the way we typically think of intelligence. When we hit them with a prompt and they emit a string of tokens, that sequence is very much a response to the prompt. This was the Gemini debacle, right? When they first released the Gemini image generator, they had a system prompt in there saying, hey, be diverse, and so it would generate pictures of Black Nazis. And it was: holy crap, nobody wanted that, but that's literally what the model does, because we're not good at operating these models yet. We don't really know quite what an LLM constitutes. We have some ideas now. But the fact that correctness is underdefined for LLMs is why a lot of the AI safety and social-impact conversations end up spiraling around each other. It's a completely different universe from software correctness.
So when we talk about the value of open source software in making certain vulnerabilities more transparent and easier to find and track down, that's all correct over there. The same does apply to AI models, but AI models have a whole other dimension of concerns beyond that.
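(To make the contrast concrete: a minimal Python sketch of why software correctness is checkable in a way model weights are not. The file paths are hypothetical.)

import hashlib

def sha256_of(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def reproducible(build_a: str, build_b: str) -> bool:
    """Two independent builds of the same source should hash identically."""
    return sha256_of(build_a) == sha256_of(build_b)

# Hypothetical artifacts from two independent build machines:
# print(reproducible("build_a/app.bin", "build_b/app.bin"))
# There is no analogous well-defined test that a pile of LLM weights is
# "correct"; the property itself is undefined.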
Liv: Why do you think so many open source advocates are being almost religious in insisting that all AI development should be open sourced as well?
Peter: I may not be one of those zealots, but I'm in strong agreement. Because I believe so much that we're on the brink of creating technologies that really can augment human cognition, I fundamentally believe in open access to that for everyone. If it doesn't take a civilization-level effort to build just one, if it can be a non-scarce resource that everyone can run, which I believe it can be, then we should give it to many people. If we give it to every schoolchild, wow, could it accelerate their learning. If we give it to every individual, with a well-designed system that helps them do sensemaking, holy crap, that would be a civilizational upgrade like we couldn't even imagine. So for me there's an aspect of this that is deeply, fundamentally liberal. Human freedom doesn't mean much if you've chopped off part of a human's brain so they limit what they imagine freedom to be. Likewise, if there's the ability to give a person an extra brain, they can have an even better vision of their potential.
We should totally do that. That's the most humane thing we could possibly advocate for. And I think the current incarnation, the current conception of LLMs, giant data centers, Jensen gets his cut, the billion-dollar thing, is completely bogus. It's in the wrong direction. We're going to build more effective model architectures. We're going to have cleaner training data sets that lead to extremely performant, very narrow models. We're going to build new architectures that sample from lots of small models, with explicit chains of thought, more observability, more explainability. That's where I think we're headed with all of this, and I think all of it will be able to run on an average home computer (there's a sketch of that kind of setup after this exchange). And with that vision, what a lot of us open source people are leaning toward is that this is one of the biggest anti-Moloch moves available.
This just chops Moloch's legs off, because so much of Moloch now emerges from capitalism saying: I need exponentials to extract. Oh my gosh, web-centralized client-server architectures, I can extract exponentials from here. That has destroyed a generation or two of people's sensemaking and psychology. We can cut all of that off by actually empowering people to have their own tools for this. So it's very much aligned with the core ethos of open source.
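(A toy Python sketch of the many-small-local-models idea mentioned above. The specialist model names and the keyword router are hypothetical placeholders; a real system might use a small classifier model as the router.)

SPECIALISTS = {
    "code": "tiny-coder",          # hypothetical local model names
    "math": "tiny-geometer",
    "philosophy": "tiny-socrates",
}

def route(prompt: str) -> str:
    """Naive keyword router; a small classifier model could sit here instead."""
    lowered = prompt.lower()
    if any(word in lowered for word in ("def ", "bug", "compile")):
        return SPECIALISTS["code"]
    if any(word in lowered for word in ("angle", "proof", "equation")):
        return SPECIALISTS["math"]
    return SPECIALISTS["philosophy"]

print(route("Why does this function not compile?"))  # tiny-coder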
Igor: We need to build the defenses first. Once the defenses exist, I'm happier to distribute the attack vectors to everyone, because they're no longer that effective. So time actually buys us something. For example, suppose someone builds a model trained on a large virology data set, capable of designing novel pandemic-capable pathogens that could then be ordered from a DNA synthesis printer somewhere.
There's a difference between having that capability distributed immediately, versus knowing it exists and saying, oh shit, we need to act against it and build the defenses, because we know in three, five, ten years everyone will have it. Those three, five, ten years really might matter for some of these large-scale...
Liv: Defenses.
Peter: And large-scale attacks. Taking a civilizational view on this conversation:
Right now, I agree with you that there's a lot of polemic and hyperbole being thrown from the extremes. There are real dangers if you just give everyone the ability to figure out how to do certain things with large-scale negative impact. But the inoculation against that is not necessarily to gate the knowledge; it's to ask how we design society so that people don't actually want to do that. Western liberalism gets us to a certain point, fighting against feudalism and tyranny in the basic ways. But how do we build a scaffold that supports people in desiring to build positive things on the way up? That's different from digging ourselves out of a hole.
Maybe that's an abstract metaphor, but my point is: think about what some people did with WhatsApp. There was mob violence happening, I think in India, or maybe it was the Rohingya situation. In one instance, somebody dropped off a backpack full of cheap, simple phones preloaded with WhatsApp in a remote village where they'd never seen that kind of technology. And people were like: oh, this is great, we can message each other, this is alien technology to us. And then messages started coming down through the WhatsApp groups: hey, there's a village over there where they just beat up and raped a woman; you need to go take those people out. When you have that level of technology, and it was not a high level of technology compared to what we have, and you drop it on a society or micro-group that doesn't have the ability to absorb it, it's literally text messages from God. Why would they question it?
So just with text messages on WhatsApp, you can incite genocide. Should we ban text messages and WhatsApp? No. In most places where people have WhatsApp, they also have some understanding of what the technology is and isn't. And I think it's similar here. This is the civilizational point: maybe for us to get to the next level, we need a civilization of people who are really good at using AI technologies, augmenting themselves, and building better versions of themselves. We need the culture, politics, and social infrastructure that supports people in that. If our building block is a person who might just be a sociopathic simian, and we can't give them an LLM because they'll just kill everything or rape everything, then yes, you're right, we can't hand it out; that's very dangerous. But what if the actual skill test for us here is not just building an LLM or an AGI, but being a civilization that can equip lots of people with augmented intelligence like this? Then the governance hurdle, not the technological one, is the real hurdle.
Peter: So I think it's worth taking on the governance hurdle. How do we keep from killing each other with this stuff? How do we build a society whose culture can find people with the beginnings of sociopathy, or support people's development so they don't become sociopaths and psychopaths?
How do we do this in a way that doesn't require a homogeneous culture across a hundred million people, but actually supports small groups and tribes where people can take care of each other, build real relationships, and build real culture?
It's almost like we have to get past the industrial mode of relating, past the capitalist mode of production, commoditization, and alienation. We need to be actively thinking now, collectively, about how to protect some of these infrastructure things. With custom-printed viruses and pathogens, at least we know where the labs are, who does that kind of work, who sells the precursors; we can tell them, you need to level up your act. Then there's critical infrastructure. One of my friends said: if I were given the task of spending a hundred thousand dollars, or maybe a quarter million, to take down the entire Eastern seaboard's power grid, I could do it. So let's take that as a real task: defend our infrastructure so that no random smart dude with a quarter million dollars can take down the Eastern seaboard's power grid. We need to do that anyway. These are the kinds of things where it's not actually about AI; AI is just a forcing function for getting serious about this other stuff. And maybe the thought is: wouldn't it be easier to stop AI from being open source in the first place, or at least match its timeline to the timeline of securing these things? I can appreciate that. But here's the problem: we don't have a choice, because our adversaries will be more than happy to release models that have this information in them.
So we don't get to choose whether this happens. That's the hard conversation we have to have. I see people here in the United States talking about trying to stop open source AI. Great. How did your meeting with Xi Jinping go? Did he agree to stop developing models and releasing certain versions of them? I don't think he's going to stop. And Vladimir Putin can throw a few billion dollars at some proxy actors; it doesn't take that much expertise to train one of these models.
Igor: Is Xi Jinping motivated to release open source models, or just to have them?
Peter: The Chinese are already releasing quite effective open models, built in large part on Llama, I thought, though I don't know exactly what they're all built on. Look, half the authors on these AI papers are Chinese. Many Chinese Americans work in American labs, but clearly the country has people capable of building these things. And if you look at their latest releases, the video one, whatever it's called, there's no shortage of talent and computation there to do these things if there's state-level interest. So I think it would be naive to imagine we're the only ones in charge of our destiny on this matter. We have to think very proactively about what will happen.
Igor: The question is what we need the world to look like at the point when there's distributed access to these mass-casualty tools, unfortunately. And that's a different perspective from saying those tools just won't exist. I often feel this conversation gets dismissed as a doomer perspective, as if these capabilities will never arrive. My take is that that's just being pessimistic about what AI will be able to do with more effort from the smartest people in the world and the most funding in the world. I really believe in human ingenuity; many more capable things will come out of AI tools. So the question is what the world needs to look like, when these bio capabilities, for example, are distributed, such that
we are protected. And frankly, there is a bit of a choice there. OpenAI has a thousand employees; DeepMind had maybe fifteen hundred; with all the other labs, more. But it's still only thousands of people doing this work. And whether 10 of them or 500 of them work on defense-biased protection mechanisms for the world afterwards does change the trajectory in relevant ways, right? You can choose to one hundred percent accelerate all dual-use capabilities and hope it balances out over time, or do eighty percent dual-use and twenty percent defense-biased work, or something like that, to the degree it's even possible to separate them.
Peter: I think the most immediate concerns are in the most dangerous asymmetric areas, where information in the form of an LLM could lead to mass-casualty outcomes; those are mostly in bio and chemistry. And for some of the basic chemical things that can do a lot of damage, you're more likely to kill yourself playing with them than to hurt anyone else.
Igor: Bio is really bad.
Peter: Bio is pretty bad,
Liv: because it's self-replicating.
Peter: It can be self-replicating, yeah. And an actual living human being operates on a knife-edge balance of equilibrium; it doesn't take a whole lot.
A few milligrams of neurotoxin and you're gone. So yes, there's more danger there. But to your point about the safety researchers at places like Anthropic, OpenAI, or DeepMind: I think we're now getting much better tools for interpretability, for looking at what's in a data set, even at what's in a set of weights. If we just get some weights, can we evaluate what the feature matrix looks like and extract some things? You could absolutely imagine a national effort to build a very good expert model, with a lot of really good chemistry and bio in it, informed by some of the best chemists and bioweapons researchers. That would be a private model running on secure government infrastructure, used to drive evaluations of other models: are these features present, are these things mentioned? That's the kind of thing we could be building from a safety perspective.
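(A toy Python sketch of that evaluation idea. The hazard scorer is a trivial placeholder where the private expert model's judgment would sit, and the term list is illustrative, not a real benchmark.)

HAZARD_TERMS = {"transmissibility", "precursor", "synthesis route"}  # toy list

def expert_hazard_score(answer: str) -> float:
    """Trivial placeholder where the private expert model's judgment would go."""
    hits = sum(term in answer.lower() for term in HAZARD_TERMS)
    return hits / len(HAZARD_TERMS)

def screen_model(candidate_answers: dict[str, str], threshold: float = 0.3) -> list[str]:
    """Return the probe prompts whose answers look hazardous."""
    return [prompt for prompt, answer in candidate_answers.items()
            if expert_hazard_score(answer) > threshold]

# Usage: run a fixed battery of probe prompts through the model under test,
# then pass the {prompt: answer} pairs to screen_model for flagging.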
Peter: Right now, the fact that these are black boxes is part of the danger, or not the danger exactly, but part of what makes this hard to talk about. I think we're coming to a point where we'll be able to extract some things out of them. And there's another ecosystem dynamic: once you can put open models out there that don't have these dangerous capabilities, but that do all the fun stuff, help people learn geometry, help people reason about philosophy, those can outcompete the other kind. It also makes it a little easier to spot the people building models with more dual-use capabilities, and the GPUs people use to fine-tune their own stuff might mostly get pointed at the safe open models. There's a lot you can do once you can say: here's the problem, let's whittle down on this particular set of capabilities. Maybe these are all very hand-wavy arguments, and I'm not the deepest on the biggest doomer versions, the gray goo, the nanotech, the bio. But I do feel we're coming up with tools for doing evals on this stuff that could really look through it. And we should be securing the actual physical tools for biochemistry and the like anyway, knowing this stuff is coming.
Igor: But back to the conversation about openness. One active research area is interpretability of the black box. Say you've identified where the virology understanding sits. Can you turn that capability off in a targeted way after training, rather than training without the data in the first place? Because if you shrink the data set, the model will probably be a less capable brain in all sorts of ways; but after training, you can potentially switch the capability off. The trouble is, if you release that entire process openly, someone can just redo the whole thing without turning it off, so you've handed the capability out again. So I think there are certain things where, before the defense is in place, I'd say: this is one aspect that shouldn't be openly distributed. We don't have open access to every technology of every kind. We don't have open access to nukes, I know that's always the example. And we shouldn't have open access to smallpox either, even if in practice, I suppose, we sort of do. So, unsurprisingly, I'd want a bit more subtlety about which few capabilities we feel okay gating.
Peter: Yeah, and it's certainly better if you have access to the training data, so you can say: it never learned anything about virology, so there's no virology content in there. It doesn't have the concepts of what these organic molecules are or what these protein sequences do. It just doesn't know, and it wouldn't need to know if it's going to help me write code, right? A coding assistant. When people put those data sets together and create a fine-tune like that, that's certainly possible, and I think it will be done. That's why it's so important to invest in building the interpretability, the model monitoring, and the evals that can gauge how much knowledge a particular model has of a given domain, and start tracing this stuff down. It's not a foreign concept for us to regulate certain things: with guns we have certain regulations, and with airplanes you have to get rated for aircraft of a certain complexity. And for models: most of the applications, most of the things that will make society better, we could run on models that have no knowledge of virology.
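(A minimal Python sketch of the data-side approach Peter describes: filter the corpus before training so the capability is never learned. The keyword blocklist is a toy assumption; a real effort would use trained classifiers, not keywords.)

EXCLUDED_TOPICS = {"virology", "pathogen", "gain of function"}  # illustrative

def keep(document: str) -> bool:
    """True if the document mentions none of the excluded topics."""
    lowered = document.lower()
    return not any(topic in lowered for topic in EXCLUDED_TOPICS)

def filter_corpus(docs: list[str]) -> list[str]:
    kept = [d for d in docs if keep(d)]
    print(f"kept {len(kept)}/{len(docs)} documents")
    return kept

corpus = ["How to write a Python decorator", "Notes on pathogen culture methods"]
print(filter_corpus(corpus))  # drops the second document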
Igor: Yeah. And undoubtedly, given that it's enabling intelligence you can use in all sorts of ways, which you'd expect to raise productivity across all sorts of domains, you do ultimately want it distributed.
Peter: And that's just the current incarnation, which is not AGI. When you get to really superhuman intelligence and what it could do for people, all of this is baby steps in preparation for when we have access to that kind of capability. We need to get good at this. We have to start getting good at this now. And it starts with being able to have a conversation about values and technology.
Igor: You mentioned superhuman abilities, and you also described a version with lots of narrow AIs helping people, maybe running locally. What's the world you're picturing in, say, ten years? Where do you think we'll be?
Peter: I would hope that in ten years we've gotten to a mode where people are really solving for what's meaningful to them, and they feel supported by technology in doing that, both for themselves and in connecting with their families, their groups of friends, their individual cultures. I'm really hoping we can lean on some of the emerging AI capabilities to break free from the way Moloch has hijacked people's psychology in the current mode, especially with the stresses on global finance, and with people becoming more conscious of climate change, overproduction, and overconsumption. And even just more and more podcasts like yours, talking about win-win and not settling for zero-sum; more generations coming up through social media, overconsumption, fast fashion, hype consumption, whatever, and coming out the other side saying: I want something more.
In a sense, we're all trying to rebuild meaningful civilization from the shadows of whatever the fuck it was the boomers left for us. So I'm hoping that LLMs, or AI, or some of these technologies can help us be more thoughtful and intentional in our interactions with the world, with each other, and in how we regard ourselves. Because with a lot of the early deployments of all this stuff we've built, PC technology and smartphones, all these amazing technical feats, we've built ways of creating more black mirrors, colorful dystopias, ways of making us into happy little clockwork oranges, right? But really we want to be humans expressing our full humanity, and we want technology to put the horse in front of the cart: humans in front of technology. I think we can do that, but we have to be intentional about it. It's not going to happen automatically. And it's certainly not going to happen if technologists keep pretending that technology is values-neutral, which I'm flabbergasted anyone can still argue in this day and age.
Igor: Yeah, I like that vision as well. Right now AI is mostly used for things like improving advertisements, for the algorithm. But at the point when it truly enables us, as you said earlier, and really helps collective sensemaking, you could see how added intelligence would work: some tasks of collective sensemaking outsourced to trusted AIs that you can be sure actually optimize for the thing you want them to optimize for, around certain topics. That'd be a pretty good future.
Peter: Yeah. This is the first time we can really have technology that helps us discover for ourselves what's best, as opposed to this treadmill of always getting us addicted to what's new. Moving from what's new to what's best, and giving us agency in that decision. I think this will be the first technology in a long time that will come out and tell us: hey, it's time to turn off the computer and step away from the app; you need some of your own time. Imagine your assistant telling you to touch grass, right? That'd be great. None of the current apps will tell you that.
Liv: Definitely not. Awesome. Thank you so much.
Peter: Thank you so much, guys. This has been absolutely a pleasure and a blast.
Liv: Thank you, Peter. That was great.
Liv: So there we go, folks. A huge thank you to Peter for letting us dig into his amazing mind.
If you enjoyed this, then I highly recommend you check out some of Peter's writing. I'll link to it in the show notes. He's written some really incredible blogs digging into a lot of these topics in more depth. He's also a regular on Twitter, or X, whatever it's called, so give him a follow on the link below too.
And as always, if you enjoyed this, please share it with whoever else you think might do so too. And until next time, may your next two weeks be full of win-wins.
The Win-Win Podcast is an exploration of the games that drive our world. Created by poker champion and philanthropist Liv Boeree, it explores solutions to humanity's biggest issues through conversations with leading thinkers.
Incentives make the world go round. But as the stakes get higher, are the games we're playing taking us where we really want to go?
We can never know for certain which path the future will take, but we can change the likelihoods of the paths we want. How can humanity harness the power of both competition and collaboration to unlock truly abundant and sustainable futures?
Liv is joined by top philosophers, scientists, gamers, artists, technologists and athletes to understand how competition manifests in their world, and find solutions to some of the hardest problems faced by humanity.
Win-Win doesn't just talk abstract concepts and theories; it also digs into the guest's personal experiences and views, no matter how unusual.
This isn't just another culture war podcast. If anything, it is the opposite; seeking better coordination mechanisms through synthesis of perspectives.