Beauty At Work

The Promise and Peril of AI with Jaron Lanier, E. Glen Weyl, and Taylor Black - S4E7 (Part 1 of 2)

Brandon Vaidyanathan

Jaron Lanier, E. Glen Weyl, and Taylor Black join Beauty at Work for a wide-ranging conversation on artificial intelligence, innovation, and the deeper questions of meaning, faith, and human flourishing that surround emerging technologies.

Jaron Lanier coined the terms Virtual Reality and Mixed Reality and is widely regarded as a founding figure of the field. He has served as a leading critic of digital culture and social media, and his books include You Are Not a Gadget and Who Owns the Future? In 2018, Wired Magazine named him one of the 25 most influential people in technology of the previous 25 years. Time Magazine named him one of the 100 most influential people in the world. Jaron is currently the Prime Unifying Scientist in Microsoft's Office of the Chief Technology Officer, a title and office whose acronyms together spell out "OCTOPUS," in reference to his fascination with cephalopod neurology. He is also a musician and composer who has recently performed or recorded with Sara Bareilles, T Bone Burnett, Jon Batiste, Philip Glass, and many others.

E. Glen Weyl is Founder and Research Lead at Microsoft Research’s Plural Technology Collaboratory and Co-Founder of the Plurality Institute and RadicalxChange Foundation. He is the co-author of Radical Markets and Plurality and works at the intersection of economics, technology, democracy, and social institutions.

Taylor Black is Director of AI & Venture Ecosystems in the Office of the Chief Technology Officer at Microsoft and the founding director of the Leonum Institute on Emerging Technologies and AI at The Catholic University of America. His background spans philosophy, law, and technology leadership.


In this first part of our conversation, we discuss:

1. How aesthetic experience shapes worldview, imagination, and intellectual vocation

2. The historical rivalry between artificial intelligence and cybernetics

3. The danger of treating AI as an object of faith or a replacement for human meaning

4. The psychological and spiritual costs of assuming people will become obsolete

5. A tension between two different modalities of beauty


To learn more about Jaron, Glen and Taylor’s work, you can find them at: 


Books and Resources mentioned:


This season of the podcast is sponsored by Templeton Religion Trust.


Support the show

(intro)

Glen: One phrase I've been using a lot recently is: be the superintelligence you want to see in the world, you know?

Brandon: That's great. That's great.

Glen: Like, you know, corporations are superintelligences. Religions are superintelligences. Democracies are superintelligences. By every definition of superintelligence that's been given, they're all superintelligences.

Taylor: I think this technology throws into sharp relief our ability to understand how we really think. We've found that a lot of the success in using this technology for productivity comes down to certain metacognitive strategies: using it as a helper rather than having it do your thinking for you.

Jaron: The great fallacy of believing that computers can become arbitrarily smart is this idea that, relatively, people will not change, will not be creative, will not move. What a horrible thing to believe.

(intro)

Brandon: I'm Brandon Vaidyanathan, and this is Beauty at Work—the podcast that seeks to expand our understanding of beauty: what it is, how it works, and why it matters for the work we do. This season of the podcast is sponsored by Templeton Religion Trust and is focused on the beauty and burdens of innovation.

What drives the human quest to innovate, and what makes technology beautiful or burdensome? In this episode, I am joined by three remarkable thinkers—Jaron Lanier, Glen Weyl, and Taylor Black—to explore the beauty and burdens of innovation through the lens of artificial intelligence.

Jaron Lanier is a renowned computer scientist, musician and author, often described as the founding father of virtual reality. He's known for his critiques of digital culture and social media. His books include You Are Not a Gadget and Who Owns the Future? He works as Prime Unifying Scientist at Microsoft's Office of the Chief Technology Officer, a title that spells out the acronym "Octopus."

Glen Weyl is Founder and Research Lead at Microsoft Research's Plural Technology Collaboratory, Co-Founder and Chair of the Plurality Institute, Co-Founder of the RadicalxChange Foundation, and Co-Founder of the Faith, Family and Technology Network. He's also co-author of two books, Radical Markets and Plurality.

Taylor Black has a background in philosophy, law and entrepreneurship, and is Director of AI and Venture Ecosystems at Microsoft, and the Founding Director of a new Institute on Artificial Intelligence and Emerging Technologies at the Catholic University of America.

Now, I want to be clear that each of my guests is speaking solely in a personal capacity. In this podcast, their views are their own and do not represent Microsoft.

In our conversation, we're going to ask: What makes AI beautiful? When does that beauty turn into idolatry? What happens when technology becomes a religion, and how might ancient wisdom help us to design technologies that serve human flourishing? Let's get started.

(interview)

Brandon: All right, guys. Welcome to Beautiful Beards, I guess. Glen, sorry, you didn't get the memo.

Glen: Oh, no.

Brandon: For our listeners, we've got three bearded guys and Glen. No, welcome to Beauty At Work. This season, we are exploring the beauty and burdens of innovation. I wanted to have the three of you on this call because you've written and said some really insightful things about this topic, and I think it's really crucial for us to explore it.

But before we jump into talking about innovation and AI and the beauty and burdens of AI, I want to ask you about beauty. Specifically, I want to have you all recount a memory of beauty—anything from your early lives, anything that comes to your mind. Is there a memory of a profound encounter with beauty that you recall? Perhaps, Taylor, I'll start by asking you.

Taylor: Yeah, certainly. So when I think of beauty, of course, I think of the natural world. I was fortunate enough to grow up in the Seattle area, so I had a lot of that growing up. But the thing that came to mind when you asked that question is Tolkien's writing, with regard to an almost immediate experience of beauty, particularly at the beginning of The Fellowship of the Ring, where he describes the idyllic nature of the Shire. Reading that growing up, with all the tangible examples of natural beauty around me, helped shape my worldview in such a way that I later studied philosophy in order to find an understanding of the world as rich as Tolkien's writing about the natural beauty of the world. That's my answer. There are several places in Tolkien where he talks about that rich beauty, mediated by language, that I had already encountered out in the world.

Brandon: Wow. It's interesting because there are some interesting tensions with technology and Tolkien's own views. Maybe we can get into that. Glen, how about you? What strikes you? What comes to your mind?

Glen: I remember when I was in my early teens, I went to Berlin, to the Pergamon Museum, and I saw the Ishtar Gate of Babylon. What I remember most vividly about it is that I had gone to various historical sites and seen ancient things, but they had either been ruins or very imperfect recreations of various kinds. This was the first time that I came to grips with the notion that people in very, very distant times and places had things of profound awe, encounters with awe that would have touched me had I been there. And so I felt an empathetic connection to their sense of awe and beauty that I had never quite managed, at that age, to reach through imagination by other pathways. I think that definitely engaged me with history more profoundly.

Brandon: Wow. That seems to resonate with your work on plurality and that recognition of the diverse ways in which we can all be attuned to something beyond, right? Jaron, what memory comes to your mind?

Jaron: The one that came to my mind when you asked the question was the first time I heard William Byrd's motet Ave Verum Corpus, which, if you're not familiar with it, go listen to it. There are a lot of recordings, so I'm not sure which one to recommend.

William Byrd was a composer who lived in London at the same time as Shakespeare, although, apparently, they never met. He was the other renowned artist from that milieu, and he was part of the underground Catholic scene. Motets are chamber choir pieces designed to be soft enough that they won't be heard by passersby, so it's just six voices, not a whole choir. There was a school of Catholic composers at that time who just — I don't know what was going on with them, but they achieved some kind of incredible synthesis of serenity with the stirrings of this Western tendency to swell and build, to have a structure, not just a constancy — to tell a story in music, which is a particular thing that started to happen in Western classical music. It's also just, I don't know. You have to hear it. It's the most luminous polyphony that's ever been written.

Glen: William Byrd's Motet, and what did you say after that, Jaron?

Jaron: It's called Ave Verum Corpus. It just happens to be the text that the motet was set to. But give it a listen. Yeah, six parts.

Brandon: What struck you about it when you first heard it? What makes that resonate with you till today?

Jaron: So let's say there are some types of spiritual music that are trying to — I say 'trying' because I don't think anything human is ever perfect; maybe nothing is ever perfect — approach some sort of serenity, some sort of still place that's outside of time and process and yearnings. But then there's another kind that's very earthy. Like, oh, I don't know, Yoruba ritual music or something. A lot of our Jewish music is like that.

What's amazing about Ave Verum Corpus is that it's both, which is not something you come upon that often. Like I say, it's got this very human sense of swelling and yearning, and yet it also has an unmistakable calm center. Also, there's a kind of purity. In the Western tradition, what we do is combine the musical flow with structures that are really unique to the West: things like polyphony, where we have multiple lines, multiple things going on at once that go together, and chord changes. All of that stuff is the unique signature of Western music. What that tends to do is pull the music away from being perfectly in flow and perfectly in tune, because you have to reconcile these abstractions of structure with the musical flow. That's our problem here in the West. I don't think any piece of music has ever succeeded as well with that, until maybe some things in the jazz tradition; there are kind of interesting things in the jazz tradition that do it. But Ave Verum Corpus, check it out. It's just wonderful. It's short. It's a radio-length piece.

Brandon: Right. Yeah. What strikes me is, I suppose, that kind of integration, or maybe unity, that you're alluding to there, which is, of course, part of your title at Microsoft: Prime Unifying Scientist. I'm curious about this.

Jaron: Yeah, I think Glen came up with that. It's a long story. But yeah.

Glen: Did I come up with it, Jaron?

Jaron: You might have. I mean, alright, so the idea—

Glen: I came up with the idea of being octopus of some form, and then I think you figured out what it stood for, or something like that. So, yeah.

Jaron: You know what? Okay. Yeah, what happened was, I report to Kevin Scott, who's the Chief Technology Officer. So I'm in the Office of the Chief Technology Officer. Kevin had, at one point, said to me, "I would name you chief scientist, but we already have our chief scientist, who's Eric Horvitz, so you need to be something else." And then Glen was saying, "Well, since it's OCTO—you've been interested in cephalopods, you've studied them and whatnot—you should be octopus." Then there's this question: what is the "PUS"? There were a bunch of candidates, and Kevin chose Prime Unifying Scientist.

Glen: They call this a backronym in the trade, when you come up with the acronym first and then what it stands for.

Jaron: Backronym, yeah. But I think Prime Unifying Scientist might have been yours. I mean, Kevin chose it. I'm good with it. I think a lot of my thing at Microsoft is being sort of both in and out of it, and having a weird title is good for what I do.

Brandon: So it seems pretty apt in that sense. I mean, unity is an interesting aesthetic ideal, you know? It's a transcendental and so on. But it's also behind the grand unification theory. There are ways in which it is something that absolutely—

Jaron: Yeah, the grand unification theory does not exist, by the way. So we have to be careful.

Brandon: Right, right.

Glen: It really hits Jaron in the gut.

Jaron: I've worked on that one. It's very—

Glen: When you hear people talking about it, it really hits you in the gut, right, Jaron? G-U-T.

Jaron: Yeah, the thing is, you know, I also work in that area. People have been trying to do that for more than three quarters of a century. It's just a tough one. We just haven't found it.

Brandon: Yeah, I know. But it is a powerful ideal. It does seem to be something that motivates a lot of people, and it has disillusioned a lot of people too. Well, I want to ask you, Jaron, since you've been involved in this field, if we could jump into talking about innovation and technology, particularly AI. You've been there since the earliest days, with Marvin Minsky and the others who helped define the field.

Say a bit about what the atmosphere was like in those early days. I suppose, what was your experience of this field? I think you've had qualms about the term 'artificial intelligence.' What was your relationship like to some of those early pioneers, and how did the field evolve in your sense?

Jaron: Well, this is a whole long tale we don't really have time for. But the briefest version is, I was very fortunate when I was quite young to have Marvin Minsky as a mentor. I wasn't his student; actually, he was my boss. I had a research job as a very young kid in a research lab at MIT, because I went to college early and just ended up there. It was a weird thing. But at any rate, Marvin was part of a sort of academic gang with a certain idea about what computers should be, one very informed by his interactions with Golden Age science fiction writers, especially Isaac Asimov, along with some others. Marvin was a real believer in computers as these things that would come alive and become a new species. A lot of the mythology and terminology, and just the personality of AI culture, really stems from Marvin as the prototype.

But the term AI actually had come about as part of a rivalry between academic gangs. In the early '50s, there was an intellectual and computer scientist named Norbert Wiener, who was incredibly prominent and was considered one of the really major celebrity public intellectuals. He had used a term to describe where he thought computers would go, which was "cybernetics." The idea in cybernetics is that you don't think of the computer as a thing that stands apart and has its own reality; you think of it as part of an interactive system. He was saying that the best way to think about the computers of the future is not like the Turing machine, this monolithic thing that's defined on its own terms, but instead as something like a network of thermometers, a network of little measuring devices that measure the world and measure each other and form this big tangle.

Mathematically, the two ideas are equivalent. But the Wiener way of doing it doesn't give the computer its own separate reality; instead, it considers the computer part of a connected thing. "Cybernetics" comes from the Greek for navigation. The idea is that, by interacting with the world, this thing would navigate itself and the world. So that was cybernetics. Wiener was very concerned with what effect that would have on people. He wrote a very prescient book in 1950—could it be that early? I think so—called The Human Use of Human Beings, which was about how, as soon as you have devices like this in the world, they'll change people. People will use them to change people. It'll bring about this new age of mass behavior manipulation that was never possible before. So he saw that right at the very, very dawn of computer science. It was kind of like—

Glen: 1950, Jaron, yeah.

Jaron: Yeah, 1950.

Glen: One of my favorite quotes from that book is—it actually came from an earlier version of it—he says that there are some people who believe that studying this science will lead to more understanding of human nature than it will to the concentration of power. And he said, while I commend their optimism, I must say, writing in 1947, that I do not share it. That power is by its nature always concentrated in the most unscrupulous of hands.

Jaron: Oh, my God. So look. Yeah, Wiener just got the game. He cracked the game at the start. This is really only a few years after Turing and von Neumann had defined their idea of what a computer was — so the first and, by far, the dominant abstraction for the computer came from them.

Now, in the '50s, Marvin and a few of his compatriots were — obviously, this was kind of like physics these days, the string theorists versus the loop quantum gravity people or something like that. They were just like these rival gangs, right? They were like, "Cybernetics is taking over. We need our own term." Artificial intelligence was actually initially defined at this very famous conference that happened at Dartmouth, in, I believe, '58—

Glen: '56

Brandon: McCarthy or something, right?

Glen: I think it was '56, yeah.

Jaron: '56.

Brandon: Was it McCarthy?

Jaron: Yeah, McCarthy coined it. I mean, McCarthy too, though not as much. Marvin was really the personification of that more than anyone else, but McCarthy too. So now, the thing is, the Wiener way of thinking about computers as this giant messy tangle of things measuring each other: today we'd call that a neural net. In those days, it was often called "connectionist," which is actually a term I kind of like. So because of this rivalry, Marvin and the other people were like, "We have to kill it." And so Marvin and this other guy who's great, Seymour Papert, wrote a book called Perceptrons. The idea was, "We're going to mathematically prove that these guys are hopeless. Screw them. It's Turing machines from now on. We're just going to double down on the thing of the computer as its own thing." And so they proved that, in a certain absolute sense, there are mathematical limitations to what you can make out of that style—

Glen: Out of single-layer neural networks, yeah.

Jaron: Yeah, and it's a funny thing. Because, yeah, sure, it's a valid proof. But it's so narrow that it really served more as a rhetorical and political weapon than an actual tool for math, or engineering, or physics, or anything. But anyway, it destroyed those people. All the people working in that area were very out of it, underground and unfunded, for decades, you know. A lot of what the Marvin people worked on was called "symbolic," because their idea is that it's this abstraction, but it's abstraction made flesh. This thing will become real. And so there was all this stuff about formal logic and "we're going to describe the world."

Anyway, then much more recently, in this century, just when computers got big enough to run larger versions of that stuff, everything turned into neural networks. That's what the current AI is all about. For the most part, AI is this rubric term that's just applied to whatever; it's a marketing term for funding computer science, not actually a technical term that excludes anything. But most of what we call AI is exactly that connectionist stuff, and now it's called AI. So it's kind of ironic. It's sort of like the conquerors colonized their enemy and absorbed it into their own rhetoric. But the enemy they absorbed actually had, in my view, the more realistic and fruitful overall philosophy. So there's something that went very wrong.

Brandon: I mean, you have a provocatively titled piece, There is No AI. Right? Could you say a little bit about what your argument is there?

Glen: Jaron and I also wrote this piece called AI is an Ideology, Not a Technology, which is, you know.

Jaron: Yeah, that's right. We wrote it, yeah.

Brandon: Yeah, say more about that. Because I think both of you share this idea: it's not a thing. It's not an entity. You're talking about a system.

Jaron: Yeah. I mean, a lot of people in the AI world, especially the young men who work at AI startups and what we call frontier model groups, not only think that AI is really a thing that's there, but that it's an entity that could be conscious, that it'll turn into a life form. Maybe that life form is better than people and should inherit the earth. I run into these crazy things where some guy will say, "I think having human babies is unethical because it takes energy away from the AI babies. We need to really focus on that. And if we don't, the AI of the future will smite us." It becomes very medieval. Also, a lot of times, at the end of the day, you realize: oh, this person has a girlfriend who wants a baby. They're going through the age-old male attempt to avoid having a baby as long as possible, and using AI in the service of that, which is fine. Whatever. It's their problem. I'll stay out of it.

But anyway, the thing is, you can think about AI equally in two different ways. Think of figure-ground pictures; there's an artist named M. C. Escher who's famous for this. Most people have seen the optical illusion where you either see two faces or a vase, and either reading is equally good. It's just like that with any big AI model, like ChatGPT or something. You can either think of it as a thing by itself, which is the original Minsky AI concept. Or you can think of it in the Norbert Wiener way. The Norbert Wiener way would be: it's a bunch of connections of which people are a part. And if you think of it that way, what you end up with is thinking of AI as sort of a version of the Wikipedia with a bunch of statistics added. Basically, it's a bunch of data from people, combined together into this amalgam, with a bunch of statistics as part of it, the statistics being embodied in the little connections, these pieces, if you like, of the neural net. So you can think of it as a collaboration, and I think there's no absolute truth to one or the other.

Just like if you want to try to use absolute logic or empiricism to decide whether people are really conscious: good luck with that. You can't. That's a matter of faith. God is a matter of faith. There's a lot of stuff that is not provably correct, either through logic or empiricism. And yet, the thing about consciousness is — I mean, I don't know. If we weren't conscious, we wouldn't be situated in a particular moment in time; there wouldn't even be macro-objects. There would just be particles. I mean, I think consciousness is, in a sense, like Descartes: I think, therefore I am. But it's not about thinking. It's just about experiencing. You experience, and that is the thing we're talking about. But if you want to deny experience, all this talking could also be understood as just a bunch of particles in their courses. So just that there's anything here is consciousness, as opposed to just flow without stuff. But anyway, let's leave that aside. All of these things are matters of faith. And anytime there's a matter of faith, you can go either way with it. You might think an animal is a person or not, or a fetus is a person or not. These are really hard edge cases. Anyway, when you can't know for sure, I think it's legitimate to rely on things like pragmatism, intuition, faith, even aesthetics, since this is a beauty broadcast.

Anyway, what I would say is that if you believe the AI isn't there, that the AI is a form of collaboration among people, more like the Wikipedia than some new god or something, then there are some benefits that are undeniable.

Benefit number one is, you can use AI better. If you keep in mind that that's what you really have, you can design prompts that work better. I've been telling this to Microsoft customers, and it works for them. Instead of saying, "Oh, great Oracle, tell me what to do," say, "What has worked for other people?" All of a sudden, you get a clearer answer that has less slop. I mean, just actually work with what it is.

Benefit number two: there's a widespread feeling, because of the literal rhetoric coming from us, from the tech community, that people are going to be obsolete. Especially among young people, there's so much depression. It's just crazy talking to undergraduates now, how many of them feel like life is pointless and their generation is the last one. They're just going to die when the AI takes over. They'll have no jobs, no purpose; nobody will care about them. That's stupid. As soon as you realize that AI can equally be understood as a collaboration, they can equally understand that there will be all these new jobs creating new kinds of data. And what's amazing about that is, every time some AI person tells me, "Oh, but we have all the data we need. We can already train superintelligence," whatever the hell that means, which is nothing.

Brandon: It's another statement of faith.

Jaron: Yeah, oh boy, that's like a medieval statement of faith. That's, I don't know. Oh, the golden calf. That's what that is. It's older than medieval. But anyway, the thing is, if you think that people might create valuable data in the future, it means that you also think that there might be forms of creativity we haven't yet foreseen—which means that we don't have all the data we need to train the AIs, which means that we aren't the smartest possible people of all time, which means that there might be room for people in the future to do things that happen to create data that expands what the AI models can do, which suggests this open future of expanding creativity. And I love that vision.

The great fallacy of believing that computers can become arbitrarily smart is this idea that, relatively, people will not change, will not be creative, will not move. What a horrible thing to believe. I sort of feel like that's a sin. Losing faith in the creativity of people has to be some kind of dark, dark sin, almost a form of violence against the future. A lot of people in AI are into longtermism: "We have to think about the future." What is more harmful to the future than that fallacy? I can't imagine a more destructive thought about the future.

Glen: Audrey Tang is my collaborator, and we made a film about her life. It's titled Good Enough Ancestor, because that's how she likes to describe herself: a good enough ancestor. Because if you're too good an ancestor, you actually reduce the freedom of the future, because they feel the need to worship or exalt what you did. You just want to be good enough that you leave paths open to them, but you don't predetermine what they, you know.

Brandon: That's extraordinary, yeah. Jaron, thank you. That was really fantastic. Glen, your argument, I think, builds on and in many ways parallels what Jaron has been talking about, in terms of seeing AI more as something like capitalism, more like a system of collaboration between people than a thing. Could you talk a little bit about your own journey into this? I recall you grew up in — you're also in the tech industry, or your parents were tech CEOs, if I remember correctly. I'm just curious to know your path into this world of Radical Markets and RadicalxChange, and then how that vision of pluralism you've been developing with Audrey Tang is now shaping your sense of AI and its future.

Glen: Well, yeah. So I grew up in a neo-atheist family in Silicon Valley, raised on very much the same type of classical science fiction that Jaron was referring to. I was involved in the Ayn Rand world for a while. I was involved in the socialist world for a while. I became an economist. All of these things are very abstracted from faith and from real, grounded communities. But the thing I found that was kind of surprising to me is that most of the other people who went from one apparently opposite abstraction to another, and were alienated from any sort of grounded community, were mostly Jews raised in secular environments by grandparents who had fled the Holocaust, just like I was. And so I thought, "Well, maybe I'm not actually escaping my past. Maybe I'm just finding my own way to it." It was at that point that I decided that I needed to learn something about where I came from and connect with Israel, connect with Jewish history. I ended up on a faculty of Jewish Studies briefly.

Then I had the opportunity to meet Audrey Tang, which really changed my life, for a couple of reasons. One is that I think Audrey is an incredibly spiritual person. There's this figure in the Tao Te Ching, which is her holy book, that is sort of like the Buddha in Buddhism, or Jesus in the Christian tradition, called the "shengren." It's like a mythical sage. Audrey really embodies that. And yet she also just intellectually has some of the highest horsepower of anyone I've ever met and knows so many different things. I think I wasn't ready to meet someone with that kind of spiritual depth, and to accept and understand them, until I met someone who was also at that intellectual level. Because I had come up in this intellectual way, and so unless someone was there intellectually, I wasn't able to accept their wisdom. So that was one thing about Audrey.

The second thing is that she was from Taiwan. Taiwan is a very different atmosphere. The division between technology and science on the one hand and religion on the other that exists in the West just isn't a feature of the Taiwanese environment. It was really interesting to encounter a culture where those things were synthesized rather than in conflict. That gradually made me come to feel that the disjuncture in the Anglosphere between religion and spirituality on the one hand, and science and technology on the other, was an important root cause of many of the problems that Jaron was getting at.

So let's take his example of cybernetics. Why did the AI thing win out over cybernetics? It didn't win because it was there first; cybernetics was way more dominant in the '50s. It didn't win because of its explanatory power, because, as Jaron points out, on the actual, apparently falsifiable points, the AI people were clearly wrong. I don't think anyone would dispute that; even the AI people now would say the AI people were wrong. It won because of the way the rhetoric worked in a particular cultural milieu.

Brandon: In a secularized world that is deprived of any sort of, you know.

Glen: Everyone is like: economics, agents, utility, you know. That's the way everyone likes to look at stuff. Cybernetics, though, has a lot of just weird, mysterious shit going on, you know. There are all these things flowing. There are these things that can be thought of a little bit like agents, but they're actually just part of the — that's what complexity science is. That's what cybernetics is. That's what's actually going on in these systems. But if you try to explain it in a scientistic, reductionist way—briefly, casually—it just comes off as mumbo jumbo, and nobody can understand what you're talking about. So I think the only way to describe it briefly and intuitively to people is to use some kind of spiritual framework.

Brandon: I mean, it would just have more resonance in Eastern societies then.

Glen: Yeah, and I think it's because of the integration of spirituality and science in those societies. For example, quantum mechanics is another thing that's very much like complexity science; it's arguably the first real complexity science. Quantum mechanics has this weird particle-wave thing. Nobody can make sense of it. Even Richard Feynman was like, "What the..." But for Taoists, it totally makes sense. Because in Taoism, there's air-slash-water, and then there's earth, and they have totally opposite principles. Earth collides with something, and it stops. Air goes faster when it encounters an obstacle, right?

Brandon: Right. Yeah. Neil Theise has got this great book Notes on Complexity, where he argues that, from a Zen Buddhist perspective, quantum mechanics makes perfect sense because it has those similar kinds of—

Glen: Exactly. And so I think that if we try to have a discourse about technology without bringing in religion, the natural consequence is that we end up defaulting to really bad and harmful metaphors that come out of econ, rather than the sort of thoughtful perspectives that Jaron was welcoming us towards.

Brandon: Or to build golden calves, right? Which is, I think, the tendency.

Glen: Yeah.

Brandon: Taylor, if I could ask you: you've had an interesting path from, well, Tolkien to philosophy and law, and then business, and into AI. How has that journey shaped your sense of what this thing called AI is, what's beautiful about it, and also what the seductions are of this particular kind of beauty that people are seeking as they build this thing?

Taylor: Yeah, certainly. Well, to riff off of what Glen was just saying about bringing spirituality into a more explanatory understanding of things: my love of Lonergan led me to epistemology, of all places. In the classic tradition, when you understand something new, you are grasping being, which means an understanding of reality and of truth in some fashion, and which also has analogs to beauty, of course, because you can recognize it as beautiful. In many ways, I think that understanding epistemology in that sense, where, if you actually understand something, you're grasping a metaphysical reality, necessarily throws you into the spiritual conversation. Because a lot of spiritual traditions have very strong understandings of what that means, along with, of course, the scientific tradition.

Where I see this reflecting back into AI is in our own understanding of our understanding, versus this thing that seems to understand, at least in some ways analogously to how we understand, particularly at a more surface level, if we haven't spent a lot of time thinking about our understanding. And, similar to Jaron, I've found outsized ramifications in working with our product leaders to differentiate the way in which we know from the way in which AI works, in order to create better product experiences for our customers and users. Because we understand what understanding is, and AI is not that. Being able to shape that ends up having an outsized impact on product building and on customer satisfaction as well.

Brandon: Yeah, I think that's really remarkable. There's something about understanding this. I've spent the last few years studying scientists, physicists and biologists mainly, trying to get at what drives them to do the work they do. Many of them see themselves as primarily being in the business of chasing after a certain kind of beauty. They call it the beauty of understanding: that grasping of the hidden order of things, the inner logic of things. There is a profound aesthetic experience of unity or harmony or fit, without which one does not even know that one has arrived at understanding something, right? And so there's something to that experience which is very hard to replicate with machines and so on.

Taylor, you've also written this fascinating piece on beauty. You title it "Beauty Will Save the World," from Dostoevsky's The Idiot. You draw on Balthasar and Goethe and Pieper, and argue that beauty is a transcendental, that the world speaks to us in symbols, and that we need to contemplate beauty rather than grasp at it. I'm curious to know how that understanding of beauty relates to the kind of beauty that might be driving the pursuit of something like AGI. There is a certain kind of seduction, it seems, in the quest to bring about a world in which we can eliminate all human suffering, get rid of cancer and climate change, et cetera. It seems like there's a tension between two different modalities of beauty there. I wonder if you could speak to that.

Taylor: Yeah, certainly. I think that the directional ability to aim at those big things is a pursuit of a certain sort of beauty. But the pursuit of it is not the finding of it. I've found in all of my innovation work that the best innovators are the ones who are able to open themselves up, in a humble sort of way, to the experience of the customer, to the experience of the world around them—such that the conditions are set for that understanding of beauty, for that moment of insight. Without that, which is part and parcel of the humble pursuit of our unrestricted desire to know, you aren't able to set the conditions for an opening or an experience of beauty. Because you aren't looking for it. You've gone past it. You're building frameworks of abstraction, rather than living in the intellectual moment that needs to happen for an insight, for a recognition of beauty, to occur.

One of my favorite concrete examples of this: I have a three-year-old. The three-year-old will try multiple combinations of a particular thing in order to get at what they're trying to do. And when they get it, that moment of insight, that "I did it!", the delight that comes through as part of that, is identical to their experience of a flower or of playing with a puppy, where they recognize the goodness, the beauty of the thing with which they're working. I think that's the overlap of the transcendentals as we understand them, right? They're different aspects of that same recognition of reality in some ways.

Brandon: Yeah. Well, I suppose the challenge is, how do we prioritize reality in this particular context?

(outro)

Brandon: Everybody, that's a great place to stop the first half of our conversation. In the next half, we're going to turn to the spiritual dimensions of technology, how it has become a kind of religion, what faith traditions might teach us about building wisely, and how we can recover the human face behind our machines.

See you next time.