Beauty At Work
Beauty at Work expands our understanding of beauty: what it is, how it works, and why it matters. Sociologist Brandon Vaidyanathan interviews scientists, artists, entrepreneurs, and leaders across diverse fields to reveal new insights into how beauty shapes our brains, behaviors, organizations, and societies--for good and for ill. Learn how to harness the power of beauty in your life and work, while avoiding its pitfalls.
The Promise and Peril of AI with Jaron Lanier, E. Glen Weyl, and Taylor Black - S4E7 (Part 2 of 2)
Jaron Lanier, E. Glen Weyl, and Taylor Black join Beauty at Work for a wide-ranging conversation on artificial intelligence, innovation, and the deeper questions of meaning, faith, and human flourishing that surround emerging technologies.
Jaron Lanier coined the terms Virtual Reality and Mixed Reality and is widely regarded as a founding figure of the field. He has served as a leading critic of digital culture and social media, and his books include You Are Not a Gadget and Who Owns the Future? In 2018, Wired Magazine named him one of the 25 most influential people in technology of the previous 25 years. Time Magazine named him one of the 100 most influential people in the world. Jaron is currently the Prime Unifying Scientist in Microsoft’s Office of the Chief Technology Officer, a combined title whose acronym spells out “OCTOPUS”, in reference to his fascination with cephalopod neurology. He is also a musician and composer who has recently performed or recorded with Sara Bareilles, T Bone Burnett, Jon Batiste, Philip Glass, and many others.
E. Glen Weyl is Founder and Research Lead at Microsoft Research’s Plural Technology Collaboratory and Co-Founder of the Plurality Institute and RadicalxChange Foundation. He is the co-author of Radical Markets and Plurality and works at the intersection of economics, technology, democracy, and social institutions.
Taylor Black is Director of AI & Venture Ecosystems in the Office of the Chief Technology Officer at Microsoft and the founding director of the Leonum Institute on Emerging Technologies and AI at The Catholic University of America. His background spans philosophy, law, and technology leadership.
In this second part of our conversation, we talk about:
1. The idea that modern technology and AI, in particular, have taken on religious or idolatrous qualities
2. Why the Talmud offers a powerful model for collective intelligence without erasing individual voices
3. The dangers of excessive anonymity in digital systems and AI training
4. The idea of “superintelligences” as collective human systems like corporations, democracies, and religions
5. Vatican-led efforts toward algorithmic ethics and the protection of human dignity
6. Where Glen and Jaron disagree about human-centered AI
7. AI as a tool for metacognition
8. How imagination, storytelling, and shared meaning can shape the future of innovation
To learn more about Jaron, Glen and Taylor’s work, you can find them at:
- Jaron Lanier - https://www.jaronlanier.com/
- Glen Weyl - https://glenweyl.com/
- Taylor Black - https://www.linkedin.com/in/blacktaylor/
Books and Resources mentioned:
- You Are Not a Gadget (Jaron Lanier)
- Who Owns the Future? (Jaron Lanier)
- Radical Markets (Eric Posner & E. Glen Weyl)
- Plurality (Audrey Tang & E. Glen Weyl)
- The Human Use of Human Beings (Norbert Wiener)
- The Fellowship of the Ring (J.R.R. Tolkien)
This season of the podcast is sponsored by Templeton Religion Trust.
(intro)
Brandon: I'm Brandon Vaidyanathan, and this is Beauty at Work—the podcast that seeks to expand our understanding of beauty: what it is, how it works, and why it matters for the work we do. This season of the podcast is sponsored by Templeton Religion Trust and is focused on the beauty and burdens of innovation.
Hey, everyone. This is the second half of my conversation with Jaron Lanier, Glen Weyl, and Taylor Black. Check out the first half, if you haven't already. In this second half, we're going to ask whether technology itself has become a religion. Jaron argues that we've begun to worship our own creations, and calls for a new model inspired by the Talmud. We're going to explore how ancient traditions—from Judaism, to Taoism, to Catholic social thought—might help us restore meaning, plurality, and beauty in our technological age.
Let's get started.
(interview)
Brandon: Jaron, you've written that we're again in this context in which technology has become a religion. And it seems like there's a certain kind of seduction to our understanding of AI and new technologies that is a sort of idolatry. You've argued for the need to make our technologies more like the Talmud. Could you say a bit about that?
Jaron: Yes, so I mentioned earlier that you can think of big AI models as forms of collaboration between people, a little like the Wikipedia with a bunch of statistics. Okay. So an interesting thing about the Wikipedia is I knew the founders. I used to argue that there was this fantasy in the computer world, which was much more leftist at the time. It was very different back then. The idea was that we're going to help the oppressed dissident in a difficult regime. And so we want everybody to have pseudonyms. We don't want to know who the real people are. But the problem with that is that, when you forget people and you turn them into a mush, you concentrate the power on whoever owns the computer that runs the mush, right? And so very much, as Norbert Wiener warned, there are times when you want to do that, of course. But to do it as a general principle actually undermines humanity. And so there's no easy universal answer, which shouldn't surprise us.
But at any rate, the Wikipedia created this illusion of what's sometimes called the view from nowhere, this idea of the single perspective, instead of a multiplicity of them. And so then they would say as well, if you want to have a bunch of people collaborating, that's going to happen. There's no way around it. And like, there is a way around it. It's ancient. So in Jewish tradition, there's this document called the Talmud, which is one of our central cultural documents. The idea of it is that you have generation after generation of people adding to it. But at each generation, there's a particular place on the page, a geometric designation, where this is from ancient Babylon, these are the medieval people, and so on. And so you have this amazing amalgam across centuries and centuries in a single document, where it's very clear that these are different perspectives. They're all on the same page. And this was done when writing stuff down was expensive, you know? It would have been cheaper to just combine these voices. There was an absolute economic motivation to not do this. Brevity was a matter of severe economic motivation in those days. And so the fact that they did this is incredible.
Now, part of it is just that Jews like to argue, and we want to be individuals. So a part of it is just our character. But the point is: this is a proof of concept that predates Greece. I mean, this is like ancient. So what is hard about this? What's hard about it is just this present ideology of creating this new kind of golden calf, this abstract thing that everybody else will be subsumed by, while the magic tech boys who get to run it will be the elite special ones. Which never is true, by the way. You always end up getting screwed by your own monster. That's another ancient idea that has been known for a long time. So that's a fallacy too. But it's a different fallacy.
Anyway, yeah, so the Talmud is a wonderful prototype for how to combine people without losing people, or combine human efforts without losing human identity. There is room for anonymity. People can vote. The voting can be anonymous, but you still know who the other citizens are. You don't pretend that there wasn't anybody. Money anonymizes. You lose track of where a particular dollar has been. It's not even meaningful. That probably helps people cooperate despite their feuds. A measure of anonymity actually can be good. But as a general principle, it's easy to overdo it and really lose people. That's kind of what we — we did it with the Wikipedia. All the AI things train on Wikipedia. It's sort of inadvertently legitimized this idea that losing people is somehow a form of productivity, when it's exactly the reverse.
Brandon: Yeah, because it creates that illusion that there is an entity that is able to somehow provide a synthetic answer, right? I think the challenge, I suppose, is once we've erased authorship, once we've erased the individual sources of all of this knowledge, is it meaningful to talk about responsible AI, or ethical AI, or anything of that sort? Glen, I'm curious to know what you might think about this. You seem very bullish about the prospects of AI systems in terms of fostering democracy and pluralism. I'm just curious. Given this context, how do you see us concretely being able to bring about that sort of recognition of the human collaboration that is currently hiding behind these illusions?
Glen: Do any of you guys know what the oldest document that, to my knowledge, has been made by human hands that looks like a recombinant neural network is?
Jaron: That's a great puzzle, Glen. That's a great puzzle.
Brandon: I think you've told me this, and I've forgotten.
Jaron: What is it? What is it?
Glen: So there's a diagram of how voting for the Doge of Venice in the 13th century worked. There were like 100 councils. Each person who was in the voting population would elect members of 5 of those 100 councils. And then those 100 councils would elect another 100 councils according to similar principles for several rounds, until you eventually elected the Doge. It looks almost exactly like a recombinant neural network, because you have lines from each of the voters going out to the next things that they do. And so it's like a whole neural network. I think that that is a beautiful illustration of the fact that, like with the example of the Talmud, we have these incredibly ancient and sophisticated ways of thinking about democracy and agency and collectivity that massively predate anyone thinking about AI at all, and give us the actual insights that we need to produce effective systems like this.
My hope is that we can just stop being so thoroughly mugged by the Enlightenment. I love the Enlightenment. The Enlightenment is all kinds of goodness. It's just when it becomes an overwhelming ideology that wants to erase all other meaning and truth and all of the past, rather than integrate with it, that it becomes sort of an excuse for really destroying itself. As Jaron was pointing out, you know, destroying its own foundations. And so I think that it's people of faith that give me hope. Because they just don't really want to do that. They're cool with modernity for the most part, and they want some of the stuff. But they also want to remember that there's more to things than that, and that there's history and richness. And if we can just integrate those things a little bit more, give a few fewer sideways glances, or whatever it was that Taylor was mentioning earlier, I think maybe we would do a better job of building our tools, do a little bit better job of not having these ridiculous hype-and-bust cycles that are painful for a lot of people, and maybe get more quickly to the actual deployment and integration of these technologies.
Brandon: Do you have a sense of concretely — I mean, again, one of the other challenges, too, is even our language around AI has been colonized by large language models, and whatever is happening at a few small players. What concretely do you think needs to happen, to change, in order to actually transform things?
Glen: Well, I mean, culture is an inspiration. Like The Wild Robot film, I think, is just absolutely fabulous. I think it's exactly the way that we should be conceptualizing these things. That was very well received. One phrase I've been using a lot recently is: be the superintelligence you want to see in the world, you know?
Brandon: That's great. That's great.
Glen: Like, you know, corporations are superintelligences. Religions are superintelligences. Democracies are superintelligences. By every definition of superintelligence that's been given, they're all superintelligences. We don't bat an eye at those. And so why are we talking about AI as if it's this weird external conqueror? I'm not saying corporations or religions haven't done any harm. They've done all kinds of problematic things.
Brandon: Yeah, I think we're maybe waiting for our charismatic robo-savior or something, right? Taylor, you've been working closely with the Vatican. You were just at the Builders AI Forum. Pope Francis called for "algor-ethics". Pope Leo's vision emphasizes the importance of human dignity and ethics. Could you say a bit about what's happening at the Vatican and what those efforts are inspiring in you?
Taylor: Yeah, certainly. In some ways, it's similar to what Glen was just articulating here. It's kind of, get over yourselves, and let's work together to have this technology serve humanity. Right? Every technology that we've ever come up with ends up being a result of our, from the Vatican's view, co-creative power with the Divine. And so let's continue trying to shape that towards human flourishing, rather than the other ways in which we're able to shape it as independent actors. Really interestingly, too, I think the collaborative approach that the Vatican is asking of all of us—technologists and academics, in fact—when approaching this is a really great direction that resonates with a lot of us as well.
Brandon: Great. Thank you. Are there any points of friction between your views, the three of you all? I mean, maybe there may be different points of emphases, but I'm curious if there are questions you all have or points.
Glen: There is one thing on which Jaron and I, I think, see a little bit differently. I don't think it actually ends up mattering in many cases. But I think Jaron's first inclination tends to be to talk about the uniqueness of humans and to place a really strong emphasis, on some level, on the imago Dei, or as we would say in Hebrew, tzelem Elohim. But I tend to place a little bit more primary emphasis on diversity. I certainly see the importance of humanity, but I also see things in nature. I see things potentially in machines or in complex human systems or whatever. I'm not as focused on the human individual as a focus in my mind as much. What I tend to resist about AI is its totalizing, narrow singularity, its here's-the-thing attitude, more than the fact that it challenges the tzelem Elohim, you know?
Jaron: Yeah, I've also detected that disagreement. But I think the reason for it is a matter of our professions, our disciplines. So I'm a scientist and technologist, but the technologist part is what I really want to focus on for a second. You can't define technology without defining a beneficiary. Because otherwise, there's nothing there. It just completely evaporates, unless it's for something or somebody. You can define math without a beneficiary abstractly. You can define the quest for knowledge and science. I think you can even define art as a kind of art for its own sake thing. Whether you should or not is different, but you can do all those things. But it's not even possible on any sensible basis to define technology without a beneficiary. There's just no way to even talk about it. It's gone. Technology is for doing something, for some purpose, you know? And so the question is, who is the beneficiary?
Now, I think sometimes a beneficiary should be Gaia or the overall ecosystem of Earth. I'm not saying it's exclusively people. But in general, if you underemphasize the human being as a beneficiary of technology, you very, very, very quickly slip into technology for its own sake, which is never what it actually is. It's always technology for the sake of the giant ego of somebody who owns a big computer server. So it turns into this kind of Gilded Age, unsustainable ego trip by a few people who don't acknowledge it. So you have to define technology as being for people. You have to really emphasize the specialness of people in order for technology to even be defined. Lose people, lose technology. That's the only way. So I think that's the reason that we have this different sensibility.
Brandon: That's great, yeah. Taylor, any thoughts on your own tensions?
Taylor: Yeah, I don't know. I don't know if we've fought enough amongst the three of us to really determine where I land on that side.
Brandon: Well, I think you should start, yeah.
Taylor: Yeah.
Brandon: So maybe, perhaps, if you all could leave our viewers and listeners with maybe one point each on what you see as really the beauty of this technological development, what it is that you all are working on. I know you're not representing Microsoft, but you are certainly trying to build something there in your various capacities. And so, where is it that you see the beauty moving forward in the work you're doing, and what particular kind of burden or obstacle do you think is really critical to overcome? Maybe, Taylor, we'll start with you, and then Glen, and then we'll end with Jaron.
Taylor: Sure. Yeah, I think this technology throws into sharp relief our ability to understand how we really think. We've found that a lot of the success in using this technology for productivity comes down to certain metacognitive strategies: how you use it as a helper, rather than having it do your thinking for you. And so, I'd say, lean into your own understanding of your understanding as you work through your use of these tools—both to ensure that you continue to flourish as a human, but also to use these technologies where they shine most.
Brandon: Any obstacles or burdens that you think are really critical to overcome?
Taylor: If you don't do that, you're going to get dumber, and that's problematic.
Brandon: That's already happening, so, yeah. Thank you. Glen?
Glen: There's an image of science and technology that I think is sort of implicitly in the minds of a lot of people that I want to suggest we need to flip on its head. I think a lot of people imagine we're on the surface of the earth, and there's a deep ground of falsity and superstition beneath us. We kind of need to dig it out and throw it away to get down to the core of the truth.
I instead imagine that we're on the surface of the earth, and we're planting trees. And as those trees grow up into the infinite abyss beyond, we extend the atmosphere, you know? The biggest danger is that there's too much space; we don't get down to a point. Actually, it's: how do you even allow the cross-pollination across all those different things so that they can keep growing? But the further we grow out, the closer we are to having nothing at all that we understand, because the more we see of the infinite abyss beyond, the more space there is to grow into. I guess it's that feeling of pursuit of a truth that recedes ever further: by pursuing the truth, by extending our technologies, we see even more completely how little we know. That is what I take solace in, I guess.
Brandon: Yeah, Marcelo Gleiser has this book and an analogy called The Island of Knowledge. It's this very similar kind of thing, where you're on this island, and you're sort of expanding what you think are the horizons of knowledge. You think you're going to get to the point where the water has completely been conquered. Then you realize the further your island is expanding, the further the water seems to expand, and you never quite get to that end. For some, that is threatening and frustrating. And for others, that's immensely beautiful. And the burden, the challenge, the obstacle, Glen, that you see as a burning problem that needs to be addressed?
Glen: I think that, ultimately, it's a question of culture and meaning and vision for all of this stuff. I hope that we will come to a point in the Anglosphere where we do have peace and cooperation and integration between that sense of wonder and belief in things we cannot grasp that religion gives us and our sense of building. Because I think we'll be able to build much more and much better when we can do that.
Brandon: Thank you. Jaron?
Jaron: Okay. On the question of beauty, I think there's a common idea of beauty as a platonic thing, that beauty is some sort of abstract thing apart. And as you might guess, based on what I said about AI and all that, I think that's the wrong idea of beauty. The idea of beauty—as this abstract, still thing that's apart from people, that we sort of try to access and approach—might have been functional in the past. But at this point it doesn't serve us well. It just has terrible economic — because basically, the way computer networks work, they're very low friction. There's this thing called the network effect that's exaggerated, where all the power and wealth concentrates at the center. And so, basically, whoever owns the network becomes beauty, if that's what beauty is. Whatever. All the artists who are trying to make do as wannabes on YouTube are really celebrating Google more than themselves, at the end of the day, which you can see if you look at the accounting.
And so, anyway, the issue is that we have to think of beauty as much more of a connected kind of thing. Beauty is not a thing apart. Beauty is a thing that people do. It's a thing that is meaningfully created between people through shared faith. That has to be the idea of beauty. I like Glen's metaphor a lot. And I should mention that as an island gets bigger, there's more beach, right? That's the thing: the more knowledge, the more mystery. Part of why I play all these weird instruments is, every time you start to play some instrument from another time and place, your body enters into the rhythms and the breathing of those people. That connection is what makes the instruments interesting. It's not anything abstract, like, oh, this instrument solves a particular problem. Right? And so you have to think of groundedness, real experience, and real connection as what beauty is—not as an abstraction.
Then the big unsolved problem. Here's the one I'll mention in the context of this conversation. Almost all of the kids — I say kids because, I mean, it's hard to find somebody in a frontier science or engineering group for AI who's not under 40, and there are very, very few. They're just starting to have kids here and there. But mostly, they haven't had kids yet. They mostly don't have a connection to future human generations. They're mostly, if we're honest, a little on the spectrum, and mostly male. They mostly don't think of family or continuation as much of a thing. That's an abstraction to them. Or if they do, it's purely biological. Like, "Oh, my genes are great. I'm going to make sure there are a lot of babies that have them or something," like our friend, Elon.
But the thing is, their ability to speak is through the stories they grew up with, as is true for all of us. The stories they grew up with were not the stories from, oh, I don't know, American mythology. They were not the stories from the Bible. They were not the stories from literature. They might have been a little bit the stories from children's books. But what they mostly were, in their formative years, were the stories from science fiction movies. And so, if you ask why all of the AI people are so enthusiastic about saying, "Oh, we're building something that will kill everybody. Isn't it great? Give us more money. Yes, you should have more money, more money. You're going to kill everybody. It's great," well, how could that happen? What's the explanation for that absurdity? It's that the myths they grew up with, the stories that form their vocabulary for understanding the world, are not Newton and Einstein. It's The Matrix movies.
Brandon: Or Tolkien, yeah.
Jaron: Or Tolkien. Well, for some, it's a version of Tolkien, actually, because those movies were big too. But as far as technology goes, it's The Matrix movies and The Terminator and so on. Those are the stories that exist. And so when you can only tell the world through those stories, that's our vocabulary of dynamics. And so if the stories you know are limited to a certain kind of story, so are you. So I think there's a really urgent cultural problem here. The only science fiction that transcended that problem, the only positive science fiction that wasn't sappy, that was commercially successful, was Star Trek of a certain era: the '60s, perhaps; the '90s, definitely. The Star Trek franchise has since turned into just a version of a Marvel movie for the most part. The Marvel movies, don't even get me started.
So the thing is, we're giving young people a profoundly impoverished and stupid set of stories to work with. Silicon Valley has failed people a lot, but to me, Hollywood maybe more so, and in a way more innocently. Because I knew the people who made some of the movies I've just referred to. It's not that they were bad people or even lazy people or anything. It's just that they were working from their particular context, and as it translates into a giant context, it becomes really dysfunctional. We have a big problem with that. That's the problem.
Brandon: Yeah, thank you. I mean, it is a big challenge with the shaping of the horizons of imagination, right? I think we're still prey to a kind of logic. The last time I was with Glen, I was talking about this logic of domination, extraction, and fragmentation that governs a lot of the development of our technology and business. I think moving to a different logic of reverence and receptivity and reconnection is really critical, which is what we see in something like Tolkien, right? It's a very different kind of logic. You see that tension in forming imaginations.
Jaron: Here's the thing about Tolkien, though. I read the books when I was little, right? I haven't seen all the movies all the way through, but I've seen enough of them to have a pretty good feeling for them. So almost everybody now knows them through the movies, right? And this is, oh God. So my very first gig as a musician was playing music behind — it's a long story. But anyway, I used to do gigs with — oh, who's the guy who wrote The Hero with a Thousand Faces? Joseph?
Glen: Campbell.
Brandon: Campbell.
Jaron: Campbell, yeah. And so when I was just a young teenager, like an adolescent, I was doing shows with Campbell. Because I was playing music behind this wonderful new age poet guy, and they would have double bookings. But anyway, his name is Robert Bly. I used to argue with Campbell, even as a kid. Like, "How can you say there's only one story? Your story is kind of like a nasty one because it's about this hero. The problem with heroes is that there's somebody else who the hero has to beat. Like there's always this other side to the story. Doesn't this kind of bother you?" He was like, "Oh, kid, you don't know anything." And I'm sure he was right about that.
But the thing is, the Tolkien books have a certain magic and reality to them that is the best kind. They have a kind of a nobility or something. I feel like in the movies, it turned more into a Marvel thing of like, "We're going to go and kill these horrible demon things. We're going to go fight. We have 50 cuffs and whatever." And so the thing is that the version of Tolkien that came out that most people know is maybe not — it's a more Campbell-ian thing than I think the original actually was. At least, as I remember it, it was a little more charming and joyous.
Brandon: Yeah, there's a sense of deep magic, a sort of a reverence for something you don't create, right? It's something that is given to you that you were in service of. So all of the questing and so on is not primarily a hero, but rather a kind of calling.
Jaron: Yeah, I mean, I remember the Tolkien books being kind of more like the Narnia books. Maybe I'm misremembering them. I don't know. Maybe I have it wrong. But the Tolkien movie was more like a Marvel thing or whatever. You know, not that there weren't some good things about them, certainly.
Brandon: Yeah, these are critical tensions. Well, I can't thank you all enough. This has been such a fantastic conversation. How could we direct our viewers and listeners to anything that you all are doing, working on? Taylor, where can we point people to?
Taylor: Oh, certainly. Yeah, I mean, I opine on my Substack on occasion. That's as good a place as any for encountering some of my work, for sure.
Brandon: Okay. We'll put that in the show notes. And Glen?
Glen: Glenweyl.com is my website. We also have aka.ms/plural for the Plural Technology Collaboratory. You can find me on X @GlenWeyl.
Brandon: Fantastic. And, Jaron, where can we direct people to?
Jaron: Oh, I have a crappy old website. I don't have any social media. I kind of operate my life on this idea that people who need to find my stuff will. I don't really promote myself, and I'm really bad about it. And somehow it works out. So I just ask the wind.
Brandon: That's right, yeah. Yeah, fantastic. I know we're past time. But is there any chance, Jaron, that you might be willing to, for 60 seconds, play us something from one of the thousands of instruments behind you?
Jaron: Oh, God. Well, what do you feel like?
Brandon: Whatever you're in the mood for.
Glen: I request the oud, Jaron, if you have one.
Brandon: Oh, yeah. Let me second that.
Glen: I think the oud is one of Jaron's favorites.
Jaron: Let me see. The thing about ouds is you never know which oud will be in tune. There's a famous joke from the composer Igor Stravinsky, that harp players spend half their time tuning and half their time playing out of tune. But the thing is, that joke was originally from an oud book that's like 800 years old. So how in tune is it? Eh, we'll live with that.
(Jaron plays the oud)
So it's out of — I shouldn't. Okay, it's out of tune.
Glen: Well, that's great.
Brandon: It's alright. Thank you.
Glen: It's wonderful. Thank you so much.
Brandon: It's still enough to transport you. It's still enough to transport you.
Jaron: That's the thing about the oud that's just like you're on, yeah—
Brandon: Amazing.
Jaron: But oh, I think la, la, la, la, la, la, la... I don't know.
Brandon: Well, thank you. Thank you so much.
Glen: Thank you, everyone. Take care.
Brandon: I can't thank you guys. It's been amazing.
Glen: Live long and prosper.
Jaron: Okay.
Brandon: Yeah, you too.
Jaron: Bye, Brandon.
Glen: Bye, bye.
(outro)
Brandon: All right, folks. That's a wrap for this episode. If you enjoyed the episode, please share it with someone who would find it of interest. Also, please subscribe and leave us a review if you haven't already. Thanks, and see you next time.