Beauty At Work

Faith, Love, and AI with John Havens - S4E4 (Part 2 of 2)

Brandon Vaidyanathan

John C. Havens has spent years at the heart of the global conversation on AI ethics. As the Founding Executive Director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, he led the creation of Ethically Aligned Design, a document that went on to influence the United Nations, OECD, IBM, and dozens of organizations shaping the future of AI. He also helped build the IEEE 7000 Standards Series, now one of the largest bodies of international standards on AI and society.

Today, John serves as the Global Staff Director for the IEEE Planet Positive 2030 Program, guiding efforts that prioritize both ecological and human flourishing in technological design. But his perspective on AI doesn’t begin with policy or engineering; it starts with love, vulnerability, and the deep spiritual questions that have shaped his life.

Previously, John was an EVP of Social Media at Porter Novelli and was a professional actor for over 15 years. John has written for Mashable and The Guardian and is the author of the books Heartificial Intelligence: Embracing Our Humanity to Maximize Machines; Hacking Happiness: Why Your Personal Data Counts and How Tracking It Can Change the World; and Tactical Transparency: How Leaders Can Leverage Social Media to Maximize Value and Build Their Brand. John is also an expert with AI and Faith.

In this second part of our conversation, we talk about:

  1. The core of reality as love
  2. Dangers of ignoring grief
  3. Why values must be integrated into AI systems from the very beginning
  4. How generative AI entered classrooms and workplaces without care, consent, or love
  5. The seductive danger of simulated relationships
  6. The role of faith communities in an automated society
  7. John’s GAP framework: gratitude, altruism, and purpose
  8. Risks of using AI in religious settings
  9. How genuine community embodies the kind of love and dignity that technology must never replace

To learn more about John’s work:

Books and resources mentioned:

This season of the podcast is sponsored by Templeton Religion Trust.

Support the show

(intro)

Brandon: I'm Brandon Vaidyanathan, and this is Beauty at Work—the podcast that seeks to expand our understanding of beauty: what it is, how it works, and why it matters for the work we do. This season of the podcast is sponsored by Templeton Religion Trust and is focused on the beauty and burdens of innovation.

Hey everyone, welcome to the second half of my interview with AI ethics expert John Havens. Please check out the first half if you haven't already. In this part of our conversation, we talk about the core of reality as love, the dangers of ignoring grief, and why values must be integrated into AI systems from the very beginning. John also shares why economic systems often lack love and inclusivity, talks about the role of faith communities in AI ethics, and explains what it means to place human worth and community at the center of technology. Let's get started.

(interview)

Brandon: How do you see these values? I mean, once we recognize what these are, I suppose I have a couple of questions. One of them is whether there are certain objective values, let's say, for lack of a better word. Is love perhaps a core value that all human beings ought to sort of make a priority? Or can some people say, "Look, my value is to sort of minimize discomfort or something along those lines, or to maximize technological progress, and that's okay"? There are some people, I think, who would say that something along those lines is more valuable to them. Maybe receiving love from another human being is too painful, or it doesn't have that attraction, that seduction, that power, right? And so some people might value power in some form or other. How do we sort through that, especially in the development of these AI systems? Do you think it's possible? I mean, my sense is you're saying that you can't build in values or ethics post hoc. There are already values baked into these systems, and you have to recognize what those are and build them in right from the get-go. Could you speak to that as well, and to what love might have to do with the creation of GenAI systems and so on?

John: Great question. I mean, the systems, the ones that I really feature in my last two books, are economic systems. There are paradigms that, like I mentioned with the lights going down in a theater, I didn't really know what they meant until someone told me about Gross National Happiness from Bhutan. That stemmed, apparently, from a speech by Robert Kennedy not too long before he died, where he talked about the things we measure and the things we don't. Beautiful speech. I forget what university he was at. Kansas? He said, "We'll measure advertising, but we don't measure the time with our kids." I'm paraphrasing. But it's a beautiful speech, kind of what I just said a minute ago: what you measure matters more than what you don't.

In the States, how much do we pay teachers? How do teachers feel right now about GenAI in their classrooms? Were they given a certain amount of money to test these tools? Were they given instructions on how to test these tools? Are they being kept in schools as humans teaching, or are they being compared to standardized tests or other things which may have value? But where I don't think GenAI was loving, by any stretch, was in how it was introduced. A lot of technological tools just came into your consciousness and, all of a sudden, started invading—or, if invading is too strong a word, changing. Anyway, economic systems are the biggie.

Then there's the paradigm of DEI, which our current administration is challenging. This is a tough subject to talk about, I recognize. But it was only in the last couple of years that I even started to understand what white supremacy is. Obviously, I'm speaking for myself; all this stuff is John, not IEEE where I work, by the way. I learned from a lot of people in my work, from countries not in the West, how dominant Western thinking and power are, with Silicon Valley dominating a lot of tech narratives and regulation and all that. The EU AI Act is fantastic. That shrug was not about the EU AI Act. That shrug was: worst case scenario, Meta gets fined a billion dollars, and Zuckerberg says, "Eh." He says he doesn't care about the EU AI Act. So the power structures behind a lot of these tools, that's the part where I get very depressed and sad a lot. Because the systems are sort of like, "Hey, this gets introduced. You have no choice." That's why agency is such a big deal to me. And by the way, it doesn't hurt. I'm not going to stop using Google. I don't really use Facebook; I've had accounts for years, but I'm not going to stop using tools. If I'm given agency and permission, then, especially having worked in advertising and PR, that gives a voice back to a consumer or a citizen in ways that we haven't had for the past 15 years with Silicon Valley.

So all these systems, I will just say, they're not built with love. That's not how they're designed. I worked in PR. It's not a joke, but I say it a lot: no marketing funnel ends in abstinence. Period. You take a marketing class and it's like: hey, Brandon, one of my clients used to be Gillette. You're obviously a very well-groomed, put-together guy. So I'd be like, "Oh, I'm going to go after him. He's an influencer." What's the tool? Hey, try out my razor. These are not evil things. I want people to use this stuff, right? But P&G, which owns them, is brilliant in terms of these ideas: find someone who could use your product. Do they use the product, yes or no? If it's no, then it's about creating awareness. If it's yes, do they like it? If they don't, then you send them free stuff so they have to try it. Then they want you to recommend it to a friend, right? This becomes formulaic for how the entire undergirding of the system works, which is advertising. Google is still an advertising company. That is how they make most of their money. They're not a search company.

So how the design of the tools could turn into love, in my expert opinion, is first of all fundamentally about data. People tend to forget that AI systems are built on data—human data. Imagine if, when you went to that page (I've already kind of explained this), there was a genuine disclosure from these companies: "Hey, welcome. We're the designers of OpenAI. These tools are really powerful. We know, based on our experience"—they wouldn't necessarily have to cite the Stanford teacher who teaches behavior design—"we think this is a really cool way for you to learn about stuff. When you prompt, you're going to ask questions. You're going to learn a whole new paradigm for how to get back words. When those words are put together, we can't guarantee it, but we're pretty sure you're going to be mesmerized. It's going to feel like magic. But it's not magic." Then at different points, if they had that tool and they didn't just regularly give me a blog post that no one really reads—except the geeks like me and the tech outlets that I used to write for—boom: "Hey, user. We're thinking that we want to get more data, because a large language model needs a lot of data. We're going to be maybe going after thousands and thousands of books by authors. Do you think we should try to reach out to all those authors and get their permission and maybe even give them some money? What do you think?" And I answer. Now, when they come back, they can tell me what the survey of their users said. But then, are they going to use that as justification? They probably might. You asked this before, and I know we're also getting near the end of the hour, so I'm trying to stay positive and helpful and pragmatic in my responses.

Brandon: Sure.

John: I think I mentioned in the book C.S. Lewis, who's one of my heroes, who started off as a massive skeptic about Christianity. I love the skeptics who convert. By the way, Marshall McLuhan converted to Catholicism—one of my favorites in the world. But C.S. Lewis, he says it better. There's a term, what I'd call "moral absolutism." I might be misquoting him. But I think he pointed out the example of getting a seat on a bus, if memory serves. If I was walking towards a bus seat, the seat at the end of the bus was open, and someone 10 feet away from me sat down in it first, I'd be disappointed, but I'd be like, "Eh." But if I was about to sit in that seat, like my butt is hovering over the seat, and someone shoves me out of the way—with the exception of the physical pain of being shoved—why do I feel incredulous? It was my seat, right? So there's a kind of genetic-level thing there.

But I think the thing, too, working in the AI ethics space, is I believe there are moral absolutes around children, for instance. Child trafficking, certainly, and sexual exploitation. And not giving parents or caregivers a way in. We have a thing in IEEE that I am very proud of—early on, what's called age-appropriate design. That really is the idea of a wonderful, amazing human in the UK, Baroness Beeban Kidron. It's basically about creating agency for people taking care of kids and saying, "Look. Whatever the age is—16, 15, 14—what's the nature of how kids or young people are approaching these tools? Can we empower the people teaching kids for the first time?" My kids are in their 20s now; one is almost 20. I don't want to give their ages away. But my kids are that age, so it's completely different for people who have kids now. All that is to say, if someone says to me, "But there's this culture where it's okay to beat children"—I'm using an extreme example, but I like to—moral absolutism. Ultimately, the value of taking care of kids, taking care of nature—those are moral absolutes, and where they're deprioritized, well, we have a paper at IEEE called Prioritizing People and Planet as the Metrics for Responsible AI. People can Google that. Proud of that work. But anyway, I'll wrap up there. Thank you.

Brandon: Well, let me ask you a couple more questions, if you don't mind. One is on the dystopian themes running through your book—it seems like you've got a first half that's somewhat dystopian and a second half that's a lot more positive and hopeful. On the dystopian side, building on a couple of things you touched on in this recent answer, you talk about a couple of scenarios. One is the imagined scenario of your daughter dating a robot and the accusation of something like flesh-ism, where it's possible to imagine the difficulty of human relationships leading us to prefer a relationship with a machine of some sort. Similarly, there are the concerns about creating AI representations of deceased loved ones—a child or a parent who passes away—why not just create a digital avatar to let that person live on, right? Both of these, it seems, have a certain kind of beauty to them. They're seductive, anyway. Could you speak to what you see as problematic at the heart of these, and perhaps to how they really are not fundamentally in accord with love, even though they might seem to be?

John: Well, I think, first, it's grief. As an American—maybe as a guy, as an American—grief, for me, growing up—again, I'm 56—plenty of stupid movies, whatever else: guys are strong. So it's not just about not crying. It's about avoidance of grief and the rituals surrounding grief, or the lack of them. You still see movies that are like, "Oh, we're going to bury grandpa and take care of the funeral." American grief in general is very dismissive and fast. For a couple of weeks, everyone really is sad—I'm being generally hyperbolic—then the person's gone, and it's the people left behind who have to deal with it. When my dad died, I mourned for an entire year. I wasn't always sure what to call that. So I think, certainly, one thing is just the fact that if you ignore grief, you miss a lot of the human experience that most people face all the time. And if you bury it, I think it's going to be harmful no matter what. Meaning, taking on a parent in an AI form.

I forget the book about a guy—this came out years ago—who did that with his dad. It was a very helpful book. Because ultimately, what he realized in the book is what I think most people will come to realize. The New York Times covered this recently with another guy who filmed his father. The AI version of your loved one is obviously not going to be them. So you will—we all will—face the reality of a whole new thing, which is: this entity (I'll use your word from earlier, and I think it's appropriate in this regard), this new thing, not really a creature, is mimicking aspects of a person I love. But because of hallucinations and errors, combined with synthetic data or whatever else, it's just not him or her. So are you avoiding the grief that you'll need to face anyway? Versus, I have recordings of my dad—I filmed him telling stories about five months before he died. They're very hard to watch. But now that he's been gone for almost 10 years, I'm at the point where I can watch them and have sort of a smile along with the tears. There's that. Someone can say, "It'll work for me," or, "It works for me." I read people, again, saying, "I'm in love with this chatbot," or, "I have a version of my dad." That's where I'm really challenged. Because it's easy for me to say, "No, you aren't. It isn't them." But I can't tell someone their subjective truth.

Then there's the cultural side—Japan, animism, and different indigenous traditions where there are, to your point, very beautiful, loving, spiritual aspects to things like this. It's just that, A, it's so new. Then B, by and large, it all comes within the paradigm of surveillance capitalism, which is sort of like a white supremacy—different in a lot of ways, but not in others—a power structure that any of these experiences still happen within. So with that person who's like, "I'm experiencing my dad. I love it," I feel like a jerk. Because I'm like, "Hold on. You feel what you feel. But let me tell you about all the other things." And I'm like, is that going to make him feel better about losing his dad? But there is a possibility. There is the possibility of the data agency I mentioned, and then we could experience these things differently. Versus the value system I didn't mention: the TESCREAL bundle paper. The reason it's eugenics is that a lot of the leaders of GenAI whom she quotes are utilitarians, whose logic and belief as an ideology is, "We are only made up of our consciousness and our cognitive selves. Let's put ourselves onto machines and go into space." Or we'll be on Mars, you know. But this is while they dig their burrows, and Zuckerberg has his massive underground compound on Kauai. I think that when utilitarianism says, "We think the future of the human race is X," while they know that their actions today are accelerating the larger loss of life on the planet for the majority of people—then, I'm sorry, that's immoral. There's no way to say, from a deontological, certainly a virtue-ethics standpoint, that the loss of 8 billion or more people on the planet shouldn't hinder innovation. That's eugenics. Let's call it what it is.

Brandon: Yeah, and it seems like even the extension of that logic is that, at some point, it might come to be seen as irresponsible if you don't upload yourself into an AI mind clone or something, right? And so the way the logic of these technologies develops brings with it a kind of value system which, I think you're right to say, has a deep-seated eugenicist bent to it that a lot of us aren't seeing. Could you speak maybe very briefly about the solutions you propose? You mentioned virtue ethics, but you draw a lot on what you call the GAP solution: gratitude, altruism, purpose. How might cultivating virtues actually help us better live out this future driven by AGI? And if you could, maybe even add a word about what all of this means for people of faith, for faith communities: what innovations might be needed in churches or other faith communities as they try to respond to the development of AI systems?

John: Well, first, I wrote about this on LinkedIn: AGI is a faith-based belief. I like saying that whenever I can. Because technically, it's speculation. There's no agreement, whether it's Hinton, who's quoted all the time, or Zuckerberg, whoever. AGI is an idea—superintelligence, Nick Bostrom, ever since his book came out, right? Because what's going to happen? There's no pragmatic explanation. Like, hey, AGI arrives two years from now—Altman changes it all the time. But what does it mean? You and I wake up, and we get a text? "Hey, I'm Steve. I'm AGI." Then the other thing is overtly harmful—and seriously, I say this not as a joke. Brandon, you're a good person. If this resonates, then I think it's true love. But for your viewers, watch the messages that happen, constant and incessant, around those three letters, AGI. The medium is the message—this is why I love McLuhan so much. Guaranteed, in English, the words that you'll see: race, competition, when we'll lose jobs. Then again—I'm going to swear—the absolute horse shit of what AGI will do for most people with jobs. Newsflash: the tech industry is firing more people because of AI as of late. And outside of whatever severance packages they might get, there's no long-term guarantee—at least in the States—of health insurance or money.

I think that's a horrible message. Having been an actor and having lost jobs a lot as an actor, and then having been let go a couple of times—you lose your job. I'll talk about myself: I lost my job, and I was going paycheck to paycheck. Then debt builds up and all that type of stuff. And there's no knowing. It's like, "Hey, hey, hold on. Wait a second, buddy. Here she is, Sheila. AGI Sheila, she's going to be there for you, buddy. She's going to write you a check." Because then we talk about universal basic income, which I've written about for years. These are all solutions that are just words. They're interesting, but I have been going to conferences on these words for 11 years or so. When you lose a job, when you lose a marriage, who is there to help? Who is there to give money? A lot of times, it's faith-based institutions. By the way, Alcoholics Anonymous is another wonderful faith-based institution, and it's not about what a lot of people think—they think it's God and Jesus, and it's really not. I've been to a couple of AA meetings. I have never felt—well, except a couple of times—a community of strangers more than when I thought, maybe I'm having a couple too many glasses of wine dealing with my divorce.

I'm just trying to say these things to be real, Brandon, because I appreciate your work on beauty. But gratitude, altruism, and purpose—the GAP thing. Gratitude is really hard, especially when one is going through something really hard. In my case, not to dwell on it, but divorce was so isolating. Normally, I'm very grateful for my kids and stuff. But I recognize how hard it is to be a parent around your kids when you want to be needy and have them do work that they're not supposed to do as kids. They're supposed to be your kids, not your therapist. So, that said, gratitude there was for friends, for my mom. Leaning on gratitude keeps you in the moment. That's the thing about a lot of faith-based practices: Buddhism, meditation, swimming. For instance, when I was really suffering, swimming became something. I had read this wonderful book—her name will come to me—the title is something like When Things Fall Apart. She talked about breathing, Buddhist meditation, breathing, and swimming. That's all you have to do. The pain didn't go away. It's just something you start to recognize.

Altruism—a lot of this stuff seems kind of selfish, because it sort of is, but it's self-oriented in the sense of healing. When you help someone else—in my experience, but the science also says this—you kind of forget about yourself. You have these blissful moments where you step outside the pain, right? The thing about grief and pain is that you can't get away from your own stuff—oh, the divorce, whatever it is, right? But you help someone else. And in those blissful moments, you see some kind of connection. Maybe you'll help them. Maybe you won't. Maybe they're like, "I didn't need the clothes," or, "Stop bugging me. I'm fine," whatever it is. But you're trying. And in those moments, people can see, for whatever reason, that this human is reaching out to me, and this beautiful moment of electric consciousness happens. I do it a lot when I travel. Stuff as simple as, "Do you want to get in line in front of me?" In modernity, like on an airplane, people react almost like you're doing something wrong. "What? Okay. Thanks. Sure. How are you doing?" I love this: when you're on the phone with a human and you know it's a human. "Hi." Wells Fargo, my bank, or USAA Insurance, whatever, so-and-so: "You're being recorded. This is Sheila," whatever it is. You're like, "Hey, Sheila. How are you doing?" That is altruism. You wonder why? Most people don't ask anybody in those jobs how they're doing. And it feels great. Sure, people are like, "Are you doing it to feel better?" Like, yeah, because I want them to do it to me. But also, whether or not, it's still—

Brandon: There is a real connection. It is a recognition of the value of the other.

John: Yeah, and then the purpose is kind of what we're talking about here. Like, is there a reason for living? Some days, I don't feel that way. But most days, at some point, I'm like: I'm so blessed to have friends like Brandon, I'm doing amazing work, my kids, whatever else. That GAP logic is helpful because it gives you a sense of recognition of your own worth—that's the gratitude. The altruism keeps you focused on others. Then, usually by that point, the purpose is pretty evident. And where you can pursue work that brings you joy, wonderful.

Brandon: Could you say maybe one more word on the theme of what faith communities could do? I think this is another area where we have to really ask, "Well, who is really best positioned, perhaps, to shape the way we approach the development of these systems, the way we live with them?" There is a danger that even faith communities will simply chase after the latest fad so that they don't feel left behind. But is there an innovative role that they could play in a society that becomes increasingly automated and driven by some of the logics we've talked about?

John: Definitely. Well, it's now been maybe a year—wow, time flies fast—since I joined as a volunteer expert with a group called AI and Faith, which I'd recommend your folks check out. I can introduce you to some of the folks there. The guy who started it—his name will come to me. David? It'll come in a minute. But he started it years ago. What I really appreciate is that he's not focused on one faith. It's not a Christian organization or a Jewish organization. There are a lot of opinions, including from transhumanists. By the way, that's a very general term—it's like saying "Christians." It's such a broad group of people. I learn a lot from transhumanist friends. It's only the ones who, like in any group, cross the line into forcing things politically where I'm like, hold on.

I worked at a church for years when I was an actor—Trinity Baptist Church on the Upper East Side in New York. You learn a lot about faith-based institutions when you kind of see how the sausage is made, as it were. Running a church means trying to get people to come; it means tithing, and there are a lot of issues around tithing when you ask for money. It's like, well, you've got to keep the lights on and do charity. But what does that mean? Then when people come, how do they register? You get their data. And then there's the sort of seasonal side of things. It's like being an actor: you work when everyone else doesn't. In faith-based institutions—Sundays or Saturday nights or Fridays or whatever—you're working, as it were. It's work. So I think, first of all, a lot of times faith and community are interchangeable words when you say blank-based institutions. And so when there's any opportunity to have people come together around a shared, positive communal offering, I think that's a fantastic community to go to and say, "How are you using these tools?"

What's interesting is, I did go to an event in Dallas—a very helpful event, a lot of amazing people—with a group called Missional AI. That is a Christian-based organization. And I'll be honest: as an ethicist type, I was freaked out. It's not about the people there. This is not about Christianity. But I can't tell you how shocked I was at how many people just put up screens that essentially said, "Hi, I'm so-and-so"—but then it was, "Hi, I'm Abraham," or, "I am Jesus." No caveat. Basically, they didn't know about data, or disclosure, or whatever else. So I appreciate the different groups I'm involved in, but my first message is: listen, it's hard enough for me when I know people don't understand anthropomorphism for just a general-purpose tool that they use for everything. When the first introduction to not just a church, but the potential of any faith, is "Hi, I'm blank," then that's where all the scary stuff comes in.

But then the other part is, what if someone is led to think that the tools themselves are spiritual? And again, the people proffering those tools may believe it. So that's a different conversation. But where there's a sort of ignorance—not out of stupidity, but just, "Hey, let's use this wrapper. ChatGPT gives us this wrapper. We can build this tool off of it"—I'm like, I sat there. I sat for four or five years, not even that long, as an office manager. People came in off the street, needed money, needed food, came in on the weekend. We're a million dollars on Wall Street, right? That connecting with humanity is uncomfortable.

Then there's also a reality that I believe, having been in the biz and having proselytized in high school and recognized why that can be "effective" but is not genuine: ultimately, you share tools around what someone can believe. You share scripture, if you want to call it scripture, depending on the edition. You share whatever. But ultimately, it's these communities, I believe, coming from a place of love—where the love is also partly, "Hey, we know eventually you're going to make your own decision." Otherwise, it's kind of a cult. When I first came to New York, I remember it was tempting. Somewhere through acting, a very attractive woman—Christine, I think, was her name—was like, "Hey, do you want to come to a party?" She named a church. I won't name the church because it's still pretty well known. I'm like, "Sure, cute girl. I'm doing great. I just came to the city." I went to the party. It was essentially the kind of cultish, evangelical side of Christianity where people were like, "We want you to get baptized." And I'm like, "I was baptized." They were like, "We've got a bathtub filled with water. You're going to go get baptized." Weird music playing, wine-like drinks—but not wine—being passed around. It was just all this stuff where I'm like, "Hold on. I believe in the guy. I went to college." They were like, "Hmm."

Anyway, all that is to say: when community is an invitation, that blissful, blessed moment happens. AA is one of the places I felt this the most. It wasn't a "we have a solution for you because you're broken." The thing I felt so good about at AA—"good" is not the right term; healing—was that it was a given, and this is what I take from reading New Testament scripture: we're not broken in the sense of being less than. When we make mistakes, it doesn't mean we're worthless or evil, even if we do "evil" or sinful things. The phrase, "All have sinned and fall short of the glory of God"—you can get into, like, flagellation and whatever else, and I'm not trying to mock any religion or faith-based belief. But it's a sense of, like when I mentioned that story of Jesus and the woman at the well: most of what I take from that is a sign that, no matter what we do, we are loved. To the point where, I believe—I'm going to use a father term, but it doesn't need a gender—Yahweh, Daddy, Abba: there's this entity in the universe that loves us so much that we have free will. You have the opportunity to love others as God loves us. That's my belief. Perfection is a strange term to use anyway, unless one is perfected, as it were, in the act of trying to love, and in the act of recognizing the need for love—where usually one person is more in a state of needing that love, and the other might be more able to give it.

And so this is what I love that AI and Faith and other organizations are trying to do: bring these conversations into work settings. Not to proselytize for a specific faith, as it were, but to say that if these conversations and ideas from around the world—Ubuntu ethics, whatever—are not normally brought into business settings, then a lot of the issues facing not just GenAI but the GDP thing won't change. Because it's usually the faith-based institutions that are saying, "What about caregiving? What about love? What about community? If we aren't measuring those things in our day job, maybe it's time to change." So faith-based groups, I think, maybe have the biggest opening to say, "Hey, while we're talking about AGI and our consciousness being put into boxes and going off, can we also talk about Allah? Can we also talk about Buddha?" And if the answer is no, no, no, then I think that's a really good indication that this conversation is not going to bring innovation to humans. It's going to do the same stuff we talk about all the time. That especially goes for indigenous cultures, for women, for people who often aren't at tables like these.

Brandon: Right. Thanks, John. Where could we direct our viewers and listeners to your work, to IEEE's work on this?

John: Well, thank you so much. I'm pretty active on LinkedIn, for all of its problems and positives. I use my middle initial, C—so John C. Havens, as you have it written here. For IEEE, if you Google "I" with three E's and then the letters AIS, you'll come to our main page, which has a lot about our AI and ethics work. A big compendium book called Ethically Aligned Design is one of the core things I helped drive; about 800 people took part in that over the course of three years. It also lists the standards that we work on. Then the other project I've been working on for the last few years is called Planet Positive 2030. If you Google IEEE Planet Positive 2030, you'll find it. Those are my two main areas of work at IEEE. And thank you—my last three books are on Amazon. If you type my name: Heartificial Intelligence, Hacking Happiness, and Tactical Transparency are my three traditionally published books.

Brandon: Great. Excellent. Thanks, John. It's been a delight. I'm very grateful for your time, for your wisdom.

John: Well, thank you. And I'll end by saying what I started with: I really appreciate your work on beauty. You're taking such a beautiful, unique angle on a word that, I think, at least for me, is oftentimes put in a box. So thank you for expanding the paradigm of that idea of beauty with your work.

Brandon: Well, glad that it resonates. That's the goal. Yeah, thanks, John.

(outro)

Brandon: All right, folks. That's a wrap for this episode. If you enjoyed the episode, please share it with someone who would find it of interest. Also, please subscribe and leave us a review if you haven't already. Thanks, and see you next time.