Beauty At Work

Faith, Love, and AI with John Havens - S4E4 (Part 1 of 2)

Brandon Vaidyanathan

John C. Havens has spent years at the heart of the global conversation on AI ethics. As the Founding Executive Director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, he led the creation of Ethically Aligned Design, a document that went on to influence the United Nations, OECD, IBM, and dozens of organizations shaping the future of AI. He also helped build the IEEE 7000 Standards Series, now one of the largest bodies of international standards on AI and society.

Today, John serves as the Global Staff Director for the IEEE Planet Positive 2030 Program, guiding efforts that prioritize both ecological and human flourishing in technological design. But his perspective on AI doesn’t begin with policy or engineering; it starts with love, vulnerability, and the deep spiritual questions that have shaped his life.

Previously, John was an EVP of Social Media at Porter Novelli and was a professional actor for over 15 years. John has written for Mashable and The Guardian and is the author of the books Heartificial Intelligence: Embracing Our Humanity To Maximize Machines, Hacking Happiness: Why Your Personal Data Counts and How Tracking it Can Change the World, and Tactical Transparency: How Leaders Can Leverage Social Media to Maximize Value and Build their Brand. John is also an expert with AI and Faith.

In this first part of our conversation, we discuss:

  1. How love reframes “weakness” in both human life and AI ethics
  2. The impact of generative AI on creativity, intellectual property, and the erosion of human craftsmanship
  3. The dangers of anthropomorphism in AI design
  4. Ways AI systems undermine our capacity for conscious choice
  5. How the surveillance economy and advertising systems shape our habits and decisions
  6. Why positive psychology matters for designing technology that supports well-being
  7. What dreams, virtual reality, the spatial web, data, and spiritual life have in common

To learn more about John’s work:

Books and resources mentioned:

This season of the podcast is sponsored by Templeton Religion Trust.


Support the show

(preview)

John: I'm just going to say it. I think this is the first place I've said it to a person versus writing it. I think all use of GenAI, any GenAI, is irresponsible. We have not been trained as humans to recognize how precious our data is. Because it reveals who we are, but it's not all of who we are. When other people take it and kind of feed it back to us, GenAI has accelerated so many of the worst parts of these tools.

(intro)

Brandon: I'm Brandon Vaidyanathan, and this is Beauty at Work—the podcast that seeks to expand our understanding of beauty: what it is, how it works, and why it matters for the work we do. This season of the podcast is sponsored by Templeton Religion Trust and is focused on the beauty and burdens of innovation.

How do we understand what it means to be human in an age of artificial intelligence? How can our experiences of loss and weakness and connection shape the way we design technology, and why must love be part of our conversation about AI ethics from the very beginning? Addressing these questions is my guest today, John Havens. John is the Founding Executive Director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. This initiative produced a landmark document called Ethically Aligned Design, which has shaped AI principles for the United Nations, the OECD, IBM, and dozens of organizations. John also helped create the IEEE 7000 standards series on AI ethics, and he currently serves as the global staff director for the IEEE Planet Positive 2030 program. John is also the author of books including Heartificial Intelligence: Embracing Our Humanity to Maximize Machines and Hacking Happiness.

In our conversation, we explore John's unique background as an actor, journalist, and AI ethics leader; his reflections on beauty, empathy, and loss; and how these deeply human experiences should guide the ethical design of AI. We talk about why agency matters for introspection and decision-making, why anthropomorphism in AI can be manipulative, and how the surveillance economy is reshaping our ideas of privacy and data. Let's get started.

(interview)

Brandon: Hey, John. Thanks for joining us on the podcast.

John: My pleasure. Thank you for all the wonderful work you do on beauty.

Brandon: Thanks. Thanks, John. Well, let's get started. Speaking of this, speaking of beauty, I usually have my guests begin with a little story about a personal experience of beauty. So do you have a memory that comes to your mind of a profound encounter with beauty from your childhood?

John: I do. The first one that came to mind when I read your email with that question—which I think is such a good question—was when I was a junior camp counselor in Wellesley, Massachusetts, in what must have been—join me in sharing my age—the late ’70s. No, probably the early ’80s because I’m 56, so it would have been like ’82, something like that. So I was like 14 or 15. I remember I was put in charge of this kid named Ernesto. My camp counselors, they weren’t jerks; that’s not a beautiful way to say it or talk about people. But they were like teens, older teens. They said, "Take care of Ernesto, this little boy," Hispanic by his name, and I could tell. I just couldn't figure out why he was always running really slow. He was quite small. At the time, I didn't know what cerebral palsy was.

Brandon: Oh, wow.

John: It was like three days after I was taking care of him that one of the counselors pulled me aside. He said, "You know, Ernesto has got cerebral palsy." Looking back, I'm like, why didn't you tell me that from the get-go? But it completely shifted my perception. This is what I thought was a beautiful memory. It was mainly about him. I still haven't ever seen him again, and I kind of — I don't know. It's probably one of those things where, if you met someone again, it might shatter your memory. But the thing about him is, once that shifted — meaning, when someone told me that and I recognized it — I didn't really understand; I knew some of the aspects of it. Anyways, we became friends. I accepted that his pace was what he could bring. He still worked so hard. When he ran, it was a gift for him to run, so it was like a big deal.

Anyway, the real beautiful part is, at the end of every day at our camp, we would walk kids to their parents' cars. The cars would do circles and pick up each kid. Jonathan Smith, Ernesto's mom. And so, the last day he was at camp, I put him in his car. He spoke Spanish more than English. I didn't speak Spanish at the time; I do now. As I was clicking him into his car, he said, "Te quiero." It means "I love you." It still brings tears to my eyes because whatever level he had, whatever the condition actually brought, he was still Ernesto. The gift to me, the beauty, was that part of it: me being a dumbass on whatever level, where I wasn't being kind to him no matter what the case, because that's the job as a counselor. The gift, I think, from him to me was recognizing that; maybe he didn't know. Then we kind of moved beyond that, and it just became John and Ernesto. And the fact that he said "I love you" at the end of that, it stays with me to this day.

Brandon: That's amazing. Yeah, that is truly beautiful and profound. It's one of these things that struck me as I was reading your book, Heartificial Intelligence: the value of our humanity, the value of suffering, the value of even living with disability, with illness, and recognizing that all of those aspects of the fragility of our condition still have something beautiful and worth treasuring about them. That, I think, is really under threat, right? I'm curious to get, maybe if you have a sense, what allowed you to recognize that beauty at a young age. Because there may be many kids for whom suffering and disability are things to write off. Especially, there's a kind of machismo among young men that doesn't want to recognize and see anything valuable about weakness.

John: Well, I think the word 'weakness' is an interesting word, you know. I really appreciate you reading the book. Seriously, Brandon, thank you. Because I don't know all of the reasons. I wrote it in 2015. It came out in 2016. I still never had someone come up and be like, "Hey, AI ethics. Write about it."

Brandon: Right.

John: My dad was a psychiatrist. He passed away in 2011. My mom was a minister, and I was an actor for years. So studying the human condition has been a pretty big part of who I am. Then I went through a divorce and COVID. I think that, at least for me now, it sits at a very deep level — and by 2016, I would've already lost my dad, in 2011. Weakness is an interesting word. I think more and more of a quote from a philosopher named Karl Rahner, who I read about in an encyclical from Pope Francis, the encyclical about love. Rahner's quote is — I'm just making sure I'm correct. Let's see. The core — is it the core? I'm going to get it wrong, so I will send you the correct version. I should have it in my mind. It's basically, oh, yeah, the inmost core of reality is love.

I bring that up because I think, for me—I'll even say I know. I'll take that risk with my friend, Brandon—I think all humans are more interested in being loved than they are in being smart. In that sense, with weakness, when it comes to artificial intelligence, there are all of these statements that are really tedious to me, where people are like, "Well, humans make mistakes. Why shouldn't we trust machines, since they don't make mistakes?" When, of course, A, they do. Secondly, humans designed the machines. But the point, without trying to be negative towards the potential of the machines, the algorithms, the outputs, is to question why. Why are we doing these things? And if the logic is that being weak physically in any way, or someone saying you're weak because you're not as smart as whatever, is the world that we live in—which it kind of is, from a key performance indicator side of things—that's a core reason, I think, I did write the book. I've been writing similarly. It's to sort of defend what I think is not just a tree-hugger, bleeding-heart side of things, but a person for whom, at least in my case, losing my dad and my divorce were two of the fundamentally hardest things I've ever gone through. And when you get broken by whatever it is that breaks you, that's where you really identify what it is, or who it is, that brings you comfort and peace. I love books, I love information, I love AI, I love tools. It was humans loving me that kept me sane. It was also my recognition that I'm not interested in being perfect; I don't know what that means. But I figure if I can wake up every day and try to love myself or other people or nature better, then that's a good day.

Brandon: Yeah, thanks, John. You talk a bit about — I mean, you mentioned you don't quite know why you wrote the book. But your career trajectory has been quite unusual, right? I mean, you've been an actor, a musician, a journalist, and then an expert in positive psychology. I think your first book was on that topic, and now tech policy. Could you walk us through, just very briefly, how you ended up studying AI systems and AI tech policy?

John: Sure. It's kind of you to call me an expert. I would say I'm a person who is fascinated with it. In my 2014 book Hacking Happiness and my 2016 book Heartificial Intelligence, I do talk a lot about positive psychology and quote from heroes of mine in the space—the Martin Seligmans. I always mispronounce Mihaly.

Brandon: Csikszentmihalyi, yes.

John: There it is. Thank you — who wrote Flow. Barbara Fredrickson. So there are the titans of positive psychology; I've learned from their work and then incorporated it into my own. It'd be lovely to be called an expert. Anyway, the point being, I think it was my desire to be a minister in high school. I don't know why, I mean, if we're going that far back. Because I think the nature of your work, which I really appreciate, is tender. As a person of faith—meaning, I believe in Jesus in the way where you love other people versus judge them and condemn them—I really learned about it from my parents.

My dad, although he was a psychiatrist, I recognize when you say someone has anger issues, as people have said about me, it feels very condemnatory just because it's very vague. It's like everyone has anger issues. But what you're saying is, hey, sometimes it manifests in ways that the passion or intensity might throw people. That's helpful. That's a useful critique. But I bring that up because, as a kid, I just knew that my dad, when he was home, he kind of watched what we were doing. He was never violent towards me or something, but he spanked and yelled. A lot of times, also, when he came home, there was a sort of like, "Dad got home from work." It's quite common amongst humans. But it's when my mom accepted Christ — I'll use that term vaguely because that's not what your show is about. Vaguely, in the sense that she went from just holding a book that she called the Bible and going to church—which we did at a Methodist church in Massachusetts—to… I saw her. She was already an amazing, wonderful human, but her demeanor changed.

Six months later, my dad really hurt his neck. He was sitting with this horrible medieval contraption. This was back in the '70s. My mom would put a stack of books next to him, the top of which was a book of the Psalms, the Jewish scripture. My dad said — he was very upset one day, leaning into the closet, looking into the dark. You get the metaphor. He picked up this book of the Psalms and started reading. He said he felt his heart change. For the next year or two, I actually felt that he transformed who he was. He used words like "Jesus" and "God" and whatever. But I saw him change his demeanor towards me and others. That meant that when I was 13, I accepted Christ. For me, that journey went from high school — this is why I'm giving you all this background. In high school, I did what a lot of people do, I think, with any new faith, any new thing you're excited about, which is proselytize. I talked too much and didn't listen. I thought I could convince someone with my words, or by proving things with historical accuracy, which is a lot of what I learned about, especially New Testament scripture, in college. Really exciting stuff, when you really read any historical document.

Anyway, then I got to college. The college was what's called Brethren in Christ—very conservative compared to my background—where we couldn't drink, dance, or smoke. There, I sort of had a wonderful — I'm taking this long to talk about it because my "faith" is, A, oftentimes wildly hypocritical, because I judge and don't love people well. Then secondly, it came from a high school experience where, in a secular setting, I was the Christian geek. Then I went to a place that was, like, hyper, kind of Acts-focused. Let's call it "letter of the law" versus "spirit of the law." I'm being judgmental. There, when people lived their faith, it really inspired me. That's then what launched me into my acting career, because my acting teacher at that college was the kind of crazy liberal guy. Then I went to New York City. From that point on — I can skip all the details, so you can ask follow-ups as needed. But really, I think what I was gifted with was parents who had demonstrated the life that you try to live—loving others. At least my dad, being a psychiatrist, listened; that was his job—50,000 hours of listening to people. Then, when you observe life, I had a sort of natural empathy that made it easier to try to scrutinize the outputs of our lives. Then I got into things like writing and marketing and PR, which are kind of outputs of that. Then when my dad died, that's the positive psychology side; it's sort of an homage to him. Then my mom is probably behind the artificial intelligence book and the ministry side, and now the work at IEEE, where I work now, on AI and ethics, and now the focus on sustainability, continues to evolve. Although, the last couple of years with GenAI, it's become a lot more challenging.

Brandon: How so?

John: This is not about IEEE where I work, so I'm going to put that aside.

Brandon: Sure.

John: I'll tell you what, Brandon. I think more and more — I'm just going to say it. I think this is the first place I've said it to a person versus writing it. I think all use of GenAI, any GenAI, is irresponsible. Now, I'm going to qualify that. The outputs that you or others may use it for, I'm not going to tell Brandon — I can talk about you in the third person. I'm not going to tell you, "You use ChatGPT to rewrite something, and when you look at it, you feel good about it." That's your subjective truth. I honor that. What I know, in my experience, especially with data, from my 2014 book that was focused on data, is: humans don't have access, certainly Americans, to their data. If you know books like the seminal 2019 book The Age of Surveillance Capitalism by Shoshana Zuboff, that sort of brought so much clarity. The first chapter of that book is monumental in its paradigmatic level of understanding—how in 1776 the nature of consumerism started to form the basis of the surveillance economy. All these big words just mean, like, we have not been trained as humans to recognize how precious our data is. Because it reveals who we are, but it's not all of who we are. When other people take it and kind of feed it back to us, GenAI has accelerated so many of the worst parts of these tools.

Fundamentally, I'm still in the Screen Actors Guild. As a journalist, I wrote three books, which apparently now, through no protections of mine, are being used, subsumed by different tools, because they are deemed fair use—which of course is a ludicrous use of that term. With intellectual property, especially for a lot of GenAI designers and creators, the logic of "we need more data for our systems" has nothing to do with getting permission from you or others. I don't understand, Brandon, why humans of many types don't recognize this. Just because actors in the Screen Actors Guild are like, "I make money from this face. I want to protect it," then some people go, "Well, all things must change. Don't hinder any innovation." Then I'm like, it's a face. You have the same face; I'm fighting for you, too.

Basically, the use of GenAI tools—and this is before we get into energy and water—largely because they're not narrow, testable tools, I think is irresponsible. So I'm going to keep doubling down on that, because I've been talking about AGI being ludicrous and essentially occult for years. With GenAI, I think it's just more to say: if someone can't say how they're being responsible about it in a way that satisfies not just me but all the people that I respect so much in the space, hundreds of them, then my answer is, "Look, I get that they're cool. I use ChatGPT sometimes—once in a while, not that often. I get the allure." Especially, in a spiritual sense, with what is produced, it's harder and harder, I think, for people to say, "Oh, those are the words that I created," versus, "This is the aggregated, really just slop, morass, especially from synthetic sources, where no one knows how to cite an original author." By using these tools unwittingly, out of ignorance, not necessarily by design, people don't recognize that you are still mitigating, lessening, and harming all human creativity. Because anytime you use these tools, whatever the result is, you can't identify where it came from. More and more, you start to go, like, "Did I write that?" I read my books from years ago. I'm like, "Oh, that's pretty cool. I guess I wrote that." I know I did.

Anyway, that's a long answer. There are a lot of other reasons. But basically, there's the leadership of the companies, too. They always talk about when AGI is going to come and basically just remind people, pretty overtly, that they are not happy with humanity as it stands, and yet a lot of parts of society are giving them so much power to do so.

Brandon: Yeah, there's a lot more I want to double-click on in these themes. Let me ask you some questions about some of the issues that you've raised. I mean, even 10 years ago, you were writing about just this sort of complex, seductive allure of AI systems. One of the risks you pointed out was that our desire for introspection, our capacity for independent thinking, our ability to even appreciate the benefits we already possess were at risk. Could you say a bit about how it is that AI threatens—especially when it comes to these questions of faith and spirituality—our capacity for introspection, for even just reflecting on who we are? How does it creep into our basic sense of humanity? How is that threatened by these systems?

John: I think most of it has to do with agency, which I've talked about at different times. I think it's a really challenging subject because to have agency around the concept of APG is a challenge. The example I've been giving, at least to myself and some others recently, is the concept of theater. As an actor, I got very used to performing, and I was very conscious of when the lights were going down. Because as a professional, your job leads up to that moment. You're already working. Then when you're backstage, the sound of humans watching something is really interesting.

I only did one Broadway show, but in that Broadway show, I played harmonica. I would roll across the Richard Rodgers Theatre on this wooden stage on wheels. Playing harmonica while bouncing was really challenging. But the thing that was just exhilarating was looking out and seeing—I think it was 1,200 or 1,600 people—all looking at these dancers where the spotlight was. I knew that no one would be looking at me. Even though the eye senses motion, they were looking where the spotlights made them look. So I could just look at these people unencumbered, just staring at a show. And so I bring all that up to say: in a theater, when the lights go down, or in a movie theater, no one gets on the microphone and says, "Okay, everybody, don't be freaked out. The lights going down are a symbol that you're about to see a show." It's been, for millennia, kind of an invitation to catharsis. This is not real. We have that agency. It's a cultural phenomenon. Or even in cultures or places where they don't lower the lights, when a show starts, or you're about to read a book, there's some kind of inhalation, something spiritual — I say spiritual; I don't necessarily mean religious, of one particular faith, but the sort of conscious attunement to nature or to something outside. Music is very similar. Because we know those things, we don't have to say them anymore.

Now, in my field, when you see a white page on a screen and there's a rectangular box and it says, "How may I help you today," I can list literally about seven things immediately that are not engendering agency. They're actually manipulating. For instance, you have to scroll down on ChatGPT to the very bottom, below the fold—you don't see it right away—to the line that says, "ChatGPT may make mistakes. Check your results," which is not genuine disclosure. When you use anthropomorphism, "how may I help you today," there are things where people are like, "Oh, we're used to these tools." I'm like, we aren't. We aren't. I mean, if you're 56, people my age, you are, because of your intelligence and your age. Kids aren't. Young people aren't. A lot of people aren't. They just see, "How may I help you today?" And so I bring all that up to say, on the spiritual side of the loss of agency, the other example I've been thinking of is: if you walked into a room and there were 12 people, and someone had a sign saying Buddhist, someone had a sign saying Christian, someone had a sign saying agnostic, and someone had a sign saying Deweyan—you mentioned John Dewey because of my wife—whatever, words that were symbols that you thought might be attractive or not attractive, but you stood there, and that choice to look around, I'm deeming that agency in modernity. Whereas right now, there's a legal concept called a term of adhesion, which means one can't operate except in the world into which one is given or invited. And so, to say to someone, "Well, you don't have to use these tools," is like saying, "You don't have to use the Internet." It's not just that it's unrealistic and inaccurate. It's manipulative, and it's part of a larger design. Giving someone tools—many of which I'm still trying to create; there's a standard at IEEE called 7012, and I can tell you about it in a minute—is not me trying to tell Brandon or anyone else, "Here's how you should feel." It's essentially saying, "Hey, when you go to that web page for the first time, that white page, those designers and that company have the opportunity to have a first-time cooking structure. Boom." First time at ChatGPT, you know these things. It's very simple. And to say they didn't know is absolute—if I can swear on your show—absolute horseshit. There's so much from Stanford's behavioral economics and whatever else, like design 101.

When you know what works in terms of manipulation and do it to get your tool out in the world, and then later go, "Oh, I didn't know," then that's either ignorance at the level of massive irresponsibility, or it's pre-awareness, obfuscation by design. It's the type of thing I've been fighting, or trying to remedy, for years. Where what normally happens — Sherry Turkle has said this, another hero of mine, who wrote the book Alone Together. She's one of the leaders in the space of awareness and understanding how to use the tools well. Once you tell someone, "Hey, here are the things we disclose, A, B, and C; are you aware of the anthropomorphism?", however you describe that to someone, they'd get it, and you give them agency. So, in that example, you come back the second time and you know those things. She has a great quote, which she says in terms of loving. At the time when she wrote the book, it was about loving robots. She used the phrase, "We're ready for the romance." Humans like being manipulated, or I should say, they like being told stories and the morals. But if you don't give them the chance to even have agency, then you get into — eventually, from the business standpoint, the lowest common denominator: you're harvesting their information. They're not going to be useful anymore. Then, more importantly, the larger way that I live my life is: every person has worth. Every person should be given the human right, the legal right, to make their own decisions. Right now, the answer is: they absolutely don't.

Brandon: Yeah, I mean, is the core issue here around the anthropomorphizing? I mean, if you had LLMs that, say, gave you prompts in the third person, saying, "Bot 365 is ready for your questions," as opposed to, "How can I help you," does that take away the problem? Part of my understanding is that the anthropomorphizing is helpful to create a sense of attachment to this entity, right? If you think you can build a relationship with it, does it become addictive in the same way that Facebook and all these other things are? I mean, is that part of the issue—that there's this illusion that's created by the sort of first person?

John: Yeah, I mean, that's kind of the core reason. Your use of the word 'entity' intrigues me because I like to use the term 'systems.' Now, that said, I try to avoid telling other people how they feel because that's not really relevant. Meaning, I might have misinterpreted you or anyone else, but they can just tell me. Because there's a lot of people who believe that algorithms now or the systems comprising the algorithms are sentient or alive. And in the same way I believe in Jesus, I'm not here to judge that.

However, the disclosure around the third person, when not given—I know this as a journalist—initially, it used to irritate me to disclose certain things. Because I'd feel like, oh, I'm going to mitigate it. I never really wrote where it was like, I work for NBC, and I have to disclose that because I'm writing about NBC and they're giving me money. But you disclose things, and you're like, "Ah." It's like a magician showing their trick. I've got to disclose it. I get it. I get that concern. Valid. But then, when you don't disclose something and then you find out from a reader that not only do they feel fooled, but they think there are legal issues or whatever else, it's hard. Disclosure is hard. But in this case, it's not. It's not. Everybody knows. I mean, when ChatGPT first came out, I was the ethicist going, "Stop using I. Stop using I." It hasn't really changed. Then what happens is, you'll get the language. Like you just said, this tool uses the third person in an effort or whatever. Usually, within two prompts, it will say "me," or "I," or "we" again. That's in English. I only read a little bit of Spanish, so I have no idea what the nuances are. But even the phrase 'natural language' is misleading. The reason it is such a big deal is because when I read — I don't know. I can't think of a good analogy. I get lost in books. I read different writing. I know I haven't written it, but it feels like I could have. But there's that separation of, like, now I'm reading something by Brandon, or I'm reading this book about Marshall McLuhan or whatever it is. I'm really into it. I know the difference. When I write something myself—like I said, I read stuff I wrote years ago—I forget that I've written something, but I can trust that I wrote it. Then I see the citations when I'm quoting someone else, because I don't remember. Then my friends are like, yeah, you probably are copying other writers inherently, the same way a musician copies B.B. King's licks or whatever else. I think there is, like, this homage logic. But also, I can't copy Emerson. If I quote him directly without citing him, that is called theft, you know?

So all that is to say, like, the anthropomorphism is just one of the tools, but it is kind of the biggest one. Because I think it's a spiritual issue. I don't mean Jesus, or Allah, or whatever. I mean, you go to that white page. It's very ritualistic, very stage-like, like a proscenium. And throughout all of modernity, you and I have grown through pre-internet days and all of that, all these signals of going like this and picking up this thing. You and I have all these signals that became such a part of who we are. But now, when a kid, a young person, opens a screen, they just see that box, and "How may I help you today" is the entrance. Then with the anthropomorphism, anybody saying, like, "Oh, it's natural to anthropomorphize." It is. But from a design standpoint, when that's known, not disclosing it is an overt tool that is harmful, you know?

Brandon: Yeah. Let's talk about values. I mean, that's one of the key points you make in your book: that it's really critical to explicitly codify our own values to shape both AI systems and our flourishing. I mean, there's an approach to values clarification which is, in my sense, a way to simply track your individual preferences, whatever they might be. That implies a certain kind of relativism, right? So then you've got your values; I've got mine. But that doesn't seem to be the kind of thing you're talking about. My sense is you're borrowing your friend Constantine Ogdensburg’s theory of values dissonance, which suggests that unhappiness results from not living up to our own values. Could you say a little bit about why it matters that we recognize the values that are at play in our own lives, and why those values need to be codified into our AI systems—not as an afterthought, but in their design?

John: Sure. I'm glad you mentioned his name, Constantine. I haven't seen him for years, but he was always complimentary to me. He was one of the geeks I interviewed for my 2014 book about the quantified self. A lot of times, people are like, "If you scrutinize yourself too much with all these different tools, you lose the beauty of your life and all that." I find, more and more, it's a test. It's a certain amount of time. It's a way to recognize what we care about. So in his case, if memory serves, he wanted to spend more time walking with dogs. He said, "I find joy in time with my partner, walking my dog, being in nature." He did that by just taking — I think it was a month, maybe longer. Not just journaling, but really scrutinizing everything he did. This is the aspect of the sort of advertising regime around our actions, the surveillance economy, that when shared with a user in a positive way is beautifully illuminating. Hey, sleep app, this app, relationship app, whatever it is, sentiment analysis, emotional awareness, looking at your facial cues, all these things—things that we just don't see because we're not built that way. Then when they're aggregated with insights, they're wildly helpful. That's what he taught me. Then the values work that's in the book, some of it is pretty simple, like spending time with your family. I joke about this a lot. But it's like, no one I know comes back from a weekend and, when asked at work, "How was your weekend?" says, "Great. I was more efficient in my time with my kids. Three weeks ago, I spent four hours with them. Last week, I only spent an hour. But I maximized my love time with them," right?

I'm glad you laugh, because it's sort of supposed to be ludicrous, but it also shows — not from you, Brandon. But we, as humans, are not supposed to measure those things. Yet caregiving is the main part that's left out of GDP, gross domestic product. Caregiving, pragmatically, means certainly acknowledging women, children, and nature. Some of the major reasons we're in the Anthropocene, that our planet is suffering so much, are because we don't measure caregiving. I've had a lot of people over the years say, like, "Well, if you measure caregiving, it's going to harm it and mitigate it." I think it's the opposite. I think, first of all, with isolation from COVID, and extending now with these tools—a lot of these chatbots, et cetera—isolation tends to increase with a lot of use of certain chatbots, noble as the aspirations may be. There, to get back to your values question, I think for me, at least, one of the hardest things about values is also asking: does it work?

I wrote a book on measuring your values and then got divorced. Now, I'm not going to talk about my divorce or my ex or anything like that. But I certainly wondered, "Do I have credibility talking about emotions or whatever else?" The short answer is, I don't know. I think credibility has to do with the person looking at me. I can't make myself be credible to someone. But I'll mention, as I will here, that in one sense, I wish I hadn't gone through the divorce, mainly because of my kids. But then I wouldn't have met the love of my life, Gabrielle, who I'm married to now, and I wouldn't have gone through an experience that I would wish on nobody. With divorces, more and more, it's interesting, like when you watch TV shows. I got divorced twice. Everyone's journey in pain is unique and different. But at least for me, categorically, if someone was like, "Would you prefer to get shot?" Yes. "Do you want to lose a couple of fingers?" Yeah. Because the pain is so lasting, and there are so many aspects of my values that I now have in question. I wrote a book on values, and I wondered: did I do something wrong? Because I was in a situation where it seemed like I wasn't doing the right stuff.

Anyway, and so there, I'll say that now I'm at a place where I come back to that quote, "The inmost core of reality is love." Because, at least for me, going through that tested my ideas around values and tracking values, useful as I still think they are. The harder value that ultimately got tested for me was my faith. Meaning, do I think that God—in my case, Jesus—is real in a way where that experience happened? Do I feel that my faith fundamentally is kind of what kept me from going unhinged? The answer is yes. And so, in that sense, today, hopefully without sounding like I'm proselytizing, I'll share that an examination of values, where one says their faith—capital F, small f, whatever—is why they get out of bed in the morning, is how they recognize they can keep going. That, again, goes back to love for me.

The final point I'll make in this answer is: I've posted a diagram on LinkedIn about that statement, "The inmost core of reality is love." Because, as a geek, I'm like, is that the web reality that you and I are on now? Hypertext? Is it virtual reality and augmented reality, which I've written about a lot? Is it the spatial web? Is it a new set of protocols? Is it GenAI? Is it data? Is it our dreams? I think about dreams a lot. Being 56, you wake up and you're like, "Oh, I had a dream." It wasn't real, but it was. Something happened in your brain. You woke up, and it stayed with you. So that's a reality. Death, life, spirit. I love that Karl Rahner said this. If every one of those, at the inmost core—if I've rightly phrased that—is love, then seeking that love as a value is probably the core of what I'm trying to do. Because I guarantee, John, especially in the last couple of years, you're not the guy you want to be. Like, how do I live by following values? Unless it means finding someone as amazing as my best friend and wife, and also leaning on her, sometimes too much, in terms of love. But if that helps. I really appreciate you asking the question, because a big part of the book—which is definitively the same, which I want to make sure to say to your viewers and listeners—is me, John. I know. I won't just say I believe. I know that every single person in the world has worth inherently. Because you breathe. And so, asking what are my values, if no one has asked you to ask that of yourself: you are worth the time. The book has some examples of values, things like family, work, et cetera, where you can start to ask yourself: what is my purpose for the world? Am I living by these things that I think are bringing me value? And if you test and you know that they are, amen. And if you test and you realize, I'm stressing myself doing 70-hour work weeks, and I think I'm losing time with my family—which might be harming me ultimately—then my answer, especially from my position, is: take the time you need now.

(outro)

Brandon: Alright. That's a good place to stop the first half of our conversation.  Join us next time as John takes us deeper into the role of love and grief and faith communities in shaping ethical AI and why he believes every person's inherent worth must guide the future of technology.