Thought Leadership Studio Podcast Episodes:
The Interplay of AI and Humanity with David Espindola
Episode 66 - Navigating the Future: AI's Role in Shaping Human Consciousness and Business Transformation with the Author of Soulful: You in the Future of Artificial Intelligence
Or click here to listen or subscribe in your podcast app
What this episode will do for you:
- David Espindola's Tech and AI Journey: Discover David's transition from Silicon Valley to a leader in AI, exploring his extensive background in technology and entrepreneurship.
- Insights from 'Soulful': Delve into the key concepts of David's book, 'Soulful: You in the Future of Artificial Intelligence', which examines the intricate relationship between AI and human consciousness.
- AI and Human Collaboration: Understand the synergistic relationship between humans and AI, emphasizing the importance of empathy and relationship-building in systems that include both humans and AI in the loop.
- Challenges in AI Adoption: Gain perspective on the hurdles organizations face when integrating AI, including ethical considerations and privacy issues.
- AI's Role in Enhancing Human Performance: Learn about AI's transformative impact in areas like education and sports, and how it can augment human capabilities.
- Empowering Society with AI Knowledge: Explore the necessity for society to embrace AI education, preparing for a future where AI plays a significant role in various sectors.
In this podcast episode, I interview David Espindola, an entrepreneur, keynote speaker, author, consultant, and advisor. We discuss the intersection of artificial intelligence (AI) and human performance, as well as the transformative potential of AI in various areas.
David shares his background in technology and entrepreneurship and explains how he became interested in AI. He also talks about his book, "Soulful: You in the Future of Artificial Intelligence," which explores the relationship between AI and human consciousness. The conversation touches on topics such as the collaborative approach between humans and AI, the challenges of AI adoption in organizations, the importance of privacy and ethical considerations, and the need for society to embrace and educate itself about AI.
David emphasizes the need for humans to leverage their unique capabilities, such as empathy and relationship-building, while collaborating with AI. He also provides insights into the role of AI in enhancing human performance, particularly in areas like education and sports.
Some of David's coordinates:
Curated Transcript of Interview with David Espindola
The following partial transcript is lightly edited for clarity - the full interview is on audio. Click here to listen.
Chris McNeil: I'm Chris McNeil, with Thought Leadership Studio, and I'm sitting here across Zoom with David Espindola, who's an entrepreneur, keynote speaker, award-winning author, consultant, and advisor. He's the founder of Brainyus, a company that applies the principles of transformative purpose, lifelong learning, and servant leadership to guide human-AI transformation. What a fascinating topic. Welcome, David. Great to have you here.
David Espindola: Thank you very much, Chris. It's a pleasure being here with you.
Chris McNeil: So to introduce our audience to you, if you don't mind, can you tell us a little bit of your story - what set you off on this path? Was there maybe a pivotal moment, a transformative moment, or an epiphany that helped you make the connection with AI and human performance, transformative purpose, and the things that you speak about and work with?
Journey from Silicon Valley to AI Thought Leader

David Espindola: Yeah, I'd be more than happy to do that. So it started way back when I began my career in Silicon Valley. From the very beginning, I got exposure to the world of technology and entrepreneurship, and I was fascinated by the possibilities. I worked at two very fast-growing companies, one of which did an IPO, and I was just fascinated by the power of technology and how influential it was. After that, I decided I wanted to go work for a large software company. So I went to work for Oracle.
I was in the consulting business and worked with a number of large organizations - Fortune 500 companies - helping them with their business processes and with adopting technology. After I was done at Oracle, I decided I wanted to pursue a CIO career. I had a boss at Oracle who had become a CIO, and he was very kind and generous and invited me to go work with him.
And he showed me the ropes and gave me the experiences I needed, and then, when the right opportunity came along, I became the CIO of a software company in Chicago. For that entire period, from Oracle through my CIO career, I was actually commuting to Chicago - I lived in Minneapolis, Minnesota. So I traveled for 20 years, and then, once there was a transition at the company I worked at, I said: I'm done traveling. I don't want to travel anymore.
And I started doing my own thing. I've been on this quest of transformation and discovery, trying to apply the things I learned throughout my career to help people and organizations transform - all tied in with technology, which is what I've spent my entire career on. And so three years ago I decided to write a book.
I had an opportunity - I'm on an advisory board of TLI, the Technological Leadership Institute at the University of Minnesota, and one of the directors there is an editor for IEEE. They had a series on management, technology, and innovation, and they were looking for writers to write a book on that topic.
So I wrote a book with a co-author called The Exponential Era. The whole premise behind the book was that things are changing so fast - we have all these incredible platforms that are growing exponentially, and companies are having a hard time just keeping up with all these changes. So we wrote about these platforms, we wrote about the impact on business, and we provided a methodology for companies to actually be able to deal with all these fast changes.
And in that process, there was one platform that I thought was really going to be tremendous - that was going to change business, change society, change the way we work - and that was AI. I thought I should write another book focused just on AI. That's how I came to write my latest book, Soulful: You in the Future of Artificial Intelligence. And I'm just amazed at how fast this technology is changing. It's just fascinating.
Chris McNeil: I was looking at some of the reviews for Soulful on Amazon, as I had mentioned, and it seems like you're speaking to the intersection of human consciousness and artificial intelligence.
It's the Combination of the Human and AI That's Very Powerful

David Espindola: So yeah, in the book, there's the title itself, right? Soulful, and then artificial intelligence. In writing the title, what I tried to do was capture this juxtaposition of human intelligence - the emotional, soft side of being human - with the hard, analytical, predictive aspects of being a machine and artificial intelligence.
And I believe it's really the combination of the two that's very, very powerful. The machines can do a lot of things that we are not so good at, but there are a lot of things that we can do as humans that the machines can't do - things like empathy, building relationships, building trust. So it's really that combination of the two that I wanted to capture in the book, and to talk about how I believe this technology is going to be transformational to us as humans and to society. Technology is neutral - neither good nor bad - but there are good things you can do with it.
And there are some negative things that can come from it, so I try to capture some of the concerns as well. But I wanted to paint a positive picture, because I had seen so many negative stories and commentary - on social media and in the media in general - about how AI is going to take over, take our jobs, do all this terrible stuff. I wanted to counter that with some positives, while acknowledging the ethical concerns at the same time.
Chris McNeil: Well, I'm sure some horse farmers were concerned about automobiles taking their jobs at one point as well.
David Espindola: Exactly.
Chris McNeil: But they created a lot more jobs, and it just took some flexibility. So this is a fascinating topic to me, and there are so many threads I'd like to explore with you. One of them is this: instead of asking whether AI or humans are better at a given task, or framing AI as competing with humans as if it were another species...
...what about considering the system of a human working with artificial intelligence: the input from the human into the AI, the output from the AI fed back to the human, which allows the human to modify it, and then the input and output of that whole system of human-AI interaction. Would that be a more useful way to think about it?
David Espindola: Yeah, I like to think about it that way. I like to think of our interaction with AI as a collaborative approach. We've had computer systems for many, many years, but our interface and interaction with those systems has not been as dynamic as it can be with AI, because AI can be very responsive and can give you new possibilities, new ideas. You can take that response, refine it, and throw it back at the AI, and the AI can then give you another response. It's a dialogue - a collaborative effort. So I think it's very powerful from that standpoint.
Sentience and Artificial Intelligence

Chris McNeil: I agree with that. And I wonder about the implementation within organizations - with system design that incorporates AI - if maybe sometimes there's too much reductionist thinking in terms of "AI should do this" and "humans should do that," instead of the proper design of the system as a whole: one that has a purpose, but includes AI and humans within the loop to achieve that purpose.
David Espindola: Yeah, absolutely. I think there are several challenges. One of them is that this is new, so everybody's trying to understand what it means - what's the impact on the workforce, what's the impact on the business. So there's a lot of learning and experimenting going on right now. And the second is the fact that change is hard, right?
Change is hard no matter what technology or what situation we're dealing with. And this one is particularly hard because I think there's a lot of fear out there. People are afraid of this, especially this whole aspect of AI taking their jobs and so on. So I think there's some challenges there in terms of the change management process where we need to educate the workforce on what's real, what's hype, how this technology can be used, how it can benefit the organization, how it can benefit individuals, and teach them how to collaborate with these systems. So that educational component I think is really, really critical.
Chris McNeil: I agree. And isn't one of the strongest, if not the strongest, human motivations (the drive) toward the familiar?
David Espindola: Absolutely.
Chris McNeil: .... anything that pulls us away from the familiar (will add friction). And I'd like to get your thoughts too, especially given the message of the book Soulful, about .... maybe it's projection, maybe it's partly not - this considering of potential "sentience" of AI.
Extending and Withdrawing the Sense of Self and AI

And my thoughts on it are: perhaps it's in the sense that somebody who's ridden a motorcycle on a racing track becomes one with the machine - your sense of self extends to include the rubber on the road, and only through that can you be conscious enough of what the motorcycle is doing to stay safe at high speeds.
And on the other hand, there's the withdrawal of the sense of self - perhaps a surgeon working on a traumatic accident victim who, if fully immersed in himself, would find the work very difficult because of the emotion it brings about. So they have to withdraw their sense of self. Are we extending our sense of self through AI in that sense, or are there other dynamics you'd like to speak to?
David Espindola: Yeah, no, that's a great question. It gets into a number of different conversations, so let me see if I can unpack this a little bit. I do think that in a way we are sort of becoming cyborgs, because we are so dependent on our technology. Just think about how it has revolutionized our lives. I can't even imagine what life was like before my smartphone, and now it's part of me. If I don't have my phone with me, I'm sort of lost, right?
I don't know what to do, because everything that I need is there. And now we're taking that even further: we have smart watches, we have smart glasses, we have chips being inserted into people's hands. I don't know if you've seen this, but there's a company called Walletmor that will insert a tiny little chip in your hand so you can make purchases with that chip.
And then people are working on BCI - brain-computer interfaces - and there's the possibility that someday we'll have a chip inserted into our brain that we'll use to make purchases, to communicate with other people, and so on. So we're becoming ever more intertwined with these technologies, and becoming dependent on and extended by them. On the other hand, when we talk about being human and having consciousness, I think the challenge is that we don't really have a good understanding or definition of what consciousness really is.
And so it's hard to talk about machines becoming conscious. I lean toward thinking that machines will not become conscious. Whether we are going to reach AGI or not - artificial general intelligence, another term that's used often, and one that has multiple definitions - is also debatable. There are some AI experts who believe that we may be very close to getting there.
There's one senior Google executive who said he believes there's a 50% chance that we'll get to AGI within five years. And then you have other AI experts - the one that comes to mind is Kai-Fu Lee - who believe we have so many challenges to overcome before we can get to AGI: things like understanding emotional intelligence, counterfactual thinking, and some of the cognitive capabilities we have as humans that are very difficult to duplicate in machines.
So there's huge debate about AGI, whether we're going to get there or not, or how long it's going to take. I think this is an exponential technology. So it's possible that these machines will become ever more capable and intelligent. But I still think there's a human aspect of this that is very, very difficult to replicate in machines.
Chris McNeil: Well, when I consider what it is to be a human in relationship to artificial intelligence, to me, it brings up issues like "What makes us human?". And there's of course the mechanical aspects of the brain. And in that aspect, certainly AI can do certain calculations much faster than the human brain can.
Can AI "Become" Conscious?

But to me, we're more than our thoughts or our thinking, even our mind, let alone our brain. And I think about anthropologist, philosopher, and systems thinker Gregory Bateson's concept of logical levels. The simplest explanation I've found for it is in sports: the player is at a lower logical level than the team, because the player participates in the way the team plays together, but the team is a higher order and influences the player far more than the player influences the team. And the league would influence the teams, because the league sets the rules of the game, and the team has to work within those constraints.
So the league would be at a higher level than the team. And I think Bateson's statement was something like "A set of members cannot be a member of its own set," and that's how he defines a higher logical level.
And consciousness, to me, is at a higher logical level than understanding, because all of our understanding happens within consciousness. So how are we ever going to understand consciousness? Yet I see a lot of people who project consciousness onto machines, as if it's going to emerge from the machine - from an assumption that consciousness emerges from biology, which I haven't seen any evidence of whatsoever.
It's more like to me, sensory experience, including our experience of biology, exists within a field of consciousness. And I know you've written about consciousness in your book. What are your thoughts on the relationship of consciousness and technology?
David Espindola: Yeah, I love the way you put that. I think it's difficult for us to really understand consciousness, because consciousness is part of what we are. It's hard for us to study the subject when we are both the person studying it and the subject matter at the same time. So I think some of this conversation gets into the profound, right? And some of it really depends on your belief systems - on what you believe in.
I happen to believe that humans are more than just the electrical discharges in our neurons or the chemical reactions happening in our brains. I believe there's more to being human than that. And I think a lot of it is things that we don't understand - back to your point, right? - because it's beyond our ability to really know what it is. And that gets into the matter of the soul, right?
And the soul can be defined in so many ways, but I think there's this other aspect of being human that is more than just the mechanics behind it. And I think that ties in with this concept of consciousness. That's where I think the machines can't touch us. I think the machines will always be based on the desires of humans: we have intrinsic desires that we can impart to the machines, but I don't think the machines will develop their own intrinsic desires.
And so it gets a little philosophical and complex, but those are sort of my high level thoughts on it.
Chris McNeil: Well, that makes sense. That ties into something I have found explanatory when I've been asked to speak on AI - although I don't consider myself really an AI expert; I'm just somebody who uses it a lot and, like many of the rest of us, I'm trying to learn how to best use it. I've been involved in some software projects involving AI that predated ChatGPT's release, so I have some understanding from working with some high-level experts on that.
But John Searle's Chinese Room thought experiment seems explanatory, and I'm sure you're familiar with it. Somebody is sitting in a room and can't see what's outside. People slip notes written in Chinese under the door, and he has a rule book telling him which symbols to respond to with which other symbols. He follows the rule book, but he has no idea what any of the symbols mean. Per these rules, he's putting out the correct responses, so somebody outside the room might think he understands Chinese - but he doesn't, because he's simply responding to symbols with rules. I think that helped explain to a certain group of people how AI is not conscious the way we are, but follows a set of rules that can make it appear to be in some situations.
David Espindola: Yeah, that's exactly right. And I think we can see that. I don't know if you've seen some of these interviews with Sophia and other robots, where the interviewer may ask a provocative question - are you conscious, do you understand these deeper questions about consciousness, and so on?
And the machine may respond back: yeah, I have feelings, or I understand pain and I understand consciousness. But the machine is just responding to those symbols, just like you explained, right? It's not really true that they have those experiences; they're just regurgitating back to us what we taught them. So it's fascinating.
Chris McNeil: It is, and it creates a lot of odd dynamics, I think, in some cases, with what people tend to project onto AI. So I'm a big fan of human performance enhancement - I was deep into sports psychology, and also into neuro-linguistic programming. I use it in my coaching in thought leadership, because part of that isn't just creating thought leadership campaigns; it's helping develop people's ability to lead thinking, to be leaders, to generate their role models. And that's a human performance thing.
AI and Human Performance

And I've thought about how AI could be a powerful adjunct. Just imagine an athlete who wants to improve performance in a sport, with specific movements he or she wants to get better at: perhaps a virtual reality simulation, guided by AI, in which they could watch themselves modeling somebody who is already an expert at those movements, training the neural pathways to approximate expert performance - even slowing it down. And with some type of biofeedback, the AI could interpret the muscular response, or the EEG or other brain responses, to close that biofeedback loop and enhance performance. Have you looked into things like that? Or what are your thoughts on AI enhancing human performance in other areas I may not have thought of?
David Espindola: Yeah, I have thought about this, particularly in terms of education. The way I think about it is that AI is going to evolve to learn deeply about our ability to learn, our ability to improve performance, our ability to deal with all the challenges we face. And I think AI can become a tremendous helper in making humans better at learning and performing in a number of different areas.
So I think there's a huge potential there because you're going to have this on-demand tutor or coach or teacher that's going to be there for you at any moment that knows exactly how you learn and knows your state of mind at that specific point in time, whether you're ready to absorb some information or not. So I think it's going to know each individual intimately and be able to be a very effective coach.
So I think that's the positive side of this conversation. Now, I also have some concerns about using AI as a performance-enhancement capability, if you will. My concern is: to what degree are we going to give up our privacy in order to take advantage of these capabilities? We're already dealing with that situation today. Every time we interact with a computer - whether you're doing email on Gmail or browsing the web - you are giving up a little bit of your privacy, but it's something you accept because it's part of how we work as a society today.
We need access to these tools, and in exchange you're giving up a little bit of your privacy. I think that's going to grow exponentially, too. We're going to have more and more exposure to systems and processes that reveal our most intimate thoughts, behaviors, strengths, and weaknesses. And the question becomes: who has access to that, and who will determine to what extent that knowledge can be used for or against you? So I do have some concerns from a privacy standpoint, and we'll see where that goes.
Free Stuff and Offers Mentioned in Podcast