
Yeah, AIs “hallucinate.” So do you!
Not that there's anything wrong with that. I do too. We all do. We're assessment-making beings—constantly, inescapably constructing interpretations of events, people, motives, ourselves, etc.
We can do no other. (As Peter Yaholkovsky puts it, "I rarely make assessments; they're just already there!") We encounter the world shaped by our histories, by the filters that our filters have built. Again, not that there's anything wrong with that, as Jerry Seinfeld would remind us…except when we start to believe our own stories…not noticing that they're stories, not "fact".
Let me give you an example: A coaching client told me recently, "They think I can't cut it." "How do you know?" I asked. "Well," my client responded, "I guess I don't really; maybe it's me being nervous."
It gets funny. When you start noticing yourself making interpretations about other people's interpretations about your interpretations, you might start wondering whether there's a better way to play.
Fortunately, there is. Fernando Flores teaches a simple practice for encountering assessments (whether someone else's or your own): 1) recognizing that it's an assessment, not "the truth"; 2) recognizing that it may be based on actual evidence or experience (or not), and inquiring into which; 3) remembering (or deciding) whether you're open to, or interested in, that person's assessments; and 4) deciding whether or not to pursue the conversation further. Try it, starting with being grateful for the assessment, whatever it is, and notice what happens to your reactivity, your mood, and to the focus or squirreliness of your attention.
(I call it a practice because—believe me—it takes practice. Lots of practice.)
It's not about "getting it right." It's about whether your interpretations serve your commitments, your relationships, your sense of possibilities. It's about how your interpretations shape the moods in which you encounter the world, and in turn how your moods shape your interpretations, and with them your possibilities.
What does all this have to do with strategy, innovation, sustainable business, regenerative economies, and climate action? Everything.
To be continued…
"All outward forms of change brought about by wars, revolutions, reformations, laws and ideologies have failed completely to change the basic nature of man and therefore of society." ~Jiddu Krishnamurti
As he witnessed the first detonation of a nuclear weapon on July 16, 1945, a piece of Hindu scripture ran through the mind of Robert Oppenheimer; he recalled Bhagavad Gita Chapter 11, Verse 32: “I am mighty Time, the source of destruction that comes forth to annihilate the worlds”. No matter how little we know of the Hindu religion, this line lives within us all. Watching the fireball of the Trinity nuclear test, Oppenheimer, who had learned Sanskrit and read the Upanishads and the Bhagavad Gita in the original, turned to the scriptures. He referred to those texts as among the most decisive influences in shaping his own philosophy. While he never became a Hindu in the religious sense, Oppenheimer found Hinduism a useful philosophy to structure his life around; his interest in it was more than a soundbite, it was a way of making sense of his actions.
Oppenheimer enjoys the dubious distinction of being the ‘father of the atomic bomb’ that devastated Japan in World War II, the shockwaves of which are still felt around the world today. It was a defining moment of scientific endeavour and a milestone in humankind’s progress. But the persistent threat of nuclear destruction still hangs menacingly over our species, demonstrating the lasting and double-edged nature of the human quest. Artificial Intelligence is on a similar path, and is potentially even more powerful, propelled by exponential technologies. Not to mince words, the AI of the future is perceived far more as a threat to our species than as a tool for the progress of the planet.
For all of history, no other form of intelligence has been able to come within striking distance of human intelligence (at least not visibly), a superpower that gave us primacy over other species and dominion over all of Nature’s resources. Now AI seems to be on an exponential trajectory, threatening human supremacy in unprecedented ways. Humankind is the new Dr. Frankenstein, for our own creation seems to have the potential to take us over. While it is natural to feel threatened by such a force, “we must face our fears if we want to get the most out of technology — and we must conquer those fears if we want to get the best out of humanity", proposes Garry Kasparov, one of the greatest chess players of our time. Getting the best out of humanity, however, is not possible without first understanding what it really takes to be “human”.
"In the end, you know, we are very minor blips in a cosmic story. Aspirations for importance or significance are the illusions of the ignorant. All our hopes are minor, except to us; but some things matter because we choose to make them matter. What might make a difference to us, I think, is whether in our tiny roles, in our brief time, we inhabit life gently and add more beauty than ugliness." ~ James G March
What is the human condition and how are we to truthfully explain and solve it? Australian biologist Jeremy Griffith puts forth the idea that when we humans developed a fully conscious mind some 2 million years ago, a battle unavoidably developed between it and our already established instincts. The result of this conflict between our instinct and intellect was that we became psychologically defensive, angry, alienated and egocentric—the upset state we refer to as the human condition.