
HOPE: Comfort, Crutch or Contagion?

I had an experience of hope recently while listening to a YouTube conversation between Paul Anleinter and Jordan Hall. I was drawn in by the title The AI Apocalypse is Here: AI, Religion, & The Great Transition. I know a lot of crazy stuff is pouring into social media channels and thought, “I’ll watch this and see how crazy it can get.”

I was surprised. Paul Anleinter is a cultural historian and theologian who has written several books on the topic, and Jordan Hall is a technology tracker whose recent essay on The Great Transition apparently attracted Paul’s attention. Their conversation was broad and interesting, relatively free of hyperbole and full of sweeping conceptual observations. Here were two intellectuals trying to make sense of things. I could relate. I’m trying to make sense of things myself, and I respect that kind of thinking.

The Current Context

After an hour sweeping over the arguments about why the development of AI is similar in impact to the development of the hydrogen bomb, I began to flag a little. Jordan was making the point that civilization is driven by scarcity and the desire to obtain scarce things like water, food, and medicine. Some handle scarcity by cooperating and generating new techniques and technologies. Others handle it by taking resources from other people. This struggle drives civilization. Then Paul brought in images from scripture about the lamb and the dragon, or the lamb and the lion, depending on what you read. If the lamb symbolizes a vertical orientation toward love and cooperation, the message of Jesus, and the dragon represents the force of power, aggression, and “taking,” which will “win”? Back and forth they argued. Cooperation is the reason science works and is core to the generative side of society. But fear of scarcity can drive cooperation toward developing something before someone else does, as a way of staving off the dragon. We had to develop the bomb before Germany did, and so on.

Winner Takes All

Jordan’s point, eventually, was that the four big AI superpowers, OpenAI, Meta, Google, and the Chinese, have for years discussed this issue and assume that AI is a “winner takes all game”: if we don’t do it, someone else will, and they will take over. As a result, we now have an AI Manhattan Project underway. It will change EVERYTHING, they concluded. The norms and agreements that constitute our current civilization are being eroded at a pace we can’t even grasp. Software companies are obsolete. Finance companies are obsolete. Anything that smarter machines can handle (all words, images, and symbols) is obsolete. “We are not prepared to think at this scale,” Jordan asserts (as he proceeds to do just that!). He may well have been referring to organizations and governments and their leaders. I would agree with that. I was getting more depressed as I listened.

Empowered Networks?

But then he turned hopeful. As he and Paul talked about the possibility that vertical orientations toward a higher purpose might counter the horizontal concerns about scarcity, Jordan suggested the future might actually belong not to organizations and governments but to networks of AI-empowered individuals who learn how to collaborate and work together, just as agentic AI systems are learning to do across platforms.

I don’t want to defend this argument, but it set off some thinking on my part. I’ve been fascinated with the role of mycorrhizal networks in the health of forests. These vast networks of fungi have fine hyphae that both penetrate the roots of trees and other plants and create a sheath around root tips. Both types are able to break down soil nutrients and provide them to the trees, receiving carbohydrate sugars and other things in return. The endomycorrhizal fungi penetrating the roots and the ectomycorrhizal fungi providing a sheath around the root and root tips could be analogized to internal and external consulting networks that work with larger organizations to help them thrive.

I began to dream about our networks using AI in sophisticated ways to help solve problems and provide insights, the nutrients of growth in larger systems. Do we know how to do that? Can our collaborative orientation extend to technology, with us staying in the human driver’s seat, or are we going to abdicate to B2B AI solutions?

Like a little cloud appearing near the sun, a caution emerged. Someone on Medium wrote about the neurological consequences of becoming overdependent on AI and social media, constantly turning to a chatbot for help with emails, reports, and everything else. He found that his ability to focus and to struggle with difficult thinking questions became severely limited and even disappeared. He experimented with stopping chatbot use completely for a month and found that some of his capacity returned. Ah, the sun reappears.

A Dilemma

But here is a dilemma. Can people in AI-augmented networks both learn to partner with AI and still rise above its defocusing, numbing allure? Are those of us who have spent decades guiding groups and individuals to become self-aware and relational willing to enter into this arrangement?

I am heartened that I was able to feel some hope in the midst of a depressing blizzard of news about climate, war, politics, and technopolies. I’m committed to finding more examples. I wonder: could hope fuel a contagion of experimentation guided by higher purpose?