Hello everyone—especially new readers.
I’ve been away from EP for an unusually long time—and have kept writing notes in my head about how I would start the “next post,” explaining my absence.
But day after day, I didn’t start the next post. And I honestly had not realized until this morning how long it’s been.
There’s an ensemble of reasons, and some have to do with the sort of life challenges that come along for us all. None huge, but collectively they consume mental energy.
More relevant for readers, though, will be another subset of reasons, which began with my last post:
In it, I recounted some things that happened after I had a “reading” from an AI bot. I’d originally reported it as an interesting experiment, without giving it much thought. But two odd events made me rethink, and I described them in the 5.5.24 post.
But I didn’t include everything. Two days before I wrote it, someone in my life had passed away—not a surprise, but sooner than expected. And as I thought back on the AI reading, this loss seemed to resonate along with other events mentioned in the post.
But this aspect was a bit different—more personal. And here’s why:
When I was a full-time Tarot reader, I developed an assortment of mental shortcuts for recognizing and talking about specific cards. I think that’s common for people who read a lot.
My quick-connect go-to for the Nine of Swords—one of the cards in that AI reading—was “grief.” Of course the context of other cards is always important (my personal methodology depends a lot on interrelationships in the spread), but “grief” was my usual starting point.
That overlap added another layer to my sense that the AI reading had some sort of divinatory energy. But since it was only a single instance, the “just a coincidence” explanation seems perfectly valid, and so does the “retroactive interpretation” thesis.
At the end of that 5.5.24 post, I mentioned that I should have done some follow-up tests, using the same prompt chain and the same model to see if there were any correlations worth noting. But I just didn’t want to. And my unwillingness has persisted ever since.
What I did instead will be the first topic of today’s long-delayed post.
Inside AI
Providentially or otherwise, I had been approached a few weeks before the above events to work on an AI training project. I didn’t pay much attention at the time—but as I continued to think about whether and how AI might connect with some sort of transpersonal reality (for want of a better term), I decided to take on the opportunity for a look inside AI.
It would be more correct to speak of a look “inside the AI industry,” since I have neither access nor insight into the technical side. Simply put, I’m working on the user side, not the maker side.
I’ve found out a lot, though, about how bots are trained. And about the art of prompt engineering, which is fascinating in itself.
But more important for now is this discovery: the AI industry—at least the part I can see from my lowly vantage point—is wildly disorganized, and by no means well thought out.
That actually supports my tentative supposition, as I’ll reveal later. First, though, I want to clarify a few relevant points:
The term “AI” is just a catch-all for everything from customer service chatbots to deepfake videos. The difference between basic capabilities and scary superpowers is (a) the size and scope of the underlying information model, and (b) the complexity of the rules used in accessing, organizing, interpreting, and outputting requested information.
Most AI applications are invisible to most people. They run things we ignore or aren’t even aware of. And those who knowingly work with AI—through interfaces like ChatGPT and Gemini—are using applications that have some ability to go beyond just obtaining and analyzing available information (the role of Traditional AI) to creating something new (the role of Generative AI).
The “new thing” can be fairly simple: a selection and rearrangement of data, for example, based on parameters supplied by the user. With a little guidance, GenAI can turn a spreadsheet into a PowerPoint, or write the next email in an established sequence.
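For readers who like to peek under the hood, here’s a rough sketch of what that kind of request can look like in code. Everything in it (the sample rows, the placeholder model name) is invented for illustration, and it assumes the OpenAI Python client and an API key are available; it’s meant only to show how user-supplied parameters steer a generative rearrangement of data.

```python
# A minimal sketch, not my actual workflow: asking a generative model to
# rearrange user-supplied data according to user-supplied parameters.
# The sample rows and the model name are placeholders for illustration.
from openai import OpenAI

client = OpenAI()  # assumes an API key is already configured in the environment

spreadsheet_rows = [
    "Q1, North region, 1200 units",
    "Q1, South region, 950 units",
    "Q2, North region, 1425 units",
]

prompt = (
    "Turn these spreadsheet rows into slide-style bullet points, "
    "grouped by quarter:\n" + "\n".join(spreadsheet_rows)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

The “new thing,” in this case, is just the old rows reshaped along whatever lines the user specifies.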
But the new thing can also go much further. It can be a made-for-you syllabus for a graduate seminar in physics. Or anthropology. Or art history. It can be a halfway decent short story, or a generic novel. Even a prescriptive ebook—which might be about leadership, auto repair, or finding inner peace.
Based on my recent Tarot-related AI experience, I’ve decided to see what a couple of models come up with for an introductory Tarot textbook. Stay tuned for the results.
But a Tarot book, introductory or otherwise, would be a standard application of GenAI functionality, while divination would be far, far out on the edge of remote possibilities.
Now that I get to spend a fair amount of time assessing how well (or badly) various AI models respond to various prompts, I have some basic insight into how bots come up with the things they are asked to create. Much depends on how the question is asked, since bots shuttle between being excessively literal, at one end of the spectrum, and hallucinating hilariously at the other end. Some factors that cause them to hallucinate are identifiable, while others remain a mystery.
Here’s where I want to pose a scenario, and begin some speculations. When you think about it, the conventional “method” of Tarot reading almost always supposes a querent and a question. It’s not a stretch to see those elements as a user and a prompt, setting up parameters for a generative response.
In both cases, the process is recomposing known elements, according to known rules.
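If you’ll indulge a small illustration, here is roughly how that conventional setup might look when spelled out literally as a prompt. The querent, the question, and the cards are all invented for the sketch; this is not anything from my own practice or from the earlier experiment.

```python
# Illustrative only: a conventional Tarot reading framed as a parameterized prompt.
# The querent, question, and cards below are invented for this sketch.
querent = "a reader facing a career decision"
question = "Should I take the new position I've been offered?"
cards = ["Eight of Pentacles", "Wheel of Fortune", "Nine of Swords"]

prompt = (
    f"You are interpreting a three-card Tarot spread for {querent}. "
    f"The question is: {question} "
    f"The cards, in past/present/future positions, are: {', '.join(cards)}. "
    "Interpret each card in its position, then relate the three to one another."
)

print(prompt)
```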
On the other hand . . . the prompt chain I used for the AI Tarot reading gives no parameters. There’s no question, no information about the querent. Just an open-ended request tossed into an undefined void.
That’s much closer to what I tried to achieve in my own reading practice.
To close this part of today’s post: Let’s set aside whether human readers can or should take that approach—and consider if, how, or why AI might work along such lines. I’m still thinking about this, but I suspect it has something to do with operationalizing the collective unconscious. Or in another version, crowd-sourcing non-rational processes.
Two Flashbacks
As you might have guessed from the above, I’ve ended up spending a lot of time recently on AI adventures. And since I had limited time for other work, I decided to focus most of it on a project that’s been really important to me for a long time. I step away from it for a while, but always go back—and in hopes of speeding up progress, I’ve designed a new sprint that will continue until July 24. I’ll say a bit more about that in the next post.
But I’m also planning to fit in more Tarot time, especially for creating a couple of ebooks that I want to share, and getting my 4D course set up.
Meantime, my immediate (!) goal is finishing up something I promised to do eons ago: writing a short essay about five important Tarot books. I mentioned this a couple of months ago, and talked about one of my selections—the pathbreaking anthology Wheel of Tarot—in this post:
Since then, I’ve actually managed to pick the four other books, and discovered that I had written about two of them in previous EP posts. One is Timothy Leary’s 1978 Tarot treatise, The Game of Life, which was briefly mentioned all the way back in 2021.
“Shuffling and dealing the tarot cards is like scrambling and rearranging by chance the numbered elements in the Periodic Table of Elements. OK, you deal out the element cards and find that Carbon initiates you, Iridium crosses you, Cadmium is beneath you, Strontium is behind you, Titanium is before you, Germanium is your hopes and fears, and Radium is what will come.”
That’s from Timothy Leary, and his point is that divination by chemical elements would be just as effective as Tarot or any other method if it led the questioner to look at his life in a new way—say, in terms of molecules, electron shells, quantum leaps, magnetic charges, and so on.
The real significance of Tarot, Leary contends in his book The Game of Life, depends on the user's understanding of what he calls the “scientific neurogenetic” meaning of the cards.
“If the cards are interpreted in a simple system of good-for-me/bad-for-me, then one can only be resigned to living out ethical dramatics and hive soap operas,” he warns.
I’m planning to revisit Leary’s book from the perspective of AI connections. And in a forthcoming post, I’ll reveal the remaining titles from my long-simmering list.
As always, thanks for reading!
If you’re new to EP, and my comments about AI and reading Tarot sound interesting, odd, or both—check out:
And I’ll be back in your Inbox soon, promise. C
Reader Comments
I have thought about the definition for the Hanged Man from your previous AI article. The definition was “pause.” I don’t think that fits the deeper historical context of the Tarot, but it comes close to “prudence,” which was the missing one of the four virtues. If we’re all strung up and upside down, we might pause before trying to move anywhere. There hasn’t been a Tarot card for overwhelm, other than maybe the Ten of Wands… More traditional is “I am poured out like water,” which is not like a pause. But really, are modern people ever “poured out like water”? So pause fits the modern landscape better, or maybe “hesitate.”
Your posts are something I read with special interest. I bought a Tarot deck about 30 years ago because I had no idea what it was about and I was curious. Your first book on Tarot was the one that resonated with me most strongly as I explored, and it has made the most sense to me in having Tarot in my life. Recently, with your Substack posts, I discovered your connection to the University of Dallas, which is one of many connections/coincidences/synchronicities that are so prevalent in my life. My first husband had been a student at the University of Dallas who was studying art in Guadalajara when we met. Despite our strong spiritual connection (which remains to this day), we went our separate ways, and I find myself with a farm in Giza, Egypt, that has healing capabilities. I'm still trying to understand my role here, but it appears to be that of a caretaker of sorts, as the farm is being used by supporters of Gazan refugees here in the area to provide them a safe place to begin to unwrap from the trauma that has enveloped them for the past few generations.
All of this sounds totally mad, but for better or for worse, you and your thoughts have some sort of deep connection that is very important to me, even if it is utterly irrelevant for you. I find your musings about AI fascinating. I have had no interest in trying it out myself, and I realise that is in fact quite frightening in a way. I felt the need to share this rather random note and to thank you for being a part of my life and learning process.