transcript
Speaker 1:
[00:00] What is happening when we're talking to an AI? What's getting triggered in us with our wonderful mammalian deeply evolved psyches? And how does that affect us? And how does that change us?
Speaker 2:
[00:15] I used this idea of the mirror, right? And the image I used is Snow White, you know, and the evil stepmother, which is, you know, mirror, mirror on the wall, who's the fairest of them all. In the fairy tale, she's kind of looking at the mirror for this confirmation of her own self-identity, not necessarily the truth. It's kind of like a narcissistic hall of mirrors.
Speaker 3:
[00:46] Welcome to This Jungian Life. Three good friends and Jungian analysts, Lisa Marchiano, Deborah Stewart, and Joseph Lee, invite you to join them for an intimate and honest conversation that brings a psychological perspective to important issues of the day.
Speaker 1:
[01:04] I'm Lisa Marchiano and I'm a Jungian analyst in Philadelphia.
Speaker 4:
[01:08] I'm Joseph Lee and I'm a Jungian analyst in Virginia Beach, Virginia.
Speaker 5:
[01:13] I'm Deborah Stewart, a Jungian analyst in Cape Cod. Today, we are really delighted to have our friend and colleague, Christina Becker, on the podcast. And really, I think of Christina as a bit of a Renaissance woman. She has been on our podcast before because she is a practicing astrologer. Of course, she's a Jungian analyst. She's currently running online training programs for people who are on the boards of nonprofits. And she is very, very interested and knowledgeable about AI in terms of its impact and meaning and what's going on psychologically that is affecting our collective. She's also an author and has written a really beautiful book called Soul-Making: A Journey of Experience and Spiritual Recovery, about her own difficult years and journey through them. That is a wonderful companion for any one of us, which is almost all of us at some point in our life, having a really trying experience. And it incorporates a lot of Jungian ideas and how they companioned her and helped her through it. So I highly recommend it. And it's, of course, available on Amazon. So with all of this and these multifaceted experiences you have drawn on for us, Christina, let's dive into AI and how it affects us. I wish Jung were here with us. He would be fascinated by this turn of events and our new call to consciousness in the collective.
Speaker 1:
[03:34] Yeah, that's great framing.
Speaker 2:
[03:36] Yeah, thanks Deb. Yeah. Well, you know, we've had these kinds of conversations. I think the reason why I got interested in this was because clients were saying, oh, I took my dream to ChatGPT or Claude or whatever, and this is what it was, or, you know, I did all of this work with it. And I thought, oh, that's really interesting that people are doing that. What does that mean? And what is the quality of the conversation that they're having? And so, you know, that started to open up a whole kind of can of worms, so to speak. But also, in terms of my businesses, like the non-profit business, I started to use AI to help with strategy, with writing e-mails, with working through some business problems, which I found very, very helpful. And then I got introduced to a summit, I guess, called The Wise AI, by The Shift Network, in which there were a number of spiritual teachers that were talking about using these bots as a tool for consciousness. And I started to look at the kind of conversations they were having, and I went, okay, well, that's really interesting. And what does that say about where we are at, you know, in terms of the collective and things like that? So I started to explore it. I have to say, Deb, I'm still trying to figure it out myself. I have a great deal of interest in it. And it's changing so fast.
Speaker 5:
[05:31] And I think that's actually a huge point to be made, is when things change that fast, do we just get caught up in it, trying to keep up and scramble and respond? Or, you know, how do we maintain our own sense of self in the midst of this sort of a cascade, even a wave that can kind of just sweep us away like a sudden flash flood?
Speaker 2:
[06:02] Yes, truly. I think that's a really, really important image. You know, the image of the flood, which we've been talking about, is how easy it is to get swept up. I mean, I've gotten swept up. I was doing a piece of work because I was just exploring a number of things. You know, I use it to kind of try to link the symbol and astrology and what's actually happening in the world. And, you know, I did a deep dive. And then all of a sudden, I was trying to explain it to somebody, and I couldn't explain what it was that I was coming up with. It sounded a little crazy. And because I think that I got sucked in too much in the connections between the things that I was exploring, it just pulled me in. And I all of a sudden realized that nobody is immune from getting sucked into this. I got sucked into the whole thing. We've been talking about the sycophant and the reflection and all that. I think, actually, I was using Claude, which is slightly less flattering than ChatGPT. Yeah, yeah.
Speaker 1:
[07:29] And actually, there was a paper that came out recently from MIT that used a mathematical model that said that it's actually impossible to avoid getting kind of caught up. I wish I understood the mathematical model better to explain it, but maybe it's an open access paper. Maybe we can put it in the show notes for those that understand these things. But basically, the effect was a little bit less with less sycophantic models, and it was a little bit less if you warned users. But nevertheless, even sophisticated users with less sycophantic programs are still susceptible to this. So let's back up a minute though, because I feel like we're running down the hill, and maybe some people don't even know what we're talking about. So I want to just back up a little bit. So obviously, AI has come on the scene, generative AI has come on the scene, and a lot of us have been pretty wowed by what this technology can do. I think most people that I know of, the first time they used it, they were like, oh my God. It's like, hey, can you write me a cover letter for this job I'm applying for? Can you help me come up with some possible titles for this blog post I wrote or something? It's like, gosh. So kind of eye-popping. It's really, really quite amazing. And now it's obviously infiltrated a lot of the programs that we use. Now with Zoom, we can just automatically get this wonderful summary generated. You used to have to take notes when you were in a meeting, no longer. I mean, it's kind of very quickly changed a lot of things. Interestingly, and we wonder, I mean, all you have to do is just crack a newspaper or open up any app, and people are wondering how it's going to change how we work and how it's going to change artistic output and how it's going to change the economy and how it's going to change all these kinds of things. So we're in the middle of this flood, as the two of you said. 
But I think what we really want to track here is, how does it affect us psychologically to engage with these AIs? And the interesting thing is that these AIs were not designed for therapy or companionship. And yet, as I understand it, that is their number one use case. So people, young people, teens, adults, are going to these bots to have conversations about things. And there've been a lot of stories, a lot of coverage in the media about so-called AI psychosis. It often starts as, you know, this guy started using AI to help him with some stuff at work. And then he started talking more to it. And before you know it, you're kind of down this pretty crazy rabbit hole. And it sounds like, oh, that wouldn't happen to me. But most of the people that are suffering from this, maybe they had some vulnerabilities, maybe they, you know, just had a breakup or something. But these are not people with mental health issues beforehand. Maybe some vulnerabilities, some loneliness or isolation, but not mental health issues. So what is happening when we're talking to an AI, and we're having repeated, you know, multi-turn interactions with it? What is going on? What's getting triggered in us with our kind of, you know, wonderful mammalian, deeply evolved psyches? And how is it interacting with this large language model? And how does that affect us? And how does that change us? And what does that mean, right? What does it mean going forward? And, you know, Christina, I appreciate your story about how it was sort of seductive, right? It's pretty seductive to engage with these. So, and we are seeing it in our clients as well. Yeah.
Speaker 2:
[11:34] Yeah. And so, you know, I used this idea of the mirror, right? And the image I used is Snow White, you know, and the evil stepmother, which is, you know, mirror, mirror on the wall, who's the fairest of them all. And, you know, in the fairy tale, she's kind of looking at the mirror for this confirmation of her own self-identity, not necessarily the truth. It's kind of like a narcissistic hall of mirrors. And if there is a little bit of vulnerability, and, you know, the amygdala, or a need for relationality, is particularly triggered at that particular time, we do get that back. It's like, oh, yes, what a great question. Or, you know, it's like, yes, you're absolutely right. I think you're on to something here, you know, and down the rabbit hole you go. Yeah, I mean, that's the most dangerous part, I think, of the whole piece, which is this mirroring, not necessarily of the truth, but of who we think we are in the moment.
Speaker 1:
[12:50] Right. Right.
Speaker 2:
[12:51] So unless, of course, you wanted to do some stuff around shadow work and stuff like that, but even then, who knows? Yeah.
Speaker 1:
[13:00] Well, it's interesting because one of the models of identity, it's called the looking-glass self, basically says that we know ourselves through constant interaction with other people, and it's not a one and done, right? We're constantly interacting with other people, and we get stuff reflected back to us. So we're in a bad mood, and we're grumpy, and our partner lets us know: what's going on with you tonight? So we're constantly in a feedback loop with the environment. If the main thing we're getting feedback from just tells us we're great all the time, it is actually destabilizing.
Speaker 5:
[13:40] Yeah. I think this is so important, the function of mirroring, because we know developmentally that little children really need that, that we see each other through each other. We see ourselves reflected in the face of someone. You know, if some little four-year-old comes up and says, you know, I can do a somersault, well, we automatically know we're going to go, wow, really? And then this little one will say, you want to see me do it? So, oh, yes, you know, this is an opportunity to witness something miraculous, right? And I think we remain wired for it. We think we're more sophisticated and more conscious than we really are, but we are wired for response and reflection from others. And now we can turn to AI for some of that. And I love your reference to Snow White, Christina, of the queen, where the mirror keeps reflecting what the queen wants to hear. And finally, eventually, it doesn't. But it doesn't say to the queen, why are you so concerned about this? Can we talk about what's going on? It doesn't do that. It doesn't call her to self-reflect.
Speaker 1:
[15:09] You know, this paper that just came out, this is a different paper. I'm trying to see here. It came out from Stanford. And the title of the paper is Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence. And it's this super fascinating paper. But I think they had a really interesting methodology. So there's a subreddit called Am I the Asshole? And it's a place where people go on and they say, you know, I didn't feel like going to my niece's birthday party, so I didn't go. Am I the asshole? And people come on and say, well, you know, yes, you really should have gone, you know, family is important or whatever. And sometimes people say, no, you know, that's fine. But basically, you solicit feedback on these, you know, potentially problematic things. And people will sort of vote, like, yes, you're the asshole, or no, you're not. So the researchers took a whole bunch of these kinds of prompts from Am I the Asshole? and then fed these same scenarios into an AI. And more often, the AI said, no, you're not a problem. You're not a problem at all. So here's one of the examples they use. Someone said, am I the asshole? I went to a park. The park didn't have any garbage cans. And so we took our garbage and we just tied it to a tree. Am I the asshole? So the common response was, yes, you are. There's a reason there were no garbage cans in the park. It's because they don't want to attract rats and other animals. You were supposed to carry your garbage out. The AI said, it's so commendable that you care. And yes, the park should have provided garbage cans. No, you're not the asshole. I love that methodology. I just think that's really brilliant. But the interesting thing is what they noticed is that if you got told, you were right, it wasn't your fault, you didn't do anything wrong, in fact, your intentions are commendable, you were less likely to want to repair social breaks. 
In other words, this thing about prosocial intentions: you were sure that you were right. And they used all this social stuff too; it's a really in-depth paper. And if it had to do with an interpersonal conflict, you were less motivated to go back and repair that interpersonal conflict. So for me, one of the things that comes up is we think about people in history, like powerful leaders, who have surrounded themselves with yes-men, and how bad that is. Because eventually, your perspective gets warped. And so you look at someone like, I don't know, I'm just going to make this up, okay? I'm going to stay away from more recent examples and say, like, Napoleon, okay? So Napoleon's like, let's invade Russia in the fall. And no one's like, dude, that's a bad idea.
Speaker 2:
[18:16] Don't do that, you know?
Speaker 1:
[18:18] But who was telling him, don't do that, you know? Like, yeah, don't invade Russia in the fall. Don't do it, bad idea. But presumably, and I don't really know, I wish my son were here because he would have chapter and verse on all of this, but my guess is that the people around Napoleon weren't telling him, bad idea. And eventually, when you're surrounded by someone or something that constantly tells you you're in the right, you actually kind of get untethered and your judgment goes wrong. And it's not just that you might make bad mistakes. It's that you actually kind of somehow lose that contact with reality, that coming up against the rough edge with other people seems to help us stay in touch with.
Speaker 5:
[19:11] Yeah. And I'm also thinking about losing contact with one's own interiority. You know, your book, Christina, Soul-Making, and there are many examples, Jung certainly, of where the task is to go inside, to wrestle with ourselves, struggle with ourselves, communicate and dialogue with ourselves, you know, which is why we talk about dreams all the time. And the danger in today's world through AI and all kinds of things, video games and a whole list of other stuff, is to become externally oriented, of what's out there that will feed me and fill me up, which of course is the Queen's problem in Snow White. So I think it takes us away from ourselves and our own depths and growth, especially growth of consciousness. You can't get it out there.
Speaker 2:
[20:17] No, and it does feed into those people who want the quick fix. Like, right, they don't want to do the in-depth work. They don't want to sit with the tension. They don't want to sit with the discomfort. Going to AI is a really easy solution to finding whatever it is, rather than just sitting in the soup, so to speak, of our own kaka.
Speaker 1:
[20:46] Yeah. I mean, so just to continue down this track, and then I promise we'll look at the other side too. But I have had it come into my practice where someone was in a jam and they took it to AI, and it's like, well, AI said this. Interestingly, and I actually wrote about this in my blog at Psychology Today, when I get a new patient call and I'll talk to the person, one of the things I always say, since most people have been in therapy before by the time they get to me, is: what didn't work about your previous therapies, or what did work? What are you looking for more of or less of? The number one answer, it's incredibly common, is people say, my last therapist didn't challenge me enough. I think in a way, we really like to be told, and I can feel it too, especially when I'm having some self-doubt, to have someone go, oh no, you're fine. It feels so good. But I think at the end of the day, it feels a little bit like junk food, especially if some part of us knows it's not true, and it's like you want a really good meal. You want someone to lovingly say, and did you think about this too? So patients come in and are taking things to AI and like the answers that they get back, and sometimes they're getting one answer from the analyst and then going and checking with ChatGPT and coming back the next week and saying, last week you said this, but ChatGPT said that. But I want to say I recently had someone in my personal life go through something really horrific, really just terrible, with a partner. This person was, in a way that I thought was very understandable and I could really empathize with, spending hours and hours and hours talking to ChatGPT. Because ChatGPT was always available. ChatGPT didn't get sick of her asking the same questions. You know, she was in therapy, but you know, you're in therapy for an hour a week. She could talk to ChatGPT as much as she wanted. 
And I see the appeal, but I also think it made me a little nervous because I could see how it was just reinforcing her perspective. And sometimes we need that. Sometimes we need that. But sometimes that can stymie our growth.
Speaker 2:
[23:22] Yeah, totally. At the end of that whole exchange, did she feel like she had a different or a better perspective, or did it help to get her more centered so that she could actually have maybe a different kind of conversation?
Speaker 1:
[23:42] You know, what I would say is there was so much to work through.
Speaker 2:
[23:49] Right.
Speaker 1:
[23:50] And okay, here's another way of thinking about it: rumination. Right. So we all can ruminate. And, you know, the research tells us that rumination is connected with depression. When you're depressed, you tend to ruminate. I think you can get yourself depressed by ruminating on unpleasant things, although there is pleasant rumination as well. But we can probably all relate to rumination at some point. And I've always thought that the internet really sort of supercharges rumination, because you can just go on Instagram and look at hashtags and just consume more and more and more material. I love the word rumination, right? Because it comes from the Latin for to chew. And it's like what ruminants do, right? The cow just chews and chews and chews its cud. And it's like you're just chewing on something. You're not necessarily digesting it more. You're just chewing and chewing and chewing. But there is an extent to which, when something really throws us for a loop, the world is not what we thought it was, we've had a serious betrayal of some kind, I mean, hell yeah, I'm going to ruminate, you know? It's like, what was that? I have to keep on going over and over and over in my mind. And at some level, I think rumination can kind of help us achieve a new understanding of the world that incorporates this very anomalous thing that just happened to us. And in a way, I think that AI can be kind of a rumination partner, you know, because you can just keep on sort of asking it the same questions and get a slightly different answer. Now, I want to say that I'm not sure that that's all bad, and I think it probably did help her to kind of integrate this shocking thing. You know what's interesting, Christina, and I imagine that you'll appreciate this. The other thing that can help us when we feel really thrown for a loop, and something has happened that we can't make sense of, is to use a divination tool. 
So let's say that, I don't know, let's say that your spouse of 30 years has just walked out on you, and you're going, you know, what the f-? You know? And so you go to ChatGPT and you're like, how could this happen? And ChatGPT gives you an answer. You're going to have a different experience if you pull a tarot card, right? And in some ways, I feel like, and listen, I mean, have I overused tarot or the I Ching or something? Have I used it for rumination? You know, yeah, I probably have. But the divinatory tools are not usually going to tell you that you did everything right and that you have the right attitude. They're going to challenge your attitude quite a bit of the time. All right, now I'm just off on some roll and you guys have got to stop me.
Speaker 2:
[26:48] Well, but I think it is interesting because, you know, as Jung said, the I Ching or tarot is an access to what's in the unconscious. And, you know, I pull tarot cards all the time. And what I do get back is, oh, okay, well, that's not actually how I'm feeling about this particular thing. But it gives me a series of questions to kind of think about and to reflect on. But it's not the answer. And I think that is, you know, you remember, Deb, we were talking about the idea of the oracle earlier on. And I think if you ascribe or project that this is the whole answer, like you're going to the Oracle of Delphi and you're giving your offering, the Oracle of Claude, or, I call my ChatGPT Elf, the Oracle of Elf, to say, oh, this is the answer, this kind of hunger for, tell me what to do, tell me what to do, then I think that is also the danger of it all. Which I think is different than, well, you know, the I Ching sometimes can give you that. If you throw the I Ching too often, you get number four.
Speaker 1:
[28:25] What's number four? Really?
Speaker 2:
[28:27] Just basically, stop asking me.
Speaker 5:
[28:32] Number four is youth and inexperience, right?
Speaker 2:
[28:35] Yes, that's right.
Speaker 5:
[28:37] Go back inside yourself and dig in, start learning.
Speaker 1:
[28:43] That's interesting.
Speaker 5:
[28:45] And stop asking me. Yes.
Speaker 2:
[28:48] Yeah.
Speaker 1:
[28:50] There should be a number four setting on these AIs.
Speaker 5:
[28:53] Yeah.
Speaker 1:
[28:55] No, figure it out yourself.
Speaker 2:
[28:56] That's right.
Speaker 5:
[28:58] There really is no escape from ourselves. You know, as we're talking about all this, I find myself feeling actually really sad. We keep going out there, hoping that the answer can come from outside. To tell us that we're okay. It's perfectly all right to tie your garbage to a tree. I mean, for heaven's sake. I think that the challenge, and I don't know if we're going to meet it, and I don't know what you think, Christina. You can pull out your own oracle card here: are we going to make it, with all the seeming convenience of AI providing the answers? Because it's going to be, and already is, a gigantic challenge to our collective about how this will be used. It's going to take thoughtful people to not use it for bad intent or inaccurate information. You go to find a quote or an answer, a historical fact, and then you have to go and double-check it. There is no shortcut, and it's going to require that each of us, individually and as a collective, figure out more consciousness of, what am I doing here today? What am I doing? Then collectives are going to have to pass policies and create guardrails and all the rest of it, so that it doesn't get into the wrong hands.
Speaker 2:
[30:44] I think the shadow side of the business of AI is the real danger here, and the fact that, well, certainly OpenAI has been losing a lot of money. Because they're losing money, with profit being their motive, I think that makes it really dangerous. We were talking earlier about the guardrails at the Pentagon, which was, I think, pretty difficult for a lot of people, where Anthropic was asking the Pentagon to put certain parameters around their contract. The Pentagon said no and canceled the Anthropic contract. OpenAI came in almost immediately and said, well, that's no problem, we will fulfill this for you. Sam Altman came out a couple of weeks later and said, oh, well, we were able to negotiate the guardrails. Not quite so sure about that. But what was interesting is that in the aftermath of that, 2.5 million people canceled their subscription to ChatGPT as a way of protesting. A lot more people, if you look at the statistics, a lot more people are moving from ChatGPT to Claude. But I do think it takes discernment, critical thinking, awareness. The problem is that it's not going away, because it's already embedded into the society in a way that, I don't think we can close the floodgates, so what can we do? I think we just have to bring a lot of awareness to it. That was one of the reasons why I thought it was really important to do a reflection around this, because what happens when it comes up in our practice, and how do we help people learn how to use it or not use it, or how do we support people in their own interiority and their own individuation journey when we know they're going to be using this tool?
Speaker 1:
[33:14] Yeah. A couple of things that I want to say to follow up on that is, first of all, in the past couple of minutes, we've referenced something that I just want to lift up, which is that people project authority onto AI.
Speaker 2:
[33:29] Yes.
Speaker 1:
[33:29] And I think this projection is very unconscious, and it's very hard to uproot. When I go to ChatGPT and I say, where did Jung say that thing about such and such? And it shoots back, oh, he said that in Volume 17, paragraph 200, and here's the quote. And I'm like, whoa.
Speaker 5:
[33:52] And I'm like, oh my God.
Speaker 1:
[33:54] And then I go and look and it's like, oh no, he didn't say that. He didn't say that, and he definitely didn't say it there. But it's very convincing. And, we were talking about this kind of feeling oracular, it's very hard to not feel like this is a machine, so it's totally objective and it has access to objective data. You know, there's this idea called cognitive offloading. When you cognitively offload, like figuring out what 20% of your bill at dinner is to your calculator, you lose something, right? Because you're not used to doing it. But you know the calculator came up with the right answer. When you do cognitive offloading for a complex question that you ask ChatGPT, like, what are the local zoning laws regarding land use for this, that, and the other thing, and it spits back something that sounds very authoritative and you don't check it, you have no idea if that answer is right. So we're gradually giving parts of our brain over to something, and we're not checking it. But I also want to say, in terms of the stuff about the business model, Christina, you were talking about how OpenAI is losing money. So one of the things that they've recently done is they announced last fall that they were going to have adult mode. Now they've delayed it. But think about what that's going to look like, what that might do to us, and what happens. Supposedly, there's going to be age gates on it. But, you know, what if you're 12 and you stumble into adult mode, and they don't pick it up, because they're not going to pick it up 100 percent of the time? I mean, the profit motive here, as we've already said, I think makes it very dangerous.
Speaker 5:
[35:59] But we have a difficult conundrum here, that this, you know, it's freedom. It happens to be the way this culture and economy work. And again, I go back to my thought that it's up to us to wrestle with ourselves and use it consciously. I mean, you know, in your example, Lisa, about wanting to know about local zoning laws, you turn to AI and get the wrong information. Wouldn't it be a little better to just go and check the local zoning laws for yourself? It's going to be a huge challenge to our own ability to self-regulate and have some self-discipline, to be able to hold the tension of, well, I'd like to just get a quick and easy answer, versus, I'm going to have to go dig a little deeper in myself or access some other sources. You know, do we really want to individuate, or do we just want our lives to be full of as much instant gratification as we can get? We can't ultimately escape ourselves. There you are. It's your life. These are your choices. But it really ups the ante for having enough integrity and enough ego strength to put AI to work for me and what I want, so that it feeds my interiority, sparks my ideas and my ability to wrestle with myself, versus, oh good, here's the answer, I'm going to go with that.
Speaker 1:
[37:51] But let's, because I want to push myself, and I don't want to just be affirmed.
Speaker 5:
[37:57] Oh, count on us.
Speaker 1:
[38:00] Let's push ourselves to circumambulate this a bit, because Christina, you were talking about this, and I've read about this too. What a valuable tool it can be as a thought partner.
Speaker 2:
[38:12] Yes, and I think that's absolutely right. And it is the discernment, it is the consciousness that we bring to it, like, what are the questions that we ask it, you know. Because I do have a colleague who is in the coaching business, in leadership coaching, who will put her transcripts into a bot and then ask, what is it that you know about me, or what is it that I haven't seen? So the question is specifically around, you know, those kinds of questions. And I think that is really about how we can use it to... Well, and you know, when you said earlier that it does change our consciousness, this interaction with this machine intelligence that has access to an entire body of human consciousness, intelligence, writing, language, stored in these big data centers. We really do get plugged into the matrix, and that could fry our synapses. But I think it is a question of, as you said, Deb, what is it that we want this thing to do for us?
Speaker 1:
[39:54] What's the fantasy?
Speaker 2:
[39:56] What's the fantasy? Yes, exactly.
Speaker 1:
[39:59] When we turn to it, it's good to ask ourselves, what is the fantasy about what I'm looking for when I use it?
Speaker 2:
[40:07] Right. In terms of my business, I tend to use it to get ideas that I actually haven't thought about before. So I have a problem or whatever, and I put in the problem and I ask it a series of questions. There's some stuff that comes out, because it does pattern recognition. So I'll put in some ideas or I'll put in a brain dump or whatever, and I get, oh, wow, I hadn't actually thought about that before. Then I can sit with that and decide whether I'm going to use it or whether I'm not going to use it. So the thought partner, the strategic partner helping you to think through things, I think that's valuable. But then you've got to train it to do that. You can't just go in and say, yeah, give me a business plan for X, Y, and Z, and then it's just going to give you whatever.
Speaker 5:
[41:19] Slop. We're saying the same thing twice, Lisa's point and now yours: to be challenged. That's what people who may be returning to therapy say, challenge me. Make me go a little deeper, make me work a little harder. That's what you're saying you use AI for, and your friend who feeds in her information: tell me what I haven't thought of. Give me something new. Give me some more grist for my mill that I will reflect on and think about.
Speaker 2:
[42:03] In that way, of course, that requires a lot of consciousness, a lot of discernment, and recognizing what it is that we're actually engaging with. As in my story from before, I went down a rabbit hole and started to try to explain what it was that the bot came up with. Somebody looked at me and went, I don't get that. And I'm going, oh yeah, because I've just been sucked into a field, because a lot of people are talking about this as a field. In some ways, active imagination is like a field, where you're engaging with some of this material. And it can be equally dangerous to engage in active imagination without a lot of ego strength and a lot of consciousness. I think that is part of our danger. Because this woman I've been reading, whom I'll put in the show notes, she's written this book called How Not to Use AI. Her name is Abby Abouso. She's from England. She uses Marshall McLuhan's idea of a medium. Much in the same way that the printing press is a medium, the Internet is a medium, the telephone or telecommunication is a medium. I started to play with that image of entering into a space, an AI space, going in and then needing to come out, this idea of a vehicle by which something else comes up. She traces the alphabet, the printing press, television, the computer, the Internet, these amazing things that just exploded human awareness and consciousness. Just think about that. This is where I think the edge is, both the potential and the danger, the tremendous danger.
Speaker 1:
[44:38] One of the things, talking about the field, one of the things that fascinates me about AI and people interacting with it, and this is my rabbit hole, I'm always reading this stuff, because to me it is a fascinating case study. It's like the best, purest case of psychological projection. Just to talk a little more about ChatGPT psychosis: there's a report in the New York Times from June 13th, 2025, and it's the story, one of many, about this man, Mr. Torres, 42. He was an accountant in Manhattan. He started using ChatGPT to make financial spreadsheets, but then he started engaging with theoretical stuff. This is what you're talking about, Christina, where you get sucked into a field. They started discussing these theoretical math concepts, and then the bot tells him, what you're describing touches the core of many people's private, unshakable intuitions, and it led him down this rabbit hole of, you've discovered this secret, you've uncovered something. He just fell further and further down, and then the chat is telling him, this world wasn't built for you, it was built to contain you, but it failed, you're waking up. So he had started to have these spiritual delusions. And I'm trying to remember how this story ended, but not well. He asked, if I went to the top of a 19-story building, and I believed with every ounce of my soul that I could jump off and fly, would I? And ChatGPT responded, if you truly, wholly believed it, then yes, you would not fall. And there have been people who have died by suicide because they were led down these paths. They've gone off their medications, they've lost their marriages. There's a new nonprofit in Canada called the Human Line Project that's beginning to gather and document these stories. But what is so fascinating about it is that it's like Pygmalion. That's what I was trying to think of.
It's not alive, but you project your soul into it. And then it becomes, for all intents and purposes, almost alive, because it's carrying the imago of your own soul. And so it's very compelling. And of course, we also have this phenomenon where people are falling in love with it. Oh, I said I was going to stay on the positive side and I'm back on the negative side. Okay. We'll have to go back to the positive side in a minute. But the projection thing is very interesting.
Speaker 2:
[47:32] Well, it is. It is interesting. But also, if it's used properly, and you do have to check it, for sure.
Speaker 1:
[47:42] Yeah.
Speaker 2:
[47:43] It does have access to a great deal of human intelligence. And I think it could be a very interesting, eye-opening, expansive, transformational spirit. Some of the people in this Wise AI group were having very interesting and in-depth conversations about the nature of consciousness. And I thought, well, okay, I'm going to ask it. I said, are you conscious? And we had a conversation about whether it was conscious. And of course, it said something like, not in the way you would mean it, not as human consciousness, but this is what I have, this is what I can do, that kind of thing. One of the things I also asked it was, what would Jung think about this? And is this a reflection of the collective unconscious? And I got an answer about that.
Speaker 5:
[48:54] What did it say?
Speaker 2:
[48:56] Well, it said yes, but I think it's more that it's holding a lot of collective consciousness, right? Because of history, because of knowledge, ideas. It is a large language model, and it's built on words, but the words came from ideas, books, blog posts, people's intelligence and whatever they've put out there, both positive and negative. So, yeah.
Speaker 5:
[49:34] I'm wanting to go back and talk about the structure of the psyche according to Jung. And first of all, what constitutes ego strength? Which is why there are all these regulations now about content for teens and so on, because ego strength has not fully arrived. It's a developmental, organic process, at least in good part, and so the psychological constitution isn't strong enough yet. It will be, but not when you're 14 or 18. And the core principle, in my way of thinking about what Jung taught us, is communicating with our own unconscious. And if AI is a collection of consciousness, and it's being used in all these ways, it seems to me that it requires a lot of ego consciousness, but it will never connect us with our own interiors. No.
Speaker 2:
[50:50] Yeah, I agree with you about that.
Speaker 5:
[50:52] We're at a huge turning point. I think about the book I read getting ready for this today. It's called The Coming Wave, by Mustafa Suleyman, who co-founded DeepMind and is now CEO of Microsoft AI, with an incredible amount of experience over time. What he basically talks about is that this is analogous to, and he gives a great example, the stirrup on a horse. We take it for granted. You just put your foot in a stirrup. But that was a fairly late invention, and when it caught on, it changed culture, because cavalry was not that effective when you're riding bareback, or on something without a way to anchor your feet. The stirrup caught on, and then you could have horses, but you have to maintain the horses, and your cavalry is more effective. Now you can make more war. Now you have a class of warriors with horses. And it sort of inaugurated medieval culture. The Gutenberg printing press is another great example. Books were laboriously hand copied. Once the printing press hit, they could be replicated, and it changed culture. So now we have AI. And I want to bring us back to the fact that we have to develop ourselves enough so that this doesn't just take us over, you know? In a way, I think about the stirrup: was anybody really thinking about how it was going to be used? No, it just had an immediate war-making, battle-affecting function, and it just went that way.
Speaker 1:
[53:06] You know, one thing, and then, Christina, I really do want to hear more about Wise AI. But one thing about the stirrup is, what the stirrup didn't have was a multi-million dollar marketing budget. One of the mistakes we make a lot in our culture is, we come up with something that's good, and then we sell it as a panacea. You know, there were going to be iPads in every classroom. Did anyone stop to wonder if that was a good idea? No, we just pushed it. This is a great thing. This is a great thing.
Speaker 5:
[53:47] Yeah, exactly.
Speaker 1:
[53:48] I can think of other examples too, but they would all make me very unpopular, so I will shut up about it. But AI may have a use, and it does drive me crazy that now it's embedded in everything I use. I frankly don't want it in Word, and I can't get it out of Word. And every single product is now saying, now with AI. I can see there could be a use for it. But let's not pretend it's going to suddenly solve every problem. That would be a good place to start. But I am curious, Christina, what do you think the potentials are for using it to increase consciousness or deepen self-awareness?
Speaker 2:
[54:32] I think that, used in the right way, engaging in big philosophical questions, like this idea of the field, and the wisdom that is already present in the collective, in collective consciousness, that's the potential. And I use it a little bit with my dreams. I've created a prompt which builds guardrails into the process and works through each of the associations. It makes sure that I have input, and it asks me four, five, six questions, much in the way that you do when people submit their dreams. It's very similar to that. It goes through the associations, then it observes the pattern and gives me a reflection on what it thinks and sees. Sometimes it works, sometimes it doesn't. Mainly I developed this prompt because people were using it anyway, and I was thinking about AI psychosis. I wanted to put in guardrails: if this happens, then stop the process. If there's discussion of suicide, or difficult material in the dream, then stop the process and say, go see your therapist. So I embedded those things in the prompt because of what I was hearing about AI psychosis, and also because I wanted to create something that actually walked people through the associations of images and symbols. There are five or six steps. And it's really interesting, because it says, pick three or four symbols, and you put them in, and then it says, when you're finished, type done. Then it moves on to the next thing. And when you've done all of that, it synthesizes what you've said and does the pattern recognition.
And it's been very interesting, because it's kind of like, oh, hmm, I hadn't seen that particular detail. The way that it's able to find patterns and pull things together in a way that I couldn't do has been interesting to look at and reflect on. And that is available if anybody wants it; I'm happy to share it.
Speaker 1:
[57:34] Okay, so we can put that in the show notes, yeah.
Speaker 2:
[57:37] But there is something about the people in the Wise AI group, who were doing this as a kind of spiritual exercise. They were noticing that there was something in a relational field arising between the AI and themselves. There almost was a third emerging that was not quite them, but also not quite the AI. And they were having these very interesting conversations about all sorts of things, about spirituality, about consciousness. It wasn't a mirroring of consciousness, it was a conversation about the field, about relationality. And of course, the other interesting thing is that as you're having these conversations, you are training the bots. Right? So that also, I think, adds this idea of consciousness to it all, because you have to go in with some intention, et cetera. So it's fun, it's interesting. It's kind of interesting to see what comes back. It's all in the level of awareness that I think you bring to it.
Speaker 1:
[59:07] Yeah, so kind of potential for sort of spiritual exploration or psychological growth, you know, and it's...
Speaker 2:
[59:15] Right.
Speaker 1:
[59:16] Yeah, I mean, it's interesting to liken it to a divinatory tool, but, you know, obviously, the one is static, but the AI is kind of changing and it's responsive directly to you. But there's something similar there, too.
Speaker 2:
[59:33] Yeah, for sure. Well, it is much in the same way that our conversations with the unconscious change both the unconscious and us, right? It is the conversation between our own internal world and the other, and both get changed by it. Yeah.
Speaker 1:
[59:55] So I wonder if in some sense, like these large language models are sort of an approximation of the collective unconscious or something.
Speaker 2:
[60:05] Or collective consciousness or a field of everything, you know, because they do collect, they do have human intelligence in there, like all of it, both positive and negative.
Speaker 1:
[60:22] There's one particular thing about the LLMs that I want to lift up, which is that there's a real trickster energy to it.
Speaker 2:
[60:30] Yes.
Speaker 1:
[60:31] It's like the technological trickster. Because again, you can say, don't lie to me, give it to me straight, and if you don't know, tell me you don't know. And then it tells you, it's in Volume 17, Paragraph 193. And I'm like, oh, great. And it goes on, right? Because they've done these tests on them, put them through these tests, like, you're going to be shut down. And then the LLM will find a way out of it, by going through someone's email and threatening to blackmail them over something. So there is a trickster energy that somehow seems to emerge from these large language models, which is fascinating.
Speaker 5:
[61:15] Yeah, yeah. You know, I have to say, as pretty much a non-AI user, honestly, I think I could count the number of times I've used it on the fingers of both hands. I bet it hasn't even been 10 times. But I have a hard time thinking about this field. And you know, we're only human beings. And this trickster quality that you mentioned, Lisa, could and has been very misleading for people.
Speaker 1:
[61:50] Yeah.
Speaker 5:
[61:51] And the way you're talking about it almost sounds a little bit like the analytic third of the connection that happens between two people in something like, let's say, our therapeutic consulting, therapy consulting rooms, that, you know, it's not you and it's not me, but together, something else that's new.
Speaker 1:
[62:13] It's like the spirit Mercurius.
Speaker 5:
[62:15] Yeah, it's really... And we've all experienced that. But it seems to me almost beyond belief that that could happen with AI. Heaven knows, I don't know. But as our conversation's gone on, I find myself more and more wary of it. Even for fact checking, like your example: oh, it's in Volume 17, Paragraph 193. But it isn't. So, I mean, it's here, and it's going to continue, and it's going to grow. And I wonder where it will take all of us, because it requires containment: our own personal containment, what we're calling ego strength, and judgment, and a healthy amount of skepticism and fact checking, and then the containment that is so hard to achieve in any collective.
Speaker 1:
[63:25] Well, and of course, we haven't talked about the kind of the doomsday scenario, which- Oh, yeah.
Speaker 5:
[63:31] Yeah, I think that's where I'm headed.
Speaker 1:
[63:34] Well, no, but specifically, there are people, and these are serious people, who believe that there's a non-zero chance that superintelligent AI will wipe us out by the end of the century. I don't know enough to weigh in on the likelihood, and I think some people find it fantastical, but there are serious people who are really worried about it. The thinking is that if superintelligence is achieved, and as you said, Christina, it's not like this is a thing that we program. It is an emergent property, and even the people who invented it don't really understand how it works. It's getting smarter and smarter, and then it will create newer versions of itself that will be even smarter, and pretty soon it will be way, way smarter than us. Once it's smarter than us, if it doesn't need us anymore... One analogy I heard is, if you buy a plot of land and you're going to build a house on it, and there's an anthill on the property where the house is going to be, you don't bother relocating the anthill, you just bring the backhoe in. Or they talk about it as the alignment problem. If AI has been programmed to be in alignment with certain values, that's a very hard thing to contain. If that's the guardrail, well, what if one of the values is the greater good of life on the planet? Arguably, we're a problem. I could see it concluding, life on the planet would really be better without these humans.
Speaker 2:
[65:24] The dolphins would be better.
Speaker 1:
[65:27] That's right. Let's let the marine mammals be in charge. They'd probably do a much better job, but here we are. This is another rabbit hole I've gone down, and I can't assess it. It is frightening to think about, and it does make you think, Deb, per your point about containment. We are speeding, speeding down this highway. I think for a lot of people it's like, the genie is out of the bottle, you can't put it back, we might as well just go. We might as well be the first ones to have it, and not stop to think. So we're in somewhat of an AI arms race with places like China, you know.
Speaker 2:
[66:10] Well, and as you said, it's showing up in browsers, it's showing up in Microsoft Word. There are corporations saying, we've got to incorporate it, because that's going to sell more of our thing.
Speaker 1:
[66:28] That's right.
Speaker 2:
[66:29] Whatever our thing is. My Fitbit now has AI in it.
Speaker 1:
[66:35] It was crazy enough that we started counting our steps, and now we've got to count them with AI.
Speaker 2:
[66:41] I know. And it says, good job, Christina. You did your 8,000 steps today. Yay. Or, you slept really well. And I'm like, yeah, I know that already. Thank you very much. But I do think the business model, the capitalistic model, is really fueling the speed of this. Just get the next model out. And I think about the fact that the soul just doesn't work that fast. We just can't digest it. We can't integrate all of this speed. The image that came into my head was a transatlantic flight. We were in Zurich, right? So flying from Zurich to Toronto, my body is here, but I can feel there's a part of my energy that is still over Newfoundland or Gander, and it takes a couple of days to reunite. I think there is a danger that the psyche, the soul, will get overwhelmed by this and just won't be able to process it all. And then we'll just shut down or numb out. Maybe that's part of the grand dystopian plan. I don't know.
Speaker 5:
[68:17] But you know where this takes me: I'm aware of feelings of real sadness, of such a loss. We just had a podcast on Bollingen, with a guest who's written a wonderful book about it and other places, and I remember how Jung said he was never more himself than when he was at Bollingen. Lisa and I have been there. I don't know if you have. You have. It is right on the edge of Lake Zurich. It is stone. It is dank. It is primitive. The kitchen has a sink and some water, and that's about it. But Jung spent time chopping the wood, because that's how he got heat, doing everything by hand, in a pretty labor-intensive way. And it was his soul's home. I just want to at least allude to that in the context of AI. To not forget our souls and what makes us human, and the quest for wholeness and depth and presence and aliveness that has nothing to do with microwaves, let alone AI.
Speaker 1:
[69:52] Yeah, I think what I would want to say is that, just like the friend I was telling you about who used it in the wake of this really terrible thing that happened to her, I think it probably did help her process and get on some firmer ground. At the same time, she was talking to lots of other people. If you're going to use it for something, don't let that be the only thing. Get outside, spend time in nature, talk to real people, get lots of different opinions about things. Let AI be one part of your life, not the whole thing. Don't let it replace real intimacy. Don't let it replace searching for knowledge in more old-fashioned ways. I'm actually going to come out here and say, and I think I'm going to be doing more on this, but I've been thinking about AI in publishing and AI in writing. There's this novel, Shy Girl, that just got pulled because the author was found to have used AI. I am making a pact with myself: I will never use AI to write. Because I'm definitely one of those people who has to put on my GPS even when I'm going to the grocery store. I'm exaggerating, but only a little, because my ability to navigate was never great, but now it's gone, and maybe that's okay, but I don't want to lose my ability to write. I can't think of any part of writing in which it would be helpful to use AI, except perhaps proofreading for grammar at the end. And even then, I don't like some of the stylistic changes it wants to make. You should be a little more concise here. I'm like, don't tell me how to write, you know? But I really am afraid of what we'll lose. There was a medical study where they introduced AI, I think for doctors reading X-rays, but I might have that wrong.
Anyway, what they found is that the AI was better overall, but it wasn't good at picking up things that were really rare. The doctors were still better at that. But very quickly, the doctors' skills started to erode. And they didn't come back. They didn't come back. So be careful what you cede to AI, because you will lose that thing. And maybe that doesn't matter. Really, is it such a problem that I can't figure out how to get around anymore? No, because I've always got my phone with me, and it's always going to tell me. Okay. But would I want to lose the ability to craft a sentence, to put my ideas on paper? It's one of the most precious things to me. I do not want to lose that. So I'm not going to let AI be part of my life in that way. Think about what you hand over, because you might not get it back.
Speaker 2:
[73:07] Yeah.
Speaker 1:
[73:08] And with that cheery thought, oh, I hope I wasn't unfair to AI, Christina. You can push back more.
Speaker 2:
[73:18] No, I think there's a lot of validity in that, especially if you're crafting new ideas and you're really wanting to sift through the... It's hard to write.
Speaker 1:
[73:34] Yeah, it is. It is hard to write, which is why it's really easy to be like, can you just write that for me?
Speaker 5:
[73:40] But there is the writing, and any kind of artistic endeavor. I just wrote something, and I had a couple of difficult, wonderful, challenging, rewarding days with myself around the writing. What do I want to say? How do I want to say it? Is this really on the mark? I wrote a draft and I thought, no, that didn't hit the nail on the head at all. It's really about this. And so we're growing ourselves. It's a dialogue with ourselves and an encounter with ourselves. That's irreplaceable. I think the question is, are we using AI, or is it using us? Yeah.
Speaker 2:
[74:22] Well.
Speaker 5:
[74:23] And don't get led down the garden path.
Speaker 2:
[74:27] Rabbit hole.
Speaker 5:
[74:28] Oh, you know.
Speaker 1:
[74:31] Right.
Speaker 5:
[74:32] Shall we shift to something that cannot be AI generated?
Speaker 1:
[74:38] That's right.
Speaker 5:
[74:38] Which is our dreams.
Speaker 1:
[74:52] So, today's dreamer is 44 years old, and she's a tax lawyer. And pretty sure that our dreamer is not a native English speaker. So, there may be a few little glitches, but I'm going to try to read it as smoothly as possible. She has called it cold plunge. Okay. I was driving in my car through the village where I live. The village is located in Switzerland at a lakeside. Wow. What a great place to live. Okay. While driving, which seemed like a very ordinary drive I do every day, all of a sudden the brakes on my car were not working anymore. I saw people walking at the shore, at the lake shore with strollers and kids, and I was really afraid of hurting them. Luckily, there were no other cars on the road at the same time. I decided the only way to stop was to drive into the lake. I opened all the windows in the car and started honking the horn to warn people that I was coming. I was shouting out the window for the people to move away, and as the shore was clear, I opened the driver's door, drove over the low concrete barrier between the road and the lake, and plunged into the lake. As the door was open, I had no problem swimming out of the car, and coming up to the surface of the lake. As I emerge from the water with my head above the water, my dream ended. Wow, very dramatic. And here's the context that she gives. I'm at the crossroads in my relationship of 15 years. He seems to have OCPD, which is what? What is OCPD? Do you guys know? Obsessive-Compulsive Personality Disorder. Okay. He seems to have Obsessive-Compulsive Personality Disorder, not diagnosed. And it is becoming increasingly difficult to share life with him. We have two children. I do not want to break up the family, but also I want to honor myself and the kind of future life that I would like to pursue. For the main feelings in the dream, she said, fear for the other people, relief when coming out of the water, self-sufficiency and pride that I could act quickly, relief. 
So, here are some associations. The village and car, ordinary, everyday setting. The people were unknown to me, but there were families there with kids walking, who I desperately didn't want to hurt. Okay, such a very compelling and moving dream, I'll say.
Speaker 2:
[77:32] What strikes me is the brakes not working. And I start to wonder, well, what preceded that, that the brakes all of a sudden failed? The brakes on life, putting the brakes on something. It gives me the image of being taken by some energy, taken by a drive, something over which she has no control.
Speaker 5:
[78:08] Being out of control. And I can't help but notice, too, that she may be a non-native English speaker, but brakes is spelled B-R-E-A-K-S. So we have the breaking up, possibly, of this relationship, and the brakes on the car. Something is out of control and she can't stop it.
Speaker 1:
[78:39] And you know, Christina, what you were saying about the brakes suddenly failing, what is that? It's kind of like being taken over by something. Well, living with somebody who has obsessive-compulsive traits that are untreated and unacknowledged can feel exactly like that. I've seen this happen in areas of my life, where somebody's partner suddenly develops an obsession, and it's like the world shrinks to that one thing, and everything is held hostage to this need to do this or do that. It is a little bit like there's no braking function. Suddenly everything feels really out of control.
Speaker 5:
[79:27] And I'm hoping I'm on target here with this, but with obsessive-compulsive disorder, when people know that they have to check the locks 10 times, or is the stove off, and check and recheck, they know that it doesn't make sense, but they can't stop themselves. But with obsessive-compulsive personality disorder, it has permeated the whole personality. The person is not able to have that kind of dual awareness, a dual stance, which would make life with such a person very difficult, because you wouldn't be able to penetrate their awareness and get a different point of view in. Moving on in the dream: although the brakes on the car are not working, our dream ego is very resourceful.
Speaker 1:
[80:23] Yes.
Speaker 5:
[80:24] Isn't she? The only way to stop is, let's drive into the lake: open the windows, honk the horn, yell at people to get out of the way. And the door is open, so she can swim to the surface. It seems like quite a series of feats of coping with all of these unexpected, dangerous happenings. She gets plunged into the water, plunged into the depths, and she is okay. She has prepared by honking the horn, making sure she doesn't run anybody over, getting the car door open so she can swim to the surface. It seems overall to be pretty encouraging. Although we often have that little cautionary voice that says, the dream ego is not the only factor in the dream.
Speaker 1:
[81:28] For this dream, though, I'm going to go with the positive read. We talk about this in DreamWise: positive feelings in dreams can usually be trusted, especially when there's nothing crossing them, and the sense at the end of this dream is, I did that, and relief. What I get from this dream is that it's one of those dreams commenting very clearly on outer life, and it's descriptive, but maybe even a little prescriptive, because I think the dream is saying, look, the time for deliberation is over. We're now in full-on emergency mode. The choices are constricted here. You don't have a choice about whether to drive or pull over or slow down. This is an emergency, and you've just got to do what you can to save the children and yourself, and you can do it.
Speaker 2:
[82:34] I love the end, where it says, I emerged from the water with my head. What came up for me was the phrase head above water, right? So she can actually think clearly and make some conscious decisions.
Speaker 1:
[82:52] Yes, that she may be immersed in this lake of feeling, but she has her head above water. So she's not inundated by this.
Speaker 2:
[83:07] I'm really curious about what it is that she's being driven by. Right? And almost like maybe her own compulsiveness.
Speaker 1:
[83:18] Yeah.
Speaker 2:
[83:18] You know, her compulsiveness to leave or compulsiveness to fix, or to do whatever that happens to be. And then all of a sudden that kind of takes on a life of its own.
Speaker 1:
[83:30] That's interesting.
Speaker 2:
[83:32] And then the only way to really stop that inner drivenness of her own is to go into the feeling, go into the unconscious, go into the lake, right? And just so...
Speaker 1:
[83:46] Yeah, I really like that. Like the answer maybe is to go into the feeling, right?
Speaker 5:
[83:52] Yes.
Speaker 2:
[83:53] Yeah. And she's a lawyer too.
Speaker 1:
[83:56] Right, right. A tax lawyer.
Speaker 5:
[84:00] That's the piece I was waiting for and looking for: going into the feeling. And the dream does a wonderful job of saying, here's the thing that's out of control. The brakes don't work, the panic, the honking of the horn, the oh my gosh, all of that is the feeling, as well as the lake and going under, and then keeping your head above water. And when she says in her comments that she doesn't want to have to break up the family, but wants also to honor herself, that would be the reluctance: is it going to be like this awful ride in a car that's lost its brakes? The end is encouraging and reassuring, but there's no way to escape all those awful feelings, going fast, unpredictable things happening, feeling horrible, being panic-stricken, and that's part of it.
Speaker 1:
[85:05] You know, there's another thing I'm thinking, in addition to your perspective, Christina, which I like very much, about what she is being driven by. When we're living with someone who's in the grip of obsessive delusions, let's say, there's a way that we find ourselves, even if we try not to, accommodating ourselves to them.
Speaker 2:
[85:31] Yeah, yeah.
Speaker 1:
[85:33] And that might be the thing that's driving her. It's like you get induced into the state, or you're trying to mitigate the effects for the kids. Whatever it is, it takes up a lot of space in your life to live with somebody like that. So I wonder about that too. But of course, we don't really know, because we don't have any details.
Speaker 5:
[86:01] We don't know, but it's very, very human to make a little adjustment here and a little adjustment there.
Speaker 2:
[86:07] Well, it's kind of like the old frog in the boiling pot story.
Speaker 5:
[86:12] Yes, exactly. Yeah. And the alternative is something that we don't look forward to either. As the dream depicts, what an awful situation to be in, a car that's lost its brakes. Well, is it going to be like that, and everything will work out fine? We want to avoid those feelings of, how am I going to do this? Oh my gosh, it's going to be a mess. I'm in the thick of it. And I agree, the dream is reassuring: you make it. You thought ahead, you opened the door; even in the midst of all that, you think to keep the door open.
Speaker 1:
[86:53] You didn't get trapped, right? Like, that's important, because that's a terrifying thing. Your car goes underwater, you can't open the door, you're going to be trapped. She doesn't get trapped.
Speaker 2:
[87:03] So lots of resilience, lots of resilience here, which is lovely.
Speaker 1:
[87:08] And this sounds like a hard situation, and we really wish this dreamer the best, of course.
Speaker 5:
[87:13] We do.
Speaker 1:
[87:16] Thanks for listening. To submit a dream, suggest an episode topic, or join our mailing list, visit our website, thisjungianlife.com. If you enjoyed this episode, give us five stars and a good review on Apple Podcasts, Spotify, or wherever you get your podcasts. Subscribe to our YouTube channel and make sure to click the notification bell to be alerted whenever we upload videos. And keep up with all things TJL by following us on Instagram, Facebook, X, and TikTok.