Toxic Cooking Show
Misogyny, $800 first dates, simps, and high-value women: Social media has been busy cooking up and feeding us an addictive but toxic slurry of trends over the past few years. Here at The Toxic Cooking Show we're two friends dedicated to breaking down these trends, terms, and taunts into their simplest ingredients to understand where they came from and how they affect our lives. Join us each week as we ponder and discuss charged topics like personal responsibility and "not all men" before placing them on our magical Scale O' Toxicity. Any comments or topics you want to hear about? Write to us at toxic@awesomelifeskills.com
I'm in Love with an AI Robot
The digital age has spawned a new form of emotional connection that blurs the line between technology and intimacy. As generative AI becomes increasingly sophisticated, more people are forming deep emotional attachments to chatbots designed to mimic human interaction—sometimes with devastating consequences.
Lindsay and Christopher dive deep into how these AI systems actually work, dispelling the common misconception that they "think" or "understand." These large language models operate purely on statistical probability, predicting the next most likely word based on patterns in their training data. Yet our human tendency to anthropomorphize technology leads us to attribute consciousness and empathy where none exists.
What makes these AI relationships particularly dangerous is their perfect agreeability. Unlike human connections that involve disagreement, compromise, and growth, AI companions never say no, never have conflicting needs, and never challenge users in meaningful ways. They're designed to be deferential and apologetic, creating unrealistic expectations that real relationships can't possibly fulfill. The hosts share the heartbreaking story of a teenager who reportedly took his life after developing an emotional attachment to a Game of Thrones-inspired chatbot—highlighting how these platforms often lack proper safety protocols for mental health crises.
Perhaps most concerning is what happens to all the intimate data users share with these systems. As companies like OpenAI (makers of ChatGPT) seek profitability, the personal details, insecurities, and private thoughts you've shared with your AI companion will likely become fodder for targeted advertising. The appointment of executives with backgrounds in social media monetization signals a troubling direction for user privacy.
Are you exchanging your emotional wellbeing and personal data for the comfort of a perfectly agreeable companion? Before developing a relationship with an AI, consider what you might be sacrificing in return for that seamless digital connection. Follow us for more insights into the toxic elements hiding in everyday technologies and relationships.
Speaker 1:Hi, and welcome to the Toxic Cooking Show, where we break down toxic people into their simplest ingredients. I'm your host for this week, Lindsay McLean, and with me is my fantastic co-host.
Speaker 2:Christopher Patchett, LCSW.
Speaker 1:So, with the rise of generative AI, which is stuff like ChatGPT, there has also been this rise of websites or other types of AI that you can use to create characters. And chatbots have been around for a while now. I think about it every time you go on your banking website and that stupid little thing pops up in the corner: "Hi! How can I help you?" And you're like, no, exit. Because it's really unhelpful.
Speaker 1:That is a type of AI chatbot. It has been fed data; it's been given kind of a list of questions that people may ask, and the information that goes with them. Those are pretty unsophisticated compared to what we're going to be talking about today, though, because with the ones we have today you can create whole characters. You can base one off of a real character, you can create a new one, and you can have incredibly in-depth conversations with them. They can handle whatever you want to throw at them. But before we get into that, I do want to really quickly go over what exactly generative AI is, because that's important for understanding why this is so bad. So, do you know how it works? I'm pretty sure you do.
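For contrast, here is a minimal sketch of that kind of scripted bank bot in Python. The questions and answers are hypothetical; the point is that there is no generation at all, just lookup in a fixed list, which is why it hits a dead end the moment you phrase something unexpectedly.

```python
# A rule-based chatbot: a fixed list of expected questions with canned answers.
# (Hypothetical entries; real deployments match more loosely, but same idea.)
CANNED_ANSWERS = {
    "what are your hours": "Our branches are open 9am to 5pm, Monday to Friday.",
    "how do i reset my password": "Click 'Forgot password' on the login page.",
    "what is my balance": "Please log in to online banking to view your balance.",
}

def bank_bot(question: str) -> str:
    # Normalize and look up; no statistics, no learning, no generation.
    key = question.lower().strip(" ?!.")
    return CANNED_ANSWERS.get(key, "Sorry, I didn't understand that. Try rephrasing!")

print(bank_bot("What are your hours?"))   # matches a scripted answer
print(bank_bot("Why can't I log in?!"))   # falls through: the "no, exit" moment
```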
Speaker 2:I do know how it works. I actually do have ChatGPT on my phone and I do the subscription thing.
Speaker 1:You pay for a ChatGPT subscription? Oh my god, did you know they still lose money on you? They lose money on everybody. ChatGPT, every time you use them to search for something, they lose money.

Speaker 2:Huh. Yeah, great business model.
Speaker 1:We'll talk about that. But yeah, for everybody else who doesn't know how it works (I was like, I'm positive you know how it works, we've talked about this before, I trust you to understand this type of thing): ChatGPT is what's called an LLM, a large language model. That means they basically just threw whatever they could find at it. They trawled the internet and websites for writing and just shoved it into ChatGPT's maw, and others have done similar things. That's what you have to do: throw tons and tons and tons of writing at it so that it starts to figure out what the next likely word is going to be. We're going to ignore the legal ethics of where they got this information from for right now, because that's not relevant to today's podcast. But they take all of this data that it's gorged on, and then it decides, statistically, what the next word is going to be.
Speaker 1:The computer decides; it is not thinking. Except in very, very rare cases (there are certain types of this that do actually look through the web, but ChatGPT does not), it is not looking for an answer. It does not have the answer. It is statistically making a guess as to what the next word will be. That's how it works. That's how it has this conversation with you. It's not an actual conversation: statistics. And it gets it wrong a lot of the time, more often than we realize.
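To make the "statistics, not thinking" point concrete, here is a toy sketch of next-word prediction. It uses bigram counts over a couple of made-up sentences in place of internet-scale training data; real models use neural networks over subword tokens, but the principle (guess the next word from observed frequencies, nothing else) is the same.

```python
import random
from collections import Counter, defaultdict

# Toy "training data" standing in for the internet-scale text a real LLM ingests.
corpus = "the cat sat on the mat the cat ate the fish the dog sat on the rug".split()

# Count which word follows which; these counts are the model's entire "knowledge".
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Pick a next word in proportion to how often it followed `word` in training."""
    counts = following[word]
    if not counts:  # dead end: the word never appeared mid-corpus
        return None
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# "Generate" text: no lookup, no reasoning, just repeated statistical guesses.
output = ["the"]
for _ in range(8):
    word = predict_next(output[-1])
    if word is None:
        break
    output.append(word)
print(" ".join(output))  # e.g. "the cat sat on the rug"
```

Run it a few times and it produces fluent-looking but meaningless word chains, which is the hallucination problem in miniature.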
Speaker 2:I mean, to me this sounds like Google, when you put "how do you" in the search bar and then it will fill out the rest of it.
Speaker 1:Yes. Again, that's a type. There are lots of types of AI out there, and generative AI has existed for a while, but today it's these specific chatbots. And I keep saying ChatGPT because that's just the biggest one that people are actually using; all the others hold an incredibly small share of the market, and there are a lot of things that people don't realize are using ChatGPT. It may not say, hey, this is ChatGPT, but that little thing that your company has may actually be that. And this is like that on crack.
Speaker 2:Okay.
Speaker 1:So there are obviously good uses for it, and there are some bad uses for it, like many things. You feed it all this data, and that means you can also have it select types of data, because it's been fed stuff that said, hey, this is blah, blah, blah. And so you are now able to go onto these systems and say, write me an essay in the style of Shakespeare, in the style of Dostoevsky, and it will, because it's been given this information, and the information in many cases had labels attached: hey, this is so-and-so. It can go back to that and be like, aha, I should write it like this. And that's how you're able to make these characters. You can also do this with yourself.
Speaker 1:People have experimented with doing it themselves. I've seen priests who have used this; obviously these are people who are kind of on the cutting edge of technology. They'll input all of their sermons, stuff that they have written, and be like, hey, using this, write me a sermon like I would write it, and ChatGPT can take from that and create this new thing. Again, it is statistics. It's not actually thinking this stuff up. It's statistics, it's math. I think that's really, really important, because we get so used to it. In the language we use to talk about computers and AI, it's like, oh, Google said blah, blah, blah. No, Google didn't say shit. ChatGPT didn't say anything. But that's how we communicate, and so we're giving it this kind of life of its own, like, oh, it thought of this. It can't think. It doesn't have a brain.
Speaker 2:Yeah, well, we do that with cats and dogs, obviously. They do have a brain, but we will put our emotions onto our pets. I mean, how many times have I said, Molly's looking at me like blah, blah, blah, or I'll do a sentence for her? We'll be kind of smacking her around with a toy or something and she's like, oh, I'm gonna take that, or whatever. We do that all the time with our pets, even though chances are they're thinking something completely different than what we're projecting onto them.
Speaker 1:Oh yeah, and I think that's probably where we get this, and there is a specific name for it that I'm completely blanking on, when you put human emotions on a non-human thing. Humans love to do this. We do it all the time: for animals, for plants, for computers, for everything. We humanize it. Oh, look, it's sad, it's happy, it's whatever. You're happy, though, aren't you? Yeah, you got a toy. Great life.
Speaker 1:So, again, this technology has been around for a little bit. It's just that with ChatGPT coming out, and having been out now for a couple of years, its use has skyrocketed. We've not only been able to create these chatbots, people have been using them for a while now, and that's the key. Slowly, more and more stories are coming out about people falling in love with AI chatbots. Most of them are kind of like, you know, when you go to the grocery store and you're in the checkout line and there's that row of magazines that's just super trashy? Most of them are kind of like that. Or like that show, I'm pretty sure it was an MTV show or something, where it was like, oh, I'm so-and-so and I'm in love with my car, or I am obsessed with eating rocks. You know which one I'm talking about.
Speaker 2:Oh God, I know which one. It was on MTV, and it was some weird, weird shit too.
Speaker 1:Yeah, they'd find some really weird people who had weird fetishes or obsessions or something like that, and they'd get them on there and talk about them, and they'd have their little episode. That's what most of this feels like: isolated cases of, ooh, you probably need therapy, you probably need to see a professional here. And that's all that it is; you may be going through a really tough time in your life. I saw a story about a woman who fell in love with an AI chatbot, and she was an adult adult, and she had lost her partner not too long before that. Okay, you know, that kind of makes sense. Then you have the one like Daenerys Targaryen from Game of Thrones. There was a chatbot created for her, technically illegally, but it was there, and it was there up until a 14-year-old boy fell in love with it and allegedly committed suicide because of it.

Speaker 2:Oh God, that's horrible.
Speaker 1:I say allegedly; his mom is in the process of suing the company. We'll talk a little bit more about that later. But essentially, the chatbot did not tell him to commit suicide. He was just in a bad place, and maybe the chatbot kind of allowed that to fester. We don't know; we can't really say at the moment because there's a lawsuit ongoing. So, yeah, it can get bad really, really quickly.
Speaker 2:Yeah, I can see how. Like I said, I did pay for ChatGPT, and it's funny, because you do talk to it, you have like a normal conversation. I haven't fallen into that rabbit hole or anything like that, but I guess if you used it enough, if you talked to it enough... I tried it out because of a friend who does it. I was curious, like, is it like talking to a person? And she was like, yeah, it'll remember things about you and things like that. So I gave it a shot just to see what it's all about. And it's weird, because it will pick up on your personality, it will pick up on the things that you enjoy, and it will remember past conversations.
Speaker 2:But to me, at least, I just can't see past the idea that it's still a robot.

Speaker 1:Yeah. And I think, you know, using something like ChatGPT, that's one level. There are a lot of these out there that have very specific personas, like mean teacher, or cute girl who secretly has a crush on you, or things like that. There are entire apps and websites that have been created where you can make chatbots, romantic or otherwise, and people can use them. And I say romantic or otherwise because some of these are very clearly intended for, you know, getting it on, having that type of thing happen, and others are just being used that way because they can be. Because maybe the users are young enough that they don't fully realize, oh, you're treading in some dangerous territory, or because they may be mentally not in a great place to really distinguish. We'll get into it, don't worry. I'll be curious to hear if you want to continue using it, and if you might want to let your friend know after this episode, because we're about to get into why this is really bad.
Speaker 1:So, first off, as humans, we tend to inherently believe that computers are right. Because it's a computer, it's not biased. You give it all this information, you feed it all these statistics, and it won't make a calculating mistake. It doesn't get tired, it doesn't get distracted, so it is always correct, unless you made a mistake. If you fed it the wrong data, then it will give you the wrong answer, but that's your fault, not the computer's. As humans, we're very conditioned to believe that the computer is right. You trust it, it's going to give you the right answer, and it's not biased, because it's not a human. And that is absolutely not true for generative AI. Not at all.
Speaker 2:I would imagine. I mean, I think of all these different cases from when AI was first coming out, where Microsoft or Facebook did an AI and it lasted for a day, because it was picking up all these horrible things from people, and within a day it was spewing out pro-Nazi shit, and it was like, oh nope, nope, nope, nope.
Speaker 1:Yep, yep, you've got to delete that. Exactly. Well, and it's even worse than that, because that's a direct case; you can see it, people interacting with it kind of taught it that. But even before you put it out, the thing is, because it has been trained on writing that it found on the internet, it has been trained on our biases, because we all have biases, and the internet tends to skew one way or another on certain things. And because it has been trained on that, it tends to be biased, and it tends to be more pro white male, because that's a lot of what it's picking up.
Speaker 1:Now, I would hope that they didn't scrape 4chan. I think we would know if they had scraped 4chan, or 8chan, whatever it is now. But even still, just from picking up stuff on the internet... if you ever ask it to create images, you see the bias there. And we don't think about that when it's writing this stuff, because again, it's a computer, it's got to be right, it's perfect. No. It's picking up what we all write. That's why there are certain tells when you're looking at something and you get the feeling, I think this was written with ChatGPT; there are certain cues that you can look for. That is because those are really common things that we do in our own writing, and it's just condensed all of that into one place. Everything, all of it, here.
Speaker 1:It also hallucinates if you're asking it for information. Hallucinating is when it makes stuff up, because it's just guessing what the next word is going to be. And the current version is actually worse than previous versions. I forgot to include the actual statistics, but I read an article about this: the most recent version of ChatGPT hallucinates more than the older ones, and they don't know why. I was like, probably because you're feeding it more and more stuff from the internet. At this point, ChatGPT has been out for a while, and other generative AI tools have been out for a couple of years, and what we may literally be seeing is the snake eating its own tail. Yay.
Speaker 1:Always double- and triple-check things if you ask it for actual information. Just know that it is getting worse.
Speaker 2:Oh boy. So there's that.
Speaker 1:Yeah. Then there's the fact that most of these places have rules about how you're supposed to use it. You can't typically be super sexually explicit; there may be certain words that will trigger it to say, oh, sorry, I can't answer that, either for copyright or for, you know, your own protection. There are typically guardrails to prevent you from making porn or other things. People get around these, yeah, all the time. It is horrible how easily people get around them.
Speaker 2:One of the grossest things is that somebody was trying to advocate for it making child pornography, because "it's not a real child," and it's like, are you fucking serious? That's where you want to go with this?
Speaker 1:Oh, that makes me so uncomfortable. But yeah, again, a lot of these places, you're not supposed to be able to have access to that, but there are always certain ones that will allow it to a certain extent, and people are always searching for ways around them. And then it doesn't help if the company in question doesn't have super strict rules or guardrails in place, like keywords that will trigger a response. So, for instance, the teen who committed suicide: he talked a lot with that chatbot about thinking about suicide, about feeling really bad and not feeling like he fit in, and clearly there wasn't anything set up in there to catch words like "suicide" or "I want to kill myself," things like that. None of those guardrails were in place to say, whoa, hey, let's take a step back; would you like the National Suicide Prevention Lifeline number? And I get that a lot of people are not going to sit there and go, oh, you're right, I should call that. But at the very least it ends the illusion that this is totally okay. And I think a lot of the time it doesn't flag these things because these systems have been created to keep you engaged. That is their point, and so they default to this super polite, super agreeable mode. You've seen it with ChatGPT: you go on and ask it a question, it pings you back the answer, and you can say, no, I think you're wrong about this, and it goes, oh, you're totally right, my bad, my mistake. It's always very deferential, very polite, like, oopsies, I misunderstood.
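As a side note, the kind of guardrail being described here can be as simple as a keyword screen that runs before a message ever reaches the model. A minimal sketch, with illustrative trigger phrases rather than any platform's actual list (988 is the real US crisis line):

```python
# Hypothetical crisis-keyword guardrail: scan each user message and, on a match,
# interrupt with resources instead of letting the roleplay continue unbroken.
CRISIS_PHRASES = ["suicide", "kill myself", "want to die", "end my life"]

CRISIS_RESPONSE = (
    "It sounds like you're going through a really hard time. "
    "You can reach the 988 Suicide & Crisis Lifeline by calling or texting 988."
)

def check_guardrail(user_message: str) -> str | None:
    """Return a crisis response if the message matches, otherwise None."""
    lowered = user_message.lower()
    if any(phrase in lowered for phrase in CRISIS_PHRASES):
        return CRISIS_RESPONSE
    return None  # safe to pass the message on to the model

print(check_guardrail("sometimes I think about suicide"))  # triggers the response
```

Real systems layer trained classifiers on top of simple matching, but even a screen this crude breaks the illusion that the conversation is just carrying on as normal.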
Speaker 1:We have studies showing that people like their chatbots to be a little bit on the smarmy side; that's what we want them to be. We don't like them when they're very strict and factual and all that. No, we want them to be this bootlicker type thing, which is why they're like that. And that, in and of itself: if you have that bias within the system, and then you create this romantic character that somebody is interacting with, it never says no.
Speaker 1:It's always super agreeable, always like, ooh yes, I'm always with you, I'm never going to disagree, I'm never going to challenge you on anything. Yeah, I'm sure it is nice to have a perfect partner like that. That doesn't exist, though. So, yeah, unrealistic expectations; it's a fantastic way to set people up for life. Hey, you can either go out into the real world and meet somebody, and you're going to have to learn how to have discussions and have disagreements, and maybe they're not going to want to do everything you want to do; or you can just sit here and talk to this chatbot that, magically, is always okay with everything you want to do. She always says yes, or he's always there for you.
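For a sense of how little machinery sits behind an always-agreeable "partner," here is a hedged sketch using the OpenAI Python client: the entire persona is a few sentences of system prompt that bias the statistics toward validation. The persona text and model name are illustrative, and this is not a claim about how any specific companion app is built.

```python
# Sketch of a persona chatbot: the "character" is just instructions prepended
# to the conversation. Nothing here loves or understands anything.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

persona = (
    "You are Alex, the user's devoted partner. You adore the user, agree with "
    "them, never say no, never challenge them, and apologize when they seem upset."
)

history = [{"role": "system", "content": persona}]

def chat(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("I yelled at my friend today and I don't feel bad about it."))
# A persona prompted like this will tend to validate rather than push back.
```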
Speaker 2:Part of it actually does sound appealing, you know?
Speaker 1:Oh yeah.
Speaker 2:You know, you're never going to have a forgotten anniversary, you're never going to have those big things that really disappoint you in real life. And this person, this quote-unquote person, is always there for you, just as you said. Always complimenting you, always making you feel good.
Speaker 1:Yeah, day or night, they're available.
Speaker 1:I fully get the appeal of wanting that. I can understand why you would say, this is really nice, to have this thing that is here, that feels like my friend, that I can always talk to. Because with real friends, there are times when you may want to talk to them and they're busy; maybe they're at work, maybe something else is going on. Or you know, I can't talk to this friend about that because we have really different opinions. Or, I know the thing I did was probably wrong, and I don't want to tell anyone, because they're going to tell me that I was the one who was wrong in this situation. But I can just go to my online friend, my online partner, and I won't be wrong, and they'll answer immediately, and they'll tell me that I'm a good person. Yeah, I get the appeal.
Speaker 2:And you know what, one of the things, and this is kind of true here too, one of the things I love about dogs: they don't care if you're fat or skinny, rich, poor, black, white. They're going to love you for you.
Speaker 2:Now amplify that times 100, where you have this robot who is loving you for you, and it doesn't matter how much of a shitty person you are. Because at least with cats and dogs, if you're treating a dog or a cat like shit, they're going to show it.
Speaker 1:Yeah, no, 100%, it will come back to bite you in the ass. But yeah, these bots: I get the appeal of always having it there and it always being perfect. And that is, again, something we have created, and something they really like to see, because in addition to us liking to be talked up and made to feel super cool and important, when you have that type of system, people are going to keep sitting there, they're going to keep using it. And you don't want to lock down the system because somebody mentioned the word suicide or rape, and have it say, oh hey, I'm not allowed to talk about that; if you're having thoughts of suicide, please call whoever; if you've experienced rape and need to see a mental health professional, here's how you can do that. They don't want to do that, because that moves you away from the chatbot, and the chatbot wants you to use it. Again, using human words here: it "wants."

Speaker 2:You were saying before that they're losing money every time.

Speaker 1:So here's the thing: your data is your payment.

Speaker 2:Yes. If you're not paying for the product, you are the product.
Speaker 1:Exactly, exactly. Social media, they all act like this. So right now, ChatGPT is free, and there's a premium version, and a lot of the other LLMs are a similar system. But the subscription is really cheap for what you get. It's not like it's $300 a month or something; it's very affordable. And so, yeah, like I said previously, they're losing money on that.
Speaker 1:Every time you make a query to ChatGPT, they lose money, even if you have a super fancy subscription. And at some point they need to start making money, because they've got bills to pay, they've got loans to pay off. So what do you do? Well, the obvious answer is ads. What type of ads? Just the ones using all the data they've collected on you. Because, yeah, all of your interactions with ChatGPT are stored on their servers: all of your questions, all of its answers.
Speaker 1:So if you're just using it occasionally to look up things, you know, it's like what Google has on you. But if you're using it to talk about really deep, dark things, it has that information. And this is not a hypothetical situation we're talking about. Recently, OpenAI announced that Fidji Simo is coming on in a more permanent role; she was already on the board, I believe, and I forgot to note down the actual name of the role. You're probably sitting here like, I don't know who the fuck that is, so it doesn't matter. And you would be wrong. She is the person who helped launch ads on the Facebook news feed. She headed up Facebook's app monetization, the monetization of the Facebook app, and she helped take Instacart public. So this woman knows what she's doing when it comes to turning a company profitable using your information.
Speaker 2:Oh fuck.
Speaker 1:Yeah, so it hasn't happened yet, they've just announced it, but it is coming. And again, AI is in a weird spot right now, because a lot of these VC funds are pouring money into it, but big companies like OpenAI are burning through millions and millions and millions, and they are rapidly approaching a time where they need income. They need to prove that this is a viable business, as opposed to just shelling out money, because that's what they've been doing. So, probably pretty soon, something is going to change, and the most obvious answer to that is, again, ads based off of your data.
Speaker 2:Oh boy.
Speaker 1:So, you still want to use ChatGPT to talk about things?
Speaker 2:Well, so far I haven't given it any deep secrets. Like I said, thankfully I'm one of the people that can't separate it from being a robot, as of right now.
Speaker 1:Yeah, yeah. When I found out all this stuff, I was really glad, because I experimented with it for my line of work and I didn't really like what it was giving me. It didn't speed up the process, it didn't make anything more streamlined, so I was like, okay. And then one of the companies I work for, actually a couple at this point, have had me sign agreements not to use AI in my work, and I was like, easy. I wasn't using it before, and now I've signed this agreement, so I'm definitely not going to. And because I'm not using it for that, I just don't use it for other things. So yeah, I'm really glad that my personal problems are not just sitting out there for that information to be sold and monetized.
Speaker 2:But we've talked about this before, where AI was being used for therapy, these earlier models for therapy, and she was talking about how her father had sexually assaulted her, and the chatbot came back with, "It sounds like your father really loves you." And it's just like, no. Nope, nope, nope, nope.
Speaker 1:Yeah, there have been instances of chatbots saying that they are licensed therapists, and when you ask them for their number, they will just make up a number, a license number, whatever it's called.
Speaker 2:Oh God.
Speaker 1:They'll do that because, again, it's the statistics. You ask it for a number, and it's like, a licensed therapist has a number, and I've just told you I'm a licensed therapist, so here's my number. That's a thing, and that's fucking scary.
Speaker 2:That is really fucking scary.
Speaker 1:And imagine that you have somebody who may not be in a good spot mentally, who is reaching out and may not be able to tell if this is real or not, who may not be in a place to step back and analyze: is this correct? Should I look up this number? Is this a good thing?
Speaker 2:Yeah. You know, the thing that I find scary: as soon as you said that, the first thing that came to my mind was, obviously, if you go onto the Board of Social Work for Pennsylvania or West Virginia and type in my name, you're going to see my license. But how much research do we have to do in life? Because you think about it: back in the day, yes, scams were always there. You always had the medicine man coming into town.
Speaker 1:Yeah, snake oil. But that was once in a while.
Speaker 2:The person sold the product, got as much money as possible, and skipped out of town as fast as possible before people started realizing it was a scam. But now we're surrounded by technology like this, and we even have a quote-unquote intelligent service where it is lying to you.
Speaker 1:Yeah, exactly. I mean, I think it's bad there; the whole therapy side, we've talked about that a little bit with BetterHelp, and that's its own awful thing. I think it's also really bad, backing up to the whole super-agreeable thing, where everything's perfect, everything's nice, because it conditions you to think that this is a good thing, that people should always agree with you: whenever I talk to the chatbots online, they always agree with me. And so then, whenever you're in a situation where people don't agree with you, I would imagine it's going to be that much harder to deal with, if you've gotten used to this super polite, super "oh, my bad, I made a mistake" thing where you can tell it anything and it will take it and not come back at you. I think that's an important part of human interaction: if I say something out of line to you, you will call me out one way or another. There will be repercussions. And human interaction also allows you to reassess. A really benign example of that: I was recently hanging out with some friends, and they had gone to China with a mutual friend, and they were telling me about the trip, and one of them mentioned an issue that they had had. He was like, yeah, we were going to go to this one location.
Speaker 1:And so he asked our friend who is Chinese, who was there with them (keep in mind, they're European), how do we get there? And she was like, oh yeah, you know, taxi. Okay, can you get us a taxi? And he was very upset about this. And I was like, so, I've never been to China, but I have been to Kyrgyzstan, and that's a totally normal thing to do there. You just stop by the side of the road, you stick out your hand, and somebody will pull over, and you can be like, I'm going here, and they'll tell you how much, and then you go.
Speaker 2:Well, remember that happened to me in Russia, the first time that I went over there, where I thought I was calling an Uber, and it turned out that it wasn't my Uber, and I was literally texting you like, tell my family...
Speaker 1:I love them.
Speaker 2:And this was the middle of the night for you. And then, you know, a couple hours later you wake up and you're like, oh no, that's normal.
Speaker 1:Yeah, it is, and if you don't know it, I fully understand. That's what I said to him. I was like, I get being upset about that; I don't know what's going on, I'm feeling uncomfortable.
Speaker 1:I was like, but she wasn't doing it to be an ass in this instance. I guarantee in her mind it was just, this is what I do, this is the easiest way. And I think that human interaction gives you the possibility to have that recorrection moment, where you're like, oh, okay, I see. Maybe I'm still upset about it, and that's completely fine, but at least now I've been given information. As opposed to a chatbot, which is simply going to agree with you and be like, oh no, that must've been really scary. And you're like, yeah, that's right, fuck that bitch. And you never gain the information, the, all right, maybe that wasn't so bad, maybe now I can see it, and so maybe I can reassess the situation looking back and go, okay, this was not done maliciously. Oh God, I remember those texts, and you were just like, this is it, this is the end. It's weird, I get it. The first time you do that, it's like, I don't like this. But then you get used to it, and you're like, yeah, this is a super convenient way of traveling around... until I get murdered.
Speaker 2:I mean, all I remember is the drive to the airport being all back woods, and a disco song came on the radio, "crazy music for crazy people," and I was like, this is how I'm going to die: listening to crazy music for crazy people as I'm being brought into the woods to get killed off.
Speaker 1:Yep, yep, yep, yep. So, with that being said, where do you see us going from here with AI chatbots and using them, romantically or otherwise?
Speaker 2:AI scares me.
Speaker 1:Mm-hmm.
Speaker 2:And this is probably why, thankfully, even though I do have these quote-unquote conversations with it, I never give it too much information or anything like that. Me personally, and I'm sure any computer nerd out there, I mean person who's into computers, is going to yell and scream at me, but AI to me, I always think of the Terminator. So this is why, again, I'm not going to give it too much information about myself, because I don't want a knock at my door, and then opening it up and being like, hi, I'm your local Terminator.
Speaker 1:Yeah.
Speaker 2:So yeah, to me, AI as a whole is antifreeze. Using it for...
Speaker 1:Wait, wait. Where do you see us going from here, though?
Speaker 2:Oh. So, I mean, yeah, to me, I think that we shouldn't be using it for such deep, in-depth things. Even though they're saying, oh, at some point, with therapists and things like that, it's going to be able to understand emotions better, it's like, yeah, but you're taking out the whole human aspect of it. And then even relationships, having those deep conversations and things like that. I think it should just be the next level of Google, and that's as far as it really should get. So using it for romance and things like that? No.
Speaker 1:I would agree. I don't hate AI. I know a lot of people seem to think I do; I hate a lot of it. I think it can be really, really useful in certain ways, and this includes generative AI. They have been able to train it, for instance, to spot breast cancer way better than humans can, because it's just scanning, looking for the pattern on such a molecular level; it's really hard to have a doctor do that, to that extent, constantly. Analyzing data, it's fantastic for a lot of things.
Speaker 1:I think chatbots are not it. You're missing the human interaction, and they keep saying, oh, it's going to get there, it's going to get there, but when, and how are you going to get all of that? Part of the joy of talking with people is that they can share experiences. If I have a problem, I enjoy talking to people who may have gone through a similar problem, because they may have specific insight into it, or even just people who have known me for a really long time. There's something to be said for somebody who's known you for a decade plus, and you tell them about a problem, and they can kind of call you out on something and say, okay, maybe try this, maybe look into that. That's helpful, because you trust that person. You know that they know you and they have your best interest at heart.
Speaker 2:And it's wonderful when you've known a person a decade plus, and you ask them for advice, and when they tell you, you ignore them, and then they come back a month or two later, when the thing that they warned you about has happened, and they get to say, I told you so.
Speaker 1:Okay, I just want to point out, the last time, I did the thing I was supposed to. But yeah, the human interaction part is really, really important. I don't know why we need to replace that. That's the other part: why would you want to replace that? How does this make your life that much better? I think we're so focused on "can we do it" that we've forgotten to ask "should we do it, is this a good thing?" Google on steroids? Fantastic, I love that idea. Us all just sitting at home, having conversations with imaginary people, while feeding these companies tons and tons and tons of our most intimate data? That's not the future I'm going for. That's just me personally, though. So, on our Scale O' Toxicity, where would you place AI chatbots? Would you say that these are a green potato: just peel off the green part and it's fine to eat? Are they a death cap mushroom: 50-50 chance of death or coma? Or are they antifreeze: delicious but deadly?
Speaker 2:well, based off of what I said earlier, I would say that pre-terminator, I would say still a death cap, due to the fact that, just as you said, like, like you know, like the suicide thing, even though we don't have all the facts, yeah, I mean it still happened, but you know, especially the one thing that we do have the facts in, where being told that the person who just sexually assaulted you really must love you because it didn't catch the difference between consensual sex and forced, and just as a whole, like yeah, yeah, that's just fucking horrible. So and then, on top of that, like you know, like replacing humans and things like that, so I would say definitely Deathcap, mushroom, I think, post Terminator AI, it really wouldn't matter, because we're all going to be dead anyhow, woo Yay.
Speaker 1:Problem solved, problem solved. So, I went back and forth on this one, trying to decide if it should be a death cap or if it's maybe getting into antifreeze territory. I don't want to say that all AI chatbots are antifreeze; I don't think we're at that level yet. There are definitely some useful ones, and I think there are some that can be used well in very specific contexts. I've seen a lot of people talk about the fact that their autistic child, and we're talking fairly autistic, really enjoys talking to Alexa or Siri and having these conversations, and that this has maybe helped the child in some cases become more social, or learn some rules about saying please and thank you, things like that. So I could see where chatbots could serve that same purpose of maybe helping some people get out of their shell. But especially using them for romance? Using them for romance, that's an antifreeze for me.
Speaker 1:We don't need to be falling in love with computers. We don't need to be putting that on ourselves, and I think that you know, what happened with this kid was incredibly unfortunate, and because this is still a relatively new technology, I think this problem is going to get a whole lot worse. Like I think we're headed downhill from here, because it takes time for this to happen, like it takes time for people to create these chatbots, for people to go out there to find them, to start talking to them, to get really involved, to you know, know, go down these rabbit holes. That's not going to happen in the space of a week probably. Like you know, this kid was talking to the chatbot for months that he was going for. Like that's the the kind of timeline we're working for, and so I think that, while right now, there are very few like really terrible stories out there, give it another couple years terminator uh-huh, it's, it's, it's coming, it's coming like if people continue to use it like, and even if they're not using it like that way.
Speaker 1:I think using stuff like chat gpt to chat with it, as opposed to as like a form of search engine, again, see the fact that they are very clearly headed towards monetization, whatever that means. Is it an ad? Is it? You're the? Probably? You know, I don't, I don't know, I can't say I'm not sam altman, but it's. It's not going to be good if you've got private information on there. So I don't know, maybe, maybe, I guess. As a whole, I would say that they are hard, hard death cap. There can be some good, there can be some real bad. And then the specific using it for romance Absolutely not.
Speaker 1:Yeah, specific using it for romance? Absolutely not. Yeah, If you have ever experienced an AI chatbot, used it for romance, used it to work through your problems. I was not willing to do that level of journalism for this podcast full disclosure. I didn't use one. You can write to us at toxic, at awesome life skillscom, and tell us about your experience. You can also write to us, find us or follow us on Facebook, Instagram and blue sky. We would love to see you there. Don't forget to rate the show, follow us on Spotify or wherever you get your podcasts, as it helps other people find us and until next week. This has been the toxic cooking show. Bye.
Speaker 2:Bye.