The Complexity of Toilet Paper
This is a podcast about the search for simplicity and making life less complicated. A show that dives into the everyday moments as well as life's big stuff where we overthink, hesitate, or just get stuck. Through honest conversations, unexpected insights, and a whole lot of potty humor, puns, and hearty laughs - we are here to help you ROLL with it and make life a little less complicated, one conversation at a time. So, come join us in the Stall! Toilet Paper not provided...yet!
Disclaimer: This podcast is for entertainment, growth, and informational purposes only. Any opinions expressed are those of the hosts and guests and do not reflect the views of any organizations we may be affiliated with. We’re not your therapists, lawyers, doctors, or plumbers, just a few folks talking it out with a roll of humor and a splash of real life. Please don’t make any major life decisions while on the toilet… or at least, don’t blame us if you do.
Show Credits:
- Show open music by RYYZN
- Roll Up music by AberrantRealities
- Stall Bridge music by penguinmusic
AI, Simplicity, And The Human Mess
What if the tools that promise a simpler life end up making everything louder? We pull up a seat and tackle AI with honesty, humor, and a clear-eyed look at what gets better—and what we risk losing—when speed becomes the default. Phyllis opens up about her early, visceral reaction to AI’s “non-humanness,” naming the worry so many feel: dependence, the erosion of struggle, and a creeping numbness that comes when machines make the hard parts too easy. Mark leans into pragmatic optimism, showing where AI earns its keep: compressing research, shaping first drafts for unfamiliar audiences, and creating structure when time is tight. Al draws a firm line in his creative life—turning down voice gigs that train synthetic voices—while still using AI to synthesize information, prototype ideas faster, and craft meaningful keepsakes for friends.
We don’t debate abstractions; we show the trade-offs inside real workflows. You’ll hear how better prompts act like a sharp chisel for thinking, why boundaries protect your voice and values, and where automation should never replace judgment. We unpack the emotional weight of “faster,” from frayed attention to the skills that atrophy when we offload too much, and we challenge the myth that everything new is automatically better. Along the way, we keep it grounded and a little ridiculous—yes, including a rapid-fire “AI in the bathroom” segment that turns into a lesson in designing tech that protects dignity, privacy, and health.
If you’ve been curious about using AI without losing yourself, this conversation gives you a map: start small, set time limits, pick leverage points, and decide what parts of your craft are off-limits. We leave you with a simple posture: use AI to serve your values, not define them. If it helps you free up energy for the work only you can do, keep it. If it dulls your edge, cut it. Enjoy the ride, then tell us: what’s one task you’ll never hand to a machine? Subscribe, share with a friend who loves a good debate, and leave a review to help more curious minds find the show.
Sometimes I wish we could go back to a time when things weren't so complicated.
SPEAKER_01:Welcome to The Complexity of Toilet Paper, the podcast that dives into the everyday moments where we overthink, hesitate, or just get stuck. Through honest conversations, unexpected insights, and a whole lot of humor, your hosts Phyllis Martin, Mark Pollock, and Al Emmerich are here to help you roll with it and make your life a little less complicated, one conversation at a time. Right, dude. The beauty of this is its simplicity. Speaking of which, it's time to enter the stall. Put the lid down or not, depending. Get comfortable and roll with it. Oh, worry not, dear friend. It's really quite simple. This is The Complexity of Toilet Paper. Ladies and gentlemen, we preface this episode with an important announcement. We will not tell you when, where, how, or if, but there is a chance that this entire show could be AI.
SPEAKER_05:I knew you were gonna say.
SPEAKER_01:I'm not done yet. Are Mark, Phyllis, and I real? Or are we artificial intelligence set in the stall to create this show? Inquiring minds want to know. Enjoy. This is the complexity of toilet paper. So, is it AI or not? Technically speaking, if you look at the way it's spelled, it is Al. And I hate that because Al looks like AI. But I don't know. What do you think? Was that kind of a cool intro?
SPEAKER_04:I knew that's what you were gonna do.
SPEAKER_01:I know you said that. You interrupted the, uh, artificial Al.
SPEAKER_04:I'm going full Phyllis today.
SPEAKER_01:By the way, none of Phyllis's laughs today will be real, because when she laughs, it overmodulates the microphone. So all of her laughs will be artificial intelligence laughing for her. I'm just excited to see what kind of laughs you put in their place. You know, like there could be some fun. I would not, I could not replace it, because it is, as I've said many times, a joy to listen to and watch that woman laugh.
SPEAKER_04:I was gonna say, I think our listeners would be upset if you put in some wonky AI laugh, but what do I know?
SPEAKER_01:Dude, if we muted you, if we changed you, if we fucked with your laugh, we would be shot on sight. Don't fuck with it. I know. I mean, I can see this in the future, Mark. We're doing The Complexity of Toilet Paper live, and it's like, everybody, this is Mark.
SPEAKER_03:Everybody loves Al. Everybody loves Al.
SPEAKER_01:And Phyllis 100%.
SPEAKER_03:Which would make me yeah.
unknown:Yeah.
SPEAKER_01:And there's a bunch of people off to the side, like hippies, going, I love Phyllis. I love hippies. There's some rednecks with really, really, really, really long hair and mullets going, I love Phyllis. And then there's some New Yorkers: I don't know Phyllis, but I love her. It's okay.
SPEAKER_03:Of course they do. What's not to love?
SPEAKER_01:Oh, there you go.
SPEAKER_03:What's not to love? Kinehora.
SPEAKER_01:Well, uh, hi folks. Welcome to the first show. No, no, one of the first full shows. Really, I guess it would be the second show of 2026. In full transparency, this is actually the first show we're recording in 2026.
SPEAKER_03:Giving away secrets.
SPEAKER_01:Nah, man, it's behind the scenes. Uh, my name is Al Emmerich.
SPEAKER_00:My name is Mark Pollock.
SPEAKER_05:And my name is Phyllis Martin.
SPEAKER_01:And we are real as hell. Very, very real. There ain't no AI in our stall, baby.
SPEAKER_04:You know, if I wipe. Oh, go ahead, Al.
SPEAKER_01:No, we wipe with real toilet paper.
SPEAKER_04:Yes.
SPEAKER_01:I tried wiping with AI and it doesn't work.
SPEAKER_05:I was only going to say that if I had used my maiden name, then I could have had a k at the end, like we have Emmerich and Pollock. And I could have used my maiden name just to like stay toe back. Moving on.
SPEAKER_01:Artificial intelligence would have probably seen that and edited it. So why are we talking about this? We have numerous topics that we want to traverse. 2026 is so exciting, but AI has been something that we have been dancing around for quite some time. And in full transparency, our fellow stallmates, we're gonna have some follow-up shows on this. But we thought we would start the conversation with what we experience ourselves and what we hear in our own world. Because what we realized together, and I'd love for you two to comment on this, is the dichotomy of our show and what's happening in AI. Our show is about how do you simplify the complex? How do you uncomplicate your life? How do you find simplicity? Well, AI can do so much of that. It makes things easier, it makes things move faster; it's been doing it for years and it's only getting better. And yet the principle, the application, the emotion, all of that that's wrapped up with AI, is quite complex. And there are billions of hours being spent by people, users, companies to figure this out, optimize it, maximize it, and not ruin the world, but to use it to make life better and make people's lives better or worse. And so that's a lot. And we're like, all right, well, how can we do this? And so we decided we'd talk about it. So I teed that up, but Mark, Phil, what do you have? What are your thoughts on that? Phyllis?
SPEAKER_05:Mark and I are staring at each other.
unknown:Waiting.
SPEAKER_01:Yeah, that was not a mic drop, that was a ball drop.
SPEAKER_05:I'll jump in. So yes, Al. I have a, um, not completely defined love-hate relationship with AI. So I know we're gonna get into all of this, but I think that is a great way of saying it: it is complex. And in some ways it could be very simple, I suppose, but for me it is not. And there's a lot that goes with it. I'll save the rest of my stories for when we get further into the conversation. But yeah, I'm eager to talk about it.
SPEAKER_00:So the reason I wanted Phyllis to go first is because, in all transparency, she actually did not want to talk about this topic a few months ago. When we were mapping out what topics we wanted to discuss, AI was her least favorite, which is fair, understandable. And so, a bit of a change of heart, which I think is kind of cool, that now you're open to chatting about it. I couldn't agree more, though, Al. I do think that it's the emotional pieces of AI that are the complex part, versus the actual application that we see and use as consumers. So I'm excited about diving into this topic as well.
SPEAKER_01:And as you're listening today, I want to preface: we are committed on this show to, we'll call it, an introductory conversation, knowing that we're gonna have more, of course, down the road. We already plan and have a second show we want to do, and maybe even a third after that. But this isn't solely about the technical application, or getting into the benefits of AI and how to use it and all that, blah, blah, blah. Remember, our show is rooted in how do we invite people into the stall to have an authentic, real, meaningful conversation that ends up providing some anecdotes, some ideas, some pathway to making life simple for you, that removes some layer of complexity. And, I mean, look at the past. We've tackled everything from racial issues to entrepreneurship to disease, to getting out of our own way, life, death. These are the things in life that present complication. This just happens to be a technology that's wrapped up in the human existence.
SPEAKER_00:And we're not approaching this as experts, is what I think you're saying. We're not experts in this. Yeah. We're just three people who are experiencing life with it, and we think the likelihood is you are too, and probably have some of the same questions and fears and thoughts that we do. And so that's what we want to unpack today.
SPEAKER_01:By the way, um we're still trying to figure out if we're experts in anything, just for the record.
SPEAKER_00:Oh, I can answer that right now, and for me, it's a hard no. Nope, not an expert.
SPEAKER_01:Oh, I think we're experts on toilets and restrooms now and toilet paper. I mean, shit, we're doing a show on the complexity of toilet paper. We are experts. Technically, we haven't done a show on toilet paper yet.
SPEAKER_00:I know. We need to do that.
SPEAKER_04:Oh, we need a show on toilet paper. Yes.
SPEAKER_00:Yes, we have not. I mean, it's in our title, and we have not done that yet.
SPEAKER_01:We are sending this show, we are sending this show to all the toilet paper companies, and we're gonna ask them, would you be in the stall with us?
SPEAKER_00:How about this? Anybody who's listening to this episode right now: if you have a connection to any toilet paper manufacturer, distributor, anything like that, if you would kindly forward our show to them with a little recommendation, we would love that.
SPEAKER_01:Even if you don't, if you're not on the toilet paper side, but you're on the side that makes the thing the roll goes on, that would be good. Anything related. All right. Okay. All right. So, well, Phil, do you object to kind of opening up this can of whoopass here and sharing your mixed emotions? Because I have a very specific question for you to start it off, if you're open to it.
SPEAKER_05:Sure. So when I lived in Jacksonville, Florida, and I worked at the United Way, the CEO of the United Way at that time was leading our leadership team retreat. And she saved a section of the retreat, and we watched a video done by a futurist, I can't remember the woman's name. And the woman had clearly been doing this work for many years, she's an expert, and the focus of what we were watching was artificial intelligence. And I literally got up after that was over and walked out of the room, I think we were taking a natural break anyway. When I came back, my boss looked at me and she said, I thought you were gonna come back and quit. And I said, I just about did, because I don't want any part of what that is about. It feels so non-human to me. There are so many implications in that for me. It almost disgusted me. And a little bit it still does now. So that was my initial introduction into a lot of the AI that we see now. So to say that I was a late adopter, and I'm not usually a late adopter, but AI really just kind of cuts me in a particular way. Now I will say, oh, go ahead.
SPEAKER_01:Yeah, you use some very strong language, disgust. So what what disgusted you?
SPEAKER_05:The non-humanness of it.
SPEAKER_00:What about that, though, is disgusting?
SPEAKER_05:Let me caveat this by saying I do use AI, so I know it sounds like I'm talking out of both sides of my mouth. I understand that as I'm saying this. It's just this whole loss of people, the loss of heart, the loss of, I'm just gonna say quite honestly, the dumbing down in the way that I think many, if not most, people will end up using AI. And some of what we're seeing take place now is disgusting to me. It's disgusting. Life is hard. Sometimes it should be a struggle. We shouldn't always have every single thing we need without having to do the work to find out what it is or how it works. And I'm not saying there's not a place for AI, and I fully admit there's so much to learn about it, but I am also watching in real time what I would say is the damage that it is doing and that it will continue to do. So much so that I'm watching it now even on social media. Somebody will comment on something, and somebody will chime in and say it's AI, but nobody could tell the difference. And so for me, there are a lot of social implications that are not healthy when we're looking at a tool like AI. And quite honestly, I think we're all just gonna ignore that, because that's pretty much what we do, because it's faster, it's better, everybody thinks it's cool. So we're all gonna use it, it leads to better things. Now, I'm not negating that there are some really important things happening with AI, not negating that at all. But I'm saying, when we get spun up into it's the best thing ever, I think as human beings we lose track of who we are, sometimes even our purpose on this earth, or how something like this tool can actually be hurtful and harmful. And those conversations are fewer and farther between than the conversation of let's do it, let's do more of it. A whole industry is now built up on teaching people how to use AI.
And it gets spun up into this whole thing. And there's a lot of that that I actually find disgusting. Oh, I'm having a good time today, let's go.
SPEAKER_01:So, really, my interpretation of that, Phil, is the common theme here is the human being. Right? Both the human being being negatively impacted, and the human being doing the impacting. And so to me, when I listen to you, again, with no judgment and fully understanding, and knowing we've had a similar conversation, but not this in depth. Everybody that's joining us in the stall, we've not gone to this depth. So we're traversing new territory at a new level, the three of us. But there are two things that pop into my mind. The internet: the very first time, people said many of the same things. The internet was a tool to find things, to do things, and of course it then became for porn, for all these bad things. A gun, okay? A gun, a weapon of any sort, a knife, any weapon, right? It is in the hand of the user. And what I'm hearing, what I think I'm picking up, is this is really about intent versus the technology itself. Is that correct?
SPEAKER_05:It is correct, but, Al, add to that: it's always about the intent versus the technology. And we have proven over and over and over, and history from the beginning of time has proven over and over and over again, that all new things and all new developments, left unchecked or just left alone, are not necessarily good. We think they are good because there's a whole system in place to say it's good. So it's not any different than any other tool that has come along. There is good and there is not good. And we, in my opinion, as just a society of people, have yet to learn how to balance those two things. And I do believe this answers the question of whether I will show up and use my voice on a podcast.
SPEAKER_01:Dang. Well, Mark, I have to give Phyllis the ultimate mic drop, because of all of our shows, I don't know that anyone until this moment has illustrated with such clarity the ultimate conundrum of complexity, which is the human being's propensity for good and evil. Wow. Shit, this show took a turn.
SPEAKER_05:I've been waiting all day for this one. I gotta tell you.
SPEAKER_01:And I'm gonna let you follow that up, Mark.
SPEAKER_00:I gotta follow Phyllis on that.
SPEAKER_01:Yeah.
SPEAKER_00:Sit on that lid and let it rip. Yeah, let it rip. I'll start by saying I understand your perspective and I have respect for where you're coming from. And I agree that people can use good things for bad, and we do every day. People do all the time, right? I think, though, it's more the fear of the unknown for you. You know what humans are going to do. In some sense, you have an idea of whether someone's gonna choose good or bad in their own actions, and a lot of times they have limited impact, because we're only in one place at one time. What technology has done, I mean, we're in three different spots having a conversation, all looking at each other. It's connected people in a different way. Our podcast goes out to, I don't know, 30, 40 countries at this point. So with the use of technology, the use of good or evil can spread faster, right? And I think that's the fear of the thing. Where I find this to be a useful tool is there are a lot of mundane or difficult tasks it can take on that really support doing good. So if I come at this with good intent, saying that we're going to use it for good, I can ask these large language models to do some research for me that would take me hours, and it does it in seconds. And I can take that information and synthesize it into creating an educational product for somebody that's going to provide good. I didn't have that ability before, to be able to quickly flesh out ideas, do some research, craft an email that sounds better. I am a decent writer. I wouldn't say I'm a great writer, I'm a decent writer. But, you know, I spend time on those things. I can put those emails quickly into something and say, hey, this is my audience. Am I on track here?
And it gives me kind of a north star to say, yeah, you're doing pretty well, but you forgot a comma here. Like, oh, well, this was going to the president. So I'm glad I asked somebody where the comma was supposed to go. But to me, in listening to your response, it's a moral issue, it's a human issue, and it's that fear of the unknown, because that unknown could be for evil.
SPEAKER_05:So I'll jump in. I'm not afraid. It's not fear of the unknown for me. And I don't know that I fully believe that. So, like, even in the example of faster, this is what I'm getting at. Not all of a sudden, but we are at: faster is better, more is better. Having email correct your grammar is better. Is it? Is it better? I'm asking. Is it really better? Because why is it better? Why is faster better? Why is AI doing it better than the human being having to take the time? Which we're not really allowed anymore, because that's one thing the internet and all of the technology has said: it takes too long. Everything takes too long, so now everything has to be faster. Well, let me ask you to defend that. So now we have to do all of this.
SPEAKER_00:But let me ask you this now. Okay, when you drive into work, how long does it take you to drive?
SPEAKER_05:I'm gonna say like 25 minutes.
SPEAKER_00:How long would it take you to walk?
SPEAKER_05:It's not the same. I mean, I get what you're saying about the technology, I get it, but this is not the invention of the vehicle.
SPEAKER_00:Sure it is. It's just different.
SPEAKER_05:Okay, so it's different. I just don't think it's good. I can decide what I think is good. Because I don't think that, as people... I think we have been shown time and time and time again how this will work. And I don't think it's a dynamic tension. Yeah, I still feel the same way about it. I don't think it's a dynamic tension. I think it creates a whole new set of stuff that sets something else into motion, that creates what we now see in a lot of people. And I will go so far, I'm gonna get fired from something for saying this, I will go so far as to say, I think it's why people's cortisol levels are so high. I think it's why people can't sleep at night. I think it adds to people's central nervous systems being all jacked up in a million directions. I think it leads to some issues with mental health, because there's so much happening so quickly. And the expectation is, yeah, we'll all use it. Everybody will love it, we'll all use it. And then what is happening to people's ability to be creative, to be able to think, to know where to put the fucking comma so that they don't have to depend on a technology? If the grid goes down, if the grid goes down, then what? We can't all read maps. Yeah, I mean, you all know I can't find my way out of a paper bag. But you know what? Miraculously, before Waze and before Google Maps or whatever, I somehow managed to find my way. I had to do it in my own way, in my own time.
SPEAKER_00:If we get to that point, we have bigger problems.
SPEAKER_05:Sure.
SPEAKER_00:Right. Like, if we lose all of this technology, like we did when, you know, all of a sudden we lost all the technology that got us to the moon. If we were to lose all this technology, something really bad has happened.
SPEAKER_05:Okay, but I'll even say this: people don't even know how to read a map. Nobody reads a map anymore. This is what I'm saying. So I don't think that is good. I think that outcome of technology is not a good thing. I think we have to be able to do that.
SPEAKER_00:But that's not... you're talking about technology in totality. I think in today's conversation we're talking about automation. We're talking about AI.
SPEAKER_05:You even used it the other day. Maps. You said maps the other day when we were talking, as an outcome of technology and artificial intelligence. So I'm just using maps as a specific example of that.
SPEAKER_00:And it knows a lot about us. I got in the car, um, on Tuesday mornings I go to this Bible study thing. I got in the car on Tuesday and it knew that I was going to the coffee shop. Like, how did it know? I don't put that address in. It's two seconds from my house. I know how to get there. But it knew I was going there and put it in my map. But then, I think, that is technology. What we're talking about in today's conversation, I feel like, is the large language models, and, you know, other areas of automation and artificial intelligence. Though I guess it all gets wrapped in together. I don't know. Maybe it does.
SPEAKER_05:That's why it's complex, right? I mean, that's why we're having the show, because it is complex.
SPEAKER_01:I'm gonna jump in, because I've been listening attentively, and what we've really drawn out, to summarize, is there are a couple of shows forthcoming. Okay. There's the sociological impact that we need to explore, the complexity of the sociological impact of a technology on society. Then there's the application of efficiency, modernization. Because part of what you said, Phyllis, and I'm only joking, but I'm not, part of it, I felt, is what people said when books were created. Like, oh my gosh, you start reading the books, you're gonna know and you're gonna learn. You gotta learn it this way. The funny part is Al was actually there when books were created.
SPEAKER_00:So he was part of those conversations.
SPEAKER_01:As an artificial intelligence whose name is Al and looks like AI, I'm actually 732 years old. It's sweet. I'm in such damn good shape. But, so, let's bring this to kind of a focal point here. You guys have shared, we've inquired, we've been inquisitive of you, and I feel like, jokingly, Phil, you've kind of been on trial here. No, no, it's all good. But for the record, Mark, when you think of AI, in your approach, what is your gut instinct, and what is your emotion, and what is your thought? Just, you know, high level.
SPEAKER_00:High level, I think that we need to embrace it. I think it's just going to get more advanced. Every conversation I have from a business perspective is how do we incorporate it into life? It is now part of humanity. So what do we do with it? And so my philosophy is, let's investigate it, let's embrace it. And sometimes you have to love the thing that you know you don't necessarily understand, and I don't even know that the people who created it completely understand what they developed. In fact, there's some record of them saying that. But I'm an embracer.
SPEAKER_01:And I'll look at it as weaponization and love. Weaponization is the flawed aspects of AI to potentially do harm. Love is the expression of what things can do when your life gets potentially better, and how you can embrace that to make life better. You can teach somebody how to use a weapon in a good way, and it improves lives. From weapons, good things have happened just as much as bad. And so that whole human condition you're talking about goes back to conversations ranging from race to genocide to all these other bad things where there were bad actors, and people were always in the way, because we are humans. And so when I think about AI, I go, how can I use the tool in the best possible way for the most positive outcome? How can I educate others in a meaningful way? And how can I embrace it to do good, but also stand against bad? Because it's gonna get effed up. People are gonna bastardize it, they're gonna weaponize it, they're gonna do bad things. But I can't fight every battle, you know? No differently than I couldn't fight every battle over the use of the internet to search how to build a bomb or how to, you know, molest somebody or how to commit a crime. Because the very same things that people have used over the years to figure out how to break into houses and how to rob banks and how to do bad things are now just being built and designed with a game plan on AI. But then again, there's what I just did today. I've created a whole interactive, immersive experience for value mapping, called The Value Within, that takes words of value, connects them to a company's or an individual's mission, vision, and values. And through prompts that we've created that are very specific to my process, we are able to create visual images, music, sound, and an immersive experience.
It is a beautiful, wholesome experience that lifts human value. I could do it without AI, but it would take months and months and months. Now, with AI, I can do it in minutes. So what? What do you mean, so what?
SPEAKER_05:So what? So what, you can do it in two minutes instead of it taking time. So what? Well, so what? You're assuming there's something better on the end of being able to do it in two minutes instead of it taking months and months.
SPEAKER_01:No, no, no. We can do more of it, and we can give it to more people, and we can share it more effectively. And so your argument, and I don't want us to go too far down that rabbit hole, but your argument would be like food, right? Artificial, artificial...
SPEAKER_05:Yeah, my argument could be food. So that food that is all jacked up, GMO food, food that is made not organically or not naturally, is now okay. And I'm gonna go back and say something and push back a little bit.
SPEAKER_02:It okay.
SPEAKER_05:It's not okay. Like, you can say it. I just take issue with it. That, yeah, I know the bad stuff's gonna happen, but, you know, I can't take a stand on everything, so I'm just gonna look at the good stuff.
SPEAKER_01:I didn't say that. I didn't say that.
SPEAKER_05:That's what I heard you say.
SPEAKER_01:No, what I said was I'm gonna use it in my sphere of control and sphere of influence to do good. That's what I said.
SPEAKER_05:But before that, you put it in the same conversation as, yeah, bad stuff happens, bombs get bigger. Yeah, but you gotta pick your battles.
SPEAKER_01:You can't. You gotta pick your battles.
SPEAKER_05:That's my point.
SPEAKER_01:Well, you can't fight every battle. You cannot fight every battle. You can't solve every problem, you can't answer every equation. You know that. I mean, look at the work that you guys do. You can't, so...
SPEAKER_05:Right. So this is my whole point about AI. This is my whole entire point about it.
SPEAKER_01:Okay.
SPEAKER_05:Somebody put it out there and said it's the greatest thing ever. And then people started mixing and mingling with it. We can do it faster. We can replace human beings. We keep moving it forward without paying any attention to what is happening, or its impact on people, or what the potential impact on people is. That's the problem. And again, it goes back to my initial big statement: that's the human condition that we have. I just happen to think... you already know. I've already said what I think about it.
SPEAKER_00:So who do you know that this has negatively impacted? Who in your sphere?
SPEAKER_05:I think we can look at what's happening right now in the job market and say there are people who are negatively impacted by it. I think we can look at the data and the studies on children and see that some percentage of them are being negatively impacted. I think it is a false narrative that we have created that says newer and shinier and all technology is better, and therefore we will all use it. You are seeing it on college campuses with kids who are using AI to write their papers and then just turning them in. Which means they're not learning anything, and it's okay for them not to learn anything.
SPEAKER_00:Well, it's no one's no one's saying it's okay.
SPEAKER_05:I see it with people, younger people who are just getting out of college and saying, yeah, I can do this really well, but you know, I really don't know how to spell, but I do this other thing really well. Hmm, I wonder. And to be honest with you, I see it in myself, because I already told you I use it. If I need to have an email written, and written quickly, I'll put in what I think I want to have written and AI writes it. And literally the other day I thought, this literally has to stop. Like, I literally have to stop doing this. I'm not even gonna know how to write an email, or how to just write my own words, if this doesn't stop. So it's pick and choose. I'm not saying it's all bad, and I'm not saying that we're all not gonna use it. But I am saying that it is something.
SPEAKER_01:So so, Mark, what are you so excited about?
SPEAKER_00:I really like this conversation and this tension. This is fun.
SPEAKER_03:It's pretty tense, isn't it? It's good, right?
SPEAKER_00:It's pretty fun.
SPEAKER_03:Yeah.
SPEAKER_01:Yeah, I like it. I like it. Toilet paper is actually running away from us at this point. It's going, no, no, I don't want to go near there.
SPEAKER_03:It's too tight. It's too tight. So all the toilets are flushing at the same time. I'm holding on to it.
SPEAKER_01:No, that's great. I love it. So there's one thing that is really the crux of successfully using AI. And I say successfully in the sense of using the tool, right? Like a hammer, you have to swing it, right? So you got to use the tool the proper way. It won't knock the nail in if it's just sitting there. And so for AI, the hammer is the prompt, the question. It is how are you critically thinking to get to what you need to do. The other nuances are layered with emotional intelligence. Both of those are things that all three of us possess very highly, and it's part of what connects us. But for me, just selfishly speaking, you know, I'm a born inquisitor too. I love to investigate the truth. And in my own exploration, yes, Mark, I am using this to help me get better at productivity. Yes, Phil, we're using it sometimes to prep for this show. But for me, as someone who's trying to really think through things deeply, through value mapping and other aspects of my life, even finances, relationships, I'm able to have a real inquisition with myself, but I'm really having an inquisition with tons of other minds. Now, the usage of it does not in any way replace my friends, it does not replace any human being, and all of us are deeply committed and connected to human beings. And so for me, as the user, my commitment is, you know, to continue to evolve my understanding, but never ever rely on it for things other than some insight and guidance. But the big takeaway from this first chapter here today, for me and I think for the show, is that we've really got some future shows about, okay, if you really wanted to know about the complexity: what makes AI so complex from a functional standpoint? And secondly, what makes AI so complex from a sociological standpoint, which you have alluded to, Phyllis. And honestly, I think we've maybe touched on that just a little bit. 
I think there's another conversation where we maybe interview some folks that, you know, were at the forefront of the internet, because I think those same questions were being asked. Okay. Now let's kind of shift here. What are you using AI for, Phil? What is it, and how is it helping you, aside from the negatives? You're using it in positive ways. How is it helping you? And what is the technical complexity that you think people are running into?
SPEAKER_05:So from a learning standpoint, and you've alluded to it, Al, I'm learning to be much more inquisitive. It's not that I'm not naturally curious. I'm just not curious in the same way that you are. And so, in order to really use the tool, what I have learned, quite honestly, from the two of you is that you've got to put in the right set of prompts to get you on the right pathway. And so, in that way, it is helpful for me, because I can start thinking about something and then just keep refining my questions until I get to where I want to go. As somebody who takes a little bit longer to synthesize large amounts of information, it's helping me to train my brain in a different way, so that it doesn't take me so long to either get to the question I'm really trying to get an answer to, or it's really helping me think about ways that I can synthesize information. And I'd be lying if I said anything other than, yeah, if I need a fast email written, I already said I'm gonna stick what I want in there and get something that makes sense and sounds like me, so that I can move along. Today, I'd like to say that I used it because I'm working on a song and I needed help figuring out how to get it into a different key. So I was able to find the resources to get it moved into a different key. So I probably use it very superficially compared to most people, but those are the ways that I'm using it right now.
SPEAKER_01:It's funny, you talk about the love-hate. So, some of you may know, if you're new to the show: I do professional voiceover, as does Mark. Have been doing it for decades and decades. I got an audition call not long ago, and I got hired, quote unquote, based on the audition, and then I was like, oh hell no. The job was doing voiceover to train AI to do voiceover.
unknown:Oh wow.
SPEAKER_01:And they're all over the place. They're all over the place, right? So what AI has been doing is taking millions and millions of voices and then synthesizing to create voices, and so it's just getting better. So as a voice actor, for example, as an on-camera person who's done training videos, as someone who does presentations, there's some places I won't go, right? And of course I know that I don't want to drift back into the destruction and the negative sides. I'm just saying that there are boundaries I'll set, because I'm trying to use it mainly to improve my knowledge base about things I don't know, aggregate large quantities of data, and get the minutiae done faster so I can focus on the bigger stuff. And I'm trying to learn how to do it responsibly. I just did a course, University of North Florida had this great free course that they offered, AI for Work and Life. And I mean, everybody and their brother was posting that they took the certificate and posting it on LinkedIn. So I finally decided to do it. The other place that I'm loving using it: I send out birthday messages to friends. Also, we do social media for the show. We can get to the show faster using AI through Buzz Sprout, which is our platform, which aggregates a lot of that information. We can create videos faster. And yes, there can be some laziness in there for sure, Phil. However, garbage in, garbage out. And that's what I've been saying since day one. My first understanding of AI was garbage in, garbage out. And so without going down that rabbit hole, saving that for another conversation, I'm trying to learn how to limit the garbage so that it's gold that I'm putting in and platinum coming out. And one of the ways I'm doing that is in the video domain as well. 
The other thing that I'm super excited about: I've got tons and tons of pictures and videos. Yes, they're in Dropbox and Google Drive and all that, but what I want to be able to do is take some of these and turn them into video memories with friends that I can send them. Well, that would be a hobby that I just would not have the time to do. Now I can make something special for my stallmates and get it to them fairly quickly. And AI will do the heavy lifting. And so those are the things I'm exploring and looking at that are fun, professional, personal, and adaptive to my life.
SPEAKER_01:What about you, Mark?
SPEAKER_00:So, a number of ways. One, I mean, we do have to admit there's a number of elements of this show that leverage AI. So the descriptions, we allow Buzz Sprout to help us. Yes, we put in input and we make some changes, but it does a lot of the heavy lifting. Outside of the show, you know, like I mentioned earlier, depending if I feel comfortable with a particular email, if I'm sending it to a new group, I'll leverage it for that. I like to leverage it for research. So typically what I'll do is some initial search on something that I'm looking for, in some respective sites that I like to go to, and then I'll prompt it to do a historical search of those particular sites to generate a synopsis of a project that I might be working on. And so it does some quick analytics for me, and then it can reference back for me. So as I'm digging deeper, if I'm gonna move forward with a project, it allows me to reference that quickly, because, you know, if I try to Google something, there's 9,000 places I could go. So it definitely helps me limit there. I also work around and with it from a regulatory standpoint. So my professional career, if you will, is in higher education. So I have to look at how do students use AI, and what can they use it for? What can associates of the institution that I work for use AI for? What's within those bounds? And so it's kind of testing those systems from an educational standpoint, because they will, I think it's called hallucinate, but they'll provide wrong answers or duplicative answers and things like that. So, you know, I use it for a lot of things, but going back to what you said, Phyllis, I would say that most people use it at the surface, right? I've heard people use it for coding, but it has a lot of mistakes. 
I've heard people use it for presentations, deep presentations, but that could be considered surface. But I know I use it for things that would take me a substantial amount of time, where that time would not be more valuable. So when you were saying earlier, you know, well, it's quicker: for the things I'm doing, it truly is just a matter of time. It would be walking to work versus driving to work. And I have a car, so I might as well drive. It's not for the things that are the most critical, that came from my hand or my mind. So, lots of things. I use it to stay in touch with people, to add captions to the videos that I make for here. You know.
SPEAKER_01:So, Mark, at this point in the game... actually, no, I'm gonna do this first, and then we'll go there. Oh god. Yeah.
SPEAKER_05:I feel toilet paper potty humor coming up. I don't know. No, no, no.
SPEAKER_01:We are gonna go full on into the stall for the roll-up, baby. That's right. It is the roll-up. This is where if you think we unpack some shit earlier with Phil, we're gonna get down and dirty right now. So here we go.
SPEAKER_02:You have I not done enough?
SPEAKER_01:There is no time. You need to spitfire answer this. We're gonna be really quick about it. Here we go. The question is this: AI, it's doing only good. Okay, let's just imagine that. How could AI make your life better in the potty? One way AI could make your life better in the potty, in the stall.
SPEAKER_00:Oh, I already know that one.
SPEAKER_05:I was hoping you'd go first.
SPEAKER_00:Okay, so this is cool. I'll make it quick. In Japan, they have this thing, it's like a camera that shoots down into the commode, and it analyzes your doo-doo, and the technology wizards put it through a system and see if you have any health issues. You're all good. Yeah.
SPEAKER_01:Oh my god, that's amazing, right? That's amazing. Hey, listen, what if they had that in a urinal, and it could tell you what your blood alcohol level is? That's a great idea. See? I don't know. Folks, I don't think of these questions in advance. So, I don't know. Phyllis, you want to go? Or you want me to? I'll think of something.
SPEAKER_05:I don't have anything. Somebody help. Mark, do you have a second one?
SPEAKER_00:How can it help me in the bathroom? You know, it could help alert me when I'm running low on toilet paper in other bathrooms in my house.
SPEAKER_04:That's good. That's good.
SPEAKER_01:That's what I was gonna kind of say. Like, and there has to be something out there that's doing this, it's like it senses the rolls, and when it's getting low, it gives you an alert.
SPEAKER_05:Yeah. Or if somebody passes out in the bathroom, if somebody passes out in the bathroom, it could send an alarm.
SPEAKER_01:That would be like life alert.
SPEAKER_05:Yeah.
SPEAKER_01:I want, no, here you go. I want a uh I want an entertainment center that generates on command the ideal music for my mood in the potty. There we go. There you go. Assuming I'm only gonna be there for a very short time.
SPEAKER_00:But you might need something to relax so you can go a bit more smoothly or a bit more like heavy metal.
SPEAKER_01:Oh, here it is. Here it is, here it is. It's gonna go back to what they're doing in Japan. Something that analyzes what you've been eating and progressively tells you, hey, listen, hey, Al, you've had too much. I'm a type one diabetic. You've had too much sugar today. Make sure you check it, and, you know, watch your salt level or something like that. I'd like in-the-moment analyzing of urine and poop, where it makes a same-day correction to say, hey, look out for your blood sugar or whatever.
SPEAKER_00:I think a sound machine. So I program that I'm having guests over, but then you know, I've got an upset belly, it automatically knows how much sound to raise my stereo system in the house so people cannot hear me go to the bathroom.
SPEAKER_05:Now that I will take.
SPEAKER_01:You know what Phyllis would like?
SPEAKER_05:What?
SPEAKER_01:Some sort of AI fart machine.
SPEAKER_05:I would very much like an AI fart machine.
SPEAKER_01:Well, they have apps for that. They probably do. They do.
SPEAKER_05:Next time we do a show, you're gonna show up with some farting thing.
SPEAKER_01:Of course not. Have you uh have you thought of something, Phil?
SPEAKER_05:No, I have not. I kind of liked the last one where there's some kind of AI that like senses if you need coverage so that nobody can hear you go potty.
SPEAKER_01:That's funny. Like, it's white noise. It's basically potty white noise. That's brilliant. These things have to be out there, I know that, and I'm hoping some of you stallmates are gonna go out there, and Amazon's already gonna be on it. Yes. It's kind of like an AI poopery.
SPEAKER_05:See AI for good.
SPEAKER_01:AI for good. Of all the things that Phyllis could have said AI for good, she winds up landing on making sure that your poo-poo doesn't smell like goo-goo. There you go. Good God. All right, let's bring this puppy home. And Phil, you're gonna go last on this, since you went first. You're gonna be the bookend. So Mark, you can start, then I'll go, and then Phil will bring us home. The question is: if you had to give advice to someone who's totally new to AI, based on what your experiences have been, based on the conversation today, and if you were to give some constructive, positive, but constructive advice to somebody, what would be your advice? It has to be really brief, you know, but like, hey, here's this thing. You're gonna learn it. And once you learn it, at least at the early phases, here's my advice to you. What advice would you give, Mark?
SPEAKER_00:I would say two things. One, get comfortable just playing around in whatever technologies you have an interest in, and two, educate yourself. Right, there's lots of books, there's lots of things that are published. Form an opinion with an open mind, using the tools to the best of your ability, and with some thought and care. And my advice would be, outside of that, I think the third thing is limit your time. So set up some boundaries for yourself and say, I'm gonna spend 30 minutes today trying this new thing and learning as much as I can about it.
SPEAKER_01:Um I would say yes to limiting your time and setting boundaries, definitely. But the second thing is as you're learning, start learning by imagining you're having a conversation with the your heroes. Now, I'm not talking about your mom or your dad or your cousin, like people who are out there in the world. Like, for example, I had a conversation with Warren Buffett not long ago. Now, of course I didn't have a conversation with Warren Buffett, but Warren Buffett is published, and his words and his ideas have been all over the world. Um, so based on the collection of all of his readings, which I haven't had a chance to read, I was able to pull questions and ask him questions, and the responses were based on his writings, right? So I had a conversation with Warren Buffett. And so it was one of the most powerful experiences I've ever had on Chat JP, uh, on any so and um artificial intelligence. So I would say imagine the people you most want to have a conversation with um who are whose words and ideas are out there, and and then be the inquisit inquisitor. Ask them the questions you always wanted to ask them. It's a great way to learn because you're gonna be intrigued and interested in the subject matter, but it's also gonna teach you how to ask better questions, but you're controlling the narrative, as you always are, but you're controlling the narrative and asking the questions that are meaning to you. That's that's that would be my advice. And now, to bring us home, Phyllis Martin.
SPEAKER_05:I feel a little pressure, because Mark usually brings us home and he says something super wise, and there's the mic drop, and, you know, so Mark, you might have to clean up whatever I'm about to say in just a second. So it feels like I might disappoint. So here's what I'm gonna say. Like everything we've done on this show, I wish I could say that everything is simple. That there's an easy, there's a right or a wrong, there's a yes or a no. But the reality is AI, like everything else, is complex. And my advice would be, you have to find your own way in it and to it. And some of that will be forced, because it is in the workplace, and companies are, demanding isn't the right word, they are finding ways to do it, and we will all have to learn to use it. And so you have some choices there about how you choose to embrace that or not, but you've got to be clear with yourself ahead of time. So you really do have to give it some thought. And then I would say, even in my own tension and passion, if you will, about this, that is always the trigger for me that there is some kind of growth happening in some direction. And that can also be the purpose of AI. You can use AI as a tool to help you grow in whatever direction you decide is important for you to grow in. And I think that is one of the positive benefits of having this tool. And I'm a hundred percent certain, as we have follow-up shows on this in many of the domains, it will become actually more complex, not less complex. But within all complexity sit some kernels of gold and truth that we all can take and internalize and use for good, and then find a way to be impactful in what is not good. So that we're thinking about ourselves, but we're also thinking about humanity in general at the same time.
SPEAKER_01:Don't you ever doubt that you can drop the mic. Yeah.
SPEAKER_02:What?
SPEAKER_01:That was beautifully stated. I've got nothing, Mark. I got nothing. That's it. That's the way we close right now. This is the complexity of toilet paper.
SPEAKER_02:Did you say toilet paper? Everything complicated. One big baby.