Developer Tea

The Scout Mindset with Julia Galef, Part Two

Episode Summary

In today's episode, I have the joy of interviewing Julia Galef. Julia and I talk about updating your beliefs, the difficulty of fighting our biases, seeking truth, and her new book, The Scout Mindset.

✨ Sponsor: LaunchDarkly

Today's episode is sponsored by LaunchDarkly. LaunchDarkly is today’s leading feature management platform, empowering your teams to safely deliver and control software through feature flags. By separating code deployments from feature releases, you can deploy faster, reduce risk, and rest easy.

📮 Ask a Question

If you enjoyed this episode and would like me to discuss a question that you have on the show, drop it over at: developertea.com/contact.

If you would like to join the new experimental Discord group, reach out at developertea.com/contact, developertea@gmail.com, or @developertea on Twitter.

🧡 Leave a Review

If you're enjoying the show and want to support the content, head over to iTunes and leave a review! It helps other developers discover the show and keeps us focused on what matters to you.

Episode Transcription

Hey everyone, welcome to Developer Tea. Today's episode of Developer Tea is the second part of my interview with Julia Galef. If you missed out on the first part, make sure you go back and listen to that first part before you listen to this one. Julia is the author of a book called The Scout Mindset, which is available on Amazon and in local bookstores and that kind of thing. And Julia is also the host of Rationally Speaking, which has been around much longer than this podcast. So go and check that out in whatever podcasting app you're currently using. And if you don't want to miss out on the next episode of Developer Tea, which should be coming out in just a couple of days (it should be a Friday refill coming up next), then go ahead and subscribe to this podcast in your current podcasting app of choice. Thanks so much for listening. Let's get straight into this interview with Julia Galef. Yeah, I do believe that a lot of the important work on this is not so much about, can we fix our brains to stop being biased? And I may be wrong about this. I think the important work is being done to understand how we subvert that in our real actions in the world, or in the things that we care about. How can we, you know, bias something else to balance it, right? Or... Compensation. Yeah, exactly. Yeah, yeah. Which I think Kahneman does too. Right, exactly. He does, he does. I think to some extent he's trying to be humble. And, you know, it's an important point, because when you write a book about rationality or irrationality, people often have... They're kind of suspicious that you think you're rational and you're telling other people they're irrational. And so I totally understand the impulse to try to avert that suspicion by saying, you know, oh, I can't overcome these biases myself. Which lends more credibility to the work amongst the right people, right? Right.
I think it helps make people more receptive, especially since, you know, Kahneman's book, Thinking, Fast and Slow, was about the existence of these biases. He wasn't trying to offer a solution. So he didn't actually need to convince people that it was possible to overcome them. It was very descriptive in nature. Right, yeah. But, you know, I would be surprised if he hasn't made any progress at all in noticing these biases in himself and overcoming them. Just talking to him... I've had lunch with him a couple of times and he came to a couple of workshops that I ran. He's quite good at avoiding overconfidence, which is another bias that he talks about a lot, and saying, well, you know, this is speculative, or, I can't be totally sure about this. Right. And so I think that's a good example of someone overcoming the innate bias of the self, the innate human tendency toward overconfidence. So I would give him more credit than he would give himself, is what I would say. Yeah, there is pretty good evidence that merely having vocabulary changes behavior, right? Knowing, understanding, you know, that there is a term for this. Yeah. Can give you a chance to label something, which gives you another chance to give it some kind of observation. And this has actually been proven in a kind of a... Yeah. Parallel way. That if you have the name for, let's say, for example, a type of plant that is otherwise foreign to you, right? Uh-huh. People who live where that plant is... this is so strange... Uh-huh. It's actually fruits that they used. They said that people who have vocabulary for that fruit notice the fruit more often. So they believe that it exists more readily. Right. More readily than people who don't have that vocabulary. Right. In other words... That seems very plausible to me. Yeah. It makes total sense.
But when you kind of try to, I guess, and I'm taking a little bit of liberty with the study, but you can apply that to other things and say, hey, you know what? Overconfidence is more prevalent than we think, because I know what it is. So I can recognize it because I have the words for it. That's right. Yeah. I think that's absolutely true. And I think even better than having the words is having kind of salient examples in your mind of what it looks like to be in soldier mindset and what it looks like to be in scout mindset. And so that was part of my goal in writing the book. You know, I don't think that there's a set of words I can give someone that will magically make them change the way they think. But I did pack the book with lots of examples. And so I was hoping that just increasing the salience of these examples would make people better at noticing themselves in soldier mindset, and also better at having kind of... Like templates for, okay, this is a way you could react to criticism, or this is a way you could react to evidence that contradicts something you believe, that's different from my default way of reacting. So just having those templates as role models in your mind to use instead of your default, I think, is really helpful. I found it helpful, anyway. That is such an interesting point you make. Many times on this show, I've talked about the importance of having a story to attach. And I think that's something that's really important. How so? In the sense that if I tell you what overconfidence is, just using kind of clinical definitions. Oh, yeah. Then you might, you know, understand it. You might try to draw those connections to something that you know. But our brains are not really, if I understand it correctly. They're not really designed for that. Yeah, they're not designed for that.
We're designed to understand more practically how things... How things connect and how it impacts us directly. And so when we hear a story, it's no wonder that stories communicate much more effectively to people and move people both emotionally and in terms of changing their actions than, let's say, pure data would. Even data we need to wrap in some kind of more tangible descriptor that provides information as a padding for that data. Yeah, so well put. I think this is a really important and underappreciated point. And I've been trying to... When people ask what something is or the definition of something or... I've been trying to get better at giving... Explaining myself by way of pointing at examples instead of just giving an abstract definition. And it seems to be a lot more effective. I think so. Yeah. That's the way I remember things, right? Yeah, absolutely. I just... And we know that's how we remember things and learn things. And we know that's what makes... Memorization. Memorization also is... Yeah. And yet it doesn't... That alone didn't cause me to remember that principle when I was trying to communicate to people. So I just... I had to have someone make that connection for me explicitly. Yeah. And yeah, I think especially when the... Like, because we're... Humans are kind of social creatures and social learners, we do seem to be built for really easily copying the behaviors and the attitudes of people around us. Mm-hmm. And so I was trying to exploit that property of human psychology as well and give a bunch of examples of people behaving in ways that could be more easily copied once you have that example in mind. And so, for example, one small moment that really stuck with me and has helped me change the way I react was when a friend of mine... I guess he was... Someone was arguing with him and he... He realized he was wrong. And he said so, but in just this very cheerful, nonchalant way. He was like, oh, yep. I take back what I said before. 
You're right about this. Never mind. But he said it in such a relaxed and matter-of-fact way that it didn't... You know, often when people, quote-unquote, admit they were wrong about something, it sounds... Well, very often they don't even do it because they... Oh, it's wrapped in defense. Yeah. It's kind of sheepish or defensive or... It sounds like they're confessing a sin and they're kind of trying to atone. It's kind of a big deal. They've attached it to their morality or something, right? Right. And, you know, sometimes I think, yes, being wrong means you screwed up somehow. But most of the time I think being wrong just means, no, you didn't do anything wrong. You were processing the information you had the best you could with the limited time and computational power that your brain has. And so you formed a conclusion that was wrong, but it was a perfectly justifiable thing to believe given the information you had. And you should not feel sheepish when you, you know, when you learn new information or when it's pointed out to you that you were missing something. It should just be cheerful and matter-of-fact. Like, oh, yep. Okay, I'm revising that view. And so, you know, I think I intellectually knew that, yes, being wrong doesn't mean you did something wrong. I think, if you'd asked me before this moment, I would have said, yes, I agree with that. But having this very tangible example of someone reacting in that way to learning they were wrong made it so much stickier, and made it possible for me to react that way in the future as well. Yeah. So I have read, and I don't remember where, so I apologize, this could be complete garbage, that there's, you know, the kind of... If you were to look at this from an evolutionary psychology perspective, the reason for this is social, right? If you are wrong about something and you, on average, are only living for 40 to 50, maybe sometimes 60 years...
Then the social credit that you receive is going to be: how often is this person right? And if they're wrong, we can't really trust them. They're not going to be able to climb the social ladder. They're not going to be in leadership in our tribe. Because it's dangerous, right? It's dangerous to be wrong when being wrong means that you go without food for a whole season, right? So... Yeah. But now we can update that belief. Kind of cognitively, if not evolutionarily, we can say, hey, we can be cheerful about being wrong because it no longer means going without food for a season. Now it means that we can learn something, right? There is actually only upside to this, recognizing that the social signals are no longer, you know, valid. They don't make the same... There's not the same reason to, you know, punish somebody for being wrong that there might have been, you know, 10,000 years ago. Yeah. It's a very interesting and kind of compelling evolutionary argument. I just still... And I've made similar arguments in the past. But I have to admit I'm still kind of confused by how off our intuitive predictions seem to be about what happens when we say we were wrong about something. Yeah. We really do... Even in cases where being wrong actually did have stakes. Like, yeah, you were wrong about a decision that you made for your team or your company or something. And we feel like admitting we were wrong will cause everyone to hate us or shame us or something. We catastrophize it. Yeah. And yet the vast majority of the time, in my experience and in the experience of other people who have talked to me about this, leaders of teams, CEOs, et cetera, they're just pleasantly surprised by how positively people react when they say, yeah, you know what, guys, I was wrong about that. And so, yeah, the people I've talked to who are unusually good at noticing when they were wrong and saying so matter-of-factly, what they've told me is that they didn't start out this way.
They started out feeling really averse to ever admitting they were wrong about something. And then they forced themselves to do it a few times and noticed with pleasant surprise, oh, this actually went way better than I thought. People reacted so much better than I thought they would. And so they did eventually get to the point where they could do it more easily. And so I think that's a really good way to start. But it took the repeated practice of seeing that the outcome wasn't nearly as bad as they kind of emotionally expected it was going to be. And I do think it's an interesting question why our brains seem to expect really bad outcomes for admitting we were wrong, when in practice that doesn't match reality. Yeah. And we protect ourselves sometimes in really obvious ways. It's very clear when somebody is being defensive about being wrong, which I feel like almost has an even more detrimental effect. Seems like it's hard to teach ourselves that that's actually worse than something, you know, certainly in some circles. I can't help but think my wife and I have been very intentional with our now almost four-year-old, which blows my mind. But we teach him that being wrong is okay. And it has this funny effect where if we are wrong, it's hysterical to me. Uh-huh. He very cheerfully lets us know that we were wrong. You were wrong, Dad. You know, it's this moment of like reminding me that this is fine. Like, it's okay. And it can be something that we can laugh about together. We can learn about it, you know, and usually it's about the smallest things. But the great part is that when he's wrong, he also says it the same way. He's kind of equalized this concept in his mind. And every time it happens, I tell my wife, this is a parenting win. We've figured something out here that we need to write a book about or something one day, because this is really important. That is a win.
And you should write a book about that, or at least popularize that, because I think that's a really important principle of parenting that hasn't occurred to a lot of people. And as you were talking, I was remembering that my parents were also pretty good about this. And I noticed it even when I was, I don't know, seven years old or something. At the age of 22, they were still bringing up [inaudible]. So, yeah, it's cool to have some independent confirmation of that parenting trick. Welcome to the sub-podcast of Developer Tea. This is my parenting podcast. One more thing about parenting that we've learned recently and that I feel like is applicable is... oh, I just lost it. I had it in mind and it was very good. I was talking about my parents telling me they were wrong. Oh, the idea that, okay, yes, I remember now.
So you mentioned this idea that your parents kind of revised their position with you. They came back. They admitted they were wrong, et cetera. So I read recently about the way that my child's brain works that's different than mine, and how one of the biggest parenting mistakes you can make is assuming that your child's brain is effectively like an adult's brain, but just in a child's body; that he can process the same things that you can, at the same speed that you can, in particular. And what it mentioned was the idea that his registering (in this case, my son, Liam, that's why I keep on saying his), his registering of the information that I'm giving him, the words that I'm saying to him, is offset by a pretty significant margin. So it takes him about 30 seconds to understand really what I'm saying to him. And so when I get impatient within about 10 seconds, he's confused. He's not being obstinate. He's confused why I'm impatient, because it hasn't even registered to him what exactly it is that I want from him. And so we've tried to understand more in terms of how do we try to think in the same way that he's thinking, and give him advance notice, for example. He loves, he's crazy about Mario. Advance notice of, hey, you're going to have to turn off Mario in like five minutes from now. Time's coming. It's, you know, it's coming up. Rather than saying, all right, it's time to turn it off, and then him being like, what? No, there's no way I'm turning this off right now. You just said it. This is news to me. I had plans here. And the thing that really struck me was the idea that I was expecting something from him that I could never let him expect from me. I was going to say, I would feel the same way, actually, if somebody were doing this to me now. Right. Exactly. If I'm told, here's the thing I'm going to expect you to do in the future, then I have time to adjust to it and expect it. It doesn't feel like it's being suddenly sprung on me.
And so, yeah, I would want someone to treat me that way, too. Yeah, exactly. And it was very impactful for me from an empathy standpoint. Yeah. This is another human being. And I guess to get out of the parenting podcast and go back to our regular scheduled programming, this is true in other relationships. I think we are very prone to not recognizing what the other side, what it would feel like to be on the receiving end of whatever it is that we're putting out into the world. Yeah. That's not true for everybody. Some people are more aware than others. But certainly, we have this lens that prefers our own. And I imagine this is very much related to our soldier mindset in the sense that it's confirming what we believe is right. And we feel justified in our actions in a given moment. But we're easily willing to judge another person in their actions in that same moment. Right. Yeah. There's this expression that when I screw up, it's because I'm having a bad day. But when my coworker screws up, it's because he's incompetent. Yeah, exactly. Yeah. There are a lot of versions of that. And yeah, again, this is another thing where I think I'm probably better than average at least at cognitive empathy where I can try to understand why someone thinks what they think. Emotional empathy is a little bit different, although I also try to be good at that too. But I still catch myself failing at it. Like the other day, I was... I was trying to have a productive disagreement with someone online. And I was about to respond to them. And I don't remember all the context. I won't try to give it. But I was about to respond to them saying something like, so in your mind, such and such, that's just a coincidence. And I didn't think that it was a coincidence, but it seemed like that's what the person was arguing. And then I stopped and I heard my... I tried to listen to my words as if someone was saying them to me. And I realized, oh, the phrase, so in your mind. 
That sounds really kind of condescending, or it sounds like I'm caricaturing their view. And I hadn't been aware that that's what I was doing when I was typing those words. But I was feeling kind of annoyed at them or kind of, you know, disgusted at their claims. And that came through in my words, even though I was trying to not let it. And so I really do have to consciously go through this check of, how would this sound if someone said it to me? And I often realize that I'm unconsciously betraying my bias in the way that I expressed my disagreement, even though I thought I was being good about it. And then I have to revise it and make it better. So in your very ridiculously wrong mind... I can't imagine what someone could object to in that. That's good. We'll be right back with the final portion of my interview with Julia Galef, right after we talk about today's sponsor, LaunchDarkly. LaunchDarkly is today's leading feature management platform, empowering your teams to safely deliver and control software through feature flags. And I want to go off script here for a second and talk a little bit about the fundamental value that LaunchDarkly provides to you. If you're listening to this right now and you're thinking, oh, feature flags, yeah, we already have that, we built that. Well, I want to give you just a moment of hopefully some advice. All right. If you're building your own feature flags, this is a very dangerous scenario to be in. Not only is it dangerous, but it's also not very extensible. You're not going to be able to integrate that with a bunch of other stuff. You're going to need somebody who knows that feature flag system inside and out. Now, feature flags, if you only had one or two, right, then I can imagine you saying, OK, well, I'm not going to go through the process of integrating an entirely new product just for my one or two feature flags. But think about the people for whom LaunchDarkly makes the most sense.
It's also the people who think that they need to build out a robust system of feature flags in their own software. There are a few problems with this. The biggest problem is that feature flags are a huge opportunity for bugs to be introduced. And in order to mitigate that, you need to really invest a lot of time and energy. Right. That means that you're paying your developers. If you're a manager or if you are controlling a budget, you're paying your developers, and they're spending time developing feature control systems rather than focusing on the software that matters. They're developing this kind of meta software. And it's not their bread and butter. It's not what you're supposed to be really good at doing. You're not actually investing in the product. You're just investing in control systems. And by the way, the moment that those fail, right, or the moment that that engineer leaves, if you don't have excellent documentation in place, which also costs time and also costs money and often goes stale, well, once again, you're in a really tough scenario. And these things are very important, by the way. Feature flags are very important to the running of your software, whether it's because you're releasing features time-gated away from when the code is complete, right, or maybe you're releasing them partially to some users. LaunchDarkly can do all of this. And they have SDKs. Literally, it says this on their website: they have SDKs for days. They have client SDKs for Android, for C++, for Electron apps, for iOS, for Gatsby. JavaScript, of course. They also have server-side SDKs. JavaScript once again. And Go, you know, Golang. They have Erlang. They have C++ on the server side. They have all of these SDKs. So you're certainly not going to be up a creek when it comes to integration. So go and check it out. Head over to LaunchDarkly.com.
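To make the sponsor segment's core idea concrete, here is a minimal sketch of what a homegrown feature flag with a percentage rollout might look like. This is a toy illustration of the concept only, not LaunchDarkly's actual API; every name in it (`FeatureFlags`, `set_rollout`, `is_enabled`, the `"new-checkout"` key) is hypothetical.

```python
import hashlib

class FeatureFlags:
    """A toy in-memory flag store, for illustration only.
    A real feature management platform adds targeting rules,
    streaming updates, audit logs, and per-platform SDKs."""

    def __init__(self):
        self._rollouts = {}  # flag key -> percentage of users enabled

    def set_rollout(self, key, percent):
        self._rollouts[key] = percent

    def is_enabled(self, key, user_id, default=False):
        if key not in self._rollouts:
            return default  # unknown flag: fall back safely
        # Hash user+key so each user lands in a stable bucket in [0, 100)
        digest = hashlib.sha256(f"{key}:{user_id}".encode()).hexdigest()
        bucket = int(digest, 16) % 100
        return bucket < self._rollouts[key]

flags = FeatureFlags()
flags.set_rollout("new-checkout", 25)  # deploy dark, release to 25% of users
if flags.is_enabled("new-checkout", "user-42"):
    ...  # new code path
```

Hashing the user with the flag key is what makes the rollout sticky: the same user sees the same variation on every request, which is exactly the kind of detail that is easy to get wrong in a hand-rolled system.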
Once again, we went a little off script here, because I wanted to convince you that you don't need to build your own feature flag system. Integrate with LaunchDarkly, and you're going to get a lot of benefit with a much lower lift. One more reason here that I just thought of: if you have multiple clients, then you're going to have to implement those feature flags in all of those clients, or all of those different platforms, separately. That's a huge value add for those SDKs I was just listing off. So go and check it out. Head over to LaunchDarkly.com. Small businesses and huge enterprises are both relying on LaunchDarkly already. People like IBM. People like Glowforge. People like O'Reilly Media. Go and check it out. That's LaunchDarkly.com. Thanks again to LaunchDarkly for sponsoring today's episode of Developer Tea. So the book comes out tomorrow. Yes. And we're recording this now, so by the time this episode goes live, it will certainly be out. You also have been involved with the Center for Applied Rationality. Yes. Can you talk a little bit about what they do? This is something I encountered a while back, by the way. And I thought it was really interesting. And I believe, if I remember correctly, I saw some videos that were all about actually taking the things that we've been talking about and doing what we were saying earlier, which is trying to figure out what we do about this stuff. It's not just about understanding what these distortions are or whatever. It's, what do we do now? And I'd love for you to talk a little bit about that. But also, maybe as we're doing that, we can talk about some of the ways that I can recognize when I'm in that soldier mindset versus scout mindset, if you have anything, any kind of final tool that you want to provide as an example of what's in the book. Sure. Yeah. Well, I'll just briefly say first that I co-founded the Center for Applied Rationality in early 2012.
It's an educational nonprofit in the Bay Area that runs workshops on basically reasoning and decision-making, how to apply a bunch of these concepts from cognitive science or philosophy to your actual decision-making about your life and career and so on. And so I co-founded it in 2012 and helped run it and teach at workshops until, I guess, early 2016. Wow. So I'm not at CFAR anymore. And they've pivoted to some extent to focusing more on researchers, particularly researchers focusing on AI. So it's less of a general all-purpose educational nonprofit than it used to be. And so, yeah, I can definitely talk about CFAR, but I don't want people to assume that that will match the current mission of CFAR. But yeah, you know, we would take... Yeah. We would take principles about, like the thing we were talking about with Daniel Kahneman, about how our predictions are systematically over-optimistic, and assuming we're going to finish things faster than we will, or, yeah, things will take less time than we expect. And trying to notice that and correct for it using techniques like reference class forecasting, which is essentially using the outside view, looking at previous examples or examples from other people to see how long those took. And just trying to find ways to apply that to improve your own decision-making and planning at work. So things like that. And then to your question about practical ways of getting better at noticing whether you're in scout or soldier mindset in your own life. Yeah. I talk a lot about this in the book. And one kind of category of technique that we've touched on a little bit already in this conversation is the thought experiment. You know, there are different versions of thought experiments.
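Reference class forecasting, as described above, can be sketched in a few lines: rather than trusting your inside-view gut estimate, you look up how long comparable past projects actually took and read an estimate off that distribution. The function name and the durations below are made up for illustration.

```python
def reference_class_forecast(past_durations_days, quantile=0.5):
    """Estimate a new project's duration from the 'outside view':
    the observed durations of comparable past projects."""
    ordered = sorted(past_durations_days)
    # Index of the requested quantile (median by default)
    i = min(int(quantile * len(ordered)), len(ordered) - 1)
    return ordered[i]

# Hypothetical durations of similar past projects, in days
past = [12, 20, 9, 31, 15, 24, 18]
print(reference_class_forecast(past))       # typical (median) outcome
print(reference_class_forecast(past, 0.8))  # more pessimistic planning buffer
```

The point of the technique is in the input, not the arithmetic: the estimate comes from what comparable projects actually took, which counteracts the systematic over-optimism of inside-view predictions.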
And one that I talked about earlier is the one where I asked myself, suppose this study had found the opposite results. So suppose it supported my views instead of opposing my views. How would I judge the methodology of that study in that case? And so that can help you notice when you're applying a different standard of rigor to evidence depending on the conclusions. And so I do things like that also when, I don't know, suppose I see an article online criticizing feminism or something, and I ask how I would have reacted if it had been criticizing [inaudible] like, I don't know, conservatives or something. And so that kind of thought experiment can help you be more aware of the "can I accept this" versus "must I accept this" property of our brains. I call it the selective skeptic test. Um, but then there are other kinds of thought experiments too. Like there's an outsider test I talk about, where you try to become more objective about a situation in your life that you're dealing with by imagining that someone else was in that situation, and thinking about, well, what would I think that person should do if they were in the situation of trying to decide whether to, you know, quit grad school or not, or whether they need to fire this person or not. And it's really just quite striking to me still how different the situation can seem, like how different the right course of action can seem, when all I change about the situation is that I'm not the person who's in the situation.
And so I think the really important thing to notice is that the only thing that changed is whether it's me or not who's in it. So I think that kind of thought experiment can also be really instructive. That's excellent. There have been some that I've seen that are very similar to this, this whole class of, um, basically trying to take yourself out of the equation in some way. It seems to be kind of a category of thought experiments. Yeah. Another good example of this is, if you're facing a dilemma, what advice would you give somebody else who's facing the same dilemma? Right. And then, why is it different? It's not necessarily invalid. Yeah, no, that's a great point. There can be, uh, disanalogies where, you know, okay, maybe their situation actually is different, or maybe you want to hold yourself to a higher standard than you would hold someone else or something, but you should at least be consciously aware of those differences. Exactly. So that you can ask yourself, do I think this is a valid reason to behave differently than I would tell someone else to? Right. And I do this regularly, you know, and I think some of this comes down to even simple things like preference. For example, uh, should I buy this very expensive guitar? Well, do you like guitars? I like guitars, but the person that I'm giving advice to, I probably would say no. Right. But I like guitars. So like, maybe I should, you know... Not necessarily saying that everything is justifiable, or that you should always use this as a crutch, but becoming aware of your reasoning, I think, is a huge step towards, uh, you know, potentially more effective thinking. Yeah. I hesitate to say anything as platitude-level as that, but.
No, I think that's an important and underrated point, honestly. If we ever do notice ourselves being biased or in soldier mindset, we tend to feel sheepish about that, or we feel disappointed in ourselves, or we feel bad. And counterintuitively, actually, to your question from earlier about what's a counterintuitive thing in the book: I think, in fact, you should feel good when you notice yourself being biased or engaging in motivated reasoning, in soldier mindset. Because soldier mindset is very innate and very universal. It's just kind of baked into how the human mind works. So if you don't notice it regularly, what's more likely: that you are an exception to how all of humanity thinks, or that you're just not very self-aware? And so I think noticing yourself doing this stuff is not a sign that you're unusually bad at reasoning. It's a sign that you're unusually good at self-awareness. And that's a crucial step on the path to actually changing the way you think. Yeah. I have had this big swing personally, and I'd be interested, you know, everybody goes through this their own way. It was a big swing away. I feel like for a little while I kind of treated this rational approach religiously. How so? In the sense that my drive to become more rational was a moral obligation for me. Hmm. Choosing things that are irrational, like, for example, spending money on a guitar simply because I like it, is somehow wrong. Or that finding a rational pathway is even possible in that kind of scenario.
How can you weigh your subjective appreciation for things? It's very difficult to do, and a lot of our human experience is a subjective experience. So when we try to take these subjective experiences and find a rational pathway, it's very easy to heap guilt on ourselves or, much worse, begin to pass judgment on other people when we see them doing things that are completely irrational. Having grown up in the deep South, seeing religious environments all the time, it has the same feel to me as someone glaring at somebody who has a tattoo. It has that same feel of: well, this really doesn't matter very much, but this person is taking a route that, from my very objective position, which is not objective at all, but which I feel is objective, is wrong. It's wrong in the sense that they're trying to do something that, for some reason, I don't believe they should. And the "should" is coming from my understanding of a rational path. I've seen that become a real trap, and I also fell into it myself, thinking, okay, rationality is the goal. But I don't know that. I think truth and rationality have a large overlap, but because the human experience is not purely rational, I don't think they're one and the same. I don't think it's a perfect circle overlap, certainly. Well, that's very interesting. I think the way a lot of other people understand what it means to be rational is different from how I understand it, or what I mean by the word.
When I talk about rationality, it's not something that excludes buying a guitar because that makes you happy. I don't see that as irrational, but I know a lot of people might call that irrational because you can't justify it in terms other than your own enjoyment. But I think your own enjoyment is a perfectly valid reason to do things. A different thing that I would be more inclined to call irrational is if you have strong reason to expect that you will regret buying the guitar. Like, there are other things you really need the money for that actually are more important to you than the guitar, but you do it anyway, because in that moment you just really want it, and you're kind of ignoring the broader picture. So I might call that irrational. Although even so, there are a lot of cases in which it might seem like that's what's happening from the outside, but when you really dig into the details, what the person is doing actually makes much more sense. But I just wanted to contrast those two situations: buying something because it makes you happy or gives you enjoyment, there's nothing inherently irrational about that at all. I think that's actually pretty rational. But doing something that you... yeah, I was just repeating myself. What were you going to say? Oh, I was just going to say that the part of my brain that breaks down a little bit in my experience of this is that when I hear "rational," I hear specific, or explicit, or discrete, maybe, is the right word. I want the exact tipping point on this guitar purchase, where it flips from rational to irrational. Is there a way?
And because I can't really pinpoint that, that's what has given me ground to feel like, okay, I can't find a specific tipping point on that scale from "this is a perfectly rational decision to buy this guitar" to "this is absolutely insane, what are you doing?" Theoretically there would have to be a point there, right? But in some world, all points on that scale could make sense for a given person. So that's what has given me this feeling that the drive to find that specific point is maybe the error. It's not the desire to be rational that I want to depart from; it's the drive to say, well, once you've spent that sixth dollar rather than the fifth one, the sixth dollar is really where you go over the edge, and the fifth one was fine. Right. Yeah, making it more binary than it needs to be. Yeah. I definitely don't think real life is the sort of thing where there would be these discrete cutoffs you could draw, where it's, you know, a fine decision before the cutoff, and then you spend one cent more and it's a terrible decision. Most things in life, I think, are kind of spectrums. I guess it depends on how you're conceiving of a good or bad decision. In theory there could be a tipping point; I don't know, it's a little too abstract for me to think about clearly. But as a general rule, I think things are messy, and you have to be satisfied with just using heuristics, taking your best guess, and making rough estimates. And that's not irrational. That's just inevitable.
We don't have perfect information, and we don't have infinite computing power and time. So this is the best we could possibly do, and I don't think we should feel bad about that. Yeah. That's great advice. It's probably something I needed to hear. Yeah. Julia, thank you so much for going over on time. Oh, my pleasure. Yeah, this was such a fun conversation. And I typically ask these two quick end questions, if you have a couple of seconds here. The first question I like to ask is: what do you wish more people would ask you about? Oh, I guess a thing I don't often get to talk about, that would be fun if people asked me about, is what I've learned about having good podcast conversations myself. Maybe that's too meta for you, but it's a thing I think about so much, and no one ever actually asks me about it. So that's one thing. Or, I guess, how to have good disagreements online. That's something I also think about a lot, but it doesn't tend to come up naturally in conversation, in interviews. Yeah, that makes sense. It is difficult, I imagine, to say, well, how do I go and tell people that they're wrong? That's kind of a hard thing to bring up organically. Yeah. You know, one thing that I've found, which I suspect you already do to some extent, but maybe this won't be apparent to some of your listeners: the way I do my podcast does inherently involve disagreeing with people a lot. And I do tend to disagree with people a lot, just, you know, socially or online. So that's kind of unavoidable. But there are other things I think you can do to soften the blow of disagreement and make people more open to it. That includes just your tone; being friendly and warm, I think, helps a lot. But also I think it's helpful to give what I would call honest signals of good faith, where an honest signal is something that is hard for someone to fake.
So an honest signal of good-faith disagreement might be something like pointing out things that I'm uncertain about, or just voluntarily bringing up, you know, "here's what I think, but I can't be sure whether such and such." Or voluntarily bringing up points that support their side even if you don't agree with them, saying, "well, that doesn't seem right to me, although I would agree that it holds true in such and such cases." Those kinds of things are, I think, a signal to the other person that you genuinely are trying to share perspectives, or understand their way of thinking, or work together to understand the disagreement, rather than just trying to, you know, shoot them down. And so you can still disagree with people without getting a ton of pushback or defensiveness from them if you go out of your way to give these other signals of good faith and camaraderie. Yeah, that's a really good point. As you were saying that, part of me felt like one of the biggest things I miss in myself is recognizing when I'm not actually doing it in good faith. Yes. Well, that's the thing. You have to actually be doing it, and not just trying to show that you're doing it. Right, right. And it's kind of this faux approach, and I see this online quite a lot, as if you're trying to be genuine, but it pretty quickly falls apart. I know. It's like when people say, "I'm genuinely curious," and then they ask a question that's totally pointed and leading. You know, like, "I'm genuinely curious, how can anyone be so stupid as to think that?" The classic one is, "well, I just think it's interesting, you know, curious. I want to hear more about that."
Right. Yeah, I encounter that quite a bit. So whenever we follow up, maybe with another episode, we can do a whole discussion on how we can be better at disagreeing, even with ourselves sometimes. Maybe that's a healthy thing. Yeah. Julia, thank you so much. One final question here. If you had, you know, 30 seconds to give advice to software engineers, which we really haven't touched on explicitly too much in this episode, but that is the audience here, what would you tell them? And I'll give you a little more guidance: advice on how to become more aware, how to find a clearer map of the territory. Well, so another piece of advice that I didn't talk about... I talked about thought experiments, but there's another piece of advice that might appeal more to software developers than to your average person, so I'll share that now. And that is the idea of betting on your beliefs, or at least thinking about how you would bet on your beliefs. Because often we tell ourselves things that kind of sound plausible, but when we're forced to put skin in the game and think about, you know, would I still stand by this belief if I had something at stake, something to lose, that can often force you to realize, oh, actually, I'm not as confident in that as I thought I was. Or, you know, actually, my view is something different than I thought it was when I didn't have skin in the game. And a bet can be anything. It doesn't have to be betting money like you would at a poker table or something. It can be any kind of stakes. So, you know, if the thing you're telling yourself is, "our servers are highly secure, I'm confident in that," then imagining a bet might look like: okay, suppose I was going to hire a hacker to try to break into our servers, and I have to pay $1,000 if the hacker can do it in five hours or something.
And you imagine that very concrete situation and just notice: do I feel excited about taking this bet, or do I feel a little bit nervous? And if you feel a little bit nervous, maybe that's a sign that, oh, maybe I'm not quite as confident that our servers are secure as I thought I was when there weren't stakes. Yeah, that's really good. Another really good bet to make on the servers being secure is your Friday night, which is a very realistic thing. Right, right. The server goes down at 5:05 on a Friday. Is that really what you want to risk here? That's right. Yeah. And often there actually are stakes for us being wrong; the stakes are just very abstract to us in the moment. We don't make them explicit. Yeah, exactly. So you have to really think concretely: okay, here's the thing that happens if I'm wrong. Think about it concretely, and notice whether you feel like you want to take that risk or not. Yeah, absolutely. Julia, thank you so much for all of the advice and the very thoughtful conversation, and for pushing me on my own perspectives. I really appreciate the time that you spent. And everybody, it's been a pleasure. Thank you. You can read more about the book there, too. Excellent. Thank you so much, Julia. I'll talk to you soon. My pleasure. Bye. Thank you so much for listening to today's episode of Developer Tea, the second part of my interview with Julia Galef. Of course, if you missed out on that first part, you might want to go back and listen to it. It'll make this one make a whole lot more sense to you. Thanks so much for listening to this show, week in, week out. We do three episodes a week, so if you don't want to miss out on future episodes like this one, make sure to subscribe in whatever podcast app you're listening in. And I'll see you in the next one. Bye. Enjoy your tea.
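(Editor's note: for the developer audience, the "bet on your beliefs" exercise Julia describes can be sketched as a quick expected-value check. This is a hypothetical illustration with made-up numbers: the $1,000 hacker wager from the episode, plus an assumed $100 payoff if the hacker fails, which is not a figure from the conversation.)

```python
# A rough sketch of turning an imagined bet into an implied-confidence
# threshold. All dollar amounts are illustrative assumptions.

def implied_confidence(stake_if_wrong: float, payoff_if_right: float) -> float:
    """Minimum probability of being right at which the bet breaks even.

    Accepting the bet has positive expected value only if your confidence
    exceeds this threshold; feeling nervous when your gut confidence sits
    below it is the signal the exercise is designed to surface.
    """
    return stake_if_wrong / (stake_if_wrong + payoff_if_right)

# "Our servers are secure": lose $1,000 if a hired hacker breaks in within
# five hours, win a token $100 if they fail.
threshold = implied_confidence(stake_if_wrong=1000.0, payoff_if_right=100.0)
print(f"Take the bet only if you're at least {threshold:.0%} confident")
```

The point of the calculation is not the exact number; it's that attaching any concrete stakes forces you to notice whether your stated confidence ("I'm sure our servers are secure") survives contact with a real threshold.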