In today's episode, I have the joy of interviewing Julia Galef. Julia and I talk about updating your beliefs, the difficulty of fighting our biases, seeking truth, and her new book, The Scout Mindset.
Today's episode is sponsored by LaunchDarkly. LaunchDarkly is today’s leading feature management platform, empowering your teams to safely deliver and control software through feature flags. By separating code deployments from feature releases, you can deploy faster, reduce risk, and rest easy.
If you enjoyed this episode and would like me to discuss a question that you have on the show, drop it over at: developertea.com/contact.
If you would like to join the new experimental Discord group, reach out at developertea.com/contact, developertea@gmail.com, or @developertea on Twitter.
If you're enjoying the show and want to support the content, head over to iTunes and leave a review! It helps other developers discover the show and keeps us focused on what matters to you.
I'm very excited to be joined by today's guest, Julia Galef. Julia is the author of The Scout Mindset, which is out pretty much anywhere that you buy books. Of course, you can find it on Amazon and at local booksellers close to you. And Julia is also the host of a long-running podcast called Rationally Speaking. I'm excited to share this interview with you because Julia is such a clear thinker, and she has such a unique talent in discussions of challenging what you have to say while still inviting you to say more. In this episode, we talk about quite a few things. For example, updating your beliefs, scientific paradigm shifts, and we even have a quick discussion about Danny Kahneman. Who could possibly guess that we would talk about Danny Kahneman on this episode? But we did indeed. Julia is one of the many authors that we've had on this show. And the reason that we have people like Julia on Developer Tea is because, as you know, the big goal of this show is to help driven developers find clarity, perspective, and purpose in their careers. And one of the most important things that you can do in finding clarity and finding perspective, and ultimately having those things lead you to find purpose, is to understand how to think better, how to build a better thinking machine. So much of our content is pointed towards that. And The Scout Mindset is yet another way of framing this idea, this thinking machine that we have. And I'm going to let Julia describe it. But let's get straight into this interview with Julia Galef. Julia, welcome to Developer Tea. Thank you for coming. Thank you. My pleasure. Good to be here. So, we were just talking before the show. And typically when I have a guest on that has their own podcast, I tend to be the veteran in the room, just because I've been doing podcasting for, you know, six years or something. But you totally eclipsed me. You've been doing podcasting for how long? Since the beginning of 2010.
And if my age, you know, if my podcasting age doesn't show... I'm just saying, you've been the host of a podcast, I guess, since 2010. That podcast being Rationally Speaking. Can you tell us just a little bit about that podcast? And then we'll talk about some other stuff that you're doing. Yeah. So Rationally Speaking, originally I was co-hosting it with a philosopher, a professor of philosophy named Massimo Pigliucci. So we co-founded and co-hosted it together for the first five years or so. And then since about 2015, I've been hosting it solo. And it's about every two weeks. And for the most part, each episode is an interview with a thinker of some kind, a scientist or an author or a philosopher. And it's kind of a conversational episode where I either am trying to explore some topic that I'm really curious about, or trying to understand, like, how does consciousness work? Or what is willpower? And, you know, why do some people seem to have more of it than others? Or the guest comes on with a thesis, you know, they wrote a book making an argument that I think is interesting and worth diving into, and that I, you know, don't fully agree with. And so we have a kind of friendly debate about that. And I'm really aiming to focus on guests and issues where we can kind of collaboratively figure a thing out together and kind of explore, you know, exactly where do our disagreements lie and why do we disagree? So basically, to have the debate feel like more of a collaborative effort to seek truth together and less like a battle. Right. To see who's going to win this particular discussion that has, you know, no way of proving who is right necessarily. Yeah.
I mean, well, certainly I think sometimes I have evidence on my side to some degree and other times I just have intuitions. But yeah, there's a limit to how much you can do in a podcast. So I aim more to just try to get to the bottom of what I call our cruxes of disagreement. So I don't know. For example, I had a guest on a couple of years ago who wrote a book that I liked but substantially disagreed with. It was Rob Reich, who's a political science professor at Stanford. And he wrote a book. I'm blanking on the title now, but it was essentially arguing against large scale philanthropy. So he was criticizing billionaires who give away large sums of money to try to do good in the world. And I've worked with and I'm a big fan of several organizations that give away billionaires money and try to do good in the world. And so we had, I think, a really interesting discussion about this and kind of got to the bottom of one of our cruxes, which was just that we have different value systems, essentially. My value system, my approach to ethics is really pretty consequentialist in the sense that I want things to happen that cause good outcomes in the world. And I want to prevent actions that have bad outcomes in the world. Which maybe sounds a little bit obvious when I say it, but that's not... A lot of people's moral systems include some consequentialism, but also include other things like deontology. Some things are just right or wrong, regardless of their consequences. And I think, I don't want to misrepresent Robert Reich's argument here, but my memory of my impression of his argument was that it's just kind of wrong in some ways for billionaires to... To get status and praise for giving away money. Even if their money is actually doing good, it's still wrong for them to get that status and praise. And so he was kind of... He was more focused on who deserves praise in society and less on what is the consequence of them giving away their money. 
Anyway, that was a long explanation, but I was just trying to describe the goal of these conversations as to kind of get down to the root of why do we have different views on this. And often it's a value disagreement. Sometimes it's just that we were using words differently. Sometimes we just have different predictions about how the world works, and that's causing our disagreement. But I find it really interesting and valuable to get down to those roots, even if we can't resolve the disagreement in one hour. Yeah. I mean, I can only imagine that these kinds of conversations have probably had a pretty major impact on the way you see the world. And I'm wondering, do you have a particular memory or a moment, and it can be on the podcast or not, where you felt like you had... The reason I'm asking this question is, I think this is a fairly rare experience, and I think this experience is pretty important when it does happen: where you have a sudden shift or kind of a brightening moment where you realize something that you didn't realize before. Maybe it was counterintuitive to you, but the lights kind of switched on about something. Do you remember a particular moment like that? Oh, like a moment when I kind of had insight into something that had been more opaque to me before? Yeah. I think some people might see this as a change of mind, but I think it's actually more like a shift, or a learning moment. It's not necessarily that you were convinced for different reasons. It's that you learned something new and you updated your belief about something. Yeah. Well, I love the way you put that.
As you might have expected I would, because that is also how I encourage people to think about changing their mind or revising their beliefs. It doesn't have to be, and most of the time it's not going to be, a 180-degree flip where you originally believed X and now you believe the opposite of X. A much more realistic... Yeah. ...thing to shoot for is these subtle incremental shifts in your way of thinking, where someone brings up a hypothesis, or someone makes a claim that you don't necessarily think is true, so you're not fully convinced, but it's something that hadn't occurred to you before. And so now, for the first time, you're considering this as a possibility, a possible hypothesis that you hadn't been paying attention to before. Or someone points out a potential... Yeah. ...caveat or an exception to something you believe, and you hadn't noticed that exception before. And so it doesn't mean you don't hold your belief anymore, but now you're paying attention to how it may not be true in all cases. Maybe it's only true in most cases. And so that's also a valuable shift. So I think these kinds of shifts, or as I often call them, updates to your thinking, are really valuable. And over time, they will often kind of accumulate to a significant paradigm shift in the way you see something. But you shouldn't be only looking for paradigm shifts, because they're really made out of these little moments. Does that resonate with how you're seeing this?
Yes, absolutely. Just to kind of add one more layer on it, the model that I use for this, that I feel like is fairly well understood by most people, is seeing Newtonian physics as kind of a universal truth. And then suddenly we find new evidence to layer on top of it: it doesn't always work exactly. It's not continuous all the way up to infinity. There are other things going on. So Newtonian physics is going to work in some scenarios and in others not so much. And this is something that people kind of click with. Oh, yeah, that's right. I can see how, for all practical purposes, when I only knew about Newtonian physics, there was no reason for me to have had any kind of shift or update in my belief there. But then when there's evidence presented, it's not necessarily that hard to say, well, this isn't the whole picture. Right. You're just seeing part of the picture. That's a great analogy. And it actually made me realize that I should clarify the phrase paradigm shift, because I think the phrase has become kind of a buzzword in business in the last couple decades, referring to any shift in the way you're doing things. Yes. Like, allegedly a large shift, but oftentimes a small shift that's being spun as a large shift. But really what I meant to refer to by paradigm shift is the way science changes its mind, which follows the process you were kind of describing. Oh, yeah, like a meta paradigm shift almost. Well, I don't know if I would call it meta. Maybe in some cases a meta paradigm shift. But yeah, it's a phrase that was coined by a philosopher of science named Thomas Kuhn in The Structure of Scientific Revolutions. And he was describing this process that science follows when it kind of collectively changes its mind about something like Newtonian physics. Or, well, an earlier paradigm shift was whether the sun revolves around the earth or the earth revolves around the sun, where there's this reigning paradigm, like, the sun revolves around the earth.
And then gradually some people start to notice anomalies, like observations that don't make sense assuming the reigning paradigm is true. Right. Like observations that don't fit the assumption that the sun and all the planets trace circular orbits around the earth. And so at first, what happens is these observations are kind of dismissed or ignored, because everyone's so confident that the reigning paradigm is true. And then gradually they start to accumulate, and science enters into kind of a state of confusion where, you know, people are no longer sure if the reigning paradigm is true, but they don't have anything great to replace it with. And they try to revise the reigning paradigm by saying, okay, well, maybe, you know, yes, planets revolve around the earth, but maybe they don't follow perfect circles. Maybe there are these little epicycles that they follow that would produce these weird paths across the night sky. And then eventually someone comes up with a new paradigm that makes all of your data make sense again, which in this case is: the planets don't revolve around the earth, the planets revolve around the sun, and that produces the paths that we see across the night sky. So, you know, I like this metaphor in thinking about just how I see the world. You know, I'll have a reigning paradigm, like, I don't know, having to do with how I view social interactions or how I think about what is good or bad, and gradually little anomalies will accumulate. And I think it's important to notice and pay attention to those anomalies instead of just ignoring them. And sometimes I just end up becoming more confused, but other times, you know,
Those anomalies accumulate and eventually I'm able to kind of make better sense of them in a new paradigm than I did in my old one. So, yeah, that was a long-winded way of agreeing with you, but I really liked your example. That makes total sense to me. And I think I've experienced a lot of that confusion side of things for myself, especially if I don't invest a lot of time in trying to clarify a position that I have. Yeah. The confusion mounts as I just kind of let my brain naturally process through things, rather than, you know, intentionally sitting down and trying to figure out, where do I stand on this particular thing? If I instead, you know, let stimuli come through, whatever you want to call it, if I experience the world or if I read a book, you know, whatever it is that I'm consuming, and then I move on with my day. Sometimes somebody will ask me what I think about something, and it's relevant to whatever it is that I was just consuming. And I have no idea. I do not know how to answer the question. It's like my brain has computed both sides and I can see both sides. Right. Or three sides, or 30 sides or 50 sides. I can see the validity in all of them, and I don't attach myself to any of them. And that's really tough. That can be really hard for me, actually. Have you ever experienced this? Yeah. I mean, it is tough, because allowing those anomalies in, as you nicely put it, kind of requires you to hold your views in this superposition where, you know, something seems true to you. But simultaneously, you recognize that it doesn't fit with other data you have, and you have to kind of resist the temptation to collapse the superposition and, you know, shoehorn the information into your preexisting worldview, which it may not actually naturally fit into. And just kind of exist in that state of not knowing, not being sure how to look at things. Yeah. And just being okay with that for a while.
And you can have, you know, you can still say, well, I'm leaning towards this particular view. Like, there's this story I love that I actually wrote about in my book, about a man named Jerry Taylor. He was a professional climate change skeptic. So he worked at a think tank, a libertarian think tank called the Cato Institute. And his main job was to go on talk shows and downplay the threat of climate change and basically say, you know, this has really been overblown, and the climate change doomsayers aren't giving you the full story, and actually we don't have good evidence the earth is warming or that it's due to human activity. So this was his main job. And then his first anomaly came after one of his talk show appearances, where the person he was debating, who's a climate change activist named Joe Romm, said to him backstage, you know, something you said seemed wrong to me. You claimed that the pattern of earth warming has not lived up to the predictions made in the original testimony given to Congress by people who are worried about climate change. But I think that's not true. If you go back and read the testimony, you've misremembered or misrepresented what the prediction actually was. And so Jerry Taylor, kind of expecting himself to be proven right, went back to look at the testimony and realized, oh, actually, I did misrepresent it. That's interesting. Well, but this point was given to me by a scientist who I respect, who's skeptical of climate change. And so he went back to talk to the scientist, and the scientist kind of hemmed and hawed and didn't have a really good answer for him for why he had misrepresented this. And so this kind of stymied Jerry Taylor, because here was this scientist he had trusted and respected who had kind of given him misleading information.
And so that didn't on its own cause him to suddenly be worried about climate change, but it did make him a little bit more confused or concerned about the quality of the evidence that he was relying on. And so he just started checking more carefully. Every time he would make a claim, or see a study cited against climate change, he would follow up on it. And he was often kind of disappointed in the quality of the evidence that he found. And so this just gradually over time started shifting him into being more open to the possibility that maybe actually there is something to this climate change issue after all. And I could tell the full story if you want, but the summary, since I've already been monologuing for a while, is that he eventually did change his mind, that the final kind of paradigm shift happened. And now he's quit the Cato Institute, and he's a professional climate change activist trying to convince people that climate change is actually a big deal and that we should be concerned about it. And I think he's the only professional climate change skeptic who actually switched sides. Yeah. Well, and it makes sense. There are very few people who, certainly deep into their professional lives, change something as contentious in terms of cultural belief. Yeah. And central to their kind of identity. I thought that was also really impressive. It wasn't just a random side belief he had. We'll get right back to our discussion with Julia Galef right after we talk about today's sponsor, LaunchDarkly. LaunchDarkly is today's leading feature management platform, empowering your teams to safely deliver and control software through feature flags. By separating code deployments from feature releases, you can deploy faster, reduce risk, and rest easy. Whether you're an established enterprise like Intuit or a small or medium business like Glowforge, thousands of companies of all sizes rely on LaunchDarkly.
A release gone wrong, something like that, is a nightmare. LaunchDarkly can help you avoid this. It can make releases snooze-fests. You can start deploying more frequently and securely with less stress. They also have some in-progress features. You may, for example, want to automatically roll back. This is something that I've talked about with them. If you want to roll back some of the work that you've done automatically, based on some kind of error, some specific kind of error, you can do that. There are integrations with some of the tools that you're already using. Go and check it out at launchdarkly.com. One example of somebody who's using LaunchDarkly: IBM. IBM is huge, and they went from deploying just twice a week to over a hundred times a day. That's a lot of deployments. And they must have a high level of confidence to be able to do this. We've been able to roll out new features at a pace that would have been unheard of a couple of years ago, said IBM's Kubernetes delivery lead, Michael McKay. Once again, head over to launchdarkly.com to get started today. Thank you again to LaunchDarkly for sponsoring today's episode of Developer Tea. Let's talk about your book for a second. Great. You haven't even introduced the book. So you had a book and the book has come out, correct?
It is published. Well, it's coming out Tuesday, April 13th, which as we're taping this is tomorrow. Yeah, tomorrow it comes out as we're recording this. But by the time this episode is out, the book will be out. And can you just kind of give us a 50,000-foot view, and then we'll get into some of the details of what the book is about? Sure. Yeah. It's called The Scout Mindset. And that is my term for the motivation to see things as they are and not as you wish they were. So basically to be, or to try to be, intellectually honest and objective, and just to be curious about what's actually true about a given question or situation. And the metaphor, it's part of the framing metaphor of the book, in which I say that humans are very often by default in what I call soldier mindset, where our motivation is to defend our preexisting beliefs, or defend things we want to believe, against any evidence that might threaten them. And so, you know, scout mindset is an alternative to that, because the scout's role is not to go out and attack or defend. It's to go out and see what's really there, to try to form as accurate a map of the territory or of the situation as possible. So the book is about, you know, why are we so often in soldier mindset as our default? And how can we shift into scout mindset instead? And why is that a thing we should want to do? As I hear you describe this, my brain immediately goes to a lot of the kind of canonical literature on the various biases that are related to this. And I'm sure that you talk about these in the book, things like confirmation bias, or the idea that we're seeking out information. And it's interesting. I noticed myself doing this, and it's in subtle ways. I think this is the important thing that a lot of people miss. And I'd like to hear your take on this, but subtle ways that I seek information that confirms what I'm saying.
And I've noticed this when I Google the question that I have: if I believe the positive side, I Google the positive side. Right. In other words, I'll give you a really benign example. I've been doing powerlifting for a couple of months. Oh, nice. And if I've already bought, let's say, a particular type of protein, I'll Google the benefits of that protein rather than Googling the downsides to it. Right. Right. It's, I want to make... you know, I guess the, what is it, the purist side of me is saying, well, I'm learning about this, making sure. Right. Confirming that I made a good decision here. Right. Yeah. Go on. Go on. Oh, I was just going to say, I'm kind of tricking myself into believing that I'm actually seeking out the truth. I'm seeking information. But because I've kind of subtly, sometimes subtly to the point that I can't even detect it, adjusted the search. And when I say the search, I don't just mean on Google. I mean kind of the search for knowledge on this particular subject. Right. Now I'm served those things, but also kind of making this problem even worse is that Google has learned that I like this thing. So Google is going to continuously give me the same, you know, things that it thinks are going to please me. Have you ever tried opening up an incognito window just to see if your results are different? Oh, absolutely. Occasionally. Yeah. Interesting. It's scary, though. Right. It's the same feeling of, like, well, I don't want somebody to tell me this. I don't want to have to think too hard either. Right. It would be one thing if I was seeking out this information. And when I say I, by the way, I'm talking about the lazy brain, meaning... Sure. Sure. Yeah. It'd be one thing if I had no information at all. If I was starting from ground zero, then of course, you know, let's weigh both sides of this. But I've already spent my money. You know, I've kind of bought into this.
I've taken it. I just took it, you know, 10 minutes ago. And I don't want to feel bad about that. And I think that drives so much of our behavior. And I just love this concept as the basis for this book. Can you share a little bit more about some of the more... I love talking about things that are counterintuitive or otherwise were shifting for you. What was something that you learned in the process of writing this book? Oh, that's a great question. And I'm going to get to that in just a minute. I just wanted to applaud your description of soldier mindset in yourself, because you said it really perfectly. And it reminded me of the best definition of what I call soldier mindset and what cognitive scientists often call directionally motivated reasoning. And so I thought you might appreciate this definition. It comes from a cognitive scientist named Tom Gilovich. And he says that when there's something that we want to believe, we view it through the lens of, can I accept this? But when there's something we don't want to believe, we view it through the lens of, must I accept this? And so our standards of evidence, or the amount of time we'll spend investigating a claim and looking for holes in it, those are very different. They're very asymmetric depending on whether it's something that we want to believe or we don't. But what that ends up feeling like is, even when there's something you don't want to believe and you're asking yourself, must I believe this, and you're kind of looking at it critically and looking for flaws in it, it feels like you're being a rigorous critical thinker, right? And you are, in a sense. It's just that you're not applying that same standard of rigorous critical thinking to the things you want to believe. And so the end result is that you end up having in your head a bunch of beliefs that are disproportionately things you wanted to believe, because you're applying this asymmetric standard of rigor or evidence.
Through the same mechanical actions too, which I think exacerbates the problem, right? You're reading white papers on both of these, uh, sides of the equation here, right? I'm looking for good evidence. And on the one hand, you're exactly right, I'm looking for the flaws in the evidence. On the other hand, I'm looking for something that I can accept and close my tab and go back to whatever I was doing before. Right. Yeah. And in fact, I noticed myself doing this as I was writing the book. Um, I came across a study that purported to show that scout mindset made you less successful in life. I mean, that's my summary of it, but that was kind of the implication of the study. And so of course, I read this headline and my eyes immediately narrowed, and I was like, all right, let's check out their methodology and see if this study actually stands up. And I read the methodology section, and actually it was a pretty poorly done study. Um, and I don't think anyone should update on it. But then I asked, well, suppose I had come across the same study with the same methodology, but it had found the opposite conclusion. Suppose I found the study showing that scout mindset makes you successful in life. What would my reaction have been in that world? And I realized, oh, I would, you know, set it aside and be like, okay, I'm going to spend three pages talking about this study in my book. And so the thought experiment made me realize I kind of needed to up my game a little bit. And so I did that.
This is one of the reasons the book took so long to write. So, yeah, I think, you know, the point of that being that it can be really easy, even if you're well aware of soldier mindset and scout mindset, as clearly I am because I'm writing a book about it.
And it's still a very unconscious habit of mind that you have to kind of be on the lookout for and catch yourself doing, and merely knowing about the existence of motivated reasoning or soldier mindset is not enough in itself to prevent you from doing this. Yeah. As I believe Danny Kahneman famously has said over and over, he falls prey to biases more than his friends do, or something. I can't remember exactly what he said. I forget his exact wording too, but he said a couple things. One thing he said is that just knowing about, you know, the biases in human reasoning is not enough to prevent him from committing them. And he also has some funny stories of himself falling prey to biases that he's, you know, written textbooks about. Yeah. I think there was one about buying furniture or something, wasn't there? I don't remember that one. I was thinking of the one where he was working on a textbook, writing a textbook with some people, and they were trying to predict how long it would take them to finish the first draft of the textbook. And their estimate was way overly optimistic. It ended up taking, I think, I don't know, six times longer than they thought it would, or maybe they never even finished. I don't remember. But, you know, his point was, he is well aware of how people's predictions about how long things are going to take are overoptimistic. And he knows that you should look at the outside view, which is his term for, you know, how long do things like this typically take? Like when other people, when other groups write textbooks like this. Depersonalize it in a way. Right. Yeah. Or when I've done similar things in the past, how long has it taken? That's kind of the outside view. Whereas the inside view is just, how long does it seem like this should take me? When I think about doing it, how long do I think it's going to take? Yeah.
And the outside view for textbooks was, you know, about six times longer than they thought it would take. And so he was like, if I had just relied on the outside view, the way I tell people to, then I would have been accurate. But instead I just stuck to the inside view. So that was a funny example of how just knowing about these biases is not enough to prevent you from committing them. Right. And then another thing that he has said, which I kind of have an interesting disagreement with, is that he's pessimistic about our ability to overcome these innate biases. And he's pointed to research. Interesting. Research suggesting that teaching people critical thinking doesn't actually make them better critical thinkers. And so my disagreement with this, well, I guess the crux of the disagreement, is that the studies testing whether people can overcome biases are mostly done on, you know, college students who just volunteered for half an hour of their time. So there's some kind of selection effect there. That's a really interesting point. They're not people who are trying to improve their thinking over a longer period of time, which is, I think, a much more realistic expectation for what it would take to change our innate habits of thinking. You know, when you just think about that question, I think it's kind of obvious that, no, of course, just reading an article isn't going to automatically change the way you think. Like, if you go to therapy, if you see a cognitive therapist, and you're aware that you have these kind of cognitive distortions in the way you see the world, like, you tend to jump to conclusions, or you tend to focus only on the negatives and not on the positives, or, you know, there's all sorts of cognitive distortions that we all have to some degree. And you can, to some extent, change those habits of thought.
You can get better at recognizing, oh, I'm doing that thing again, where I completely downplay all of my positives and just focus on the negatives. And you can, to some extent, with practice, you know, overcome that and learn to step outside of it and go, no, you know, I've done this before, and I need to remind myself of the things that I did right, not just the things I did wrong. But that process takes time. It's not just something where you read an article, and all of a sudden you can expect yourself to be seeing things in a balanced way. And so I don't actually think any of the studies that people like Kahneman point to as evidence about our ability to improve our critical thinking, I don't think any of those studies actually tested the thing that we should expect would make a difference. So, yeah, I think it's an underappreciated point that these biases are ingrained habits of thought that can be changed, but it just takes a little more long-term effort and practice to change them. Thank you so much for listening to today's episode, the first part of my interview with Julia Galef. If you enjoyed this episode, then certainly make sure you are subscribed in whatever podcasting app you're currently using, so you don't miss out on the second part of this interview. Thank you again to today's sponsor, LaunchDarkly. Head over to LaunchDarkly.com to start resting easy when you deploy your features. Thanks so much for listening to this episode. If you want to join the Discord community, head over to DeveloperTea.com slash Discord. We discuss things that we talk about on these episodes, but we also discuss things that never make it into a Developer Tea episode. I give advice, and other great software engineers with even more experience than I have are giving advice in there as well. It's a good time.
You'll find support, and you'll find a lot of people who really want the best for the others in that group. Thanks so much for listening, and until next time, enjoy your tea.