Developer Tea

The Scout Mindset with Julia Galef, Part One

Episode Summary

In today's episode, I have the joy of interviewing Julia Galef. Julia and I talk about updating your beliefs, the difficulty of fighting our biases, seeking truth, and her new book, The Scout Mindset.

Episode Notes


✨ Sponsor: LaunchDarkly

Today's episode is sponsored by LaunchDarkly. LaunchDarkly is today’s leading feature management platform, empowering your teams to safely deliver and control software through feature flags. By separating code deployments from feature releases, you can deploy faster, reduce risk, and rest easy.
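
To make the deploy/release separation concrete, here is a minimal sketch of the feature-flag pattern in TypeScript. This is an illustration of the general technique only, under the assumption of a simple in-memory flag store: the flag names and helper functions are hypothetical, and LaunchDarkly's actual SDKs expose their own APIs for this.

```typescript
// Sketch of the feature-flag pattern: the code for a feature ships in a
// deploy, but a runtime flag decides whether users actually see it.
// Names here are illustrative, not LaunchDarkly's actual SDK API.

type FlagStore = Record<string, boolean>;

// In a real setup this would be fetched from a feature management
// service; here it is just an in-memory object (assumption).
const flags: FlagStore = {
  "new-checkout-flow": false, // deployed, but not yet released
};

function isEnabled(flagKey: string, defaultValue = false): boolean {
  return flags[flagKey] ?? defaultValue;
}

function renderCheckout(): string {
  if (isEnabled("new-checkout-flow")) {
    return "new checkout"; // released to users
  }
  return "old checkout"; // safe fallback; flipping the flag off "rolls back"
}

console.log(renderCheckout()); // "old checkout" until the flag is turned on
```

Because the release is just a flag flip, rolling back a bad feature means toggling a boolean rather than redeploying, which is what makes deploys lower-risk.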

📮 Ask a Question

If you enjoyed this episode and would like me to discuss a question that you have on the show, drop it over at: developertea.com/contact.

If you would like to join the new experimental Discord group, reach out at developertea.com/contact, developertea@gmail.com, or @developertea on Twitter.

🧡 Leave a Review

If you're enjoying the show and want to support the content, head over to iTunes and leave a review! It helps other developers discover the show and keeps us focused on what matters to you.

Episode Transcription

I'm very excited to be joined by today's guest, Julia Galef. Julia is the author of The Scout Mindset, which is out pretty much anywhere you buy books. Of course, you can find it on Amazon and at local booksellers close to you. Julia is also the host of a long-running podcast called Rationally Speaking. I'm excited to share this interview with you because Julia is such a clear thinker, and she has a unique talent, in discussions, of challenging what you have to say while still inviting you to say more. In this episode, we talk about quite a few things: for example, updating your beliefs and scientific paradigm shifts. We have a quick discussion about Danny Kahneman. Who could possibly guess that we would talk about Danny Kahneman on this episode? But we did indeed. Julia is one of the many authors that we've had on this show. And the reason that we have people like Julia on Developer Tea is because, as you know, the big goal of this show is to help driven developers find clarity, perspective, and purpose in their careers. And one of the most important things that you can do in finding clarity and finding perspective, and ultimately having those things lead you to purpose, is to understand how to think better, how to build a better thinking machine. So much of our content is pointed toward that. And the scout mindset is yet another way of framing this idea, this thinking machine that we have. I'm going to let Julia describe it. Let's get straight into this interview with Julia Galef.

Julia, welcome to Developer Tea. Thank you for coming.

Thank you. My pleasure. Good to be here.

So, we were just talking about this before the show. Typically when I have a guest on who has their own podcast, I tend to be the veteran in the room, just because I've been doing podcasting for six years or something. But you totally eclipse me. You've been doing podcasting since the beginning of 2010. And if my podcasting age doesn't show in me just saying "doing podcasting"... you've been the host of a podcast, I guess, since 2010, that podcast being Rationally Speaking. Can you tell us just a little bit about that podcast, and then we'll talk about some other stuff that you do?

Yeah. So, Rationally Speaking... I originally was co-hosting it with a professor of philosophy named Massimo Pigliucci. We co-founded and co-hosted it together for the first five years or so, and since about 2015 I've been hosting it solo. It comes out about every two weeks, and for the most part each episode is an interview with a thinker of some kind: a scientist or an author or a philosopher. It's kind of a conversational episode where I'm either trying to explore some topic that I'm really curious about, trying to understand, like, how does consciousness work, or what is willpower and why do some people seem to have more of it than others. Or the guest comes on with a thesis. You know, they wrote a book making an argument that I think is interesting and worth diving into and that I don't fully agree with, and so we have a kind of friendly debate about that. I'm really aiming to focus on guests and issues where we can kind of collaboratively figure a thing out together, and explore exactly where our disagreements lie and why we disagree. So basically, to have the debate feel like more of a collaborative effort to seek truth together and less like a battle.

Right.
Less like seeing who's going to win this particular discussion, which has no way of proving who is right, necessarily.

Yeah. I mean, certainly I think sometimes I have evidence on my side to some degree, and other times I just have intuitions. But yeah, there's a limit to how much you can do in a podcast. So I aim more to just try to get to the bottom of what I call our cruxes of disagreement. For example, I had a guest on a couple of years ago who wrote a book that I liked but substantially disagreed with. It was Rob Reich, who's a political science professor at Stanford. I'm blanking on the title now, but the book was essentially arguing against large-scale philanthropy. He was criticizing billionaires who give away large sums of money to try to do good in the world. And I've worked with, and I'm a big fan of, several organizations that give away billionaires' money and try to do good in the world. So we had, I think, a really interesting discussion about this and kind of got to the bottom of one of our cruxes, which is just that we have different value systems, essentially. My value system, my approach to ethics, is really pretty consequentialist, in the sense that I want things to happen that cause good outcomes in the world, and I want to prevent actions that have bad outcomes in the world. Which maybe sounds a little bit obvious when I say it, but that's not a lot of people's moral systems. A lot of people's moral systems include some consequentialism, but they also include other things, like deontology: some things are just right or wrong, regardless of their consequences. And I don't want to misrepresent Rob Reich's argument here, but my memory of my impression of his argument was that it's just kind of wrong in some ways for billionaires to get status and praise for giving away money; even if their money is actually doing good, it's still wrong for them to get that status and praise. So he was more focused on who deserves praise in society, and less on what the consequence is of them giving away their money. That was a long explanation, but I was just trying to describe the goal of these conversations, which is to get down to the root of: why do we have different views on this topic? Often it's a value disagreement. Sometimes it's just that we were using words differently. Sometimes we just have different predictions about how the world works, and that's causing our disagreement. But I find it really interesting and valuable to get down to those roots, even if we can't resolve the disagreement in one hour.

And I think that can have a pretty major impact on the way you see the world. And I'm wondering, do you have a particular memory or a moment, and it can be on the podcast or not, where you felt like you had... Because the reason I'm asking this question: I think this is a fairly rare experience, and it's pretty important when it does happen, where you have a sudden shift or kind of a brightening moment, where you realize something that you didn't realize before. Maybe it was counterintuitive to you, but the lights kind of switched on about something. Do you remember a particular moment like that?

Like a moment when I kind of had insight into something that had been more opaque to me before?
Yeah. I think some people might see this as a change of mind, but I think it's actually more like a shift, or, you know, a learning moment. It's not necessarily that you were convinced for different reasons; it's that you learned something new and you updated your belief about something.

Well, I love the way you put that, as you might have expected I would, because that is also how I encourage people to think about changing their mind or revising their beliefs. It doesn't have to be, and most of the time it's not going to be, a 180-degree flip, where you originally believed X and now you believe the opposite of X. A much more realistic thing to shoot for is these subtle, incremental shifts in your way of thinking. Someone makes a claim that you don't necessarily think is true, so you're not fully convinced, but it's something that hadn't occurred to you before. So now for the first time you're considering this as a possibility, a possible hypothesis that you hadn't been paying attention to before. Or someone points out a caveat or an exception to something you believe, and you hadn't noticed that exception before. It doesn't mean you don't hold your belief anymore, but now you're paying attention to how it may not be true in all cases; maybe it's only true in most cases. And so that's also a valuable shift. So I think these kinds of shifts, or as I often call them, updates to your thinking, are really valuable, and over time they will often accumulate into a significant paradigm shift in the way you see something. But you shouldn't be only looking for paradigm shifts, because they're really made out of these little moments. Does that resonate with how you're seeing this?

Yes, absolutely. Just to add one more layer on it, the model that I use for this, which I feel is fairly well understood by most people, is seeing Newtonian physics as kind of universal truth. And then suddenly we find new evidence to layer on top of it: it doesn't always work exactly like that, it's not continuous all the way up to infinity, there are other things going on. So Newtonian physics is going to work in some scenarios, and in others, not so much.

Yes.

This is something that people kind of click with. Oh yeah, that's right, I can see how... For all practical purposes, when I only knew about Newtonian physics, there was no reason for me to have had any kind of shift or update in my belief there. But then when there's evidence presented, it's not necessarily that hard to say, well, this isn't the whole picture.

Right. That's a great analogy. And it actually made me realize that I should clarify the phrase "paradigm shift," because I think the phrase has become kind of a buzzword in business in the last couple of decades, used to refer to any shift in the way you're doing things, allegedly a large shift, but oftentimes a small shift that's being spun as a large shift. But really what I meant to refer to by "paradigm shift" is the way science changes its mind, which follows the process you were kind of describing.

A meta-paradigm shift, almost.
Well, I don't know if I would call it a meta-paradigm shift, but yeah, it's a phrase that was coined by a philosopher of science named Thomas Kuhn in The Structure of Scientific Revolutions. He was describing the process that science follows when it kind of collectively changes its mind about something like Newtonian physics. Or, well, an earlier paradigm shift was about whether the sun revolves around the earth or the earth revolves around the sun. There's this reigning paradigm, like "the sun revolves around the earth," and then gradually some people start to notice anomalies: observations that don't make sense, assuming the reigning paradigm is true. One such anomaly in that case was the observation that the path Mars traced across the night sky had kind of a weird kink in it, the retrograde movement, where Mars seemed to move backwards and then reverse course again and start moving forwards. This did not fit with the reigning paradigm that the sun and all the planets trace circular orbits around the earth. At first these observations are kind of dismissed or ignored, because everyone's so confident that the reigning paradigm is true. Then gradually they start to accumulate, and science enters into kind of a state of confusion where people are no longer sure if the reigning paradigm is true, but they don't have anything great to replace it with. They try to revise the reigning paradigm by saying, okay, yes, planets revolve around the earth, but maybe they don't follow perfect circles; maybe they follow these little epicycles that would produce these weird paths across the night sky. And then eventually someone comes up with a new paradigm that makes all of your data make sense again, which in this case is: the planets don't revolve around the earth, the planets revolve around the sun, and that produces the paths that we see across the night sky. So I like this metaphor in thinking about how I see the world. I'll have a reigning paradigm, having to do with, I don't know, how I view social interactions or how I think about what is good or bad. And gradually little anomalies will accumulate, and I think it's important to notice and pay attention to those anomalies instead of just ignoring them. Sometimes I just end up becoming more confused, but other times those anomalies accumulate and eventually I'm able to make better sense of them in a new paradigm than I did in my old one. So, yeah, that was a long-winded way of agreeing with you, but I really like your example.

That makes total sense to me, and I think I've experienced a lot of that confusion side of things myself, especially if I don't invest a lot of time in trying to clarify a position that I have. The confusion mounts as I let my brain naturally process through things, rather than intentionally sitting down and trying to figure out where I stand on a particular thing. If I instead let stimuli come through, whatever you want to call it, if I experience the world, or I read whatever it is that I'm consuming, and then I move on with my day, sometimes somebody will ask me what I think about something, and it's relevant to whatever it is that I was just consuming, and I have no idea.
I do not know how to answer the question. It's like my brain has computed both sides, and I can see both sides, or three sides, or 50 sides. I can see the validity in all of them, and I don't attach myself to any of them, and that can be really hard for me.

Yeah. I mean, it is tough, because allowing those anomalies in, as you nicely put it, kind of requires you to hold your views in this superposition, where something seems true to you, but simultaneously you recognize that it doesn't fit with other data you have. And you have to kind of resist the temptation to collapse the superposition and shoehorn the information into your pre-existing worldview, which it may not actually naturally fit into, and just kind of exist in that state of not being sure how to look at things and be okay with that for a while. And you can still say, well, I'm leaning towards this particular view. There's this story I love that I actually wrote about in my book, about a man named Jerry Taylor. He was a professional climate change skeptic. He worked at a libertarian think tank called the Cato Institute, and his main job was to go on talk shows and downplay the threat of climate change, and basically say, you know, this has really been overblown, the climate change doomsayers aren't giving you the full story, and actually we don't have good evidence that the earth is warming or that it's due to human activity. So this was his main job. And then his first anomaly came after one of his talk show appearances, where the person he was debating, a climate change activist named Joe Romm, said to him backstage: something you said seemed wrong to me. You claimed that the pattern of the earth's warming has not lived up to the predictions made in the original testimony given to Congress by people who are worried about climate change. But I think that's not true; if you go back and read the testimony, you've misremembered or misrepresented what the prediction actually was. And so Jerry Taylor, kind of expecting himself to be proven right, went back to look at the testimony and realized, oh, actually, I did misrepresent it. That's interesting. But this point was given to me by a scientist who I respect, who is skeptical of climate change. So he went back to talk to the scientist, and the scientist kind of hemmed and hawed and didn't have a really good answer for why he had misrepresented this. And this kind of shook Jerry Taylor, because here was this scientist he had trusted and respected who had kind of given him misleading information. That didn't on its own cause him to suddenly be worried about climate change, but it did make him a little bit more confused, or concerned about the quality of the evidence he was relying on. So he just started checking more carefully. Every time he would make a claim, or see a study cited against climate change, he would follow up on it, and he was often kind of disappointed in the quality of the evidence that he found. And so this just gradually, over time, started shifting him into being more open to the possibility that maybe actually there is something to this climate change issue after all.
And I could tell the full story if you want, but the summary, since I've already been monologuing for a while, is that he eventually did change his mind. The final kind of paradigm shift happened: he quit the Cato Institute, and now he's a professional climate change activist, trying to convince people that climate change is actually a big deal and that we should be concerned about it. And I think he's the only professional climate change skeptic who actually switched sides.

Yeah. Well, and it makes sense; there are very few people who, certainly deep into their professional lives, change something as contentious in terms of cultural belief.

Yeah, and central to their identity. That's the part I thought was also really impressive. It wasn't just a random side belief he had.

We'll get right back to our discussion with Julia Galef right after we talk about today's sponsor, LaunchDarkly. LaunchDarkly is today's leading feature management platform, empowering your team to safely deliver and control software through feature flags. By separating code deployments from feature releases, you can deploy faster, reduce risk, and rest easy. Whether you're an established enterprise like Intuit or a small or medium business like Glowforge, thousands of companies of all sizes rely on LaunchDarkly right now to control their entire feature lifecycle and avoid anxiety-fueled sleepless nights. And you've probably had some of these anxiety-fueled sleepless nights on a release day. Even if you don't do release days on Fridays, if you do release days on Wednesdays, sometimes we have these bugs that don't pop up until two days later, in the middle of your weekend. Here you are debugging your release, or trying to roll it back manually, or something like that. This is a nightmare. LaunchDarkly can help you avoid this. It can make releases non-events; you can start deploying more frequently and securely, with less stress. They also have some in-progress features. You may, for example, want to automatically roll back some of the work that you've done based on some specific kind of error; this is something that I've talked about with them, and you can do that. There are integrations with some of the tools that you're already using. Go and check it out at launchdarkly.com. One example of somebody who's using LaunchDarkly: IBM. IBM is huge, and they went from deploying just twice a week to over a hundred times a day. That's a lot of deployments, and they must have a high level of confidence to be able to do this. "We've been able to roll out new features at a pace that would have been unheard of a couple of years ago," said IBM's Kubernetes delivery lead, Michael McKay. Once again, head over to launchdarkly.com to get started today. Thank you again to LaunchDarkly for sponsoring today's episode of Developer Tea.

Let's talk about your book for a second.

Great. Since we haven't really even introduced the book.

So you have a book, and the book has come out, correct?

It's actually coming out Tuesday, April 13th.

Okay. So it will be published tomorrow.

Yes. Tomorrow it comes out, as we're recording this.

Right. But by the time this episode is out, the book will be out. Can you give us the 50,000-foot view, and then we'll get into some of the details of what the book is about?

Sure. Yeah. It's called The Scout Mindset.
And that is my term for the motivation to see things as they are and not as you wish they were. So basically, to be, or to try to be, intellectually honest and objective, and just to be curious about what's actually true about a given question or situation. That's part of the framing metaphor of the book, in which I say that humans are very often by default in what I call "soldier mindset," where our motivation is to defend our pre-existing beliefs, or defend things we want to believe, against any evidence that might threaten them. Scout mindset is an alternative to that, because the scout's role is not to go out and attack or defend; it's to go out and see what's really there, to try to form as accurate a map of the territory, or of the situation, as possible. So the book is about why we are so often in soldier mindset as our default, how we can shift towards scout mindset instead, and why that is a thing we should want to do.

As I hear you describe this, my brain immediately goes to a lot of the kind of canonical literature on the subject, on various biases that are related to this, and I'm sure that you talk about these in the book. Things like confirmation bias, the idea that we're seeking out information... And it's interesting: I notice myself doing this in subtle ways. I think this is the important thing that a lot of people miss, and I'd like to hear your take on this. There are subtle ways that I seek information that confirms what I'm saying. And I've noticed this when I Google a question that I have. If I believe the positive side, I Google the positive side. In other words... I'll give a really benign example. I've been doing powerlifting for a couple of months.

Oh, nice.

And if I've already bought, let's say, a particular type of protein, I'll Google the benefits of that protein rather than Googling the downsides to it. I guess the purist side of me is saying, well, I'm learning about this.

Making sure, right. Confirming that I made a good decision here.

Right. Yeah, go on.

Oh, I was just going to say: I'm kind of tricking myself into believing that I'm actually seeking out the truth. I'm seeking information. But I've, sometimes subtly to the point that I can't even detect it, adjusted the search. And when I say the search, I don't just mean on Google; I mean kind of the search for knowledge on this particular subject. And I'm served those things. But also, probably making this worse, is that Google has learned that I like this thing, so Google is going to continuously give me the things it thinks I'm going to like.

Have you ever tried opening up an incognito window, just to see if your results are different?

Oh, absolutely. Yeah. It's extremely interesting. It's scary though, right? It's the same feeling of, well, I don't want somebody to tell me this. I don't want to have to think too hard, either. It would be one thing if I was seeking out this information... When I say "I," by the way, I'm talking about the lazy brain. It'd be one thing if I had no information at all, if I was starting from ground zero. Then, of course, let's weigh both sides of this. But I've already spent my money; I've kind of bought into this.
I've taken it. I just took it, you know, 10 minutes ago, and I don't want to feel bad about that. And I think that drives so much of our behavior. And I just love that this concept is the basis for this book. Can you share a little bit more? I love talking about things that are counterintuitive or otherwise perspective-shifting for you. What was something that you learned in the process of writing this book?

Oh, that's a great question, and I'm going to get to that in just a minute. I just wanted to applaud your description of soldier mindset in yourself, because you said it really perfectly, and it reminded me of the best definition of what I call soldier mindset and what cognitive scientists often call directionally motivated reasoning. I thought you might appreciate this definition. It comes from a cognitive scientist named Tom Gilovich. He says that when there's something that we want to believe, we view it through the lens of "can I accept this?" And so our standards of evidence, or the amount of time we'll spend investigating a claim and looking for holes in it, are very asymmetric, depending on whether it's something that we want to believe or not. What that ends up feeling like is: when there's something you don't want to believe, you're asking yourself "must I believe this?" and you're looking at it critically, looking for flaws in it. It feels like you're being a rigorous critical thinker, right?

Yeah.

And you are, in a sense. It's just that you're not applying that same standard of rigorous critical thinking to the things you want to believe. The end result is that you end up having in your head a bunch of beliefs that are disproportionately things you wanted to believe, because you're applying this asymmetric standard of rigor, or of evidence.

And it's the same mechanical actions too, which I think exacerbates the problem, right? I'm very much reading white papers on both sides of the equation here; I'm looking for good evidence. But on the one hand I'm looking for the flaws in the evidence, and on the other hand I'm looking for something that I can accept so I can close my tab and go back to whatever I was doing before.

Right. Yeah. And in fact, I noticed myself doing this as I was writing the book. I came across a study that purported to show that scout mindset makes you less successful in life. I mean, that's my summary of it, but that was kind of the implication of the study. I read this headline, and my eyes immediately narrowed, and I was like, all right, let's check out their methodology and see if this study actually stands up. And I read the methodology section, and actually it was a pretty poorly done study, and I don't think anyone should update on it. But then I asked myself: well, suppose I had come across the same study with the same methodology, but it had found the opposite conclusion. Suppose I had found a study showing that scout mindset makes you successful in life. What would my reaction have been in that world?

And I realized it would find its way into the references.

Yeah. Exactly. I was like, oh, I would set it aside and be like, okay, I'm going to spend three pages talking about this study in my book.
And so the thought experiment made me realize I kind of needed to up my game for how skeptical or critical I was being of the evidence that supported my thesis. So I went back through a bunch of studies I had bookmarked to talk about in the book, and I read their methodology sections through the same kind of critical lens, and realized that a lot of them were actually pretty bad also, and I did not feel comfortable relying on them in my book. So I had to scrap a bunch of sections. This is one of the reasons the book took so long to write. So, yeah, the point of that being that it can be really easy, even if you're well aware of soldier mindset and scout mindset, as clearly I am, because I'm writing a book about it... it's still a very unconscious habit of mind that you have to kind of be on the lookout for, and catch yourself doing. Merely knowing about the existence of motivated reasoning, or soldier mindset, is not enough in itself to prevent you from doing this.

Yeah. As I believe Danny Kahneman famously has said over and over, he falls prey to biases more than his friends do, or something. I can't remember his exact wording.

Well, he's said a couple of things. One thing he said is that just knowing about the biases in human reasoning is not enough to prevent him from falling prey to them. And he also has some funny stories of himself falling prey to biases that he's written textbooks about.

Yeah. Was it buying furniture or something?

I don't remember that one. I was thinking of the one where he was working on a textbook with some people, and they were trying to predict how long it would take them to finish the first draft. Their estimate was way overoptimistic, and it ended up taking, I think, six times longer than they thought it would. His point was that he is well aware of how people's predictions about how long things are going to take are overoptimistic, and he knows that you should take the outside view, which is his term for asking: how long do things like this typically take? Like, when other groups wrote textbooks like this...

Right. Yeah.

Or, when I've done similar things in the past, how long has it taken? That's the outside view. Whereas the inside view is just: how long does it seem like this should take me? When I think about doing it, how long do I think it's going to take? And the outside view for textbooks was about six times longer than they thought it would take. So he was like, if I had just relied on the outside view, the way I tell people to, then I would have been accurate. But instead, I just stuck to the inside view. So that was a funny example of how just knowing about these biases is not enough to prevent you from committing them.

Right.

Another thing that he's said, which I have kind of an interesting disagreement with, is that he's pessimistic about our ability to overcome these innate biases. He's pointed to research suggesting that teaching people critical thinking doesn't actually make them better critical thinkers. And my disagreement with this, or I guess the crux of the disagreement, is with the studies testing whether people can overcome biases.
They're mostly done on, you know, college students who just volunteered for the study for half an hour of their time. So there's some kind of selection bias there. They're not people who are trying to improve their thinking over a longer period of time, which I think is a much more realistic expectation for what it would take to change our innate habits of thinking. When you just think about that question, I think it's kind of obvious that, no, of course, just reading an article isn't going to automatically change the way you think. Like, if you go to therapy and you see a cognitive therapist, and you're aware that you have these kinds of cognitive distortions in the way you see the world, like you tend to jump to conclusions, or you tend to focus only on the negatives and not on the positives, all sorts of cognitive distortions that we all have to some degree, you can, to some extent, change those habits of thought. You can get better at recognizing: oh, I'm doing that thing again, where I completely downplay all of my positives and just focus on the negatives. And you can, to some extent, with practice, overcome that, and learn to step outside of it and go: no, I remember, I've done this before, and I need to remind myself of the things that I did right, not just the things I did wrong. But that process takes time. It's not just something where you read an article and all of a sudden you can expect yourself to be seeing things in a balanced way. So I don't actually think any of the studies that people like Kahneman point to as evidence about our ability to improve our critical thinking actually tested the thing that we should expect would make a difference. So, yeah, I think this is an underappreciated point: these biases are ingrained habits of thought that can be changed, but it just takes a little more long-term effort and practice to change them.

Thank you so much for listening to today's episode, the first part of my interview with Julia Galef. If you enjoyed this episode, then make sure you are subscribed in whatever podcasting app you're currently using, so you don't miss out on the second part of this interview. Thank you again to today's sponsor, LaunchDarkly; head over to launchdarkly.com to start resting easy when you deploy your features. If you want to join the Discord community, head over to developertea.com/discord. We discuss things that we talk about on these episodes, but we also discuss things that never make it into a Developer Tea episode. I give advice, and other great software engineers with even more experience than I have are giving advice in there as well. It's a good time. You'll find support, and you'll find a lot of people who really want the best for the others in that group. Thanks so much for listening, and until next time, enjoy your tea.