What if every decision you made had a bet attached to it? In today's episode we talk to Annie Duke about treating the decisions we make like bets. Annie is a former professional poker player, and today she focuses on consulting to help people make better decisions throughout their lives.
In part 2 we dig into different ways to approach thinking in bets. If you missed part 1 of this episode, check it out here: https://spec.fm/podcasts/developer-tea/292556.
If you have questions about today's episode, want to start a conversation about today's topic, or just want to let us know if you found this episode valuable, I encourage you to join the conversation or start your own on our community platform: Spectrum.chat/specfm/developer-tea
If you're enjoying the show and want to support the content, head over to iTunes and leave a review! It helps other developers discover the show and keeps us focused on what matters to you.
This is a daily challenge designed to help you become more self-aware and be a better developer so you can have a positive impact on the people around you. Check it out and give it a try at https://www.teabreakchallenge.com/.
Instantly deploy and manage an SSD server in the Linode Cloud. Get a server running in seconds with your choice of Linux distro, resources, and node location. Developer Tea listeners can get a $20 credit and try it out for free when you visit: linode.com/developertea and use promo code: developertea2019
P.S. They're also hiring! Visit https://www.linode.com/careers to see what careers are available to you.
You have to give people freedom to be able to sort of experiment along in there, because that's the way that you find new paths. And that's the way that you find efficiencies that you couldn't find before, or ways that things are more elegant than you could have otherwise seen, or things where you can speed things up, you know, or places where you can slow things down. For a significant portion of her life, Annie Duke made her living on decisions. Annie is a former poker player, but now she consults with businesses on how to make better decisions. It isn't always intuitive, first of all, whether you know how to make the right decision, and secondly, whether the decision that you made was a good one. Annie focuses on exactly that: ways of making better decisions based on information and available data, and not only making the decision, but also evaluating decisions that have already been made. We talk about this in today's episode. And if you haven't listened to the first episode in this interview, I encourage you to go back and listen to that first. That's where we started this discussion on making better decisions. Annie also wrote a book called Thinking in Bets. I encourage you to check it out. The paperback version of this book will be out on May 7th. That's just a few weeks away. So go and check it out. Thank you again to Annie for coming on the show. Let's get into the interview with Annie Duke. And that's obviously in terms of something really simple. If you get into something more complex, like are users going to like the feature? I mean, now you're talking about something that's really probabilistic, right? And so what happens if we start to judge people on the quality of the outcome? So the users didn't like it; therefore, you made a bad decision to even develop this feature. Which makes you gun shy, right? I think there's this interesting thing that's kind of emerging here.
And one piece of the puzzle is you have to have properly aligned incentives. You can't have a developer that's wanting to do the easiest thing, right? Right. In a game of poker, for example, your incentives are directly aligned with your performance in most cases. So if you win, then good things happen. If you lose, then bad things happen. And unfortunately, not everything is going to be that cleanly separated. So for a developer, it could be, and I've actually heard stories of this, that your incentives are actually encouraging you to do things slowly. And that's not good. You know, you have to inspect the incentives. So let's assume, though, that the incentives are at least roughly aligned so that success on the product means good things for the developer. If you have that as your basis, then by judging these outcomes, really what you're doing is you're making people afraid to act. And so they're trying to find, you know, what is it that they're judging me for? Right. Either they're judging me for making a decision at all, or they're judging me for something that I can't really affect, that I can't change, something that's, you know, fundamental to my identity. And it's very difficult to make a better decision after that. Instead, you feel more paralyzed. Yeah. So there's so much good stuff in there. It sort of makes me think about two different branches. So let me go off on one branch first. Okay. Is it slowing people down? One of the things that I try to get people to think about in terms of decisions is to really imagine what's the cost to reverse. So no decision is completely reversible, because there's some time that you can't get back, right?
Sadly, we can't travel back in time. But some decisions are much more reversible than others. So we can think about, for example, if I order something in a restaurant and it doesn't turn out well, it's not a big deal, because I get another meal in four or five hours, whatever. Assuming I don't get food poisoning, it's not such a big deal if the food wasn't so good. But if I move to a new city, now the cost to reverse is a lot, right? So we can think about that in terms of any decision we make, but we can also think about that in terms of coding. If you break some stuff along the way, I think it's relatively easy for you to go back and sort of find out what was wrong and fix it, right? So what we want to do is think about when the cost to reverse is really high versus when the cost to reverse is really low. And when the cost to reverse is really low, you should have people acting fast and breaking things. Because in acting fast and breaking things, you're collecting so much information from the world. So now we can get back to: how do we extract the stuff we don't know from the world and get it into the "know" box? Well, part of it is by poking at the world all the time. So if we recognize the situations under which the cost to reverse is low, what we should do is start speeding up in those situations, because the speedier we are, and the more stuff we're trying, and the more stuff we're sort of pushing at, the more we're figuring out what works and doesn't work, which allows us to start using that information when it really matters, right? Mm-hmm. Or making some change that is really, really important. Like you're putting a big investment in one particular feature, and you can now make better decisions about that because you've tested a bunch of stuff along the way. So that can help.
This is the situation where the manager really has to think about how they're communicating to the people who are working for them: I want you to go fast here, because it's not a big deal, right? Like if it doesn't work, that's actually good for us, because we learn from it. And the cost of it not working is low, because we're still in testing. We haven't released it to anybody. Or maybe sometimes you release it, but you know that the customer isn't going to be that mad and the cost to reverse is really low, right? So that would be sort of in the agile development style, right? Like you can release something, and if it's broken, it's not that big a deal. So once you sort of recognize that, you can start to move fast, and you can start to push against the world and get the world to tell you stuff that's actually going to help you be a better decision maker, which releases you from that kind of outcome dependence. So that's kind of number one: really go through that decision process and say, how much does it cost me to fix this? If it's low, okay, just do it. All right. Number two is this idea that I heard you say that I just want to bite into: this idea of people being gun shy, of people, the way that I would think about it, starting to not make risky decisions, right? Not do stuff that's new, not be willing to just try stuff. So one of the issues that resulting creates is this: if we know that people are going to judge us on the quality of the outcome, in general, we know that what we're really going to be judged on is the quality of bad outcomes, right? I mean, that's what we're afraid of. We're afraid that we're going to have a bad outcome and then someone's going to say, hey, you stink. You made terrible decisions.
So we can start there, and then we can take it a step further and say, oh, there's also a really interesting thing, which is that sometimes we don't get mad at somebody when they have a bad outcome. Mm-hmm. There are certain times when we don't get mad at somebody when they have a bad outcome. And those times are when somebody has done the thing that everybody always does. The status quo, what we expect them to do. Exactly. So that's the example of: if I go through a green light, and I'm following the traffic laws, this is what everybody does. This is your status quo decision. Like I'm following the rules. And if I get in a car accident, nobody cares. I mean, hopefully they care that I'm in a car accident, but they don't care in terms of my decision-making. They're not saying, you're a bad decision-maker, you should never drive again. And here's another example: you don't have Waze, and you're going to the airport, and you're with your partner in the car, and you have to make the flight on time, and you go the usual way that you normally go. And there's an accident on the road, and literally traffic isn't moving, and you end up missing the flight. I mean, obviously you're both stressed, but, you know, nobody's screaming, I can't believe you, you're so stupid, why would you go this way? You made us miss the flight. None of that is happening. But if you get in the car with your partner and you say, I have a shortcut, I have a new way to go, and the same thing happens, there's an accident, and you don't have control over an accident, and the traffic is at a standstill, and you end up missing your flight, we know what's happening: it's your stupid shortcut.
Or if Waze is telling you to go a different way and you say, no, no, I know better than Waze. Yeah. Right. Exactly. So what ends up happening is it kind of gets us into this box where what we think about is, well, I know that when things work out poorly, I'm going to get yelled at. Right. I'm going to be told I did a bad job. Now let me think about how to stay out of the room when that happens. So one way to stay out of the room is to take a lot of time. What you're doing is trying to increase the probability of a good result by really tinkering around in it, but you're costing yourself time by doing that. And you're not thinking clearly about what the trade-off is between time and success. So, like, maybe if you take a week, it's going to work 93% of the time, and if you take two weeks, it's going to work 95% of the time. As a manager, you should want the person to be willing to take a week on it. Right. I mean, that's obviously an extreme example, but as a manager, even if it works 80% of the time when they take a week and 92% of the time when they take two weeks, I would imagine you might still rather they take a week. And I don't know what those trade-offs are. But you want to have them maximize this balance between, you know, success and time. And people are going to tend not to do that. They're going to take too much time, because they're trying to stay out of the room. Yeah, we're avoiding that judgment. They're trying to stay out of that "you did a bad job." They're also going to tend to do things in the way that people normally do them.
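The time-versus-success trade-off Annie describes can be made concrete with a little arithmetic. Here's a minimal sketch in Python; the percentages and durations are the hypothetical ones from the conversation, and the uniform payoff per success is my assumption:

```python
def value_per_week(p_success: float, weeks: float, payoff: float = 1.0) -> float:
    """Expected payoff per week of developer time spent."""
    return p_success * payoff / weeks

# Hypothetical numbers from the conversation:
fast = value_per_week(0.93, weeks=1)  # ship after one week
slow = value_per_week(0.95, weeks=2)  # polish for two weeks

# The extra week buys only two points of success probability, so the
# faster option delivers far more expected value per week of effort.
print(fast, slow)  # 0.93 vs 0.475
```

The point of the sketch is Annie's: tinkering for extra certainty has a real cost in time, and someone trying to "stay out of the room" will rarely price that cost in.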
So if there's some creative way that they could actually code this, that might actually create a breakthrough for your company, because now there's this new way to do things, they're going to be much less likely to try that. Especially if it's risky. Yeah. Right. Because if it doesn't work, people are going to be like, why'd you code it in this bizarre way? And they know that they're going to be much more likely to be blamed, versus: I did it in the usual way and it didn't work, what could I do? And now you're kind of putting people in a box where they don't want to take chances anymore. And they don't want to get creative, and they don't want to try new things. And all of a sudden it takes a lot longer for innovations in the way that people are doing these things to actually come through, because people don't feel like they can do that, because they're going to get resulted on. And then the third piece of the puzzle that actually kind of drives this is: let's say that you have some feature that you're releasing, and you have some idea about how the market is going to respond to it, and the market responds much worse than you expected. You know, everybody's in a room talking about, what the hell is this? What the hell went wrong? What happened here? How could we have avoided it? Everybody's kind of going through this process, right? There's this big postmortem about this thing that's happened. But if you release a feature and you have some idea of how the market is going to respond to it, and the market responds much better than you expected, there is no such meeting. Yeah. Right. Now, there are two really bad things that come from that. Thing number one is obvious: you're telling people to avoid that outcome, that there isn't the same kind of attention, the same reaction.
You're going to be in a room getting grilled if there's a bad outcome, but if there's a good outcome, it's just another Monday. It's just another Monday. Right. But that's just kind of generally from a behavioral standpoint, from a psychological standpoint. Let's also think about it from your own decision making and your ability. If we think about those internal audits of our own knowledge: you had some forecast about how the market was going to respond. If the market responds way better than you expected, that means either you had some idea of the distribution and the result was within your prediction, just at the upper end of it, or it could be that your prediction, the way that you had modeled how the market would respond to the feature, is actually off in some way. Right. So if you're thinking about future decisions for releasing features, it's just as important when the market over-responds to the feature as when it under-responds, for you to explore why your prediction was different than what actually happened, because that's going to change the way that you decide which features you're going to release in the future and how you're going to allocate your resources to feature development, because that's what helps you predict what will and won't work and what you want to spend your time on. So when the market over-responds, it may mean that you're under-allocating your resources to certain features, because you don't actually have an accurate model of the market. Yeah. Wow. Right. And that's actually really important.
So by not digging in and saying, whoa, wait a minute, this was really weird, we thought that we'd get this response, but people are crazy for this thing. What were we missing? Were we missing anything? Did it succeed for reasons that we didn't think it would succeed for? Or was this within the bounds of what we predicted? You want to ask all of those questions, because that's going to drive your future decisions, right? But we don't ask, because if we ask, it feels like we're turning a win into a loss. And we really like wins. We just want to say, oh, we did a great job, we're really smart, look at that, everybody did so great. And by asking the question of what we missed, it feels like we're turning that into a loss, and therefore we avoid it. But that's just as important a question to ask when you win as when you lose. That's absolutely critical, because what you're trying to do is improve your... Right. ...prediction machine, not improve your business. Correct. And as a result, in the future, improve your business. But if you just say, hey, this is a win, I'll take it, that's problematic, because you're defining a floor in your prediction, but you're not defining a ceiling. You're saying, okay, anything goes as long as it's better than this, right? But that doesn't give you a better prediction machine. Right. Especially, like you're saying, if you're serially successful, you could be learning more from your successes. That's exactly right. And you can think about it as, again, the manpower, the people you have working for you, time. These are all resource allocation questions. And what you want to be doing is allocating your resources really well. You also want to be thinking about what the things are that you want to be releasing. What is it that you want to be working on? So what are the options? Remember, your beliefs are driving what your options are.
And if you're really digging into these kinds of overperformance situations, when you overperform your prediction, it may open up options that you didn't otherwise see. And then it's also going to change the way you allocate your resources. I mean, you can think about it this way. If I have a dollar to invest, and I invest it in something that I think is going to make me $1.50, and I actually lose 50 cents instead, that's super important for me to know, because it may be that I don't want to allocate my dollar to that thing anymore. It may be that my model was wrong. It could be that I just got unlucky, obviously. But I want to explore that, because I don't want to over-allocate to that option again, if that's what the world is telling me. But think about it from the other side. If I have a dollar, and I allocate it to something that I think is going to make me $1.50, and I make $3, well, I need to know that, because I don't want to think in the future that I'm only supposed to allocate a dollar to it. Right? Maybe I'm supposed to allocate $2 to that, because actually it's a bigger win than I suspected. So I want to be able to see that, so that as I'm thinking about how I allocate my dollars, how I allocate my time, how I allocate my manpower, I'm properly allocating to the things that are likely to have positive returns, and less likely to put it into the things that have negative returns. And I can't do that unless I'm really paying attention when things go way better than expected. Yeah. And you have to balance this with resulting too, right?
You have to be able to say, okay, maybe things just went way better than expected as a result of luck. But what you can't do is ignore that altogether. You can't say, oh, no, who knows why it went better. You need to be able to look at things and say, okay, for all of the information that we have available, all we can say is that it was lucky, right? Maybe that's a perfectly reasonable conclusion. But the point is to actually walk through that exercise. Yeah. I would say that actually having the willingness to dig into the wins is kind of the anti-resulting approach. The resulting approach is: I won, I'm great. Yeah. Right. Like, oh, look, we did so well. We performed so well. Obviously, our decision-making is awesome and we're amazing. Period. Whereas: whoa, did we miss something? We had this great result, but it was so much better than we expected. What did we miss? Did we miss anything? Maybe we didn't, but maybe we did. That's actually the anti-resulting. That's saying, I'm not going to just sit and look at how it turned out. Right. But then what's really nice about that is you get it on the opposite side too, because what that allows is that when something underperforms, you're not just automatically saying, oh, that was a bad decision, because you're asking the same question. You're using the same measurements. What did we miss? Maybe we didn't miss anything. Maybe we just got unlucky. I don't know. Maybe we lost because there was something going on.
Or maybe something shifted in the market after we released the feature that we couldn't have predicted. Right? So now you're sort of treating it as: let's get to the base reasons, as opposed to just assuming that if it was a bad outcome, we all have to have our pants on fire trying to figure it out, and if it was a good outcome, we should all just open up the champagne. Absolutely. Today's episode is sponsored by Linode. And I'm going to go off script for a moment and say thank you to Linode. Linode has been such a huge supporter of development communities, not only Developer Tea, but other development communities. And if you've been doing this for very long, especially in a professional atmosphere, if you've gone to conferences, you've probably seen Linode everywhere. And that's because Linode is built by developers. It's a company of developers, and they build products for other developers. They have lightning-quick SSD servers at only $5 a month. This is such a fantastic deal. But on top of that, they're offering $20 worth of credit to new customers. Head over to Linode.com/developertea and use the code developertea2019 at checkout. Thank you again to Linode for sponsoring today's episode and plenty of episodes in the past. So that is such a key, critical takeaway from this episode, I don't want anybody to miss it. And, you know, if you're listening to this episode and maybe tuned out for a little bit, rewind like 10 minutes, because that was truly, I think, a critical point for developers not to miss here. And that is to look at your failures and successes very similarly in terms of the way that you judge them. And I'd love to ask you, I know we're running up on time here and I don't want to go over any further. So, you know, perfect. All right. So I have two more, hopefully simple, questions for you. First one, can you kind of walk through one or two?
I know you have a list of six or seven kind of practical methods to avoid resulting, for example. I'd love for you to walk through one or two of them, maybe the decision swear jar, for example. I'd love to hear more about that. OK, so a couple of things. What we just talked about, in terms of the way that you're treating good and bad results, is actually one of the best ways to avoid resulting. It kind of acts as a vaccine, because it changes what people think of as a result, if that makes sense. People think about a result as: did I win or lose? Period. But now what you're doing is changing the definition of a result to: was it unexpected? And that's what you care about. And that shifts people's mindset; it changes the way that you think about the world in a way that can very much help you with resulting. That's excellent. Yeah. So, two things, and then I'll quickly talk about a third thing, which is kind of a prospective way to avoid resulting. The decision swear jar is noticing: what are the cues for you that might suggest that you're resulting? Right. And those cues might be different for you than they would be for me. But say I get a bad result and I'm like, I can't believe I made such a bad mistake. What was I thinking? Yeah. Right. That would be something that I would say that would definitely cue up resulting. So you're kind of rewinding back from resulting and seeing, OK, what are the triggers for my resulting, or the surrounding behaviors? The things that I think, or the things that I say to other people. You can also do that for how I'm judging other people. Right. So somebody has a bad result, and you're like, I can't believe it, that person's such an idiot. Right.
That would be one of those things. Such an awful thing to say. But look, we're all human. Those things go through our heads. Right. We say them about ourselves, too; it's equal opportunity. I can't believe I'm such an idiot. Or you find yourself, when somebody has a good result, just saying, oh, they're great, I'm going to put them on every single project. Right. OK, well, I mean, it was one really good result. And they may be great. They could be your best person. But that one result doesn't necessarily tell you that. Right. So if you think about where you are making these kinds of black and white judgments based on outcomes, you can actually create a list of those things that you find yourself saying. Make a list of them. And now that becomes like a swear jar. Right. Which is, when you hear yourself either thinking one of those things or saying it out loud, that becomes a trigger to say, hold on a second, let me step back from this. Is that really the right conclusion to draw? Right. Like, I know that I had a good outcome or a bad outcome, or I know this person had a good outcome or a bad outcome. But is that really so connected that I can walk back and make this judgment with certainty about what the quality of their decision making is? So it would be a cue to actually go and examine the process, as opposed to making that snap judgment. You can do this with resulting, but you can do this with other things too. You can do this for: what are the cues that you're really in an emotional state of mind? Because we don't want to make a lot of decisions when we're feeling emotional. You can figure out what those things are. One of them for me is: I can't believe it, it's so unfair. Whenever I say that something was so unfair, I know that I'm in the wrong part of my brain. So you can figure out what those kinds of things are for you as well.
And you can do this any time, any place. Obviously, in coding, you can get really frustrated. You can get stuck in a piece of code and get really, really frustrated. It's very likely that there's going to be some emotional stuff going on, that there are going to be things you're saying to yourself, or out loud, or to other people that are going to be pretty consistent in those situations. Write them down, because those can then become cues for you to put a dollar in the jar. And in this case, that dollar is: hey, step back and think about the process. Try to think about what's rational. How much is emotion driving you? How much is resulting driving what you're thinking? So that you get this interrupt to that habit of mind. That's the decision swear jar. But then the other thing that I want to get to: when I'm talking about how you treat the outcomes, good or bad, and pegging on unexpectedness instead, that would be kind of a retrospective way to deal with resulting. How are you dealing with outcomes after the fact? And that's true of the swear jar, too. It's kind of an after-the-fact way to help you with these ways that we process outcomes. But you can do some before-the-fact work. The better you are at foreseeing what the range of possible outcomes is, the less likely you are to take a particular outcome and put too much weight on it, which is really what you're doing with resulting. So that just has to do with really good prospective planning. And in particular, when you're doing prospective planning, it's really good to do the kind that includes stress testing. So there are two things that you can do. One is to do a premortem, which is to say: I'm releasing this feature. I released this feature and it failed.
The market really hated it. And then have people write a narrative as to why it failed. And you want them to do this separately, because you don't want them to infect each other with their ideas, and then come together and talk about it. And what that will do is expose things that you might not otherwise think of, by actually imagining it and saying it: we know it failed. We released it and it was a dud, or we released it and it completely broke. Right. That would be another thing that you could do. And then really have people walk through and write their best narrative as to why that happened. So that's called a premortem. The other thing that you can do, which is from my friend Dan Egan, he calls it the Damien game, I call it the Dr. Evil game, is to imagine that you're releasing a feature and you're an evil developer who wants to make sure it fails. What are the things that you would do to make sure it fails? But there's a constraint, which is that any of the individual decisions that you make, on their own, have to look reasonable. Obviously, in the aggregate, they won't look reasonable. Hmm. Interesting. So any individual choice you make has to look reasonable, and you make people go play that game. That's very interesting. Yeah. It's a super interesting game. I'm so happy Dan Egan taught me it. Yeah. Right. The constraint is interesting. Right. So you could think about it, for example, like this: if I were trying to lose weight and I'm Dr. Evil, here's something I can do: I'm too busy in the morning to put healthy food in my bag. Now, on an individual day, that could be a reasonable choice. But we know that it's very likely to create failure if I repeat that decision over and over again. So now I can look at that and I can say, aha, Dr. Evil would do that to me. So how can I make sure that I don't do that? So what it allows you to do is see: if Dr.
Evil would do that, then how do I not do that? And what you'll find when you play this Dr. Evil game is that you're doing a lot of things that Dr. Evil would do, because each of them has to be believable. You don't realize how much self-sabotaging you're doing until you play this Dr. Evil game, and then you're like: oh no, I'm doing a lot of Dr. Evil things to myself. So what those two things in combination do, the premortem and the Dr. Evil game, is allow you to see better what the future might hold, where the breaking points might be, where the stress points are, such that two things can happen. One is that you can say: OK, here are all these places where I can lower the probability of these bad behaviors occurring, and increase the probability of good behaviors occurring. And here are the places where luck is going to intervene and I have no choice about it, so I can try to figure out ways to hedge against the luck, things that can fill in those gaps and help me even if that unlucky thing happens. Or I could think about what my reaction is going to be if that unlucky thing happens, so I already know in advance how I'm going to react to it; that way I'm being nimble as opposed to, you know, pants on fire. Or I could think about whether I could reduce the chances that that unlucky thing happens. And so now I have a much clearer view of the future, so that when something doesn't work out, I'm much less likely to look at it and say: oh, well, I must have made a bad decision. I'm more likely to say: this was included in my plan. I saw this. This is so important. I mean, we've talked about premortems on this show before, actually. The idea that you think differently backwards than you do forwards. It's hard to predict, but it's easy to kind of reflect. Right.
And so if you can trick your brain into thinking that you're reflecting, then maybe it shifts into a different mode. I'm not a neuroscientist, but I imagine there's a different process happening that causes us to think differently, and perhaps more effectively, in those scenarios. Well, there is. And the metaphor that I like is this: if you're standing at the base of the mountain, all you can see is the base of the mountain. It's very hard to see the path that would be most efficient to get you to the top. But if you're standing at the top of the mountain, now you can see everything, including the path and other paths. And other paths, exactly. So you can see all the different ways up the mountain. You can see where the obstacles might be. You can see what the most efficient way up the mountain is. So that's the way that I kind of view it. And cognitively, it works that way. When you're thinking ahead, trying to predict, the state of the world right now and the problems sitting right in front of you play a huge, outsized role, just like the base of the mountain does. That makes it very hard for you to see the possible paths beyond that. Whereas if you can get yourself standing at the top of the mountain, in other words, thinking backwards, you're much more likely to see the whole scope, and it'll open up different paths. This has been such an excellent conversation. I want to respect your time constraint and go ahead and move us towards the end here. Thank you so much for your time, Annie. And of course, there are other practical things that you can find in the book, Thinking in Bets, and the paperback is coming out shortly after this episode airs. Can you tell us a little bit about that? Oh, yeah. So
The paperback version of the book is coming out right at the beginning of May. May 7th. I should know that for sure, but I'm just going to go with May 7th. How about that? May 7th it is. No, I think that's actually correct. So, yeah, I'm really excited about it. The hardback has been out since February of 2018, and obviously this is going to give people a different way to consume the book. I'm told by my publisher that some people really love paperbacks, and so they actually wait for the paperback. Better for planes. No, but I'm actually really excited. I mean, I'm excited in general about the way that people have responded to the book, and I'm very excited that it's going to be coming out in this new format. Yeah, I plan to get a copy of the paperback for the traveling portion. I do have the hardback on my shelf. Excellent cover, and just a wonderful book. So thank you for that. I have a couple of very quick, quick-fire questions for you. Just two of them. The first one is: what do you wish more people would ask you about? What do I wish more people would ask me about? You know, it's so hard for me to answer that, because very often I don't know. I don't know what it is that I wish more people would ask me about. For example, what you told me about today, about thinking about beliefs and values and sort of inflating them, and so feeling like you didn't want to expose your beliefs to the outside world, that's something I didn't know I wanted to be asked about. Sure. But then I was asked about it today, and that kind of opened up a new way of thinking: oh, that's an interesting way to think about it. I can see how that might be an issue for people. So it's hard for me to answer that question, because I feel like that's what the world reveals to me.
Namely, the things that I would like to be asked about that people don't ask me about. As an example, on another podcast I recently did, somebody asked me: what's the kindest thing that somebody has ever done for you? And I'd never been asked that. I didn't know that I wanted to be asked that, but I was really excited that I got asked it, because I got to answer a question about my amazing graduate school advisor, who did the kindest thing anybody's ever done for me. So it's kind of a meta answer: you hope that people ask you about things that you don't expect to be asked about. Right. Exactly. Great. And the last question I'd like to ask you is this: if you could give software developers just a couple of seconds of final advice, no matter where they are in their careers, what would you tell them? Interesting question. I think it would be similar advice for most people. At the beginning of everybody's career, and actually even when you're older, you think you're supposed to have everything figured out. You're supposed to know exactly what you want to do, and you have a desire for much more certainty in your own knowledge, because I think you tend to equate certainty with competence: if I'm sure of what I know, and who I am, and what I want to do, then I'm also competent. And what I would hope is that they would be much more open-minded, first of all, to what other people might think, particularly those people who disagree with them, because I think there's a lot to be learned, and people's beliefs are going to change. The more open you are to what the world has to offer you, the more quickly you get to actual competence, as opposed to perceived competence. Whereas if you feel like you need to know exactly what you're supposed to be doing right now, first of all, I think that creates a lot of undue stress, because you actually don't. But also... Mm-hmm.
Yeah. Because it might cause you to miss opportunities that might be sitting right in front of you, because you're so focused on a particular path. So, you know, I'd like every person who's at the beginning of their career to basically keep their eyes open: be open to finding things that interest you, to making shifts in the way that you think, to celebrating when you change your mind, and to really redefining for yourself what competence means. Not: I know everything, it's set in stone, I know what I want to do, and I know that the things I believe are right. Instead, competence is recognizing what you don't know and keeping your eyes out for it. That's excellent advice, and I think a lot of people listening to the show right now are going to find a positive sense of conviction and appreciation for what you've shared today. Thank you so much for being with me today, Annie. Okay, thank you so much. A huge thank you to Annie Duke for joining me on Developer Tea to talk about making better decisions. I hope you've come away with some actionable ways to make better decisions in your career and in your personal life, and ultimately to think better about decisions and about judging other people's decision-making. For me, this is a process of learning how little I actually know and how much I have left to learn. So thank you again to Annie for reminding me that there's so much left to do. Thank you again to today's sponsor, Linode. Head over to linode.com/developertea and use the code developertea2019 at checkout. If you found today's episode, or any other episode of Developer Tea, valuable, if it's added value to your career or your personal life, then the best way you can give back to the show is to tell others about it. You can do this in two main ways. One is to simply share this episode with a friend.
And another is to leave a review on iTunes. This helps other people decide whether or not they want to listen to Developer Tea when they run across it in iTunes, and it lets the iTunes algorithm know that there are people who like Developer Tea. Thank you so much for listening to today's episode. A huge thank you to our network, spec.fm, and our producer, Sarah Jackson. And until next time, enjoy your tea.