When was the last time you felt 100% certain about a conclusion you made? In today's episode, we're talking about conclusions and how we come to them, and I'll challenge you to ask more questions before landing on them.
DigitalOcean is the easiest cloud platform to run and scale your applications. From effortless administration tools to robust compute, storage, and networking services, DigitalOcean provides an all-in-one cloud platform to help developers and their teams save time while running and scaling their applications.
Build your next app on DigitalOcean. Get started today with a free $100 credit at do.co/tea. It only takes a few minutes to get up and running.
If you have questions about today's episode, want to start a conversation about today's topic, or just want to let us know you found this episode valuable, I encourage you to join the conversation or start your own on our community platform, Spectrum.chat/specfm/developer-tea.
If you're enjoying the show and want to support the content, head over to iTunes and leave a review! It helps other developers discover the show and keeps us focused on what matters to you.
When was the last time that you felt incredibly certain about a conclusion you were making? If you're like most people, this happens fairly often. You think your conclusions are relatively correct. You have a high confidence level in your own conclusions. This is actually what we talked about in the last episode of Developer Tea, when we discussed our overconfidence bias. And this happens to all of us. On a regular basis, we try to come up with a conclusion. Very often, we're using something like the scientific method. We're using the right steps to reach our conclusion. We're controlling as many variables as we can, and we're observing the variables that we are changing as well as the dependent variables. And of course, there is a lot of nuance along the way. For example, how do you test the hypothesis? That's really where we often get hung up. And for the sake of today's episode, the next piece: how do you evaluate what you tested? Very often, in any kind of experimental environment, we're looking for a direct cause-and-effect relationship. When you do x, y occurs. And specifically, we're looking for relationships that we can directly identify, not merely correlation. A correlation is, for example, when x increases, y decreases; that would be what's called a negative correlation. But that's not all we're looking for. We're looking to isolate x as the cause of y. And this is where things can go wrong. We're going to talk about how things go wrong right after we talk about today's sponsor, DigitalOcean.

DigitalOcean is going to give you, a listener of Developer Tea, $100 worth of credit on their platform. DigitalOcean has the easiest cloud platform to run and scale applications, from effortless administration tools to robust compute, storage, and networking services.
DigitalOcean provides an all-in-one cloud platform to help developers and their teams save time when running and scaling their applications. DigitalOcean has chosen a straightforward, affordable pricing method, leaving complex pricing structures behind. Instead, you get flat per-month pricing across all of their global data center regions. Again, they're going to give you $100 worth of credit just for being a Developer Tea listener. Head over to do.co slash tea. That's do.co slash t-e-a to redeem that $100 worth of credit. Thank you again, DigitalOcean, for sponsoring today's episode of Developer Tea.

So there are a lot of ways that things can go wrong when we're going from understanding a correlation to isolating a direct cause. And how does this apply to your code? Well, you may be looking at particular things that line up for a bug, or you may be looking at user interactions that are based on some specific feature. You're trying to gather information about why people are doing what they're doing, or why this code is breaking down the way that it is. Why are we getting this error in these particular cases? And so often what we do is we gather information. And as we're gathering this information, we start building up a picture of "why," trying to answer this question. We've talked recently on the show about all of this, actually. We've talked about the power of questions. We've talked about why narratives about what happened are so powerful. And as we're gathering this information, it's very difficult to do so with completely fresh, unbiased eyes. It's difficult to parse through this information without adding meaning to it. And it's difficult to see correlations without immediately labeling them causations. And so here's what happens. We gather a lot of evidence for our position. We gather a lot of evidence for what we think happened.
And this may not be a self-serving position. We may actually be heading down a reasonable path that's even supported by the evidence. And unfortunately, we use this evidence as a stand-in for undeniable proof. This happens all of the time, not only in our professional environments with code, but also in our interpersonal relationships. For example, we might try to draw out meaning from somebody's actions and words by gathering all of that as evidence, and then trying to understand why those things happened the way they happened, why that person acted the way that they did. And by compiling this evidence together, we create a reasonable thing that we believe to be the truth. Now, unfortunately, sometimes our reasonable, evidence-based version of the truth is simply not true. Something else caused those same events to occur. And this is very difficult to see, because we don't always understand our blind spots. We don't see all of the possibilities. It's very difficult for us to see any other possibility than the one that makes the most sense. So how can we defeat this problem? Well, it's important, number one, to stay vigilant about certainty. Or a better way of saying that: be less certain. Be willing to be uncertain about any of your conclusions. Until you have hard proof, and hopefully quite a bit of hard proof, being certain is usually damaging. It's usually not a good place to be. Not because certainty is bad, but because so often we are wrong. And when we are wrong, that certainty hinders us from becoming more right, from learning, gathering more evidence, and evaluating that evidence more effectively. So be vigilant about remaining uncertain until you are very certain, until you have undeniable proof of whatever it is that you are hypothesizing about.
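The "something else caused those same events" trap can be sketched in a few lines of Python. This is a hypothetical illustration, not from the episode: a hidden factor z drives both x and y, so x and y correlate strongly even though neither one causes the other. Evidence of correlation is not proof of causation.

```python
import random

random.seed(42)

def pearson(a, b):
    """Plain Pearson correlation coefficient, no external libraries."""
    n = len(a)
    mean_a = sum(a) / n
    mean_b = sum(b) / n
    cov = sum((ai - mean_a) * (bi - mean_b) for ai, bi in zip(a, b))
    var_a = sum((ai - mean_a) ** 2 for ai in a)
    var_b = sum((bi - mean_b) ** 2 for bi in b)
    return cov / (var_a * var_b) ** 0.5

# Hypothetical scenario: z is a hidden confounder (say, server load),
# x is the thing we suspect (a recent deploy), y is the observed errors.
z = [random.gauss(0, 1) for _ in range(1000)]       # hidden confounder
x = [zi + random.gauss(0, 0.3) for zi in z]         # driven by z, not by itself
y = [zi + random.gauss(0, 0.3) for zi in z]         # also driven by z

# x and y correlate strongly, yet x does not cause y -- z causes both.
print(round(pearson(x, y), 2))
```

Swapping a real metric in for z, x, and y is left to the reader; the point is only that a strong correlation is exactly the kind of "reasonable evidence" that can stand in for proof without being proof.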
Of course, it would be kind of silly for me not to mention that the academic scientific community calls these theories for a reason. Some of our most important scientific advances are still considered theories because, usually for technical reasons, they can't be proven entirely. It's important to recognize that even if you are uncertain, that doesn't mean that you cannot act. If you have a level of confidence that doesn't equal 100%, that doesn't mean that you're paralyzed; you can act based on a reasonable degree of confidence. You don't need certainty to be able to act.

Secondly, when possible, I encourage you to have someone else evaluate the same information that you are evaluating, without prior knowledge of your incoming bias. In other words, if you already have a hunch about what is causing a bug and you're digging through logs, don't share that hunch with the other person. This may sound counterintuitive. You may think, well, we need to collaborate. I think it's this. They need to know that I think it's this so they can confirm what I'm saying. Instead of sharing that hunch, give them the opportunity to develop their own hypothesis. Once both of you have developed a reasonable hypothesis based on the evidence in front of you, then compare your hypotheses. Sometimes this is cost prohibitive. We don't need two people digging through the logs for a very simple bug just because we don't ever want to claim false certainty. But in cases that do require some further thinking, if it's a justifiable amount of time, I encourage you to bring someone else in with no incoming bias, no incoming story, no preconceived notion of what may be wrong.

Thank you so much for listening to today's episode of Developer Tea. I hope this was interesting and insightful, as well as challenging. Thank you again to today's sponsor, DigitalOcean.
DigitalOcean is going to give you $100 worth of credit just for being a Developer Tea listener. Head over to D-O dot C-O slash T-E-A. That's do.co slash t-e-a. Thanks again for listening. If you don't want to miss out on future episodes, make sure you subscribe in whatever podcasting app you're listening to right now. Until next time, enjoy your tea.