Developer Tea

Why We Believe Ourselves (Even When We're Proven Wrong)

Episode Summary

Why do we believe we are right, even when it's easy to see we're wrong? There are psychological reasons, and there are basic biological and even logical reasons why believing you are right is easier than questioning yourself.

Episode Notes

Why do we believe we are right, even when it's easy to see we're wrong?

There are psychological reasons, and there are basic biological and even logical reasons why believing you are right is easier than questioning yourself.

✨ Sponsor:

With over 91 million episodes, Listen Notes is my new favorite way to find podcasts. Whether I'm researching an author or digging into a niche topic, Listen Notes has the search engine to make it happen. Search for almost anything, for free, right now! Head over to

📮 Ask a Question

If you enjoyed this episode and would like me to discuss a question that you have on the show, drop it over at:

🧡 Leave a Review

If you're enjoying the show and want to support the content, head over to iTunes and leave a review! It helps other developers discover the show and keeps us focused on what matters to you.

Episode Transcription

Why is it so easy to be tricked into believing that we are right? That's what we're talking about in today's episode of Developer Tea. My name is Jonathan Cutrell, and my goal on this show is to help developers like you find clarity, perspective, and purpose in their careers.

There are a host of biases and cognitive distortions, whatever you want to call these things: basically wrong thinking, thinking that doesn't match up with reality, that reinforce this idea that we believe we are correct. We believe that we have the right belief. We assume that our perspective is the clearest or the most accurate. Very rarely do we assume that we're wrong.

Now, sometimes this isn't the case. In subjects where we have little to no knowledge, it's much easier to accept that we are probably wrong, or at least more wrong than somebody who is a known expert. Some of these things are easy to accept, but even in those scenarios, it is still pretty likely that we are overconfident. That's actually one of the biases: overconfidence bias. We also have a lot of confirmation bias, where we seek out information that we believe is going to confirm our previously held beliefs.

So with this host of biases, and we're not going to go through all of them here, we might mention a few more, but with all of these biases reminding us that we believe we are right more often than we actually are, it makes sense to look at this from a meta perspective. Why is it so easy to trick ourselves into believing that we're right?

We're going to talk about this from two different angles today. The first is the psychological angle. Why is it so easy to believe that we are right more often than we're wrong? Well, very often there is little negative consequence to being wrong.
That might sound untrue, or it may not ring true at least, but it is a kind of shared reality: all of us are wrong a lot of the time, and there's not a huge negative consequence. Here's the critical factor: it's hard to know. It's very difficult to know who is right and who is wrong.

And why is this? Well, most of the time it has to do with context. Virtually never do we face situations where we can test whether we are right or wrong with 100% certainty. We're going to talk a little more about that when we get to the second angle later in the show. But this reality is persistent: it's difficult to prove whether you're right or wrong, and very often rhetoric can make someone perceive that you are right. In fact, that's essentially what you're doing to yourself. You're convincing yourself that you are right through a kind of self-rhetoric, rhetoric meaning that you are using some kind of reasoning to convince.

And this is a very hard obstacle to get over, because we have a hard time understanding the difference between what is contextually valid and what is universally correct, what is actually right or true. At a high level this becomes a philosophical discussion more than a nuts-and-bolts correctness discussion, because we very rarely run into situations where we can even access a universal truth. And so it's easy to convince ourselves of something even when we're wrong, because from some vantage point our thoughts are valid, even if that vantage point is skewed or unfair, even if that vantage point is incredibly biased. If you were to put yourself into those biased shoes, then what's being said might seem completely reasonable. And so all of our judgments about what is right, what is correct, what is valid, are incredibly skewed by our perception.
But let's say that we do come across a situation where we can show for sure whether you are right or wrong about a given assertion. That's what we're going to talk about right after we talk about today's sponsor, Listen Notes.

I want to paint this scenario very simply for you. If you have ever tried to find anything on any of the major podcasting applications, you have probably come away a little disappointed, because podcasts are kind of hard to find on the platform applications like Apple's Podcasts app. There's a ton of content out there, so you know that there's a lot to find, but the searches on the big apps tend to return really poor results.

Listen Notes has fixed this problem. It's the best podcast search engine. It searches over 91 million episodes and 1.9 million podcasts. This is an incredible amount of content. For example, if I search "mental models," I immediately get 10,000 results. I'm doing it right now, live, as we're talking here. "Mental models" gives me 10,000 results, and if I scroll down to the bottom of the page, I've got 10 pages worth of episodes about mental models.

Now, you may be wondering: I have my podcasts, I know what I'm going to listen to. Well, you may not be using podcasts for everything that you could be using them for. For example, let's say that you're trying to decide whether or not you should buy a particular book. It might make sense for you to go and listen to a couple of interviews with that author. But it's kind of hard to find those interviews sometimes, right? A lot of the time I've ended up resorting to YouTube when I actually did want a podcast. Listen Notes is going to be my new tool for finding those episodes. Go and check it out. Head over to to get started today. Thanks again to Listen Notes for sponsoring today's episode of Developer Tea.
So we trick ourselves into believing that we are right, a lot of the time. In fact, most of the time we probably believe that we are right. That's the average person's perception, and you've probably heard this before. In fact, I think we've said it on the show before: the average person believes that they are smarter than the median. To illustrate what this means, imagine talking to 100 people, and let's say 90 of them believe that they are among the 10 smartest people in that group. This is obviously illogical. It's not possible. And we still believe it.

Why is it that we believe in our own superiority, or at the very least in our own validity? It may not necessarily be true that we always think we are better than other people, but we often consider ourselves correct or right in a given context. So what exactly is happening here?

Well, we already talked about some of the psychological parts of this, but now I want to talk about the other side of the puzzle. This is the scientific, or logical, side of what it means to be right or wrong. Most of what we talk about, when we're discussing whether somebody has a valid idea or not, are theories of how a given system works. Maybe it's a zoomed-in system that you're working on in your particular atmosphere, at work or at home, within your culture. Or maybe the theory is about something that goes beyond your work or culture but is a part of it. We might talk about what we believe caused a particular outage, or what we believe caused a particular bug, or maybe what we believe caused a particular success in our work or in our personal lives. We come up with these theories, we present them either to ourselves or to others, and then we kind of go on our way. Once we've presented a cohesive theory, it's easy to simply believe it. So let's dig in a little more to why that is.
And there's a very simple reason why it's easy to believe our own theories: humans are very good at sense-making. We are able to create a narrative, a story, some kind of frame or lens around a given event or concept that explains it. We've done it for essentially all of recorded history. Much of our earliest sense-making used things like mythology, and mythology would help describe what happened, or the origin of a given part of our culture, part of our world. And because those explanations held up on their own, in other words, if you only looked at the explanation, it was cohesive and made sense within itself, it was easy to believe them.

So what we're looking at here is logical cohesion for a given idea. In other words, did what you said make sense with itself? Did you set up a frame that didn't contradict itself? We take this signal of not self-contradicting as a point of correctness, because if we can create a good description, our brains are optimized for accepting that good description, partially for the sake of efficiency. We're okay accepting a good-enough description because, and here's the critical factor, it reduces the uncertainty that we otherwise would have to accept.

Think about this for a second. If we can explain something easily and quickly, that is certainly easier on our brains, purely from an energy standpoint, than if we had to say: okay, actually, I want to test this. I'm going to verify this, and I want to see if I can break it. Can this hold up to scrutiny? And this is the critical question.
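That "can this hold up to scrutiny" question can be made concrete in code. Here's a minimal, entirely hypothetical sketch: imagine a function that returns wrong totals, and the cohesive story is "it's a floating-point rounding issue." One test with inputs where rounding is impossible is enough to break the story. The function name and the bug are invented for illustration, not from the episode.

```python
# Hypothetical debugging scenario: total_cents sometimes returns the
# wrong answer, and the cohesive story is "it's floating-point rounding."
def total_cents(prices):
    # Invented buggy implementation: an off-by-one slice drops the last item.
    return sum(prices[:-1])

# Try to break the story instead of accepting it: use exact integer
# inputs, where rounding error cannot occur.
exact_inputs = [100, 250, 399]
result = total_cents(exact_inputs)

# The bug still appears with exact integers, so the rounding story is
# falsified, and we can go look for a different cause.
print(result, sum(exact_inputs))  # 350 749
```

The point of the sketch is the shape of the test, not the specific bug: design an input that the comfortable explanation says should behave correctly, and see whether it does.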
And the thing that makes it very difficult to walk that path, the reason why our brains are reluctant to scrutinize a given story, a given narrative, is that it's nearly impossible to get verification when you are correct. It is very easy to falsify something, in other words, to look at a theory and explain why that theory doesn't work. On the flip side, it is nearly impossible to prove with 100% certainty the correctness of a given theory. And so instead of dealing with that endless level of ambiguity and uncertainty, which is uncomfortable and incredibly inefficient for our brains, we accept something that makes sense and we move on.

Now, the unfortunate part is that we very rarely go back and update those beliefs. If we encounter information that falsifies the original belief, it's much easier to hold on to the original belief than it is to update it. This is where we can really do a lot of work, where we can become better versions of ourselves, where we can actually grow: relearning the process of learning, the process of updating an existing belief. Hopefully we can eventually get to the place, as individuals, where we are skeptical of those original stories, those cohesive but not necessarily true beliefs, where we can scrutinize them and bring in other perspectives to see if they hold up in multiple places rather than just in that local context.

So I recommend two things. The first is to pay very close attention to your resistance to updating a belief. If you have an existing belief, you encounter some new information, and you're quick to say that your original belief was correct, one of the key trigger phrases you can look for is "I still think." "I still think" is a very common phrase that precedes a confirmation of an original belief, a belief that you had before encountering that new information.
When you catch yourself saying "I still think," you don't have to abandon your original belief. Instead, ask: how much did this change my original belief? Not whether it changed entirely, but, for example, "I am 10% less confident in my original belief." This is one way to think about how you assimilate new information and allow it to change and update your pre-existing beliefs.

The second thing I recommend is to always search for falsifying information. This is a critical skill for software engineers. If you cannot falsify your narrative, in other words, if you can't seek out why what you're saying is wrong, then you're likely to chase red herrings, for example, when you're chasing down a bug. It's very easy to believe the wrong thing and spin your wheels, because the narrative is cohesive. You believe that the bug was caused by something, and you're so convinced of it that you're unwilling to consider other scenarios, other causes for that same bug. Instead of trying to rule in your beliefs, always be seeking to rule them out. Find better explanations for events that are happening, for the sustainability of your projects. Finding better explanations means ruling out the bad ones.

Thank you so much for listening to today's episode of Developer Tea. Thank you again to Listen Notes, a brand new sponsor for this show. It's going to become a favorite tool of mine, I believe. You can find episodes on almost anything, with over 91 million episodes to search through. Of course, you can find Developer Tea episodes on there, so you can cross-reference topics that we talk about here with other podcasts. Go and check it out, head over to

Thanks so much for listening to this episode of Developer Tea. One more quick announcement: we are starting an experiment with the Developer Tea community, the people who listen to this show. We are opening up a Discord server.
We've already started this, and the community is starting to grow. It's very small right now, so it's a great time to get in and start developing relationships with this community. Like I said, it's an experiment, it's brand new. If you want an invite to this Discord, reach out directly to me on Twitter. I'm at slash developertea or slash jcutrell; you can DM me on either one of those, and I will send you the link. Thanks so much for listening to today's episode, and until next time, enjoy your tea. Bye.