Developer Tea

3 Practical Steps for Avoiding Narrative Biases

Episode Summary

One of our brain's main jobs is to predict our safety and survival, and stories are one of the best ways to pass down a framework to predict the future based on past experiences. In today's episode, we're talking about the power of narrative and ways to break out of story-driven thinking.

Episode Notes

One of our brain's main jobs is to predict our safety and survival, and stories are one of the best ways to pass down a framework to predict the future based on past experiences. In today's episode, we're talking about the power of narrative and ways to break out of story-driven thinking.

Today's Episode is Brought to You by: Linode

Instantly deploy and manage an SSD server in the Linode Cloud. Get a server running in seconds with your choice of Linux distro, resources, and node location. Developer Tea listeners can get a $20 credit and try it out for free when you visit: linode.com/developertea and use promo code: DeveloperTea2018

P.S. They're also hiring! Visit https://www.linode.com/careers to see what careers are available to you.

Get in touch

If you have questions about today's episode, want to start a conversation about today's topic, or just want to let us know if you found this episode valuable, I encourage you to join the conversation or start your own on our community platform: Spectrum.chat/specfm/developer-tea

🧡 Leave a Review

If you're enjoying the show and want to support the content, head over to iTunes and leave a review! It helps other developers discover the show and keeps us focused on what matters to you.

Episode Transcription

How do humans adapt to stories? Humans are effectively addicted to stories. It's not so much that we are chemically addicted to something that happens when we hear a story, but more that we make sense of the world by understanding everything as it flows from one thing to the next. Our brains need to understand the world in terms of stories, for a lot of reasons, but perhaps the most important, core reason is so that we can do some pattern matching. We can see other similar stories and predict the outcomes of those stories. And as we've talked about in past episodes, one of the brain's main jobs is to predict things, particularly predicting whether or not we will be safe, whether we will survive. So stories are useful because they provide a straightforward way of creating characters, of casting characters into certain roles. And when you know the roles that the characters play and you know the story, well, that gives you a framework for predicting what will happen. And perhaps equally importantly, it gives you a framework for evaluating what happened in the past.

Humans are actually very good at this. We provide reasons for virtually everything that we see going on. And sometimes we're right. Sometimes our reasoning, even if it is immediate and automatic, even if we didn't really think it through, is good enough. Our reasoning approximates the truth to a degree that it's useful; we can act on that information. Other times we are grossly incorrect. The stories that we tell ourselves about whatever happened or whatever will happen, they're wrong wholesale. The characters are wrong, the casting is wrong, the plot is wrong. And we still act on these stories. Even when we're given information that should really break apart whatever that story is, that timeline, that theory that we have, we still rely more on stories than we do on raw information. We talked about this in the past when we talked about the power of narratives.

In today's episode, I want to focus on some tangible ways that you can break out of this story-driven thinking. Now, you're not going to break out of it entirely. This is built into your brain over millions of years of evolution. We have these things that are kind of embedded, and this is largely because our brains are more concerned with survival than they are with accuracy, with being right, with problem solving at a higher level. So our society and our culture have outgrown or outpaced the growth of our brains in a lot of ways. But we aren't going to focus on whether or not that's true; that's a different argument for a different time. Instead, I want to focus on some ways that you can break out of relying on the convenience of a narrative.

My name is Jonathan Cutrell. You're listening to Developer Tea. My goal in this show is to help driven developers connect to their career purpose so they can do better work and have a positive influence on the people around them.

Today's episode is sponsored by Linode. With Linode, you can get up and running with a Linux server in the cloud in just a few minutes. You'll launch it from Linode's new cloud.linode.com beta manager. You can get started with Linode for as little as $5 a month, which is going to give you a server with one gigabyte of RAM on SSD storage. You get 24/7 friendly support and a 40-gigabit internal network. So if you ever spin up another instance, for example, those two instances can easily communicate with each other at a very high bandwidth.
You can transfer a lot of information in a very short period of time between two Linode nodes. Linode is a company of developers, for developers. They even open source their beta manager, and it's actually open for comments. You can head over to github.com/linode, and if you want to go directly to that project, it's github.com/linode/manager, to see an app that is both open source and an app that you could have built with Linode's public beta API. They have a lot of awesome stuff going on. And by the way, they are hiring. If you are a developer and you're interested in this kind of stuff, head over to linode.com/careers to check that out. To get started with $20 worth of credit, and I guess we haven't mentioned that yet, Linode is going to give you $20 worth of credit, which is essentially four free months on the intro tier. To get that $20 worth of credit, use the code DeveloperTea2018 at checkout. Head over to linode.com/developertea, that's all one word, to get started today. Thank you again, Linode, for sponsoring Developer Tea.

So we tell ourselves stories, and these stories are often approximating something based on our perception. But here is where we start having problems, breakdowns. You've heard the idea that there are two sides to every story. But the truth is that there are perhaps close to an infinite number of sides to every story, depending on your perception, starting with basic things like the fallibility of our memory, not understanding how other people behave, and assuming that we have all of the relevant information. Unfortunately, this just isn't true. We can't look back far enough into the past to see what led to this particular event, whatever the event is. And we can't look into the future. We can't rerun the same event over and over to test our assumptions. And so we tell the story based on a very limited perception, based on very limited memory, and certainly based on a lack of information.

And I want to zoom in on this specific reality that is very difficult to grasp. The amount of information that is associated with really any given event that we experience in our lives is vast. It's so vast that there is really no possible way to capture all of that information. To understand all of the factors that lead up to a given event, you'd really have to rewind all the way through history. You'd have to understand not only the facts immediately surrounding the event, but also all of the background and all of the lead-up that explains those core first-hand facts. And we already mentioned that stories can be useful. It's not necessary to have every single fact to be able to get a general idea, a working idea, of what is happening. But unfortunately, we also don't have a way to measure how well those stories approximate reality. We don't have a way to measure how much randomness is to blame for a given event, a given outcome. It's incredibly difficult to run some kind of experiment on many of the things that we would care about as developers, many of the events that we would care about as developers.

So many of our errors come from our misguided concept of our own perception, of our own infallibility in perceiving. And we perpetuate this problem on a daily basis when we look back, for example. We had an idea that something might happen, and then if that thing does happen, we look back and we bolster our own beliefs and our own confidence in our ability to tell the future.
We overestimate just how convinced we were that that thing would occur. And when we look back and something that we thought would happen doesn't happen, we grossly underestimate, and sometimes entirely erase, the fact that we had originally thought one thing and reality didn't line up. To summarize the way that our brains work: we are constantly telling ourselves, even unintentionally, that we're correct, that we have something figured out, that our beliefs are the right ones, that our perception is complete, that it's unfettered, that our approach is the best approach. Studies even show that people have difficulty remembering a time when they didn't believe what they currently believe. This problem is almost universal amongst humans, in different capacities or in different amounts.

Perhaps the most important and most actionable reality, as it relates to the stories that we tell ourselves, is that we usually draw from the most immediately available information. Very rarely do we try to step out of our own perception. Very rarely do we try to go beyond what we would normally be able to perceive. Very rarely do we collect more information than we feel is necessary. Instead, humans are driven to make sense of things. We are driven to resolve cognitive dissonance, to resolve what is unknown, by creating a story based off of what is known. Available evidence, then, often overshadows and perhaps even completely eclipses the possibility of unavailable evidence. This is the driving concept behind the phrase "what you see is all there is." We've mentioned this phrase a handful of times on the show before. It originated with Daniel Kahneman in his book Thinking, Fast and Slow. Hopefully you're not tired of hearing about that book, but this concept is especially relevant for developers. I want to discuss ways to short-circuit, or hopefully find a way to bias in the opposite direction, away from "what you see is all there is" problems. "What you see is all there is" is not a bias in and of itself; it actually drives multiple other biases. I'm going to give you three very practical ways to try to bias in the opposite direction, to try to avoid just seeking a resolution to your cognitive dissonance. Instead, become comfortable with the unknown and remain curious throughout your career, throughout your problem solving, throughout your bug hunting, whatever it is that you're doing in a given situation that could be negatively affected by a propensity to believe a story rather than seeking some kind of truth.

The first practical step that I have for you is to always systematically ask, not sometimes, but always systematically ask: what am I not seeing? What is not visible? What is invisible? What evidence is not available to me? This is a trigger to remind you that what you see in front of you may not necessarily encompass all that is available to see. Perhaps there are invisible factors that you don't see.

I'll give you a practical example of this. I recently experienced a bug, and this particular bug was infamous. In fact, I can tell you what it is. It was an OAuth bug, a Google-specific OAuth bug, and we had been trying to trace it down, and we found an article online where somebody was talking about this specific bug, and they had listed 11 factors, things that it could be. Now, as a developer, when you find an article like this, it is very easy to fall victim to the idea that the answer must be one of those things. It seems strange that we would allow ourselves to jump so quickly to this conclusion, but it's also easy to believe.
It's easy to believe that someone who has gone through the time to list all of these potential problems has ruled out other problems, other possible issues. In this case, we didn't have any of the problems on that list. If we had just taken that story and believed it, instead of questioning how comprehensive this list is, whether there is any way that it isn't something on this list, what is invisible to us right now, if we hadn't taken the time to ask that, then we could have chased down every single one of the items on that list. Instead, we did indeed, very early, come to the conclusion that this list is not an authoritative list. This list does not cover every possible cause for this particular OAuth bug.

The same is true for any other bug or problem that you're trying to solve. When you look for others who have had similar issues, and we've all done this before, we Google a problem that we're having, we find somebody on Stack Overflow who seems to be having virtually the same problem, and we implement whatever it is that they say is the solution, whatever the top-rated answer is. We'll implement it, and then we find out, well, no, that didn't fix our problem. That tendency to pattern match is incredibly useful, incredibly valuable, but it can also lead us to believe something that is not true about our code, about our work, about reality. So always procedurally ask the question: what am I not seeing? What is invisible? What can't be seen? What evidence have we not collected yet?

Okay, practical advice number two: never discount the role of randomness. Machines are often as predictable as we would like for them to be. When we run our code and it produces the same results a hundred times, we can reasonably expect it to produce those same results on run 101. But even with this high level of predictability, when we add in the unpredictability of human error, when we add in the unpredictability of interplay between multiple machines that is defined by humans, when we consider factors that are outside of that kind of controlled ecosystem, we quickly run into randomness and entropy that cause things that we would never expect and could never predict. The problems that you face in your career will go beyond computing. They will go beyond code. Most of the problems that you face, and arguably all of them, are at their core people problems. And so when you start dealing with other people, randomness and entropy start playing a bigger role. We as developers often discount randomness and entropy, largely because of the stability of our platforms. Many other professions don't have that kind of luxury, but in truth, we don't actually have that kind of luxury either. If most of what we do in our careers is largely based on relationships, then we have to learn that the role of randomness is still incredibly important for developers.

For example, let's say you're a new developer, a brand new graduate, and you're looking for a job. You could be turned down for five jobs, and it could have nothing to do with your skills or with your application process. It could be that the role of randomness played a bigger factor than any of those other things. Now, the role of randomness doesn't mean that we throw our hands up and quit. Instead, we view everything as playing the odds. You could be the best qualified candidate and still not get the job. But if you continue to work hard, the odds are in your favor.
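To put some rough numbers on that idea of playing the odds, here's a minimal sketch. The 20% per-application success rate is purely an assumed number for illustration, not a real statistic from the episode, but it shows how persistence shifts the cumulative odds in your favor even when any single attempt stays risky.

    # A quick sketch of "playing the odds."
    # The 20% per-application success rate is an assumed number for illustration only.

    def chance_of_at_least_one_offer(p_single: float, attempts: int) -> float:
        """Probability of at least one success across independent attempts: 1 - (1 - p)^n."""
        return 1 - (1 - p_single) ** attempts

    p = 0.20  # assumed chance that any single application works out
    for n in (1, 5, 10, 20):
        print(f"{n:>2} applications -> {chance_of_at_least_one_offer(p, n):.0%} chance of at least one offer")

    # Output:
    #  1 applications -> 20% chance of at least one offer
    #  5 applications -> 67% chance of at least one offer
    # 10 applications -> 89% chance of at least one offer
    # 20 applications -> 99% chance of at least one offer

Randomness can still sink any single attempt, but the more attempts you stack up, the less any one unlucky outcome matters.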
If you continue to apply and not allow randomness to beat you that one time, and then create unnecessary negative meaning out of that one experience, if instead you play the odds and you continue to move forward, your chances of success are more dependable than your random chances of failure. Think about that for a second. Mathematically, randomness should be relatively equally distributed. That means, as far as randomness is concerned, we are all equally lucky or unlucky. On the flip side, by continuing to work hard, you are creating weight in a given direction. You're not equally distributing your likelihood of success; you're weighing it in the positive direction. So eventually, you are statistically very likely to succeed in those scenarios.

Practical advice number three: introduce new constraints, new methods, and new opinions into your process as often as possible. When you introduce new ways of thinking, the ways that you develop stories, the pool of information that you have to draw on gets larger. So inevitably, when your brain still resorts to telling you a story, those stories become better. They become more accurate. Even though they aren't always going to be perfectly accurate, you're kind of raising your baseline by increasing your exposure to, for example, multiple programming languages, multiple types of bugs, multiple types of working environments, many other people's perceptions. Now your working knowledge, your working memory, what you are exposed to, the available evidence that you have when you tell yourself a story, is much more broad. And therefore you're kind of spreading out your risk. You're increasing the number of references that you have to pull from.

On that note, if you are building an engineering team, or any kind of team, seek diversity. Not just because it's a fair thing to do and because it's good for humanity, but also because it's actually better for your team, for this very reason: your baseline heuristics will improve simply because you have more available experiences, you have a broader range of things to pull from, and automatically your assumptions become better. If you're not building a team, then I encourage you, as an individual developer, to seek out a broad range of experience, both with other people and on your own, with different types of technology and also with different types of problems, different domains. It would be silly for me not to mention that you can get a lot of this kind of exposure simply through reading. Reading books about other people's cultures, reading books about other people's problems and the way they solve them, about other industries, about history, really across the spectrum. Reading is going to increase your experiences at kind of the lowest cost ratio that you have, so reading will improve your baseline decision making.

Thank you so much for listening to today's episode of Developer Tea. Thank you again to Linode for sponsoring today's episode. You can get started with $20 worth of credit that Linode is providing to you as a developer; that's essentially four months for free on their starter plan. Use the code DeveloperTea2018 at checkout and head over to linode.com/developertea. That's linode.com/developertea, all one word. Thanks so much for listening, and until next time, enjoy your tea.