In today's episode, we discuss factors that lead to decision variance when constructing software.
Giving someone a broad software problem is a little like asking them to plant a tree. In today's episode, we talk about how different mental effects can cause variance in decision-making for complex and compound decisions.
Sentry tells you about errors in your code before your customers have a chance to encounter them.
Not only do we tell you about them, we also give you all the details you’ll need to be able to fix them. You’ll see exactly how many users have been impacted by a bug, the stack trace, the commit that the error was released as part of, the engineer who wrote the line of code that is currently busted, and a lot more.
Give it a try for yourself at Sentry.io
If you have questions about today's episode, want to start a conversation about today's topic, or just want to let us know whether you found this episode valuable, I encourage you to join the conversation or start your own on our community platform, Spectrum.chat/specfm/developer-tea
If you're enjoying the show and want to support the content, head over to iTunes and leave a review! It helps other developers discover the show and keeps us focused on what matters to you.
This is a daily challenge designed to help you become more self-aware and a better developer so you can have a positive impact on the people around you. Check it out and give it a try at https://www.teabreakchallenge.com/
In the last few episodes, we've been discussing the construction of software: how it happens not in its final form in code, but well before that; how we develop beliefs and models of the world; how we derive so much of our action by asking implicit questions or answering explicit questions; and even how we trick ourselves by answering questions that aren't being asked. In today's episode, we're going to continue this discussion of how we develop software, and we're going to move a little further toward the actual writing of the software, and specifically the design of the software itself. On a day-to-day basis, how are we choosing how we're going to accomplish whatever we need to accomplish in the software that we're writing? That's what we're talking about in today's episode. My name is Jonathan Cutrell, and you're listening to Developer Tea. This has been a series on the construction of software. My goal on this show is to help driven developers like you connect to your career purpose and do better work so you can have a positive influence on the people around you. We like to believe that if you were to hand the same software problem to three developers at a given company, there wouldn't be a lot of variance in how those three developers solve that problem. Now, this may be more true if the problem is extremely narrow in scope, and if the problem isn't actually framed as a broad problem but instead as a list of specific, leading problems, the kind that result in a specific set of features. But this is very unlikely to take place. The truth is, given sufficiently complex problems, individuals are going to solve them in complexly different ways. And part of the reason for this, and perhaps part of the reason for everything we're talking about in this series, is the models that we discussed in the last episode.
If you didn't listen to that episode, I encourage you to go back and listen to it, and also do a little bit of research on mental models and how we form our beliefs. That's the backdrop for all of these discussions. But it should be noted that if you zoom in a little bit, even if we had similar models and beliefs in the way that we view the software and the way that we view the world, we may come out with different answers to the same problem. This is often because the problem is articulated in a broad way, and a broad problem can be expressed in very different ways. You can think about it like this: a problem that you receive for a given software project is kind of like being told to go and plant a tree. How do you plant a tree exactly? What tree do you choose? How long should you spend planting the tree? Should you plant multiple trees, so that if one tree doesn't grow properly you have a fallback tree? What kind of soil should you be using? Are you actually just cultivating what nature is already doing? Because nature also plants trees. At what stage are you planting the tree: the seed, the seedling, or perhaps something even bigger? These are just some of the questions for a fairly simple problem like planting a tree, and the answers to these questions could lead you down very different paths. Now imagine taking a much more complex problem set and walking down the line of reasoning for all of the features that would express the solution to that problem. Of course, going from one expression of a solution to another is going to result in variance. This happens even when the same person is solving the same kind of problem two times in a row.
We're going to take a quick sponsor break, and then we're going to come back and talk about some of the specific effects that create variance in the way that we make decisions, even if it's the same person making the same kind of decision from one day to the next. Today's episode is sponsored by Sentry. Your code is broken, and Sentry is going to help you fix it. Relying on customers to report errors in your code is kind of like treating customers as an off-site QA team, and you're not even paying them for that. In fact, they're most likely to leave your product altogether without ever reporting any of those problems in the first place. Ideally, we could solve this ahead of time. We could solve it with great testing, with a really good QA process, but there's no way we're going to cover every scenario. Our tests are not going to be complete, and we're not even going to be able to, for example, simulate the right kinds of load on our application. We can't simulate the real thing entirely, and we can't predict all the things that people are going to do with our application either. Until we can predict the future, responding to real events is one of the best strategies for dealing with bugs. You shouldn't have just one weapon in your arsenal in the fight against these bugs. You have to approach it from many different angles, and Sentry provides you with an excellent angle to approach it from. Sentry helps you catch bugs before your users see them.
You'll get immediate alerts in whatever alert channels you're already using, like Slack or push notifications, and you can also get more information about that error, for example the full stack trace and the commit that is responsible for that error, so you can fix it quickly. Go get started at sentry.io. Thank you again to Sentry for sponsoring today's episode of Developer Tea. After that kind of discussion, you are likely to break away from what you may have even had an affinity for previously. You're likely to push against that specific practice. So in that scenario, you have a lot of volatility: you can have swinging opinions that change based on conversations that you have. Another more long-term, or closer to permanent, effect that we can observe both as developers and just as humans is the confirmation bias. There are a lot of other effects and biases; this is a very well-studied phenomenon in psychology. But the basic idea is that if you already have a belief, and especially if you have had that belief reinforced, and if you have committed to that belief in a somewhat public setting, for example amongst your co-workers, it is very difficult for you to change that belief, act on that change, and go back on those public commitments. So if you, for example, believed very strongly in one particular direction or paradigm for solving a given problem, and then you get new information (maybe the problem shifts a little bit, maybe you didn't have all the information up front, or maybe your perspective shifts and you have a new way of thinking about the problem), and your old belief is less in line with what the evidence is showing you, you are likely to do two things. One, reject that evidence and hold on to your old belief.
You're likely to hold on to that old belief even though you can cognitively recognize that it's probably not the best belief to hang on to. And the second thing you'll likely do is seek out people or evidence that supports your previously held belief. This can obviously have major impacts on software development timelines. It can have major impacts on how well you and your team work together. And of course, it's going to have major impacts on your own ability to actually solve the problems that are in front of you. The third example, and there are plenty more, so it's very important that you continuously try to learn more about how we make decisions as developers, is a phenomenon called the possibility effect. The basic idea of the possibility effect is that if something is possible, even if it is extremely improbable, we still see the difference between impossible and possible, no matter how unlikely, as a major difference. The possibility effect is relevant for a number of reasons. Specifically, I want to point out one example, and that's optimization. As developers, we are very drawn to the concept of optimizing our code. And this isn't necessarily a bad thing. We are told to learn how algorithms perform, for example; we're told to understand how to create an adequately optimized program. And this happens in a bunch of different stages. But very often developers apply this concept of optimization almost as if it's a moral rule.
And so what we end up with is a number of developers working on optimizing code that either doesn't need to be optimized, or optimizing the wrong part of the code when they could stand to gain much better optimization in other places. There are a couple of reasons this is related to the possibility effect. One, if a developer sees that there is a route to optimization, very often they are tempted to take that route, even if there are better ways they could be spending their time, or if it compromises the readability of that same piece of code. The other reason is that developers are often looking at numbers to determine the success of their optimizations. So if we go from, let's say, 20 milliseconds to 11 milliseconds, this seems like a major jump in performance. The problem is that we are zoomed in on these numbers, and we're only looking at them within their own context. Rather than just saying we know we can make it faster, we need to understand what the optimum number for this piece of code actually is. Is this code that's going to run once, for example, so that optimizing it any further is a waste of energy and resources? There are a variety of biases dealing with calculations and numbers that are worth looking at. And it's not just biases; it's also these kinds of psychological effects and phenomena that cause us to see numbers in distorted ways. I encourage you to go and read a little bit about this, because you're going to run into many situations where you're dealing with numbers as a developer, and getting a handle on how to see those numbers more clearly is going to help you in the long run. Thank you so much for listening to today's episode of Developer Tea.
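To make the point above concrete, here is a minimal sketch, not from the episode itself, of how you might put an optimization's numbers in context before deciding it's worth the effort. The functions `slow_sum` and `fast_sum` and the `calls_per_day` figure are made-up placeholders; the idea is simply to measure both versions and translate the per-call saving into the time actually saved at the code's real call frequency.

```python
import timeit

def slow_sum(n):
    # Naive loop: stands in for the "unoptimized" code path.
    total = 0
    for i in range(n):
        total += i
    return total

def fast_sum(n):
    # Closed-form rewrite: stands in for the "optimized" version.
    return n * (n - 1) // 2

# Average per-call time for each version over many runs.
runs = 1000
slow_t = timeit.timeit(lambda: slow_sum(10_000), number=runs) / runs
fast_t = timeit.timeit(lambda: fast_sum(10_000), number=runs) / runs

# Context matters: how often does this code actually run?
calls_per_day = 10  # assumption: a rarely-executed code path
saved_per_day = (slow_t - fast_t) * calls_per_day
print(f"per-call saving: {(slow_t - fast_t) * 1000:.3f} ms, "
      f"total saving: {saved_per_day * 1000:.3f} ms/day")
```

A "20 ms to 11 ms" improvement looks dramatic in isolation, but if the total saving per day is a few milliseconds, the readability cost of the optimization may not be worth paying.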
I hope you're enjoying this series on how we construct software: the mental processes, the models, and the questions and decisions that we have on our plates as developers. I encourage you to continue doing some more digging on the topics that we bring up in these episodes. These are some of my favorite topics that we talk about on the show, and I know that they are going to be valuable to you in your career as well. Thank you again to today's sponsor, Sentry. To get started finding bugs before your users see them, head over to sentry.io to sign up today. If you haven't signed up for the Tea Break Challenge, I encourage you to sign up today. The Tea Break Challenge is a daily soft skills exercise delivered to your email. Go and check it out at teabreakchallenge.com. If you haven't seen the other shows on spec.fm, the Spec network was created for designers and developers like you who are looking to level up in your careers. Go and check it out at spec.fm. Thank you so much for listening. And until next time, enjoy your tea.