Developer Tea

Two More Guidelines for Better Feedback Loops (Part Three)

Episode Summary

Are your processes useful? In the last couple of episodes, we've been talking about feedback loops, and in today's episode we're continuing that discussion, zooming out to make sure our feedback loops are actually proving useful.

Episode Notes

Developers tend to create processes for themselves and a shared process with their team, but what defines whether or not a process is useful? In today's episode, we're talking about feedback loop processes and the reality of their usefulness.

🙏Thanks to our Sponsor: GiveWell

Giving is hard. When you donate, how do you know what a charity can actually accomplish with your money? GiveWell is solving that problem by connecting your money with charities where you can see the direct impact of the dollars you spend.

Visit GiveWell.org/DeveloperTea to find out about effective charities and get your donation matched up to $1,000.

🧡 Leave a Review

If you're enjoying the show and want to support the content, head over to iTunes and leave a review! It helps other developers discover the show and keeps us focused on what matters to you.

🍵 Subscribe to the Tea Break Challenge

This is a daily challenge designed to help you become more self-aware and be a better developer so you can have a positive impact on the people around you. Check it out and give it a try at https://www.teabreakchallenge.com/.

Episode Transcription

Are your processes useful? This is a difficult question to answer. And for the most part, people find some kind of use in the processes that they've adopted. Developers tend to create some process for themselves, then have a shared process on the team, and then they have additional processes, perhaps at the company level. But what defines whether or not that process is useful? You could say that a successful process is a useful one, and how you define success in that case is how you imagine the utility of that process should be measured. But what often happens with our processes, specifically the feedback loop processes that we've been talking about in the last two episodes of Developer Tea, is that we make them useful in theory but not in reality. That's what we're talking about in today's episode of Developer Tea. My name is Jonathan Cutrell, and my goal on this show is to help driven developers like you find clarity, perspective, and purpose in their careers.

So in the last couple of episodes, we've discussed this idea and we've reiterated the concept of the feedback loop: the different stages, the measurement, the evaluation, the reaction, and then the restart of that whole loop again. But we haven't really dug into how you can look at feedback loops from a high-level perspective and identify the types of problems that occur at each of those points. We've talked about, for example, the idea that you might validate the input or the measurement itself, making sure that your measuring stick, whatever it is, is consistent, and that you might engage in double-loop thinking. We talked about this in the last episode: identifying the models that you're using and the assumptions that you make, and the evaluation process that includes the rules for what the reaction should be. We've talked about cycle time as well: how long should you wait between iterations? But how can we take a step back and validate that our feedback loops are actually working? We can have all of these pieces in place. We can identify the places where we need feedback loops, set up a good measurement system, create detailed and thorough models to use in our evaluation, and react with thoughtful action in response to those evaluations. We can have that loop on the exact right timing, and things still don't work. Why is that? What is it exactly that's contributing to this? The answer isn't always clear, but there are some things that we haven't talked about that I want to discuss in today's episode. We're going to talk about two of them.

The first one is something we alluded to in the last episode when we were talking about adding new feedback loops and removing unnecessary feedback loops. That is the signal-to-noise ratio. Just imagine that you want to evaluate the productivity of an individual developer on a team. The way that the team is organized, you have developers who engage in pair programming sessions, they go through sprints, they may be assigned bugs, and they have a flexible time-off policy. How might you go about actually measuring the productivity of this individual developer? Well, you have to think about all of the things that contribute to productivity for a single developer, and all of the things that might conflate the picture or add, as we're calling it in this example, noise to this feedback loop: all of the influences that may bias your measurements in one direction or another. You might be able to take good measurements of how much code this developer is producing.
Obviously, we can get that information from version control, for example. You might be able to take some qualitative or even quantitative feedback from this individual's peers, and you may even take into account the time that this developer has taken off. All of this may help you reduce the noise, but the fundamental problem you face when you're trying to measure a single point in a highly collaborative system, where that single point depends heavily on many other things, is that there are so many unknown collaborating factors that when you measure one thing, you are necessarily measuring more than one thing. For example, imagine that you are measuring the productivity of a junior engineer, and they skyrocket in their productivity. That junior engineer should likely receive some kind of recognition for this growth. But how is this junior engineer growing? What factors are allowing them to grow, or even supporting their growth directly? Perhaps there's a senior engineer on the team who is spending extra, unmeasured effort mentoring the junior engineer. How do you uncover this whole web of entangled things? Does this mean that any time we want to measure something, we can't, that we're hamstrung in a situation where everything is noisy? That's not at all what this means. But what it does mean is that when you approach these situations, keep in mind the complexity of what you're trying to measure. There's no simple measure that shows what productivity actually is for an individual developer. It's like a real-world signal with a lot of noise mixed in: it's very likely that it's hard, and perhaps even impossible, to separate all of the signal from the noise.

We're going to continue with this metaphor for a moment. A given radio can tune to a frequency, but most radios don't tune only to a very specific and narrow frequency. You'll notice, for example, on old analog radios, that as you tune close to a frequency, you start to hear what is broadcast on that frequency. Now of course, the main tuning of that radio is the center of that frequency range, but almost every radio is going to provide some level of fuzziness. I am certainly not an expert in radio frequencies, but you can learn more about this if you Google Q factor, which typically stands for quality factor. This basically defines the bandwidth that your receiver is going to pick up. If you have a higher bandwidth, then you're going to pick up more around whatever central frequency you're tuned to. You can relate this to a feedback model in the sense that some channels pick up a lot of information. For example, imagine that you have an explicit feedback channel that you gather from your teammates. You have a survey, and it's just an open text field, right? "Any feedback you want to give me." Well, that's going to provide you a very broad bandwidth of information. Sometimes you're trying to home in on some specific things, but you're going to get a lot of extra information. A high-bandwidth channel necessarily includes more noise. So you narrow down whatever that feedback mechanism is: for example, you provide a specific question with open text feedback, or, at a narrower bandwidth, you could provide a multiple-choice question. Maybe even narrower would be a question that has a true or false answer, a Boolean question.
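To make that continuum concrete, here is a small, hypothetical sketch, not from the episode itself, of the same concern asked through three feedback channels of decreasing bandwidth. The types, prompts, and options are invented purely for illustration.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical feedback "channels" at three bandwidths.
# Broader formats carry more information -- and more noise.

@dataclass
class OpenTextQuestion:
    # Broadest bandwidth: rich signal, but lots of noise to filter.
    prompt: str = "Any feedback you'd like to share?"
    response: Optional[str] = None

@dataclass
class MultipleChoiceQuestion:
    # Narrower: constrained answers, easier to aggregate.
    prompt: str
    options: List[str] = field(default_factory=lambda: ["Rarely", "Sometimes", "Often"])
    response: Optional[str] = None

@dataclass
class BooleanQuestion:
    # Narrowest: least noise, but also the least information.
    prompt: str
    response: Optional[bool] = None

# The same underlying concern, asked at three different bandwidths.
questions = [
    OpenTextQuestion(),
    MultipleChoiceQuestion(prompt="How often do code reviews block your work?"),
    BooleanQuestion(prompt="Did a code review block you this sprint?"),
]
```

The broader the format, the more you can learn from a single response, and the more noise you have to filter out before you can evaluate anything.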
This continuum of bandwidth goes from the greatest amount of noise but also the greatest amount of information, down to the least amount of noise but also the least amount of information. As you increase the amount of information that you're getting, you're probably going to pick up a higher proportion of noise along with it. This model is not just limited to radio frequencies, certainly. The guideline here, as you're creating your feedback mechanisms, is to consider what sources of noise you might be ignoring. What sources of noise are going to be important to this particular feedback loop? We're going to take a quick break to talk about today's sponsor, GiveWell, and then we're going to come back and talk about ways that you can iterate on the evaluation stage in your feedback loops.

Today's episode is sponsored by GiveWell. It is the season of giving. For many people, this is a time when you traditionally are giving of your time and your resources, but giving can be really hard. It's hard to know what to give our friends, much less how to give well to a charity, and what that charity can actually accomplish with your money. Imagine you want to help children. You found two trustworthy organizations. They both are going to use your money as responsibly as possible, but they run totally different programs. One can save a child's life for every $300,000, but the other one can save a child's life for every $3,000. If you could tell the difference up front, you'd probably donate to the one that was 100 times better at saving children's lives. This is what GiveWell does. They go and do the research for you. They spend 20,000 hours each year researching which charities can do the most with your donation. They recommend a short list of the best charities they've found, and they share it with donors like you at no cost. It's totally free to get this list, and on top of this, GiveWell doesn't take a cut. Donors can have a huge impact. GiveWell's recommended charities work to prevent children from dying of cheaply preventable diseases and help people in dire poverty. You can learn how much good your donation could do by heading over to givewell.org slash Developer Tea. Again, the recommendations are free, and they don't take any cut of your donation. And first-time donors, this is the important part, listen up: first-time donors will have their donation matched up to $1,000 if they donate through givewell.org slash Developer Tea. Thanks again to GiveWell for sponsoring today's episode of Developer Tea.

We're talking about feedback loops on today's episode of Developer Tea. We've actually been talking about it all week long. I highly recommend, if you are getting value out of today's episode, that you go back and listen to the last two episodes. And especially if you like those, go and subscribe in whatever podcasting app you're currently using. But I want to talk about the next guideline here. It's actually more of a short list of tips as you are iterating on your evaluation stage. This is the part where we take the raw information that we get from some measurement and we convert it to some kind of action. We have rules, some kind of algorithm, whether implicit or explicit, that we use to interpret that information and then create some kind of reactive imperative from it.
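As a concrete illustration of that structure, here is a minimal, hypothetical sketch of a feedback loop whose evaluation stage is an explicit rule set that turns a raw measurement into a reactive imperative. None of this is from the episode; the function names, thresholds, and placeholder measurement are invented for illustration.

```python
import time

def measure() -> float:
    # Take a raw measurement, e.g. completed story points this sprint.
    return 42.0  # placeholder value; in practice, pull from your tracker

def evaluate(raw: float, baseline: float) -> str:
    # Explicit rule set: convert the raw measurement into a reactive imperative.
    if raw < 0.8 * baseline:
        return "investigate"  # velocity dropped noticeably
    if raw > 1.5 * baseline:
        return "verify"       # suspiciously large jump: check the measurement itself
    return "continue"

def react(action: str) -> None:
    # Act on the evaluation; here we just print, but in practice you'd kick off a process.
    print(f"Action for this iteration: {action}")

def feedback_loop(baseline: float, cycle_time_s: float, iterations: int) -> None:
    for _ in range(iterations):
        raw = measure()
        action = evaluate(raw, baseline)
        react(action)
        time.sleep(cycle_time_s)  # cycle time: how long to wait before restarting the loop

feedback_loop(baseline=40.0, cycle_time_s=0.0, iterations=3)
```

The specific thresholds don't matter. What matters is that the rules in the evaluation stage are written down where they can be inspected, which is exactly what makes the double-loop thinking discussed next possible.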
And as we talked about in the last episode, we need to engage in some double-loop thinking, that is, making sure we have the right underlying models and that we're not just engaging in simple rule-based mechanisms when we might need something more complex, or vice versa. Perhaps we're doing something that's more complex and we might need to engage in something that's simple and rule-based. But I want to take a step back and think about your evaluation stage a little bit more. As you're iterating on your evaluation stage, you can ask questions like this. Does your evaluation fill in blanks? What do I mean by this? How does it decide, and what assumptions are being made in your evaluation process? For example, let's say that you are, like many teams, using story points. You're estimating the work that you're doing by using some kind of story point mechanism. If you've ever heard of T-shirt sizing, this is essentially the same idea: you're assigning some magnitude, a number, to your given stories, and then you're evaluating the team's progress against it. One assumption that you're making here, one of the blanks that you're filling in, is that the team is consistently and accurately estimating the work. Now, it's important to note that assumptions are not necessarily bad things. They can be bad, but we have to make assumptions to be able to operate. If we were always asking the question of whether the team is accurate about their estimations, we would paralyze ourselves. We wouldn't be able to have a feedback model at all, because we can always ask questions that work to invalidate our feedback models. But we do need to recognize explicitly what assumptions we are making with our feedback models.

Another question you can ask: do your takeaways from the raw data actually reflect a reasonably correct picture of that data? Going back to the story point example, if your data is showing that the team's productivity is going through the roof, but the product doesn't seem to be growing that quickly, if the data says they've doubled in productivity but they actually seem to be slowing down from a broader perspective, that's a heuristic suggesting that the underlying assumptions may actually be problematic. Maybe the data itself is dirty; it's wrong. Or, on the flip side, perhaps your insight into what work is actually being done is incorrect. Your perception of the velocity of the team doesn't match the actual velocity of the team. There may be, for example, underlying performance problems that the team has been working really hard to resolve, and those changes are not as clearly visible.

Finally, the kind of question that you might ask when you are iterating on your evaluation stage is: what human errors are likely to be present? What human errors are likely to be present? First of all, we have to recognize that no process eliminates human error. There's no process that a human can create that 100% eliminates human error. We can hedge against certain types of bias, against certain types of human error, against certain types of behavior, or balance those things out, but we cannot eliminate human error altogether. It's important to name what those likely errors are. This sometimes takes a deeper knowledge of psychology; it's one of the things that we talk about on this show so that you can have a better intuition for what those errors may be.
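Here's a small, hypothetical sketch of that kind of sanity check: comparing reported story-point velocity against an independent signal and flagging sprints where the two diverge enough that the underlying estimation assumption deserves a second look. The numbers, the "deploys" signal, and the threshold are all invented for illustration.

```python
# Hypothetical cross-check: reported story points vs. an independent signal
# (e.g., deployed changes per sprint). Numbers are invented for illustration.

velocity_by_sprint = [21, 23, 25, 48, 52]      # reported story points
deploys_by_sprint  = [11, 12, 13, 12, 11]      # independently observed output

def flag_divergence(points, deploys, threshold=1.5):
    # Flag sprints where reported velocity grew much faster than observed output.
    flagged = []
    base_points, base_deploys = points[0], deploys[0]
    for sprint, (p, d) in enumerate(zip(points, deploys), start=1):
        point_growth = p / base_points
        deploy_growth = d / base_deploys
        # If story points grew far faster than the independent signal,
        # the estimation assumption (or the data itself) deserves a second look.
        if point_growth > threshold * deploy_growth:
            flagged.append(sprint)
    return flagged

print(flag_divergence(velocity_by_sprint, deploys_by_sprint))  # prints [4, 5]
```

A flag like this doesn't tell you what went wrong; it only tells you that the evaluation stage is leaning on an assumption worth re-examining. And the humans doing that re-examining bring their own likely errors with them.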
The real answer may be that you don't know what the human error is, and you need to create some kind of expectation of finding those human errors. In the same way, we don't know what unexpected events might delay us, but that doesn't mean that we should act like there won't be unexpected events. Just because we don't know what the human error will be doesn't mean that we should act like there won't be human error.

Thank you so much for listening to today's episode of Developer Tea. This is a little bit longer of an episode about feedback loops, and I hope you've enjoyed these three episodes on the subject. There's a lot more content that we could get through. I think when we first mentioned this series, we said we were going to cover eight different guidelines; we've done six or so, but with multiple subpoints and bonus points, so sorry that we didn't follow the layout exactly. Hopefully these discussions are helpful to you as an engineer. If they are, I highly encourage you to share this with another person. That's going to do two things. One, if the other person finds it valuable, they're going to appreciate you sharing it with them, so it actually will build a positive rapport between the two of you. But also, of course, this helps the show. Whenever we can grow and reach new developers, that reach is what keeps this show alive. So I personally appreciate those of you who share this with your friends, with your co-workers, the people that you think are going to be most impacted by what we do here.

Today's episode also wouldn't be possible without GiveWell. Head over to givewell.org slash Developer Tea. You can get your donation matched up to $1,000. Of course, that $1,000 is going to go a long way, because GiveWell has found charities that are highly effective, and they have a list of those that you can access freely. That's at GiveWell.org slash Developer Tea.

Today's episode is a part of the Spec network. If you are a designer or developer looking to level up in your career, Spec is specifically designed for you. Head over to spec.fm to find other shows like this one. Today's episode was produced by Sarah Jackson. My name is Jonathan Cutrell. And until next time, enjoy your tea.