Developer Tea

Holiday Re-Air: Interview w/ Gabriel Weinberg

Episode Summary

Today we re-air an interview from 2019 with Gabriel Weinberg, the CEO of DuckDuckGo. Happy holidays, and we hope you enjoy this interview!

Episode Notes

Today's guest, Gabriel Weinberg, the CEO of DuckDuckGo, uses connections to help steer the company. What we're talking about today with Gabriel are mental models for building a team and a business.

In part 1 of this interview, we dive into Gabriel's recent book, Super Thinking. This is a big book of mental models. Don't miss part two of this interview, airing on Friday, December 27th.

Get in touch

If you have questions about today's episode, want to start a conversation about today's topic, or just want to let us know that you found this episode valuable, I encourage you to join the conversation or start your own on our community platform: Spectrum.chat/specfm/developer-tea

🧡 Leave a Review

If you're enjoying the show and want to support the content, head over to iTunes and leave a review! It helps other developers discover the show and keeps us focused on what matters to you.

Episode Transcription

I hope you have had a wonderful 2019. My name is Jonathan Cutrell. You're listening to Developer Tea, and my goal on the show is to help driven developers find clarity, perspective, and purpose in their careers. Today's episode is a re-air of an interview that I did back in May of this year with Gabriel Weinberg. Gabriel is the CEO of DuckDuckGo, and he also released a book this year called Super Thinking. We talk about that on this episode and in the second part of this interview. To date, we have had over 3 million downloads just this year, and I'm incredibly grateful to be able to continue doing the show. Coming up on January 5th, we'll be rolling over our fifth year doing the show. If you've enjoyed listening to Developer Tea and you'd like to give back to the show, I'd encourage you to go and leave a review in iTunes, or even better, share this episode or another episode that you remember from this year that had a strong impact on you or on your career. Share it with someone that you think will similarly be impacted. Now let's get straight into this re-aired interview with Gabriel Weinberg. Gabriel, welcome to the show. Hi, thanks for having me. Of course, people who are coming to this episode most likely know that you are the CEO and founder of DuckDuckGo. I'd love for you to take a moment and share what you hope people will remember you for. What do you want people to know you to be? What kind of legacy would you like to leave, I guess? That's an interesting question. I am only 39, which sounds old to some people and young to other people. So I hope that I don't know the answer to that question yet. But I have my own kind of North Star, which is a mental model, actually, about kind of a mission statement. I mean, the North Star in reality is Polaris, a star that always points north, so you can figure out what direction you're going to navigate.
But as a metaphor, it's like a personal mission statement or company mission statement that kind of directs your activities. And mine is really to maximize making a unique, positive impact on the world. And so, you know, there are a number of ways to date I've been trying to do that. One is through DuckDuckGo and helping people really get privacy on the internet; our mission statement as a company is to raise the standard of trust online. But I've also written two books now that are in completely different areas that I thought would also be unique impacts. And it kind of maximizes because many people can read the books. So I'd hope to continue that mission statement. I don't know. I'm definitely continuing with DuckDuckGo, and I hope there are other things, too, that kind of fall on that list of things before I exit the planet. Yeah. I mean, to answer this question is really difficult, right? Because you're kind of predicting what you would want once you're gone. And it's kind of hard to know what wanting looks like when you don't exist anymore. Yeah, exactly. But that can be helpful. I mean, I would hope that people would look back and say, you know, he made a positive impact on the world. And then on a personal level, you know, that my family enjoyed the time with me. Yeah. So you mentioned mental models, and you mentioned that you've written two books. And one of the big topics of today's conversation is going to be your book, Super Thinking, which you co-wrote with Lauren McCann. And Lauren is also your wife, right? Yeah, she's my wife. It's kind of interesting. My wife's middle name is McKay, so when I first saw the book, I had to do a double take to make sure that I wasn't seeing things there. But I've read through the majority of the book. Unfortunately, I've only had it for a short period before we had this conversation.
But this is going to go up on my shelf as one of the recurring books, alongside Daniel Kahneman's Thinking, Fast and Slow and a few other books that I think are really good grounding books, and I'm going to go ahead and read through the rest of it. I'd love for you to talk for a moment and just kind of give a general overview. I know that's such a broad question, but maybe a definition, or how you explain to people what a mental model is in the first place. Sure. I mean, a mental model is really just a fancy word for concept. And, you know, there are millions, billions, some big number of concepts in the world. And there are many mental models for developers, right? There are all sorts of design patterns, and every one of those design patterns, MVC or whatever, is a mental model. It's a way to think abstractly about, you know, development. And then every discipline has some of those. So, like, I was a physics major in college, and there are a ton of physics concepts you learn in high school and then, you know, college. Now, some of those concepts are special. And they're special because they're useful beyond the discipline at hand. And so, for example, in physics, there is a concept of critical mass, which is, you know, the mass you need to sustain a nuclear chain reaction. But that concept is very useful outside of physics because you can apply it, say, to a product. You know, if you're building a product and you think that product is a critical mass situation, that means that if you can get a certain number of users behind it, or get a certain amount of data in it, you can unlock something different. And so if you know that applies, you can automatically think more strategically and do kind of higher-order thinking about that situation. Now, for developers, there are a bunch of mental models in development that are very useful outside of development.
So that might be a good one to explain. For example, technical debt. Developers are very familiar with technical debt, but that concept extends to, like, diversity debt inside a company, or management debt, or any other kind of debt where you need to pay that down later. And so the idea with Super Thinking is there are about 300 of these generally useful mental models for decision making. And if you can get a grasp on all of them and kind of have them in your head at any given time, when you're faced with a random problem, you can be an amazing problem solver and just a better decision maker, both professionally and personally. And this is such a critical, I guess, meta concept, since it's the concept of concepts, for developers and non-developers alike to grasp. I'd love to know, you know, I know there are people who are listening to this episode right now who are thinking, okay, well, you know, mental models, this idea of taking a concept-oriented approach, how can I even practice that? I'm told exactly what to do at my job. You know, I'm given very clear specifications. I'm not in a decision-making position. So, you know, I can imagine they're reaching for the pause button, or they're closing the podcast app right now, because they don't think that this applies. But I think you absolutely know that it does. And I'd love for you to speak to that for a moment for somebody who is an individual contributor. Is there a model that you can kind of hook their interest with for a moment and say, okay, yes, actually, this isn't just for the managers. It's not just for the CEOs. Pretty much anybody can get some useful information out of these models. Yeah, absolutely. So there are about 300 in the book, and they're grouped into nine narrative chapters on different topics and themes. One is how to spend your time wisely, which is basically how to be productive; that's chapter three.
And there are a bunch of mental models in there. Yeah. And there are a bunch of things that are very useful for individual contributors and developers. I wrote down a whole list, but here are a few that we could kind of kick off with. So first, there's really a group of three that are all about the same concept: multitasking, which I'm sure people have heard of; the top idea in your mind; and deep work. And the basic idea here is developers are trying to solve pretty difficult problems that don't have obvious solutions. And to do that, you really have to have some creativity. But the creative process, unfortunately, is not very linear. And if you just sit down and try to crank at a hard problem, you may not solve it. Oftentimes the creative breakthrough may come when you're taking a walk or in the shower. Now, the top idea in your mind, that is a mental model to really describe a bit how your mind works, and that there's generally kind of one idea kicking around the back of your mind that you're thinking creatively about. Yeah. And I think that relates to multitasking, because humans unfortunately just can't multitask. Honestly, if you try to do two things at once, you're context switching between them. And that's also probably known to developers: if you go try to, like, read Reddit or something and come back, you start all over again. So you generally don't want to multitask. And you generally also want to have this one idea be the idea that you're trying to solve, so that when you're kicking around in the shower or walking around, that's the idea that you're solving. The third mental model that kind of works in there is this concept called deep work, based on another book by Cal Newport of the same name.
And what it's saying is, to really attack those creative problems, and work through the solutions you may come up with in the shower, you really need dedicated amounts of time where you can't be interrupted by meetings or other things, and you should block those out in your calendar, to really have this notion of working deeply: deep work. And so if you bring those together, you want to think, what is the critical idea I'm trying to solve? You want to make sure that's kind of top of your mind. Then you want to block out whole areas where you can do deep work. And within those areas, you want to cut down on multitasking, and, you know, shut off all your notifications and things like that so you're not distracted. So that's one example. I'd love to give some more as well, but we can start there. Yeah, yeah. So I want to zoom in on something that you mentioned here, because I think a lot of developers intuitively know that interruption is really detrimental to our work. I know that I can barely think if somebody is talking, you know, 10 feet away from me; I can barely read a sentence. And I think that's actually true for a lot of developers, not only reading, but also reasoning about something. But something that you mentioned there that I think is really critical is, it's not just external interruption that matters. You can also be interrupting yourself. I think we have this kind of illusion as developers that the best developers are able to do 20 things in a given day, and they're able to hold all of this information in their head and, you know, be an expert at all of those things simultaneously. And, you know, what you're calling out here, I think, is really critical for developers, and really for everyone, to grasp.
And that is that, you know, your problem-solving skills, or whatever your creative output is, when it's divided, it doesn't get divided equally. There is a loss factor. When you divide it, there's something missing because of those context switches. But I also like this idea that if you explicitly identify the top idea in your mind, then you have this new mechanism of being able to kind of explicitly say no to the other ideas. I think that's really critical. So I love the mental model of the top idea in your mind. Yeah, just to give one story, which we recount in the book: it came from a now venture capitalist who was an individual contributor at one time, Keith Rabois, at PayPal in the early days with Peter Thiel. And what he recognized, or Peter recognized, is that people want to feel productive at work. And that makes a lot of sense. And so if they have a really hard problem, call it the A-plus problem, but they also have a bunch of other things they could do, call them B-plus problems, they're going to navigate toward those B-plus problems all the time, because they're easy to solve, and you can check them off and feel good about yourself. But if you do that, you're skipping that A-plus problem, which would be more impactful to your work and the company. And so you have a whole group of people just always solving B-plus problems. And the answer to that, you know, is to really make that A-plus problem the top idea in your mind. Now, you can do that as an individual contributor; as a leader in the company, I also try to do that for everybody. So we actually have a thread every week called the top priorities thread, where everyone is explicitly listing what their top priority for the week is. And that is how we operationalize that idea for our company: you're explicitly writing down that top idea in your mind and what you're trying to do.
And so that's a really good way to get that out there. Yeah, I think there are a lot of implications in this book throughout for managers, certainly, and not to leave out the individual contributors, as we already mentioned. But certainly, you know, when you're thinking about, for example, analysis, right? This is such an important thing for managers to consider: analysis and cognitive biases. There's a whole list of these, not only in the book, but also in an excellent Medium post you wrote that kind of summarizes some of the things that you're thinking about. But I'd love for you to talk about maybe one really important factor for managers, a model that you think a lot of managers might miss. In particular, if you have one that's relevant to developer or engineering managers, I'd love to hear, you know, your thoughts on what you think engineering managers, unfortunately, don't often use, but you think they should. Yeah, I'll give you two that are kind of related that we use a lot at DuckDuckGo. So this idea of top priorities that we're discussing really bridges, you know, individual contributors and management. And really the key job of management, the first job at least, is to make sure, you know, the right people are working on the right things. And implicit in that is: what are the right things? And so there's this model that comes from economics called opportunity cost, which is, you know, the cost of what you're working on is what you're not working on. And to rephrase that another way: a lot of people, including developers, can come up with lots of important projects to do in the company. We've got to refactor this piece of code, we've got to make this new feature, we've got to fix this bug.
And you can make a case for why they're all important to do. But that's not really the case you need to be making as a manager. And ultimately, whether you're a manager or an individual contributor, you want to be making the case: I want to do this thing, not just because it's important, but because it's more important than all these other things. And when you're doing that, you're explicitly looking at what the opportunity cost is, because if I work on this, I can't work on these other things. And so constantly thinking about the people that you're working with, and whether they're working on the highest-leverage thing, is a great mental model for thinking about priorities. Now, the other related one I wanted to talk about is called the forcing function. And what that is, is a scheduled process to force everybody to think critically. And it doesn't just have to be about what's the thing to work on; it could be, are we still doing the right thing? Is this code well structured? Et cetera. So let me give you some examples of forcing functions at DuckDuckGo. We have a project lifecycle that's pretty structured. Every project has a kickoff call, and in that kickoff call, we have what we call a premortem, where we ask, how might this project fail? And then in the middle of the project, if it's a long one, we'll do a mid-mortem, where we're asking, you know, is this project failing? And why or why not? And then after every project, whether it was very successful or not, we have a postmortem, where we say, you know, what went well here, what didn't go well, what could be better. And all those things are forcing functions, because they're kind of pre-scheduled points of critical thinking, to really think about what has gone well and how we can improve.
And some other ones that are very developer-specific: in that process, if it's an engineering project, we have a technical design template, which a lot of companies do, where we're explicitly writing down in this template what we're trying to get done. And then there's a discussion around that, which is also a forcing function to really think about the technical design. I love this idea of a forcing function; I'm coming up with a hundred different ways that I think forcing functions would be useful, including a couple that you actually mentioned here. And this will probably bring us into another discussion about mental models. You mentioned this idea of the premortem, forcing yourself to think about failure ahead of time, and that is something that can protect against our natural optimism. But for people who don't understand, you know, why you might force thinking about failure ahead of time, how does that help when you are trying to analyze and prepare for failure in the future? Yeah, so it turns out, like, we have a lot of good ideas for different things and projects, and in development, maybe ways to do this code, or infrastructure to use. But if you don't write it down and think critically about it, you might miss something that, if you thought about it a little harder, you might realize: wow, I could have done this in a completely different direction and saved a month of time. Or maybe we shouldn't do this project at all. Or, you know, maybe we could do this a lot simpler if we ran a little experiment first.
And so what a premortem does is ask a really simple question: how might this project fail? And by asking a question like that, you are putting yourself in the mindset of failure to help you actually succeed. It's a little counterintuitive, but when you ask the question, it forces you to really think about what could go wrong. And you're giving yourself the opportunity and the leeway to kind of think about that, because people don't want to fail, so it gives you a safe space to do so. And then you can think about, okay, well, you know, maybe this thing I'm thinking about is too complex, we won't figure out this algorithm, or this code is not going to work together with that code. You can start to get very specific about these things. And then once you list them out, then you can decide whether they're really risky or not. And in my experience, something always gets turned up that was not thought of before. Yeah, absolutely. And you're hitting on so much about, you know, the human brain is such an interesting thing, and we don't really totally understand it yet. But one of the things that we know about it is that there are some kind of quick-action things that the brain does. One of those quick-action things is it tries to solve for gaps; it tries to fill in when there is a missing piece of information. Sometimes we'll just make it up. And so when you ask questions, for example, what could go wrong, our brains immediately, essentially involuntarily, jump into action. And so questions on their own are kind of forcing functions, and they can be incredibly powerful. So, for example, another forcing function question might be: what is the real question that you are asking? And this kind of calls back to a Kahneman-Tversky thing. They did some research and found that often we kind of skirt around difficult questions by answering a substitute question.
And so if you kind of force somebody to restate what they really mean, then the question that they're really asking can come forth, right? Or the answer that they're really giving: what is the real answer that you want to give me? That can come forth. And so it's a really cool thing to ask that question a little bit differently. Yeah, that reminds me of a couple of things. At MIT, there was this thing called the help instance, which was an amazing concept that I tried to replicate but never could. And what it was, is it was effectively like a Slack channel, that would be the most common thing nowadays. But before instant messaging even existed, it was on a protocol called Zephyr that predated IM; it's like the first kind of IM. And basically all these people, like 300, 400 people, would be connected to the platform, and people would ask technical questions around programming. And invariably, I'd say 50% of the time, the first response was: what are you really trying to do? Someone would ask a question, and they'd be like, what are you really trying to do? And we kind of operationalized that at DuckDuckGo with each of these templates I'm talking about, like technical design; even our projects have a very specific template. And so the project template has a background and objective, and the objective is really listing out very specific success criteria for the project. And in the technical design, the first thing at the top is the problem statement that you're defining. And most of the conversation that happens in the kickoff call, the technical design call, is about clarifying that problem statement and the success criteria. And I feel that's where the most fruitful conversation comes, and it often gets changed.
You know, people are like, is that really the problem that you're solving? That kind of thing. Right, yeah. And the same kind of answer can come from it: maybe it really is the problem that you're trying to solve, but it's an abstract representation of the problem that you're trying to solve. And this goes to another mental model, the five whys. Can you explain kind of what the five whys is and where it comes from? Yeah. Five whys is a great mental model to use in postmortems or other places, especially when you're doing bug finding. And what it really is trying to do is get to another mental model called the root cause, versus the proximate cause. So when something bad happens, let's say a bug, the proximate cause is the thing that immediately caused it, that you noticed caused it. So, like, maybe it's in the search engine, which we run: say you type in a query and it breaks the server, which was a real bug at some point in our history. Oh, wow. A long time ago. And so the proximate cause would be: you type in this particular query and it causes the site to crash. The root cause is, what is really the line of code at the root of this whole problem? Right. And as people are familiar with a stack trace, that might be, you know, several lines up on the stack trace. And that's what the five whys does: it helps you get to that root cause by asking why. So you say, okay, that query is causing the site to break, but why? And you go, okay, well, it's because it ran this function. And you say, well, why does that break it? Well, that function called this function. Okay, but why does that matter? Well, that function ran this other line and called this data that used a regular expression that had infinite backtracking, which is a real example, by the way. And so then you get to the end of it. And if you do it correctly, it doesn't have to be five questions.
It could be more or less, but you get to the real root cause of the problem. And then once you know what the root cause is, then you can actually have a good fix. And that's what you want to do in postmortems. You want to figure out: okay, I found a root cause through this root cause analysis, and the five whys would be one example of a root cause analysis. We now know what the real problem is, and we can decide we need to refactor that code, or whatever the fix is. Yeah. There's an interesting ceiling that I've found with this, and this is just kind of more entertaining than it is particularly useful, but maybe it is actually useful. The ceiling that I found is related to jumping from one domain to another. So eventually, when you ask why enough, you're going to get into, like, a very personal reason. Like, why was that code there in the first place? Well, it was written by this particular person. Why was it written by this particular person? Well, because they're employed here. Why are they employed here? Right? So you end up jumping into a different domain entirely. So it's important, I guess, at least partially important, to know: where is the actionable why in this particular chain of whys? It does seem a little bit exaggerated in that example, but in other examples, it may not be. I think that's a great point. I mean, that actually corresponds exactly to what we wrote in the book as the example. We use the example of the Challenger explosion, which I'm old enough to actually remember. Unfortunately, it was a kind of sad day: a space shuttle exploded. And the proximate cause of the explosion was that an O-ring basically failed.
And if you keep asking the five whys of that, you get to a point where what happened was, they launched the shuttle in very cold weather, colder than they had ever launched anything before, and outside the acceptable range of the O-ring. And that's why all this stuff failed. And then you ask why, and it then jumped, like you said, really to a management problem, where the engineers actually flagged this problem, even made a presentation about it, and the managers overrode it, because they didn't feel the risk was as high. They didn't kind of believe the risk assessment, and they were just wrong, which is another bias mental model, by the way, that we talk about, called optimistic probability bias, where you're exaggerating the probability of something going your way. And it turns out it was a management problem: they didn't have enough checks and balances. And so the real solution there was really to add more checks and balances, and to make sure things like engineering could have a veto on safety. And so sometimes that jump is important. I think it's good to ask all the way down, though, to figure out the real root cause. Yeah, it actually speaks to another model that you have in the book, which is first-principles thinking, certainly not a unique discussion; that's kind of in the zeitgeist of discussions on mental models. But I imagine that you can connect this five whys into first-principles thinking. You kind of get down to a bare level of, you know, what are we actually refactoring here? Are we really going to try to put a band-aid on this problem, when we really actually have a management problem, and we're going to try to, you know, skirt around it with engineering? That's probably not a good solution, right? Yeah.
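[Editor's note: the "regular expression with infinite backtracking" bug Gabriel describes as the five whys example is easy to reproduce. The snippet below is a generic illustration of catastrophic backtracking in Python's `re` module; the pattern is made up for demonstration and is not DuckDuckGo's actual code.]

```python
import re
import time

# A classic "catastrophic backtracking" pattern: the nested quantifiers
# in (a+)+ force the regex engine to try exponentially many ways to
# split the input into groups before concluding there is no match.
EVIL_PATTERN = re.compile(r"^(a+)+b$")

def timed_match(text: str) -> float:
    """Return the seconds spent attempting to match EVIL_PATTERN."""
    start = time.perf_counter()
    EVIL_PATTERN.match(text)
    return time.perf_counter() - start

# Matching succeeds instantly when the input ends in "b"...
assert EVIL_PATTERN.match("aaab") is not None

# ...but a near-miss input (all "a"s, no "b") makes the engine backtrack
# through every possible grouping. Each extra character roughly doubles
# the work, so a slightly longer query can go from instant to very slow.
for n in (5, 15, 20):
    print(n, "a's took", round(timed_match("a" * n), 4), "seconds")
```

The fix in a case like this is usually to rewrite the pattern without nested quantifiers (here `^a+b$` matches the same strings) or to use a linear-time engine such as RE2; the five whys is what gets you from "this query crashes the server" down to the pattern itself.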
Well, interestingly, you mention that, because I literally wrote down a list of kind of top mental models in the book that I think we use for developers, and the ones that you just listed, five whys and first principles, I wrote down in order at the top of the list. So I think we're on the same wavelength here. But yeah, I agree. And I think first principles is actually even more broadly useful for developers and everybody, because what it really is asking you to do is throw out your assumptions. And sometimes assumptions are great, but when you're first starting out on a project, it's really useful to throw them out, at least for some time, and think about: am I doing the right thing? And so an example would be, you know, you have design patterns, and design patterns are generally useful and should be followed, kind of another mental model, except when they shouldn't. Or you're always using a certain infrastructure because that's what you have and have always used before. And so your assumption is that it's the best infrastructure for this, or the best programming language for this, or the best tool. And it may not be. And so it's useful to take a step back before you kind of start something and to list out: okay, why am I using that? And from first principles, you would say: okay, here's the problem I'm really trying to solve, and I think this is the right tool because of X, Y, or Z, where X, Y, or Z is not just because I've used it before, or that's what I have available; it's because it's the best tool for the job. Now, maybe it is the best tool for the job because it's the one that I know and the one the company has authorized. That might be the ultimate reason, but you're explicitly writing that down and questioning it. Right. Yeah. That last piece is so critical to remember, that we aren't just throwing everything out every single time.
And choosing the most optimal solution based only on the requirements. We're also taking into account that we are participants in the system. So it's based on all of the variables, not just on some vacuum where I don't exist and where there is some sufficiently talented developer who knows this tool to a reasonable proficiency level. That's not a realistic thing. But that actually speaks to another mental model, which is the thought experiment. I'm going to summarize thought experiments, and then you tell me where I'm missing things, fill in the gaps. You create a situation that's not necessarily realistic, but it allows you to control the variables. For example, you can say: in this vacuum, let's say cost is not a factor. You can actually simulate that. You can think about: if cost were no factor, and if we had this particular person who's proficient in this language, then we would choose framework X. Now we can adjust those variables and see how the outcomes might change.

Yeah, that's exactly right. People have probably done thought experiments forever, but the real popularity of the technique came from physics. Oftentimes in physics there weren't experiments you could even run; you couldn't write the code, because it was about some theoretical concept that wasn't possible to experiment on yet. So what people could do was run these thought experiments and try to think it through. The famous one is Schrodinger's cat from quantum physics. I won't get into the quantum physics of it, but the basic idea is that you have a cat in a box, and it could be killed by a pellet of radiation that is released randomly, and you don't know whether that happens or not.
And before you open the box, is the cat dead or alive? It sounds like an open-and-shut case, but once you start thinking about it deeply, it isn't; people literally argued about it for decades. That's the power of a thought experiment. We talked earlier about the pre-mortem, where you're asking why things could fail; that's an example of a thought experiment. But I really like your example of going to extremes, because that's what really tests the boundaries. It's kind of like testing code, you know? But instead of writing the code and giving extreme values to test your functions, you're thinking about it ahead of time. And a lot of these things, like the pre-mortem and the forcing function, are all about effectively saving you a lot of time and energy, because you're thinking about these things before you do anything. Not that you always have to do it beforehand, but you're using your mind as a tool, without having to build everything.

Yeah. And I like to think about all of these models, and there are so many more, and I'm sure you have a couple more that you'd like to go through, as kind of guardrails. They're not always necessarily diametrically opposed to each other, but one may guide you in a direction that another may guide you away from. So it's important to use multiple models to wrap your mind around a given situation or a given decision, and not to rely on just one. It's incredibly important to understand that there are multiple models for how numbers may progress. For example, you have exponential growth. And developers actually have a lot of these already, especially those who are more formally trained. We have mental models for algorithms, right?
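Gabriel's analogy above, a thought experiment as testing code with extreme values before you build anything, can be sketched in code. The function and its cost model here are hypothetical, purely for illustration; the idea is that probing the boundaries (zero, one, very large) before committing is the programmatic version of "going to extremes":

```python
def estimated_cost(team_size: int, rate_per_dev: float = 100_000.0) -> float:
    """Toy model: annual cost of a team, where communication overhead
    grows with the number of pairwise links between developers.
    Hypothetical model, not from the interview."""
    if team_size < 0:
        raise ValueError("team size cannot be negative")
    # Pairwise communication links: n * (n - 1) / 2, each adding 5% overhead.
    overhead = 0.05 * team_size * (team_size - 1) / 2
    return team_size * rate_per_dev * (1 + overhead)

# "Thought experiment" as boundary testing: probe the extremes on paper
# (or in a few lines) before building the real thing.
print(estimated_cost(0))   # no team, no cost
print(estimated_cost(1))   # a single dev has no communication overhead
print(estimated_cost(50))  # at the extreme, overhead dominates the base cost
```

Adjusting one variable at a time (team size here, but it could be cost, latency, or headcount) and watching how the outcome changes is exactly the "control the variables" move described above.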
Big O analysis is exactly that: we have models of how the amount of time a function takes grows with the size of its input. You know that an O(log n) algorithm is going to grow logarithmically, versus something that is exponential. And we also have the concept of constants. So there are all of these things that developers may not initially think of as mental models, but they absolutely translate. I think it's important that we recognize that there's not just one kind of model, and that we use them as guardrails: one model balances another model out.

Yeah, totally agree. I like to think of them as shortcuts to higher-level thinking, and, you know, a shortcut may be the wrong shortcut. So you want to think about things from multiple angles and see which model is right for the situation. But in general, you want to be using them, because they'll make you so much more productive.

Another huge thank you to Gabriel for joining me on the show earlier this year. And thank you for listening to Developer Tea in 2019. We won't be airing an episode on Wednesday, but we will air another episode on Friday. Thank you so much for listening. If you have yet to subscribe to the show and you'd like not to miss out on the second part of this interview, go ahead and subscribe on whatever podcasting app you're currently using. This is the best way to keep up with all of the episodes that we release. Of course, you can find this episode and every other episode at spec.fm. Today's episode was produced by Sarah Jackson. My name is Jonathan Cutrell. And until next time, enjoy your tea.