One of the most amazing things about the human brain is its ability to make connections. We talk with DuckDuckGo CEO Gabriel Weinberg about just that in part 2 of this interview.
Today's guest, Gabriel Weinberg, the CEO of DuckDuckGo, uses connections to help steer the company. What we're talking about today with Gabriel are mental models for building a team and a business.
In part 2 of this interview, we dive deeper into Gabriel's mental models, specifically for engineers. His book, Super Thinking, which we base the discussion on, can be found here: Super Thinking.
If you have questions about today's episode, want to start a conversation about today's topic, or just want to let us know if you found this episode valuable, I encourage you to join the conversation or start your own on our community platform: Spectrum.chat/specfm/developer-tea
If you're enjoying the show and want to support the content, head over to iTunes and leave a review! It helps other developers discover the show and keeps us focused on what matters to you.
This is a daily challenge designed to help you become more self-aware and be a better developer so you can have a positive impact on the people around you. Check it out and give it a try at https://www.teabreakchallenge.com/.
Our sponsor GitPrime published a free book - 20 Patterns to Watch for Engineering Teams - based on data from thousands of enterprise engineering teams. It's an excellent field guide to help you debug your development with data.
Go to GitPrime.com/20Patterns to download the book and get a printed copy mailed to you - for free. Check it out at GitPrime.com/20Patterns.
In today's episode, we continue our discussion about mental models with Gabriel Weinberg. Gabriel runs DuckDuckGo as the CEO, and he's written a book about mental models called Super Thinking. It's a list of 300 mental models, and we've been going through some of them on this episode and in part one. If you haven't listened to part one, I encourage you to go and listen to it. It gives you a primer on what mental models are, and then we start going through some that are useful to engineers. We'll continue that discussion and have a few other questions for Gabriel in today's episode. My name is Jonathan Cutrell, you're listening to Developer Tea, and my goal on this show is to help driven developers like you find clarity, perspective, and purpose in your career. Now let's get straight into the interview with Gabriel Weinberg. I'd love to know, do you have a way of testing your mappings when you come across a situation and you're trying to map a decision onto a model? How do you validate that mapping? Yeah, I do, but you'll see if you like the answer or not. Because of the biases that we've talked about some, and there are many, including the one you just mentioned, a bias toward a particular predilection for a certain way of looking at things, I think it's very hard to do yourself. The main thing I rely on, and that I've also tried to operationalize at DuckDuckGo, is to have multiple people involved. A lot of these meetings that we're talking about are actually collaborative meetings where someone has written down what they think is the right thing to do, which may include literally writing down some named mental models as part of the thinking, and other people are questioning those assumptions. We've taken it so far that, as you mentioned, it's in our values: we have three values at DuckDuckGo, and one of them is "question assumptions" and one of them is "validate direction."
So they're totally built into our processes, and we encourage people to question other people's assumptions, and that can be challenging at times. That's the way I've found to make things work, not just at DuckDuckGo but all the time in my personal life too. There, it's generally my wife questioning my assumptions. But I do think you generally need somebody else; I think it's very hard to do alone. So I totally agree with that. That's actually something we've talked about quite a bit on the show. It's one of the things that I believe Ray Dalio talks about in his book Principles: the idea of people who are believable in subjects. You have a preconceived notion, and you check that notion against the people who are most believable in that particular category. In addition to that, I think it's really critical that we check these ideas against a diverse group of people. When I say diverse, I don't just mean people of different backgrounds. I also mean people with diverse experiences and diverse perspectives. The reasoning is that if you have a lot in common with another person, not only do you have surface-level things in common, like the same music or hanging out at the same places, but you may also have the same kind of perspectives, and those will shape your biases. So you end up making similar decisions and similar judgment calls. If you have a bunch of people in the same room who look the same, act the same, and have similar experiences in life, then they're probably also going to have similar decision-making. Yeah, absolutely. I'd put another layer on that as well, and I totally agree with that. We have a core objective at DuckDuckGo to hire a diverse team, in that diversity-of-thought kind of way.
But one other thing I've realized is that even if you have a diverse team, say you have a company objective or a big project that a bunch of developers have been working on for a while, they can all get into the same mindset, even if they do have a degree of diversity, about that being the right decision. You often need someone outside that group to be the "question assumptions" person. So we try to do this in a number of ways. One thing is we have all the objectives in the company report out weekly on what's going on, and we do that at the project level too. Anyone can follow any project and objective, and people outside the project or objective are encouraged to ask what might be considered stupid questions, or just share other thoughts they have. And often those questions from the outsiders are things that really shake things to the core. Not always, but it's often those outsiders who are asking things that the insiders are just too far down a direction to be able to question anymore. So I'm going to read something from Wikipedia that's exactly relevant to this. I assume you're familiar with the concept of a red team. Yes, yes. And that's exactly what this is. It's an independent group that challenges an organization to improve its effectiveness by assuming an adversarial role or point of view. So that's the very formalized Wikipedia version of this, but the idea is useful. I believe it's been used in military groups. It's certainly been used in journalism, where somebody who has not been involved in the actual progress of a piece of reporting will come in and try to tear the story apart before it goes out. Exactly. And there's a reason why those clichés, you know, a fresh pair of eyes and things like that, are true, because it really is a fresh perspective that is required in some of these cases.
Yes, it's kind of like you have a local sense of diversity and then a more global or long-running sense of diversity, and both are important. Exactly. I'd love for you to share, I know you have a list of these that you think are particularly relevant to developers, I'd love for you to share another one of those. Perhaps one that's not as intuitive to us, maybe even the opposite of what you might intuitively assume. Yeah, I have a couple. You can tell me how counterintuitive they are. So one that I think is, in practice, not very intuitive is the concept of path dependence. What this means is you make little decisions all the time, and you may not realize that those decisions may have cascading effects that really constrain your behavior further on. For example, in a developer context, that might be a quick choice to use a tool or a library which you didn't fully evaluate as the best tool or library for the job. And then all of a sudden, a month into the project, or sometime later, you're running into trouble, but that library is now so embedded in your code that it would take a lot of effort to strip out, or the tool is embedded in your infrastructure. A canonical example in a company, for developers, is that maybe really early on, someone didn't think too hard about what bug-reporting software to use. And then all of a sudden, you have 5,000 bugs in it and you don't want to switch systems, even though it's a suboptimal system. So, with that mental model in mind, you want to check those decisions a little bit harder and think: are these going to create a path-dependence problem or not? And the opposite model is preserving optionality: if there's a choice where you're not really committing to something fully, that might be the better choice at the moment.
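To make preserving optionality concrete in code, here is a minimal sketch, not from the book or from DuckDuckGo, and with every name invented for illustration, of keeping a not-yet-final tool choice, like the bug-tracking example, behind a thin interface so the commitment stays cheap to reverse:

```python
# Hypothetical sketch: keep a tool choice reversible by depending on a
# small interface instead of a specific vendor (preserving optionality).

class BugTracker:
    """The only surface application code is allowed to touch."""
    def report(self, title: str, detail: str) -> int:
        raise NotImplementedError

class InMemoryTracker(BugTracker):
    """Stand-in backend; a real adapter would call a vendor's API here."""
    def __init__(self):
        self._bugs = []

    def report(self, title, detail):
        self._bugs.append((title, detail))
        return len(self._bugs)  # ticket id

def file_bug(tracker: BugTracker, title: str, detail: str) -> int:
    # Call sites depend on BugTracker, not on the vendor, so switching
    # systems later means writing one new adapter rather than migrating
    # every call site.
    return tracker.report(title, detail)
```

Under this (hypothetical) seam, the 5,000-bugs-in-a-suboptimal-system scenario costs one new subclass to escape; without it, the early choice quietly becomes path-dependent.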
Now, that can also have a cost, so you have to weigh that. But yeah, so tell me, was that counterintuitive? Yeah, I think it's not necessarily intuitive that a simple decision today could have cascading effects into the future. On the other hand, you have developers who will spend a lot of time trying to analyze what the perfect choice is, and the second model that you mentioned, preserving optionality, may actually be a better use of their time. So perhaps, just as a concrete example, instead of trying to arduously determine which particular code package you want to use, maybe you make an adapter so that you can switch those out in the future, right? That would be a good use of your time and energy, it will likely pay dividends in the future, and it's a fairly small investment. Exactly. A model related to that is analysis paralysis, which can happen to developers where they just go way too deep into something that doesn't necessarily matter, where they've already reached diminishing returns on a decision. Yeah, and that one's probably quite intuitive for a lot of us. So, I've got two more if you're up for them. Yeah, let's go. So one that I think is very counterintuitive: we have a whole chapter on the statistics models that you need to know. We try to stay away from the equations; we're not saying you need to know all the underlying math. But we really think that developers, and everyone, should know what the concept of statistical significance really means and how it's used in, say, A/B testing, so that when you're part of a project that is using those techniques, you can really appreciate the numbers and the decisions that are coming out of it. I won't get into the full explanation here because that would take a while, but I think that concept is one that people really should take the time to understand.
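As a rough illustration of the statistical-significance concept being referred to here, this is a sketch of a standard two-proportion z-test as it might be used to read an A/B test. The conversion numbers are made up, and a real experiment needs more care (pre-committed sample sizes, multiple-testing corrections, and so on):

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)   # shared rate under the null
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF, via the error function.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Made-up data: variant A converts 100/1000 visitors, variant B 150/1000.
z, p = two_proportion_z_test(100, 1000, 150, 1000)
# A p-value below the conventional 0.05 threshold is roughly what people
# mean by "statistically significant" in an A/B test.
```

The point of the model is in reading the output: a tiny p-value says the observed gap would be very surprising under pure chance, while a large one says the data can't distinguish the variants yet.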
And I think a lot of people, I see this a lot in our company, especially developers, can get scared of it because it feels very mathy. Maybe they didn't take statistics, or maybe they felt it was too difficult, say, in high school or college. But I truly believe there is a way to understand it that anyone can follow, and we did try to write that in the book. I think it's worth taking the time to understand that concept. Yeah, I'm going to share a personal story here because I think it's relevant to this discussion on statistics, and to a really deep-dive discussion on how statistics can relate to developing beliefs. I actually talked to Annie Duke about a similar topic: the idea that we have these beliefs that we develop over time, and our brain typically tries to make those beliefs binary. We either do or don't believe something; we don't have a continuous scale of belief. And her message to the world is to look at your beliefs more like bets. How much would you bet on that? It breaks your brain out of that binary frame. So, the personal story: my wife and I are expecting our second child. Congratulations. Thank you. And she recently has had this kind of odd symptom where her hands and her feet are itching. It's summer, it's hot, so it's probably allergies or something. There are so many things that go on in her body during pregnancy, so it wouldn't be surprising if there were some innocuous reasons why her hands and her feet are itching. So we go to the doctor, and of course we've also checked online to see what could be causing this. One of the main things that might cause it, although it's still quite unlikely, is called cholestasis. Cholestasis is essentially an issue that happens both when you're pregnant and when you're not pregnant, but there's a specific kind that happens when you're pregnant.
So we have a test done, and we're actually still waiting on the results. I assume they're going to come back negative for this cholestasis. And so we were discussing the possible outcomes, and my wife, who has done a little bit of research on cholestasis, says to me, you know, it's really likely that we're going to end up in the NICU. And I said, is it likely, or is it more likely? This was a moment where we were talking about statistics, but we were experiencing it in a very personal way. There's a difference between expecting to end up in the NICU and recognizing that it's a little bit more likely than it was, but still incredibly unlikely. Statistically, we shouldn't believe that we're going to end up there. But because of a lot of factors, which we won't dive into, it's easy to take the "more likely" and replace it with "likely." Yeah, that's a great example. I hope everything works out. Yeah, the risk of complication is fairly low, and thinking about things rationally, I should assume that things will turn out just fine. Well, I have one more for you if you like. Yeah, let's do it. So it's really a set of three models that I think will be useful for a developer to internalize. You might have talked about this before on an episode, but it's the idea of deliberate practice, which came from a man named Anders Ericsson, who spent a career studying experts, world-class performers and athletes and intellectuals of different types and musicians, and how they got to be experts. And he identified this process, which he calls deliberate practice, as the best way to move up a learning curve on really anything. And the process is pretty simple.
It really involves going to the edge of your competence, right outside of your comfort zone, and working on a specific skill along the direction that you want to improve, and then getting real-time feedback from an expert who can effectively coach or mentor you on what you're doing wrong. And it sounds very straightforward, but it's actually pretty hard to do in practice, in part because you're failing a lot, and that's kind of hard to internalize. The two other models related to that are this thing called the Dunning-Kruger effect, which was studied by two people named Dunning and Kruger. What they graphed was how people feel as they're moving along this learning curve. And what they discovered is that when you start out, you make a lot of progress on the skill almost immediately, and you feel really good about it, which is great. But then you over-project your confidence in the skill, and you think you're way more of an expert at it than you are. And then when you realize that you're not, whether that's pointed out to you or you figure it out for some other reason, your confidence plummets, and you way overcompensate in the negative direction, and you end up in this kind of trough of under-confidence. And that relates to the third mental model, impostor syndrome, where you may feel, especially when you're talking to experts who are farther up the curve, that you're an impostor and you don't belong even working on this kind of skill. But that's not true, obviously. You're actually much farther along than the beginners. And so this method of deliberate practice is really a great thing if you're trying to improve, but you also have to be really wary of these psychological traps so that you don't fall into them. So if you're on the side doing the skill, you kind of want to be aware of that.
And then if you're a mentor on the other side, you want to be aware of it to help people go through this process, and understand that they can fall prey to these traps. Yeah, absolutely. We actually did an episode on impostor syndrome for the senior developer. It's actually something that is more common than you might expect, and I'm sure you know this. We discussed the idea that a lot of our feeling of progress is like getting in a car and pressing on the accelerator: that initial jolt, going from a standstill to 10 miles an hour, is going to feel like you're progressing quite a bit more than if you were steady at 60 or 70 miles an hour. And so for a lot of senior developers, because they're not learning at the pace that they used to, it may feel like they've stagnated. But most senior developers are still cruising along at a high capacity. They're the ones on cruise control at 60 or 70 miles an hour. And just because they aren't feeling that momentum, or I guess that acceleration, it can seem like things are not progressing at all. Yeah, one survey that we uncovered as part of the research showed that across a wide variety of industries, about 70% of people felt that they had impostor syndrome at at least one point in their career. So it's extremely widespread. The other 30% were probably just not telling. Exactly. Yeah, it's probably everybody at some point. So with these models, you have a lot more in the book, really a wide variety of them. And I would also add, if you're listening to this episode right now, another thing that has been really useful for me is to take models from other domains, like hobbies that I participate in. For example, music. There are a lot of mental models that can come from music. Here's one as a quick example: the chromatic scale has 13 notes, if you count both the beginning of the octave and the end of the octave.
And you can start at any point on that scale and move through those notes with the same, I guess, distance between each note. You can do what's called transposing music from one key to another. There's nothing special about a given key as far as whether or not you can transpose music from it to another key. Mathematically, they all just shift. And so this is a model of thinking: can I create software that is similar? If I can somehow find a way to modularize what I'm building so that I can shift it from one project to another, it's very similar. Okay, it may sound different, the outcome may be a little bit different, but that underlying model of transposability is applicable. So I'd love to know, do you find that these outside practices that we have, hobbies, interests, maybe even cross-industry experience, are useful places to find models? Yeah, absolutely. I mean, that's effectively the premise of writing the book, too. A lot of these models, and we covered some, but we didn't cover a lot of the ones from certain disciplines, come from economics and chemistry, like catalysts and activation energy, or from physics. We covered critical mass, but there's a bunch of others, inertia and things like that, that are widely applicable. Those are the ones I can easily enumerate because they come from major disciplines. But if you're working and you have a good sense of models from your hobbies, and you see how they metaphorically help you in another situation, that's exactly the point. And it's helping you because you've internalized music; you've done it for so long that those models are wired into your brain, right, so you see things that way. And so now you can use that as a shortcut in all these other areas of your life. And that's exactly the point: you can do that, and you don't want to just segment all of your knowledge and experience from music into the music part of your life.
You can use those things that you learned, and that work so well in music, and apply them to code. And music and code, or art and code generally, you know, Paul Graham has that book, Hackers and Painters. I think there's a lot of overlap in those two disciplines in particular. Yeah, and I think the people who are listening to this show feel that. They can tell that there's a kind of connection between those two things. And the same is true going from your development life to your non-development life. I had written down some of the models from the book, like technical debt, which I talked about at the beginning, that come from development but are really useful outside of development. I wrote down premature optimization, brute-force algorithms, divide-and-conquer algorithms, and the MVP type of concept, which I guess is more product, but can also apply to development. Those are all very useful outside of the development and product world as well. Yeah, I very, very regularly use divide-and-conquer search for socks in my drawer. That sounds strange, but it turns out it actually works. Sorting algorithms, yeah, it may sound funny, but it's a solid way to find stuff in my house. Yeah, to find things. For example, if you put things that are similarly sized into the same buckets, you're kind of doing a bucket sort, right? And it's literal, but it turns out that your mind can actually grasp the size of something a little bit better than it can grasp other aspects of it. And so it's easier to find something if you know where the similarly sized things are. Today's episode is sponsored by GitPrime. Have you ever noticed that the best engineering managers also happen to be the ones that debug problems really well? Part of the reason for this is that engineering managers are using mental models, like what we're talking about in today's episode, to approach problems.
It's not just about code, it's about systems. GitPrime has written and published a book about patterns that you find on successful engineering teams. Go and check it out at GitPrime.com/20Patterns. That's the number 20 and then the word "patterns." The book is entirely free, and if you go to that link, you can actually get a physical copy delivered to you as well, also free. Head over to GitPrime.com. That's g-i-t-p-r-i-m-e dot com, slash 20Patterns. Thanks again to GitPrime for sponsoring today's episode. So Gabriel, I know we're running up against the end of this episode, and I've enjoyed every moment of it. I do have a couple of questions, and these may open up into larger discussions that maybe we can have another time. The first one: we've talked about DuckDuckGo a little bit, and you've been doing this for a little over 10 years now. If you could go back to that 2008, or even 2007, pre-DuckDuckGo version of you and give yourself one quick lecture, piece of advice, or picture of the future, what would you take back? It's interesting. There are probably several answers to that, but let me take it from a couple of different framings. In terms of project success and things like that, at the beginning, and it was just me at the beginning, we really didn't have as many of these mental models we've been discussing operationalized inside the process of deciding what to work on. And for the first many years, we worked on a lot of stuff that turned out to not be the right direction. Sometimes you've got to do that, right? You've got to take risks and run experiments, and sometimes they fail. But we went way beyond that, building whole huge features and even whole products that we could have validated were incorrect, and de-risked (that's another mental model), way earlier.
So one piece of advice I'd give is probably the blueprint of some of these things, of how we operate now, with those templates and objectives and those forcing functions to question what we're doing. I think that's probably the single biggest thing I could do, out of anything. Of course it would be prescient to want to know the future, but that's probably the silly answer. Yeah, assuming that giving you the future wouldn't change it. Exactly. But I think the real answer is: if the future is still uncertain, and most developers operate in a very fast-moving technological industry where a lot of things are uncertain, you want to operate in a way where you can be very nimble and figure out what's going on through experimentation very quickly. And I think we weren't, or I wasn't, as agile when I was starting as I could have been. Yeah, it's really important to think about these models. And I know at this point we've said the word "model" a ton, but they really are kind of like a map. And it's such an interesting concept, because it's not really a specific map; it's more like navigating skills. You can think about it that way. So I have two more very quick questions for you. The first one is one I like to ask all of my guests: what is one topic of discussion that you wish more people would ask you about? I really don't have a great answer to that. You know, there are other things I'm interested in that I don't get to talk about a lot, but I'm also not the world's expert at them, and so I don't know if I deserve to talk about them at this point. But I do like to talk about these subjects.
And some of the things that are currently fascinating me are a developer topic around evolutionary algorithms, and a policy topic around why things cost so much, called, I think, Baumol's cost disease: how education and healthcare and infrastructure, at least in America, have just gone up and up in cost without much to show for it, and no one really knows why. So I'm super interested in that, but you probably shouldn't ask about those things because I don't know the answers. Well, talking about a subject, I think you mentioned something kind of interesting: that you don't deserve to talk about it. One of the things that I think developers often get wrong actually relates directly to that. It's the idea that everything you do must necessarily be to some professional end. And I know that you don't necessarily agree with that, but I do think that you should have the opportunity to talk about it. Thank you. I definitely do research it all. I mean, these are kind of on the hobby side, and then ultimately they turn into the professional side if I get deep enough into them, you know. Yeah. Well, I think, going back to what we've been discussing this whole episode, you, and others who study models, really have the ability to think about these things thoroughly, engage almost any topic of discussion, and start to get your hands around it. That's a key lesson, and one that I love to underscore. What we wrote in the book, and what I really believe, is that with the power of models, but also just the power of people being good at learning things... I think people end up having, especially after they've had a career for a while, a very static view of their abilities. But in reality, you could really become an expert, using deliberate practice or other techniques, at really anything.
If you just spent enough time, you know, researching and practicing. And so I definitely believe that if I put effort into these topics, I could be back here in a couple of years as an expert on them for you. It's really just putting in the effort. Yeah. And nobody gives out the expert badge anyway, right? Most of the time, "expert" is one of those soft terms that we self-apply or that ends up being applied, and a lot of it is just about learning and spending time with the subject. Exactly. Well, Gabriel, I have one last question for you, and I think I might be able to predict the answer, but we'll see. If you could give developers who are listening to the show, regardless of their experience level, just 30 seconds of advice, what would you tell them? Hmm. I'm curious what you predicted. I think my advice would be to figure out, I mean, I'd start with, what is that North Star? Figure out what it is you actually really want to do. We have a lot of people now working at DuckDuckGo, and that's a core question we try to help people determine, because, you know, some people don't necessarily have the ability to choose their projects, but there's often wiggle room in what exactly you work on and even what job you choose. And if you have that North Star and you know where you want to be, whether that's "I want to be a generalist," or "I want to be a specialist in this subfield," or "I really like working on this type of thing and it makes me happy," then you can really make yourself a lot happier in life. And if you don't have that North Star to answer that question, you can just really feel adrift. So my advice is probably that, which really is not just for developers; it's for everybody.
Yeah, my prediction was that you would say to be deliberate, rather than just trying whatever random thing comes along. Whether it's deliberate practice or deliberate thinking, really deciding is the critical skill. And what you're saying about having a North Star, I think, is step one of being deliberate. Yeah, exactly. I mean, I agree with that. Everything we try to do, and that I try to write down, really comes down to that. Yeah, another word for that would be being intentional, right? Critically thinking about whatever it is you're doing, and really engaging the topic fully. Yeah. Gabriel, this has been an excellent conversation. Thank you so much. I'd love to know, this book comes out on June 18th? Correct. And people can find it on Amazon. You can pre-order it now, I believe. Yes, you can. There's more info at superthinking.com. And if you are not an Amazon fan, there are other ways to pre-order it, but you're welcome to use Amazon as well. Excellent. Thank you so much, Gabriel. Thank you. Thank you so much for listening to today's episode of Developer Tea and my interview with Gabriel Weinberg. Make sure that you go back and listen to part one if you haven't already, and then subscribe. If you enjoyed this episode, there are more episodes just like this one coming out soon. We publish three episodes of this podcast a week, so if you don't want to fall behind, go ahead and subscribe. And then listen to the ones that stand out to you. You don't have to listen to every episode of this show. It's not a serial kind of show; it's not one where we have ongoing storylines. The only time that we actually connect one episode to another is when we're doing a series or when we have a guest on the show. So you can definitely listen to one episode at a time. There's no pressure to listen to all of them. Thank you again to GitPrime for sponsoring today's episode. Head over to GitPrime.com/20Patterns.
That's all one word, with the number 20: GitPrime.com/20Patterns. You're going to find a field guide to help you recognize achievement, spot bottlenecks, and debug your development process with data. Thank you so much to Gabriel Weinberg for joining me on today's episode. Go and check out superthinking.com. That's where you can find his brand new book, which comes out on June 18th of this year. Thank you so much for listening to today's episode. This episode wouldn't be possible without the Spec network. Sarah Jackson is the producer for the show. My name is Jonathan Cutrell. Until next time, enjoy your tea.