Developer Tea

Design Lead at Basecamp, Jonas Downey - Part Two

Episode Summary

Jonas Downey is the design lead at Basecamp. In this interview we discuss the ethics of designing and building constraints into your product that change human behavior.

Episode Notes

Jonas Downey is the design lead at Basecamp. In this interview we discuss the ethics of designing and building constraints into your product that change human behavior.

✨ Sponsor: Red Hat

Get access to Red Hat's exclusive developer resources. Head over to https://developers.redhat.com/about to join for free today!

📮 Ask a Question

If you enjoyed this episode and would like me to discuss a question that you have on the show, drop it over at: developertea.com/contact.

If you would like to join the new experimental Discord group, reach out at developertea.com/contact, developertea@gmail.com, or @developertea on Twitter.

🧡 Leave a Review

If you're enjoying the show and want to support the content, head over to iTunes and leave a review! It helps other developers discover the show and keeps us focused on what matters to you.

Episode Transcription

so that's the problem that Twitter and Facebook find themselves in. They have an existing system. It has all these rules. It already has all these constraints and decisions in it. And you can't walk those back because that's what they have. So all they can do is sort of bump it a little bit in one direction or another, but it's not enough. It's not enough to really fix the root problems that came out of it. Thankfully now we've learned that. And if someone sets out to make Facebook 2 or whatever, maybe they would take all that into account and build something that's healthier. That was the voice of Jonas Downey on the last episode of Developer Tea. And if you missed out on the first part of this interview, I encourage you to go back and listen to it. It will add a lot of good context for the second part. Jonas is the design lead at Basecamp. If you haven't ever heard of Basecamp, then just Google Basecamp. I guarantee you will find out a lot. We talk about Basecamp, we talk about ethical design, and all of the problems that we're facing on a very large scale. It's a very broad, sweeping discussion that I had with Jonas, and it was incredibly interesting and empowering. I encourage you to go back and listen to that first part. And of course, if you enjoy today's episode, I encourage you to subscribe in whatever podcasting app you currently use. And if you're interested in talking with other engineers like you who are driven, who are looking to become a part of the business, and who are looking to become better at being engineers and at being people, I encourage you also to reach out to me for an invite to the Developer Tea Discord community. Right now, I'm sending those invites directly to people who ask for them, because we want to make sure that the people who are coming to this community are driven enough to reach out and ask for that.
So you can reach me by emailing developertea@gmail.com. You can also reach me on my website at developertea.com. You can also reach me on Twitter at @developertea or my personal account at @jcutrell. Let's get straight into the interview with Jonas Downey. Yeah, so this is what is so interesting to me about this: it's kind of a philosophical and psychological question of why is it that, if you don't put these boundaries on, things kind of naturally go down the drain. I don't know if you saw the AI-generated tweet bot or something that Google released that just very quickly devolved, right? It quickly became just not good. We'll leave it at that. And so, you know, I wonder why it is that things don't trend in a positive direction by nature. And I'm not sure that I have an answer to that. There's a theoretical answer here, which is that the levers that give us money also happen to be the levers that cause fear, or cause sensationalism, or, you know, the things that our brains pay a lot of attention to. The attention economy doesn't really care what kind of attention it is. And it just so happens that we've evolved to be very fear-oriented for our survival. And we've kind of figured this out in a time where we don't really need that fear for survival. We're using it as a tool to make money from other people. That's one theory, but I'm interested in any theory that you might have. Well, that's definitely a large part of it. Whether or not, you know, Facebook or Twitter and these platforms set out to grow the way they did is an open question, but they certainly did focus on using psychological, you know, manipulative tactics in order to increase views, increase revenue, increase usage across the products. And they continue to do that to this day.
And that has been their sort of driving thing, more than what they proclaim, which is like, oh, we make a platform that unifies the world and communication and all this stuff. Okay, yes, you do. But also, the way you're doing that is by trying to keep people glued onto these products as much as you can, because that's your business model. And so that kind of goes back to what you were saying before, that the root of how you start influences all the decisions you make after that. If you start with that business model, you're going to have to make those calls that are probably not as good for people. And it's the same thing that a healthy Facebook probably would never have grown to have like 3 billion people on it, because it just shouldn't. If someone had started that in a different way with a respectful business model, where you had to maybe pay to use Facebook, or your data wasn't used to sell to advertisers, or whatever, the economics would be different. And it would be almost impossible to gather that large of a user base, which in turn would make it so you would have less of a likelihood of, like, affecting democracy in places across the world. So you can kind of see the root of it is that business model thing. That is sort of the toxic core. And you can do lots of things on top of the toxic core to make it a little less toxic, but there's still going to be kind of like sewage spewing out. And I think that's what we have to learn from as platform developers going forward. We've now seen that happen. And if we care about the good of people and we care about doing the right thing, we have to come up with other ways, because it's proven to not work. Yeah. So this is where the rubber kind of meets the road, in a way, because we can kind of see, well, what this necessitates in some ways is smaller growth, like less growing platforms, less reach for these platforms.
And potentially, you know, how do you ensure that? Because if the door is open for somebody to come along and say, well, I'm going to make a thing that other people are making you have to pay for, and I'm going to make it free again, well, it's just going to grow again. Right. So there, I think your point about having some kind of regulation is interesting, like the medical community has ethics boards and some level of regulation. Because we have regulation of this type for, for example, monopolies. Right. We need to break up companies because they control pricing. Well, what's potentially more important than pricing control, I think, could be the control of our attention. If one company can monopolize attention that much, that's probably not good. But we don't have a mechanism right now, culturally, to even think about that. There's not even a broad cultural discussion about this topic. It seems to only be happening in subcultural ways, and specifically in the way that we're talking about it now: the way that we, you and I, Jonas, create things that are not that way. That's kind of our contribution to the world. But because we're creating things that necessarily require smaller user bases, how can we change? I guess I'm asking, how do you change the world? Right. Yeah. How do we fix it? So it's hard. I think the thing that's the most difficult is that especially the tech industry, but really, you know, capitalism in general, is very focused on growth. Growth is sort of the shining star that all companies and all governments and all economies aspire to. Like, we have to be constantly doing more than we previously did in order to be seen as successful. And to me, that is fundamentally the root cause of why we have a lot of these problems, because if you have a growth-at-all-costs mindset, you optimize to that.
And you say, hey, we have a billion users. Well, we'd better still show stock growth, so let's get a billion and a half. And how do we do that? Well, what else can we juice to get 500 million more people in here? So when you're guided only by that, and that's the main thing, it leads you down some bad roads. So what we've done at Basecamp to kind of counter this idea is to reframe what we think is successful for us. Like, we entered into the email market, where you have three or four very major big tech players that own easily billions of users. And we have no interest in being anywhere near that. For us, it was like, could we get 100,000 users? If we could get that, that would be an incredible business. It'd be the most successful thing we've ever done. And so, but that's because we prize staying smaller and being profitable and not, you know, growing beyond something that we want to run. Like, I don't personally even want to run a 2 billion user email platform. That sounds terrible. So I don't know how we get there. Yeah. I mean, we've been sort of selling this idea for years and years, that you don't have to do this endless growth thing. You don't have to dominate the world to be a success. It's okay to define what's good for you and say that's a success. But it doesn't seem to be the running expectation in the industry or, you know, in society at large. And I think the only thing we can do is just basically keep shouting from the rooftops about it. But it's hard to change people's perspectives, because it's so ingrained in culture and in politics and just in our expectations, that it's going to take a lot of people yelling to change it. Yeah. Yeah. A lot of people yelling, and potentially some level of collective cultural change, to recognize these problems and to say, okay, as a society, we have to find ways.
If the market is not going to regulate this, if we don't have a mechanism in the market to regulate these problems, then we have to do it outside of that. Right. Right. Which is not very popular in a capitalist society, to regulate things, because, hey, you know, we got here because things are unregulated. Why would you put restrictions on this? Because we're all living better lives and we're adding value to the economy. Well, there are some things that are not worth trading. And I think that takes time, and a big cultural shift. But it is definitely an interesting question. And I'm very appreciative that companies like Basecamp have decided to speak very vocally and find ways to spread that message, both, you know, at the actual product level, by building products that do this, but also in more philosophical ways, like releasing books and, you know, talking on podcasts. And I think hopefully more companies are going to do that. I think we're going to pick up that responsibility and continue to bring this up as a point of discussion, so it's not just tinfoil-hat kind of people, or at least that's not the perception from the public anymore. Right. That's the hope, that I'm not crazy for begging my family to stop using this stuff. Right. Start using it and you're going to be changed by it. You don't know that you're going to be changed by it, but it can harm your life. And that's not just conspiracy. It's not just fear talking. I've seen it. I'm sure that you have too. We'll get back to our interview with Jonas Downey shortly, but first I want to talk about today's sponsor, Red Hat. The Red Hat Developer Program brings developers together to learn from each other and create more extraordinary things faster. You can get free access to countless benefits to help you drive your career with Red Hat Developer. You can access a full portfolio.
You have app development products and tools for creating enterprise software built on microservices, containers, and the cloud, everything from downloads to developer how-tos and getting-started guides, to videos, books, newsletters, et cetera. You can get access to the gated content of the award-winning Red Hat Customer Portal as well. There are interactive tutorials on the latest technology and Red Hat products. With Red Hat, you build here, go anywhere. Head over to developers.redhat.com. That's developers.redhat.com. Thanks again to Red Hat for sponsoring today's episode of Developer Tea. It's hard to get the message out when you're competing against these big platforms and the way things are, for that matter. Yeah. Right. All of our friends are on it, and we have that one friend who's saying that it's bad. Right. Right. Exactly. Yeah. Well, your group of friends is right. Right. Yeah. I've kind of been thinking of it from two angles. One is that, you know, we do have to constantly be speaking out, and being a counterpoint, and being counterculture in some ways. And eventually, you know, the culture comes around. Like, we were spouting about remote work, you know, 15 years ago. And now remote work has actually kind of happened. And obviously not necessarily by people's choice, but it came around. And so at the time, if you'd asked us, like, is this remote work thing taking off? We'd be like, nah, not really. So, you know, change is slow. It's not like it's going to happen tomorrow. So one thing is speaking out, but another thing is continually, you know, making stuff that demonstrates and shows the value of this thinking. Like, we want to make products that are respectful, that do take care of people, that are successful businesses, that hopefully meaningfully improve people's experience with email, even in a small way.
Like, if you just saved 15 minutes a week that you're not looking at garbage in your email, cool. I'm glad we did that. So there's a whole gamut. I think another part of it is making sure that if you have some kind of authority or power at a company that you work at, you bring in more voices that maybe aren't at the table, that can have an influence over the decisions your company is making. That's also partly why we get things like Twitter being abusive to women and people of color, because those people didn't have a seat at the table when the decisions were made about how, you know, Twitter's features work. And they may have been able to help guide the product leaders to figure out how to avoid those problems, but they didn't. And so now you have this thing. So, you know, all of this stuff, you can look at it and say, well, it should be regulated, or there should be rules around it, or whatever. But we can definitely still do grassroots improvements in a million ways by caring about these issues and talking about them. Yeah. You know, I like to imagine that there are some people in very high positions at some of these companies that actually do care, that actually do want to change the tide. And they do have the opportunity to bring those people in to help Twitter be less abusive. Right. And that, you know, there is some small inflection point along the way. Inside of a big company like this, the change grows linearly. Then they do something that turns that change up, and it suddenly catches on. It's like, ah, I see why having, you know, multiple voices in the room is better. Right. And how it's going to become policy, or it's going to be, you know, encoded into our culture in a more meaningful and consistent way.
And that to me seems like the most likely route of change, kind of this, I don't know, turning the ship around because you're at the helm somehow. Yeah, I think that is actually the harder road. My guess is that the more likely thing is that someone will come up with, you know, better products and better ideas, and those will eventually take over attention in the way Facebook took over attention. I think it's harder to say, given Facebook's history and its current leadership and the decisions they've made, that they're likely to graft on, you know, a new group of people who's going to meaningfully affect what they're doing. It just doesn't seem like that's in their DNA at this point. It'd be great. Like, I hope so. But I don't think so. So, you know, more than likely, we're going to just have to kind of wait them out. And then, you know, whatever next year's TikTok is, or whatever other things will come along. And, you know, Facebook may well get broken up eventually by the government. Who knows? It seems unlikely they're going to be dominant forever. All these things eventually lose their favor. But yeah, I mean, the answer is all of it, basically. It's like, speak out. You know, if you work at a company, try to make changes. If you don't work at a company, you know, do the best you can to make the decisions that are good. Yeah. So we've turned this conversation into kind of a long discussion about how we need to change things. But I'm interested in some really practical advice for engineers, from your perception or perspective as a designer. You know, I want to care more. Right. I'm going to play the engineering role. Let's say I just joined a company. I want to care more about this stuff. And I'm not really sure how to think that way yet. I know how to do my job. I know how to pick up a ticket and work on it and then ship it. And I'm good at that. I can write code.
I understand, you know, all of the requirements. But I'm not sure how to think more like a designer. What are some cues or questions, or just, you know, what can I be thinking about more often that will help me think more down this road, this way of thinking as a designer? Right. I would say the main thing is to think about the experience that someone is going to have when they interact with what you're building, first of all, which you may not be doing if you're focused purely on the implementation of the thing. So if your traditional process is that, you know, a designer and some product owners have figured out what they want, and then they sort of hand it to you on a platter and then you make it, there's no point in that process at which you're thinking about how it works and whether you should do that or not. So the first thing is just asking: is this something that is good? Does this do what I think it should do? You know, when we get to the end result of this, is this going to be something I can stand behind and say that I like this, I worked on this, and I'm proud of it? That's part of it. Another part of it is really thinking about which aspects of your product are using tactics that you could think of as manipulative tactics. And those manifest in a bunch of different ways. Some of them are things like a sticky interface, a thing that nags someone to come back and look at the software again. So, like, if you're designing a push notification, let's say: oh, we're going to send out a push notification every time someone does this in our app. Then you think about, okay, well, if we do that, let's say we just add this one push notification and we have 30,000 users and we send that out. We're going to send out 30,000 push notifications every day, which means we're interrupting people 30,000 times every single day. Is that what we want? Have we considered, you know, the side effects of that?
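The back-of-the-envelope math here is worth making concrete. A minimal sketch (the function name and the yearly extrapolation are illustrative, not from the interview):

```python
def daily_interruptions(users: int, notifications_per_user: int) -> int:
    """Estimate total daily interruptions caused by a notification feature."""
    return users * notifications_per_user

# One "small" notification, sent once a day to every one of 30,000 users:
per_day = daily_interruptions(users=30_000, notifications_per_user=1)
per_year = per_day * 365

print(per_day)   # 30000 interruptions every single day
print(per_year)  # 10950000 interruptions over a year
```

The point of the exercise is that a feature that looks tiny per user compounds across the whole user base; multiplying it out before shipping is a cheap way to surface the side effects Jonas is describing.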
Is it worth the interruption? Is what we're getting out of this going to be valuable? And, you know, you could say, well, hopefully the product owner and designer and CEO, whoever signed off on this earlier in the chain, would have thought that through. Hopefully they would have. But if they haven't, or if you're building something that you think is not what you personally would want in this product, push back on that. You know, go back to the designer and say, hey, can you explain this to me? Tell me why we're doing this. You can get into it a little bit. At some companies, that sort of feedback may not be welcome. But if, you know, you think about it more and you start to feel like, hmm, we're doing stuff that I don't agree with, then you kind of have to reckon with, you know, what you want to do next. So the main thing is just asking questions and being curious. And pushing back is hard. It also requires some degree of relationship between the programmer and the designer they're working with. Hopefully there's a back-and-forth thing. But yeah, I would say those are the two main things: look for manipulative tactics, and think about the experience that you're building. Yeah, that's great advice. I think looking for manipulative tactics is definitely something that takes time, to build that skill, to notice, ah, this is manipulative. And there are things that I have seen built into products that are not overtly manipulative, but they're still pushing the person in a direction. You know, there's the overt stuff, like when you see a pop-up that says, no, I don't like money. Right. A passive-aggressive pop-up. Exactly. Right. And then there's the subtler stuff, like choosing maybe the highest-cost plan as the default for something.
Well, that may not necessarily be good for that person. And by choosing that as the default option, you're kind of saying, well, for the people who choose to work with us, this is the plan they should pick. And maybe that's not true. And maybe the best plan for them is actually the cheapest plan. Or maybe they don't even need to use your product at all. But by having defaults, you're biasing people in one direction. And there are certainly times where this is important and it's part of the toolkit. I can understand having useful and positive defaults. But yeah, it's very difficult. There is a line, and knowing where that line is is very context-dependent. You have to be able to think really thoroughly about who is actually looking at this thing. And are we actually pricing things fairly? That's another very simple one. As a brand-new software engineer, you may not be able to start that conversation, but perhaps you should be able to: this seems like it's overpriced for this group of people. And one other thing I wanted to mention, or maybe ask you about: it seems like you do a lot of thinking that is beyond the immediate experience. So in other words, okay, well, I want to check my email. I don't want to be stressed when I check it. Cool. All right. We're going to take away numbers. We're going to design the experience in a way that's more mindful. Great. But it seems that you, and Basecamp more generally, and good designers also think, okay, what about three hours from now? Or what about when there's a very important email that comes through 10 minutes after they check it? What happens? And what about when they're on their phones in transit? There are these second-order effects and third-order effects. What about when somebody else is using a different email client? And now you have these two people that have totally different perceptions of what email is, or how email fits into their lives. How do we reconcile those?
So I'm interested in this idea of second-order effects, and how we think more thoroughly about the user's experience, rather than just when they're using our product for those three or four minutes, beyond into their lives. Yeah. So I think there are kind of two parts to this. One part of it is, when you're thinking about things like defaults, or things like notifications, or interruptions, or whatever, you can almost always think of those things from two angles. One angle is the person-first angle: the person who's using this thing. What would be appropriate for them? What would best fit into their model of how this product works, and fit into their life at large? What would you want them to experience? And then the other side of it is the business-first view: we want to get more people using this app. We want to get more people paying for this app. We want to get people to automatically upload their contacts when they download it, so we have more information about their contacts. Those are business problems more than customer problems. And so you have to balance those two things, because a lot of times we do a lot of business-first stuff without really questioning what we're doing. We'll just be like, oh, we'll just send out an email campaign, or, oh, we'll just add notifications, and then every time we get an email, you'll just get a notification and it's fine. You know, those are sort of lazy decision-making things that, sure, might improve your business performance a little bit. But the experience that you then gave to people to use is not great. They're not going to like it, and it doesn't push the needle forward.
So for something like notifying about email, what we did in Hey was that we designed several different workflows for different types of email, so that before any person can email you, you have to say if they're allowed to. So when they first email you, Hey asks you: do you want to hear from this person or not? And if you say no, you just don't ever hear from them again. They're out. They're blocked. If you say yes, you can pick where that person's email should go, into a number of different spots. And then within those spots, you can decide if you want to be notified. You can be notified about certain things or not. So you can be notified about emails from a certain person, or from a certain thread that you're on, or from anything that you've sort of designated as important. And that's it. So there's no blanket notifications-on switch in the whole app. You can't just get notified about everything. You have to be a little bit more thoughtful about what's worth interrupting you. And it's your decision. It's not our decision. We didn't say, oh, we have a very smart artificial intelligence, and we've decided what you'll get notified about. Well, we probably can't. No matter how good the artificial intelligence is, it's not going to exactly match your mental model of what's important to you. So our solution was to give you the power to make these decisions, rather than setting the default to be that you just get blasted, because that's the default that everybody just accepts. Like, well, I just have to get blasted. We don't think that's true. You don't have to accept that. So I think those are kind of the different parts of it. It's thinking through what experience you want people to have when they use the product, and then thinking through, how can we design this in a way that drops the assumptions about how these things should work and actually does what we want it to do? And sometimes it's harder.
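The screen-first workflow Jonas describes can be sketched as a small state machine. This is a hypothetical illustration of the idea, not Hey's actual implementation; all names, destinations, and structures here are invented for the example:

```python
from dataclasses import dataclass, field

@dataclass
class Screener:
    """Toy model of a screen-first email flow: the receiver decides, per
    sender, whether mail gets through, where it lands, and what notifies."""
    allowed: dict = field(default_factory=dict)  # sender -> destination
    blocked: set = field(default_factory=set)    # senders the user said no to
    notify: set = field(default_factory=set)     # senders the user opted into hearing about

    def screen(self, sender: str, allow: bool, destination: str = "inbox") -> None:
        # First contact: the receiver explicitly opts in or out.
        if allow:
            self.allowed[sender] = destination
        else:
            self.blocked.add(sender)

    def route(self, sender: str) -> tuple:
        """Return (destination, should_notify) for an incoming email."""
        if sender in self.blocked:
            return ("dropped", False)        # blocked senders never get through
        if sender not in self.allowed:
            return ("screener", False)       # held until the user decides
        # No blanket notification switch: only explicitly chosen senders interrupt.
        return (self.allowed[sender], sender in self.notify)

# Usage: block a marketer, allow a friend into the inbox with notifications on.
s = Screener()
s.screen("promo@shop.example", allow=False)
s.screen("friend@example.com", allow=True, destination="inbox")
s.notify.add("friend@example.com")
print(s.route("promo@shop.example"))   # ('dropped', False)
print(s.route("friend@example.com"))   # ('inbox', True)
```

The design point the sketch captures is that notification is a per-sender, opt-in property owned by the receiver, rather than a global default owned by the app.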
It's not the easiest road, because turning on push notifications across the board is easy. But doing the harder thing is usually the right thing. Right. And so one thing that stuck out to me in what you said there is: well, no matter how good our AI gets, it will never be able to totally understand this very, very important thing in your daily life, email. It's important and it's also dangerous. Right. And so it would be dangerous for our AI to pick up the ball and say, well, we're going to decide what you're going to see. Right. That seems like a bad idea. And so in some ways, the hard solution, which is trying to build an AI that can understand what's important to you, is also a bad solution. Right. It turns out that the good solution here is somewhere between the easiest thing, which is turn all notifications on, and the hardest thing, which is to try to build a robot that replaces your judgment, your human judgment. And it actually is a mechanism to allow that person's judgment to shine. Right. Right. Give the keys back to the user, in a way. Yeah. Part of this thing, too, is that as technologists, I think we are always eager to throw more technology at problems and say, well, the reason that email is bad is because we haven't been smart enough about training computers to filter through email for us. And it's like, no, actually, the reason email is bad is because we people spam the hell out of each other with really useless email all the time, especially us in marketing and in product companies that blast out campaigns and do all these things. So it's not so much that we need to train computers to be even better at email. It's that we need to empower people to fight back against the junk that they have basically been having to settle for, for the better part of a decade. Everyone just got used to it: well, I just get all this trash and I hate it, but it is what it is.
It doesn't have to be what it is. We can give you tools to make it not be gross. You know, it's our job as designers to think about those circumstances and say, why is it like this? Why have we just accepted that it's gross? And I think, to me, the answer is rarely that I want machines to do more thinking for me. In some cases, perhaps, if I'm doing sophisticated mathematics or something, fine. But in terms of my day-to-day experience using a product like an email app, I want to have control over it. I want to know where that email went. If you use Gmail, it decides what categories things are, and it's hard to get it to change its mind, because its mind is made up. You don't really actually have autonomy. And it's a weird feeling to be a user of a thing where the thing is smarter than you. And so we try to place all of that back on the people and say, no, you can make decisions. We trust you to do the right thing. We're going to make it easier to make those decisions, so you don't have to configure 40 options and set up complex rules about where emails go and all these things. We just give you the workflow, and then you can just do it. Yeah. There's some level of design thinking that begins with understanding what you assume about the problem. Right. Okay, well, we assume that you need to see all of your email. And that's probably a bad assumption. The vast majority of my email I don't need to read. And probably I should never even let it into my inbox to begin with. But I made a bad decision three years ago and haven't unsubscribed from that particular list yet. Right. It would take me a long time to go through and unsubscribe from it. But realistically, there's a very small percentage of my email that I really need to see. So if you design a product, you will design a product with a lot of assumptions in mind.
For example, the average person who's listening to this podcast is probably going to design a product with the English language as the primary language in the product. Right. And most of these assumptions tend to pan out OK. English tends to be a good language to start with for a new product, especially if you're targeting people in the United States. Right. These seem obvious. But as it turns out, you know, these are assumptions that we're making. And for most products, there is probably a line that gets crossed when we make those assumptions, where if we were to roll it back a little bit, right, to say, OK, wait a second, are we sure that they need to read all these emails? Are we sure? Yeah. If you kind of roll that back and say, no, maybe not, well, we could change. We could take a different path at that fork in the road. Right. Right. If they lose that coupon code from three years ago from Adidas, what are they going to do? Leave us? You know, attrition rates are going to go through the roof? No, that's not going to happen. They're just going to be a healthier person, probably, because they don't have this. Yeah. And part of this is also the power balance that exists in these things. So traditionally in email, the power balance is very one-sided. The sender has all the power. If you are, like, an email marketing company and you have a mailing list, you can send out, you know, hundreds of thousands of emails and pretty much be sure that they'll all get delivered if it's a decent mailing list. And the people who receive those are kind of stuck. They can maybe unsubscribe, but that doesn't even always work. 
It's not very obvious, and it tends to be downplayed in most emails, because the people who send the email don't want you to unsubscribe. So they don't make it real easy, and it's lopsided. So the effect that you get is that when the sender controls everything, the receiver just has to take it, and then they're inundated with stuff. So we try to tilt those power dynamics in HEY and say, OK, as the receiver, you can say no. You can block those things. You can say, sure, I still want this, but I don't want it to bother me most of the time, and put it in a different place. So if it comes in, I can still get to it maybe if I need to, but I'm not going to care. There's just those basic, simple human things. It's the same thing that happens if you go to your mailbox and you have two pieces of mail that are important and five that aren't: you just throw the five in the trash, probably. It's the same deal. The sender sent you stuff, and now it's your decision to have to deal with it. And we just try to think of that from a different perspective and say, maybe it doesn't have to be like that. Yeah. And, you know, this is asymmetric consent, right? And it's, yeah, it is consent. Yeah. Because I consented once, right? I gave you my email four years ago, and all of a sudden you're going to pop up in my consciousness, you know? Right. And that seems like asymmetric consent to me. I feel like you should have to ask for my email again. Yeah. Well, even the work you have to do to go and get off a list is unknown. You'll click through, and then a lot of times those unsubscribe pages tend to be designed with dark patterns to try to keep you on the list. So it's work to even stop it. In order to get it to stop, you have to do even more. And it's just another place where it's asymmetric, like you're saying. Yeah. 
This has been an excellent conversation. I want to wrap up with you here. I have one final question for you. I wish we could talk about this stuff for a long time. Oh, and I also should say that Basecamp certainly has not sponsored Developer Tea in any way. This is all a voluntary discussion on both parts. But I do have one final question for you. If you had 30 seconds to give software engineers, developers, whatever their background and level of experience, if you just had 30 seconds to give them all a bit of advice, what would you tell them? My main advice is to chart your own path. I think that it's very tempting when you're young and you're coming into this industry to think that the only route is to go the common route: go work for a big tech company, go get a job that pays a lot of money, you know, go do the things that you hear are the things that you do as a tech worker in this industry. And I think there's another way to do it. And the other way is harder, but it's much more fulfilling, which is: look for companies that speak to you, that stand for things that you stand for. If they don't exist, make a company that does. These are going to be harder paths, but we need people who believe in things, and we need people to take risks and step outside what's expected. So that's my advice: just go figure out what feels right. And if something doesn't feel right, don't feel like you're obligated to go do it just because that's what you thought you had to do. Yeah. Yeah. That's great advice. Jonas, thank you so much for joining me on Developer Tea. Where can people find you online? I am @jonasdowney on Twitter, and you can email me at jonas@hey.com. Awesome. Thanks so much, Jonas. Sure thing. Thanks for having me on. Thanks again for listening to today's episode of Developer Tea, part two of my interview with Jonas Downey. 
A reminder that we are kind of in the middle of our COVID series, about what life is going to be like as an engineer after COVID is over, or as the pandemic begins to come to a close. So if you don't want to miss out on those episodes, go ahead and subscribe in whatever podcasting app you're currently using. Thank you again to today's sponsor, Red Hat. Head over to developers.redhat.com/about to get started today. Thanks so much for listening to this episode. If you would like to join the Discord community, send me an email at developertea@gmail.com, or you can reach out to me on Twitter @developertea. Also, we'll be back on Friday with another Friday refill, so look forward to that. Thanks so much for listening. Until next time, enjoy your tea.