Developer Tea

Interview w/ Chris Shinkle (Part 1)

Episode Summary

In today's episode, I talk with Chris Shinkle, Director of Innovation at SEP. I believe today's episode is one of the most important interviews I've done to date, and I hope you enjoy it as much as I did! Today's episode is sponsored by Linode. In 2018, Linode is joining forces with Developer Tea listeners by offering you $20 of credit - that's 4 months of FREE service on the 1GB tier! Use the code DEVELOPERTEA2018 at checkout.

Episode Notes

In today's episode, I talk with Chris Shinkle, Director of Innovation at SEP. I believe today's episode is one of the most important interviews I've done to date, and I hope you enjoy it as much as I did!

Today's episode is sponsored by Linode.

In 2018, Linode is joining forces with Developer Tea listeners by offering you $20 of credit - that's 4 months of FREE service on the 1GB tier! Head over to https://spec.fm/linode and use the code DEVELOPERTEA2018 at checkout.

Another huge announcement - Developer Tea is officially available on Spotify!

Episode Transcription

How often do you need to change code that you wrote before? Hopefully the answer is every day. None of us are writing code that stays indefinitely in the same state it was in when we wrote it. I would imagine that all of us are writing code that we're changing pretty consistently, some more than others, but certainly all of us have had that experience. And we're talking about that change management process and many other exciting topics with today's guest, Chris Shinkle. Chris is the Director of Innovation at SEP. This is one of those interviews that I think a lot of you, and certainly I, are going to come back to many, many times, because there's so much great information packed into this interview. Chris is such a well-seasoned developer, and he has a lot of great experience to share with us. And I'm just really excited to get started. So I'm going to get out of the way. Again, you're listening to Developer Tea. My name is Jonathan Cutrell. The goal of this show, by the way, is to help driven developers connect to their career purpose. And we already laid out kind of the plan for the year to focus on these three pillars: the purpose, the practice, and the principles. And in today's episode, most of what we're talking about is in that principles area. So hopefully that is apparent. This is a two-part interview. The second part of the interview got cut a little bit short because my wife and I thought we were going to go and have our son that night. And actually, while I was interviewing Chris, she texted me and said we needed to go to the hospital. You'll hear that part at the end of the second part of this interview. Let's go ahead and get into the interview with Chris Shinkle. Chris, welcome to Developer Tea. Thanks for having me. I'm looking forward to this discussion. You have been giving a talk recently that I think is right in line with some of the things that we've been discussing on Developer Tea.
And we'll get into that in just a moment. But first I want to ask you about your origin story, I guess you could call it. What are some of the experiences or the initial moments that got you interested in software development? Sure. So I was born in 1971, really in the first cohort of, I'll say, the serious gamers. I grew up in that world where video games and the Apple IIe and even the Atari and programming at home kind of originated. And so I had an early interest in computers and engineering as a kid. Went to Purdue University and studied computer and electrical engineering. And after that, you know, got out and sort of found a job that really allowed me to apply some of those skills. But really I love just the never-ending landscape of learning and solving problems and challenges and puzzles. And so that really fueled me to get into software development. That's so interesting. Can you kind of reminisce on some of those early projects? What were some of the technologies that you were actually using in those early days? Sure. So the first, I'll say, professional project I was on was an engine monitoring system for a military aircraft. And it was written in Pascal. And I'm going to say it wasn't even a Windows GUI. It was all a DOS-based GUI system. Wow. So you're writing this Pascal code. And I assume, since it's military and it's something as important as, you know, flight engineering instruments, that you had to have some level of pretty rigorous testing and validation and that kind of stuff. We did. So Agile really wasn't around, or those concepts, at least for us, weren't really familiar. And so the concept of pair programming, even though we didn't call it that, we worked together as pairs. But what we often did is worked closely with our customer representative. So this particular fellow, his nickname was just Willie.
And he was at some naval bases and worked with a lot of the young guys working on planes, with the systems that would capture all of this flight data. And every so often he'd come to SEP and sit down with us and we'd look at, like, an actual flight. He'd say, here's what happened, and this is what the system reported and said happened, and that's not right. And so we would sit and work really closely together and sort of figure out what went wrong, or what diagnostic wasn't getting triggered, or what was maybe getting falsely triggered. And so it wasn't rigorous testing in terms of a big, huge automated test suite or these really in-depth test scripts, as much as it was working with customers with actual flight data and using that in the system to validate all of these diagnostics and faults and stuff that we would detect. And over the years we would build up this pretty exhaustive library of all this flight data. And when we would do a new release, we would run through and execute all the flights, and they would generate flight summary reports, and we would look at those and confirm that we were getting what we expected to get. And so in a lot of ways we were doing what developers do today, just without a lot of the automated systems and build scripts and build servers and whatnot. But it really involved a lot, and I think it's funny: we did a lot of using actual live data to validate the system. And you know, sometimes today I see people get away from that. They end up using, you know, manufactured data or test scripts or test data, or somebody inside your organization builds the data, and it's just never quite as robust. It just never has all the nuances or all the craziness in it that the actual data does. And so we might have been a little ahead of our time, I guess.
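The regression loop Chris describes, replaying a library of recorded real flights and comparing the generated summary reports against previously validated ones, is what we'd now call golden-file testing. Here is a minimal sketch of that idea in Python; the file layout and the `process_flight` stand-in are hypothetical illustrations, not SEP's actual system.

```python
from pathlib import Path

def process_flight(raw: str) -> str:
    """Stand-in for the real system: turn recorded flight data into a summary
    report. Hypothetical logic: count lines the recorder flagged as faults."""
    faults = [line for line in raw.splitlines() if line.startswith("FAULT")]
    return f"faults_detected={len(faults)}\n"

def run_regression(flight_dir: Path) -> list[str]:
    """Replay every recorded flight and return the names of flights whose
    summary report no longer matches the previously validated ("golden") one."""
    failures = []
    for raw_file in sorted(flight_dir.glob("*.raw")):
        golden = raw_file.with_suffix(".expected")
        actual = process_flight(raw_file.read_text())
        if actual != golden.read_text():
            failures.append(raw_file.name)
    return failures
```

On each release you rerun the whole library; an empty failure list means every real-world flight still produces a report a human has already signed off on, which is exactly the "run all the flights and confirm the summary reports" step Chris mentions.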
Yeah, that's such an interesting thing, because basically what you did was sort of like test-driven development, in a way, in that you set up your expected case and then you ran your code against it. It just so happens that your expected case wasn't just a written-out file. It was, well, I'm actually going to go and record the data, or create an expected flight, rather than an expected CSV or something like that, right? Very interesting reality there. But it outlines and underscores this concept of, you know, just because your tests are green, or just because things seem like they should be working, that may not necessarily be the case, right? You still need to validate that in the real environment, still need to look at it with your own two eyes, is the way I like to put it. You need to, as a person, validate that the software is doing what you're expecting it to do. And that takes time. I think a lot of people are shipping software today whenever all of their tests pass, and they expect that to cover them, and a lot of times they don't put their own two eyes on the software. Whatever the outcome is, they don't validate it, and even in very simple cases, a lot of times that software doesn't get validated properly. And that ends up, you know, being very costly. Very interesting use case. Yeah. So I'd love to talk to you about that progression. About 10 years later or so, I guess, is when you introduced Agile to SEP. Can you kind of discuss, first of all, you know, the scale of SEP, what you do at SEP, but also how you came to this concept of Agile? And then we're going to go into discussing how behavioral science can actually apply to Agile. But first, tell us a little bit about SEP. Sure. SEP is a product design and development company in Carmel, Indiana, which is just outside of Indianapolis.
And we work with mostly Fortune 100, Fortune 500 clients, big companies, although we do a fair amount with some smaller organizations, in aerospace, healthcare, medical devices, consumer electronics, agriculture, finance. So a really diverse set of industries, and we help our clients create products and get products to market. We have about 120 or so developers, total people in the company somewhere between 130 and 140 individuals, but mostly development, design, test. The majority of the people there are makers working on code. SEP has been around for 29 years. So we've gone through a lot of stuff, and it's interesting how the industry has changed over that time period. When I started introducing Agile to SEP, it was 2004, and at that point in time it really had the purest kind of form, where I guess Agile originated from, which was just looking for better ways to build and deliver software. We had worked with a lot of customers, and you'd kind of gone through and seen some of the same stuff over and over again, and you just kind of felt like, you know, there's got to be a better way to do this. There's got to be a better way to build software than this traditional big design up front, all these requirements, build these documents and let that drive development and ultimately, you know, verification or testing at the end to try to... Can you actually share some of those signs, some of those things that you saw over and over that felt wrong, that felt like, you know, there must be a better way? Sure. So a lot of our clients are these large companies, and in fact some of them have been around more than 100 years. And so the lion's share of their organization, their culture, their management structure, their philosophy for building and delivering products...
It comes from, you know, an era where most of what they built and delivered were manufactured products and goods, and in that world it was essential to get things figured out up front. It was essential to get things right up front, because once I built the software (so I'll go back to the example I was using earlier, the engine monitoring), we delivered the software on CD-ROMs. So once I've burned the CD-ROMs and given them to the Navy, and they've put them in seamen's hands on boats or aircraft carriers around the world, it's pretty difficult to go tell the captain of the boat that he needs to update his software, and somehow deliver a new CD-ROM to him. Can't they just fly it out there? That's the point, right? They do, but like what we were always told is, the ship's captain is God, and when he says there's no communication on or off the boat, then that's what happens, right? And so in those early days there was just this huge emphasis on: we had to get everything right up front, because it's too costly later on. It's too expensive to... Iteration was expensive back then, right? Right, so the transaction costs and coordination costs associated with that are just way too high. And so we need to get things figured out. Well, that philosophy, or that fact, sort of permeated all levels of the organization. So they funded projects on an annual basis, and their governance structure and the way they had to secure funding and get approval and all of that stuff, you know, evolved around this sort of stage-gate philosophy, with lots of checks and balances up front before they made substantial investments. But what they were checking wasn't working, tested software like we would today. What they were looking at, a lot of it, was documents and papers and stuff, and they saw those documents as an asset as opposed to a sort of liability.
And so that philosophy, that kind of experience or idea of, I've got to get things figured out up front, they started to apply to software, where, you know, it was easier to change. Now, even though we were delivering stuff on CD-ROM, which was harder to change, that was still easier to change than, say, you know, a physical EPROM or chip that you're burning and actually manufacturing. Right? I mean, that's literally the difference between hardware and software. Software is supposed to be this thing that is often easy to change, that you can make adjustments to. But these companies, these large organizations that had been successful for so many years, had all of their experience and systems and culture and organizational structure, you know, coming from this other era, and that made it very difficult to adapt. And so they were applying the same philosophy to the software. And so oftentimes the philosophy was, well, if we get the plan right up front, we just have to execute the plan, and that equals success. Or if we define all the right requirements up front, then that's going to equal success. Or if we get the design perfect up front, that's going to equal success. Right. And it just never did. Right? It never did. It didn't matter how hard you tried, or what new idea came about for getting the requirements right. And it sort of took things in the wrong direction, in the sense that the more you tried to get it right up front and felt like you were wrong, the more time you tried to spend up front. And that just made you more wrong by the time you got to actually building software, and the longer it took. Right. Yeah. And so: I need to spend more time up front, right?
And it just kept moving further and further away from short, iterative cycles, building small increments, testing and validating those, you know, early and often. And so we just saw that over and over again: organizations struggling to recognize that there was a better way, a different way to approach it. It sort of fundamentally started with the notion that we don't have to get everything figured out up front, because it is easier to change now than it was, and it's software. And so there's not a huge cost, right? To update. Right. You're not remanufacturing. There's no material cost, right. Right. Yeah. Today's episode is sponsored by Linode. It's a new year, but Linode is still providing the same excellent service that they've always provided, and they've come back in 2018 with another four-month credit, a $20 credit, for their Linux in the cloud service. You can instantly deploy and manage an SSD server in the Linode cloud, and you can get a server running in just a few seconds with your choice of Linux distribution, resources, and node location. This is very simple, and it hasn't changed very much, because these same building blocks are kind of fundamental if you're a developer. Linode is providing you with the Linux operating system in the cloud, and that's really kind of a fundamental building block for pretty much every app or web service that you can imagine building. Now, of course, Linode is continuing to innovate within their company and in their service offerings. For example, they have 24/7 friendly support. They have phone support. You can call somebody in the middle of the night, and you're going to get a human on the phone, and they're going to help you with your problem. This is a unique experience to have if you're a developer who's working on an app at 2 a.m. on a Saturday.
Hopefully you're not doing that, but maybe you are, and you need support; well, Linode has your back there. They have VMs for full control. You can run Docker containers, encrypted disks, VPNs. Pretty much anything you can imagine doing on Linux, you can do on Linode, except it's in the cloud. They also have a new beta manager, and this is one of the cool kinds of things that Linode does: they've open sourced it. It's a React app, a single-page app. They built it with React, and it's available on their GitHub page. Linode is basically a company of developers, and they're going to give you resources too. This is such a cool thing. Linode is building a knowledge base, and they'll send you resources. For example, after you've called them on the support line, they'll send you a resource that's relevant to your problem so you can learn more. Not just solve the problem and move on, but solve the problem and then gain some better insight. So Linode is providing you with those four free months for using the code. This is a new code: DEVELOPERTEA2018. Use DEVELOPERTEA2018 at checkout. Head over to spec.fm/linode now to get started. Thank you so much to Linode for continuing to sponsor Developer Tea. Something I talk about on the show a lot is doing small things. This is at the heart of Agile: this small iteration, the very smallest thing that you can do is what you should do next, moving the value forward as much as possible. Some of the reasons for this, maybe not the only ones, are that smaller things are more manageable in your own mind. Basically, what we're doing when we create software is we're bringing an idea into our brains, and then we're doing some sort of operation with that idea in our mind to translate it to a machine and to other humans simultaneously, and fit it into these previous ideas that we have already translated.
That sounds like an ethereal subject, but if you think about the process of writing code, that's exactly what it is. You're taking some idea and you're translating it into code: instructions for a machine that another human can read. This is important because the larger the idea, the more difficult the translation process is going to be, the less likely you are to be able to hold it all in your mind, and the more error-prone you're going to be when trying to translate. A very simple process of iteration helps create manageable chunks. That's the basic concept. The only thing I would add there, the only caveat I would put on it, is that it's not small for the sake of being small. It's about delivering an increment of value. That value could come in the form of the product itself being valuable, or the feature being valuable. Or it could come in the form of learning. This really dovetails nicely with the lean startup philosophy of small experiments, where the throughput is actually learning: learning in small steps. And that learning, that knowledge, is the greatest constraint on throughput. I'll ask people all the time: think about the last software product you worked on. If you were to build that same thing today, knowing everything you know now, and everything else stayed the same, how much less time would it take you? If I take a poll, it's oftentimes between 50 and 60% less time. So knowledge is one of the greatest constraints on throughput. I see some people, when they adopt Agile, just break things down into small chunks, and they break things down in terms of what's convenient for the developer, or what's easy for the developer to do, or maybe even what's easy to test. They lose sight of the fact that we're trying to deliver a small increment of value, or we're trying to expedite the learning process.
So sometimes maybe what we're working on isn't the smallest thing we could work on, but it's the smallest unit that's going to maximize our understanding or learning of the problem we're trying to solve. And in doing so, we're going to increase throughput and reduce cycle time, and so on and so forth. So, you know, lean helps teach us that in a system, a local optimization, optimizing a small individual piece, oftentimes results in a suboptimal whole. And so I don't want to just build these small things and optimize my entire system around building these small individual increments if doing so would reduce my ability to deliver value, or make the entire system less optimal in terms of my ability to ultimately deliver a product, whatever you're delivering or wherever you're delivering to (it could even be to the tester inside an organization). I think it's tricky, right? There's a balance, but you need to understand why you're breaking something down. And what you said is true. It is easier to understand, it's easier to manage in our minds what we're working on, and for somebody to review and provide feedback. All of that is true, but I think you have to frame that with the context that we're delivering value, or that we're optimizing learning, or that we're reducing risk. Those are the things that end up being the greatest constraints. And that's one of the ways that a client will ultimately end up buying into the idea in the first place, right? If you can recognize, okay, the best way to deliver value earliest is to create the smallest version of value with the highest return over time. There is a really interesting thing you said in there that I'd like you to kind of expound on, if you don't mind. Sure.
And I'm just going to take all the knowledge that you can possibly give us, from all the years of experience that you have at SEP: the discussion of hyper-optimization, or local optimization, creating a less than optimal outcome for the whole. Can you discuss why that is and how that actually plays out? Sure. It's a concept that really originated from lean manufacturing. So think about a series of machines on a manufacturing line that's producing a physical product. Let's just say we have five different machines there, each one performing a job and then passing the results on to the next one, and so on and so forth. Taking each of those individual machines and working to optimize each one of them to be as efficient and as fast as possible doesn't often result in the greatest throughput through the system, because one of those machines is slower than the rest. Right. And if that one is ever starved for work, it's going to slow the whole system down. And so I need to understand, and this comes from, again, sort of lean manufacturing, but was originally from the theory of constraints, that a system has a single constraint. And if I don't work to manage that constraint, the whole system is going to suffer as a whole. And so in software development I'll oftentimes see... I'll give you a real example. We were working on a product for a customer that was a lock that would be installed in small office buildings, and the lock was a wireless system. So there was firmware in the actual lock. There were communication protocols; all the locks would talk to each other. There was a sort of central router and an admin interface to the system. So if it was in a school or a building, you could lock down all the doors at once, or you could set different schedules on different doors. So you basically control access to this physical location.
Well, we were building all three of those pieces. We had sort of three individual teams working on the firmware, what we'll just call the router software (the high-level sort of admin piece), and all the communication protocol in between. We were working in an Agile way, and when it came to doing sprint planning and sort of laying it out, each team was picking the pieces that were easiest and made the most sense for them to do. So the firmware team was selecting user stories or tasks that made the most sense for them: they're in this file, so it makes the most sense to do, you know, A and B with C, because they're all in the same location, so I'm going to focus on that. And every team was doing this. So every team was optimizing, looking at the entire pile of work that they had to get done, and they were planning and developing and building it in a way that made the absolute most sense for them. Okay. Now all that's going through development, and things are getting into test. And we use kanban boards and visual task boards. And so all of a sudden you started to see tickets pile up in test, and you're thinking, okay, more and more stuff's getting there, but it's not getting through the system. And again, let's go back to: the greatest constraint to my system's throughput is knowledge. And so what I really want to do is get these things together, working together, so I can go back and show the customer, and we can test and validate this, you know, or work with the real users to make sure it's working. And nothing was getting through the system. We were getting lots of stuff done. I mean, we were flying, all three of those teams individually optimizing. But all of those pieces didn't work together to form a bigger whole unit that we could then test.
So features were in the firmware, but maybe the communication protocol wasn't ready to support them, because it had made sense for that team to build other features. And maybe some of the communication protocol was there and it was in the firmware, but you couldn't actually do anything with it, because the interface to the router and stuff wasn't built yet. Or maybe there were features in the router, but they weren't in the firmware. And so you had all these pieces piling up in test, and nothing was getting through. And delayed work creates additional work. The more things sit there, the more likely defects are going to appear; delays also reduce quality and increase the number of defects. And so you start having quality issues, and you could see the whole system sort of slow down. So what we did is we backed off of that and said, look, we need to get something through test, ultimately into the customer's hands, to look at, to test, and make sure all these pieces are working. And from there we started working backwards. Well, in order to do that, what do I need to test here? I need to test this. Okay, if that's the unit I'm testing, that's the smallest piece of value I can deliver to the customer, what does that mean for each of those teams? What needs to be there from each of them? Well, initially the developers resisted that notion, right? Like, well, you know, test is usually the guy at the end that's kind of the afterthought and just gets whatever is handed to him. But now test was actually starting to drive and influence development. And in some cases, you know, going back to, say, a team that was implementing features A, B, and C, it made sense to do all three of them at the same time because they were all in the same file or the same part of the code base.
But doing B and C was going to take three days, and just doing A was going to take a day. And so now I'm delaying testing in this release by several days or several weeks, and so on and so forth. The developers didn't like the notion, because it started to feel very suboptimal to them. They didn't feel like they were working in a very efficient way, which is something you'll hear in a system: if you're not the constraint in the system, it feels like what you're doing is maybe not real efficient. But what happened was, we were able to double our throughput and reduce our cycle time by half by sort of reorienting around that. So we were basically getting twice as much done, twice as fast as we were before. And the reason that happened was, we started looking at the entire system, the entire value stream, from feature inception all the way through to delivery, and what pieces needed to be done, and we were only working on those pieces. And again, it felt inefficient to some of the developers or some of the development teams, but in the end, what it produced was a much better overall system. Yeah, that's absolutely right. We actually see that a lot, right? A developer wants to do what makes the most sense for them. And it happens, and this is why we use visual task boards and kanban, because developers make decisions, and it's not always clear what the impact of those decisions is. So I'm going to go and do this; it's not going to take me that much longer to do this other thing, maybe just a couple days, big deal. In the whole project, what's a couple days, right? Oh, but those couple days mean this gets delayed by a week. Oh, well, if that's delayed a week, then this gets delayed by a month. And all of these things start to pile up. And when they don't have visibility and understanding of what those individual decisions do, it's hard to make good decisions. Yeah, yeah, absolutely. Sorry, that was a very long answer to that. Not at all.
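The pipeline-of-machines model behind this story can be captured in a few lines: in a serial system, steady-state throughput is set by the slowest stage, so speeding up a non-constraint stage changes nothing, while any improvement at the constraint lifts the whole system. A toy sketch (the rates are made-up numbers, not from the episode):

```python
def pipeline_throughput(rates_per_hour: list[float]) -> float:
    """Steady-state throughput of a serial pipeline: the slowest stage
    (the constraint) sets the pace for the whole system."""
    return min(rates_per_hour)

# Five machines in series; stage 3 (4 items/hour) is the constraint.
baseline = [12, 9, 4, 10, 11]

# Doubling a non-constraint stage is a local optimization: no system-level gain.
faster_stage_one = [24, 9, 4, 10, 11]

# Improving the constraint itself is what raises the whole line's output.
faster_constraint = [12, 9, 8, 10, 11]
```

This is why the individually "flying" teams could pile up finished tickets without anything getting through test: the constraint, integrated and testable increments, hadn't moved, so local speed was invisible in the system's throughput.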
I think that's an excellent answer to what ultimately may not feel like an intuitive problem to people, because of that first version of the discussion, where you're talking about having systems that are, on their own, working quite well. They're well-oiled machines. And we see this a lot in web development when we have siloed guilds of people. So you have people who are really good at front-end web development, and they're in one kind of silo. And then you have people who are really good designers, and they're in one silo, and things come in, and that silo is a really well-oiled machine. They crank through exactly what they're given, and they pass it along. They throw it over the wall. And the problem, and I'm not saying this specifically about the company I work at, but as a general rule in web development, is that the problems end up arising when the design doesn't quite translate very well to the web, or when the front-end developer didn't quite get something from the design correct in the interface. And so there's a lot that we really need to learn from the multi-discipline team, the idea that, hey, I'm going to be able to learn from you better, and you're going to be able to learn from me better. And really, and I say this all the time, we're building one thing together. It feels like in a silo you're building multiple things, but when you actually integrate your teams, you realize, and it's like an epiphany that you'll have, you're really building one thing together. And that's such an important distinction to make as you work. Right, and I have found that we use a lot of visual controls and visualizations in our work to help us see those things, right, because in knowledge work it's very difficult to see inventory piling up.
If we were working on a manufacturing floor, you could see inventory piling up, and you could see, you know, this siloed group of people over here, the UX team, producing lots of assets that are piling up, right? But in a virtual world, a digital world, you don't see that. You don't see code and artifacts piling up. It's not evident walking through an office building. And so we want to make that hidden work, that inventory, that virtual, digital inventory, apparent and easy to see. And so we'll do a lot of stuff to create, you know, visualizations and visual representations of that work so that we can ultimately make better decisions. It's been a jam-packed episode of Developer Tea. Thank you again to Chris for joining me for today's episode. Make sure you subscribe if you don't want to miss out on the second part of this interview. And more importantly, beyond the second part of this interview, we're going to continue releasing three episodes a week. This show is going to continue moving on, and we have tons of incredible content planned for this year. I've already said it many times, but I'm more excited about this year of Developer Tea than I have been about any other year previous. So please, if you find yourself in that group of motivated developers, driven developers, and you want to level up your career, of course, level up your career, but beyond that, kind of start with yourself. Start with the person, rather than just focusing on your career as if it's an object external to you, right? Philosophically speaking here, imagine that your career is a part of you, and you're bettering a part of yourself by leveling up in your career. That's what we want to do with this show.
So if you are ready to accept the challenges and the difficulties of taking on that huge responsibility of crafting your career and finding your purpose and developing your principles and practices, this show is for you. Please subscribe so that you can join us for more episodes. Thanks again to Linode for sponsoring today's episode of Developer Tea. If you want essentially a free $20 bill, that's four months' worth of Linode service: the plan starts at $5 a month, and you get a gigabyte of RAM on that plan. If you want a full four months of free service on Linode, head over to spec.fm/linode and use the code DEVELOPERTEA2018 at checkout. Thanks again, and until next time, enjoy your tea.