Comments by "Daniel Sandberg" (@ddanielsandberg) on "Continuous Delivery" channel.

  4. What you just described is an argument *for* CI. CI's main purpose is to *expose* all these faults (code, technical, culture), and by extension XP's and agile's purpose is to expose all the *systemic* issues in the organization and give everyone a healthy dose of reality. If it's broken all the time, some project manager still thinks "it will be delivered on time, work properly and be of good quality", and developers just keep adding more code to a broken system, then someone is lying to themselves. Version control is not your personal backup system; it's an auditable, versioned, *publication* mechanism. As a developer your job is not to just implement some tasks, push to git when you go home for the day, and treat the rest as "someone else's problem" or "other developers' fault". Either the entire team wins, or the entire team fails.

     1. Merge other people's changes from master into your local clone many times per day, and run the build, unit tests, linting, and functional tests *before* you push to master. That way you know about any conflict before you push. This is the prime directive of CI - check your sh*t before you push. And if it still breaks on the build servers, stop working, drop everything and fix it! Did it break because you made a change or because someone else made a change that conflicted with yours? Doesn't matter; say "huh, that's odd?", then talk to your team members and fix it.
     2. Someone deleted a database record/table/whatever in a test environment and broke something? Then everyone stops, figures out what happened and why, and then *talks* and solves the real problem (people, process and culture), not just the symptoms - instead of complaining "Aaah, it's broken again!!!".
     3. "Algorithm works here but not there"? Not sure how that could happen. But here is a tip: build it **once**, deploy the same code/binary/package in the same way, by the same mechanism, to every environment, every time, and 95% of all the "works here, but not there" problems go away.
     4. Acceptance tests break but you haven't bothered to run them? How do I even respond to that!? "If it hurts, do it more, and bring the pain forward." - Jez and Dave
  6. @appuser Not sure how to answer that directly... I can say this: Pair programming is not - two - people - programming. Pair programming is "two people programming". It's collaborative, not cooperative. It's the difference between having fun with your friends and having fun with your spouse. That doesn't mean everyone spends all their time pairing. It's like a relationship: we spend a lot of time "collaborating", but it's occasionally punctuated with periods of alone time and deep focus. You'll probably end up divorced if you insist that your spouse spend all their time with you, and vice versa. Like everything, it takes time to learn and get comfortable with. If you have never pair programmed and have spent your career "sitting alone in your corner, with your headphones on, typing code", it's not easy, and quite frankly you've been mismanaging your job. Deep collaboration and high-bandwidth social interaction are a very important part of SWE. Unfortunately too many of us thought that working in software would be the same as when we got into it as a hobby. We were either lied to, or lied to ourselves. We are not getting paid to do our hobby of just writing code, and it's easy to become disillusioned when it turns out to be different than we thought. And so we tend to invent all these handoffs and roles where we push the uncomfortable things away... we add layer after layer and build walls and "specialist groups", all so that we won't have to change and do that scary thing we don't really want to do... And that is what I reacted to; I interpreted it as if you wanted to add yet another specialist group to avoid changing/keep the status quo. I may have overreacted... 😕
  13. Open Source is a different context. Linus created Git to slow down changes *because* he's the author of the biggest open source project there is - the Linux kernel. FB/PR was built *for* open source, where you have thousands of unknown contributors all over the world and a few trusted committers. Pretty sure Dave said [paraphrased] "async, pre-merge, blocking review using pull requests in your *team and organization* is where the problem lies." And I don't buy the "introvert" argument for one damn second. There are two sides to that argument:

      1. It's used as an excuse so that developers can sit alone in their corner, with their headphones on, writing code in isolation, getting paid to do their hobby. Probably why FB/PR has become so popular... It avoids interactions. It defers integration.
      2. For 50 years the industry has been built on the notion of "the lonely geek" and the "superhero programmer", attracting a certain kind of personality. If the situation is that "people can't collaborate with other humans", we probably did that to ourselves, and it's time to stop digging the hole deeper.

      Programming is fundamentally a social activity, and too many of us thought that choosing it as a career would mean continuing what we did when we learned it on late evenings in our rooms growing up. It's still a job, and we have to do things that are uncomfortable. We need to become much better at getting out of our comfort zones. That doesn't mean we need to sit in each other's laps all the time. Most of the time we should pair and collaborate in small groups, punctuated with moments of deep focus work alone or in a pair. It's not all or nothing. PS: I'm an outgoing introvert.
  20. @OzoneGrif Why is it unacceptable? I don't mean to sound antagonistic, but you need to think outside the box of release trains, Gantt charts, milestone-driven release schedules, and manual test-and-stabilization phases. First of all, all code shall be tested whether it is "feature complete" or not, and then you have an evolution (roughly sketched in code below):

      1. Early in development the features may be excluded/included from the build by using build profiles. This allows local development and testing as the team fleshes out the basics of the feature.
      2. Later on features may be included but disabled by default, and enabled by a deployment flag in different environments. Great for feedback and early testing. See the "branch-by-abstraction" pattern for one example of implementation details.
      3. Once we reach the "this is a nice feature, polish it" stage we may choose to convert the flag to a runtime property or keep it as a deployment flag. Context and technology matter.
      4. When we are finally ready to release the feature to the world we can turn it into a user preference (opt-in/beta users for the new feature), or even use geo-location or target groups to decide who will see it.
      5. This allows the business to decide whether the feature shall be available to all users, in which case we can remove all the flags and gunk around it. Or we may decide that it's an "enterprise feature" and customers have to pay for it to be enabled.

      The point is - you can have a lot of unfinished changes in flight, in a single repo/branch, and still always be able to deploy a hotfix. This is kind of how Facebook implemented their chat system. It was running (driven by bots in your browser) in production, in secret, behind users' timelines, and only Facebook employees could actually use it. It had been running in production for over six months before normal users could access it, and then it was rolled out to users incrementally - without needing to build/deploy/release a special version.
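      Not a prescription, just a minimal Kotlin sketch of the evolution above, with hypothetical names (User, FeatureFlags, "new-checkout"): the flag starts as a deployment-time switch and later moves behind an abstraction so it can become a runtime property, a beta opt-in or a geo-targeted rollout without changing any call sites.

      ```kotlin
      // Stage 2: the feature is compiled in but disabled by default, switched on per
      // environment through a deployment flag (an environment variable here).
      val newCheckoutEnabledByDeployment: Boolean =
          System.getenv("FEATURE_NEW_CHECKOUT")?.toBoolean() ?: false

      // Stages 3-5: the same decision moves behind an abstraction so its source can
      // change (runtime property, beta opt-in, geo-targeting, paid "enterprise"
      // entitlement) without touching any call sites.
      data class User(val id: String, val betaOptIn: Boolean, val country: String)

      interface FeatureFlags {
          fun isEnabled(feature: String, user: User): Boolean
      }

      class SimpleFlags : FeatureFlags {
          override fun isEnabled(feature: String, user: User): Boolean = when (feature) {
              // Beta users first, then a geo-targeted rollout, eventually everyone.
              "new-checkout" -> user.betaOptIn || user.country == "SE"
              else -> false
          }
      }

      // The call site is identical at every stage; only where the answer comes from changes.
      fun checkout(flags: FeatureFlags, user: User) {
          if (flags.isEnabled("new-checkout", user)) {
              println("new checkout flow for ${user.id}")   // deployed, but released only to some users
          } else {
              println("old checkout flow for ${user.id}")
          }
      }

      fun main() {
          val user = User(id = "42", betaOptIn = true, country = "SE")
          println("deployment flag: $newCheckoutEnabledByDeployment")
          checkout(SimpleFlags(), user)
      }
      ```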
  25. I have no answer shorter than an entire book on this... The problem is that if team members are working on multiple projects at the same time, you don't actually have a team. That's like having a football team playing 5 different matches at the same time. This is a symptom of something being wrong with how things are prioritized and/or how the organization is structured...

      The job of a senior/lead is not to gatekeep, to check everyone's work and approve of it. A senior can't behave like a curling/helicopter parent and protect juniors from themselves all the time. That tends to hold back people's learning and growth, and also teaches people not to trust themselves. You need to find a way of working where everyone can contribute, learn and make mistakes, without a senior handholding them or taking the company down in the process. I don't care what that is, as long as it's a minimal measure and not someone gatekeeping everything (the seniors might as well do all the work by themselves in that case). You should be able to find a way of working where, in the end, "everyone" would make similar (or better) decisions, even when the seniors aren't in the room. After all, the job of a senior engineer is to create more senior engineers.

      There is no "do this" answer except "get good at pair and group programming" and build "defenses" into the system of work so that the blast radius of mistakes is as small as possible. Dave has another video here, "A Guide To Managing Technical Teams", and Dan North has a video from the GOTO conference called "Beyond Developer"; check them out and show them to your team next time you have a "tech friday" get-together.
  37. Wall of text, partially off-topic warning!

      **Let's take this from the start - releases.** In the olden days we used to have something called RTM (release to manufacturing) - basically meaning (after months of testing and polishing) sending the software to a factory that put it on floppies, CD-ROMs, etc. and shipped it out to stores in pretty boxes. There was a really high cost of delivering and patching software back then. Then came tools like Maven that took the old concept of "cutting a release" and made it mean "make a final (re)build, stamp a version number and publish it for deployment". Later CI/CD/DevOps came into the picture and things started to change; we came up with terms like "separate deployment from release". What is meant by that is that deployment comes before release. Enabling new functionality or changing old functionality becomes some kind of configuration change instead of an "all hands on deck, deploy at 3 AM" thing. This also enables A/B testing, user opt-in and deploying many, many small changes all the time - thus reducing the blast radius if something goes wrong, as well as knowing that every change actually works in real life. This doesn't mean that every change gets a deployment; it's just that deployments become a choice driven by need instead of a planned date/deadline.

      **How does it (ideally) work with CI/CD?** With CI/CD we instead try to subscribe to the (impossible) ideal of Toyota's "one-piece flow". Instead of keeping track of the state of the software by using branches/merges (with all the management overhead) and thinking in terms of "master represents production" or "let's build a release and a release branch" (a bit like the previously mentioned "cutting a release") - we commit to main, we build it **once**, we record the version/tag/commit/build number/digest of that build, we test and validate it, and then pass that immutable piece from environment to environment, from validation to deployment and finally release. It's a survival-of-the-fittest scenario where we try to prove that a build does *not* meet the criteria for deployment. (A rough sketch of that idea in code follows below.)

      **Extrapolating and deeper context.** As with everything there are contexts and compromises. But if we default to using branches for change control, separate departments for frontend, backend and testing, and huge ops teams getting software thrown over the wall from developers ("we made a release, your turn") because it feels easier and it worked 20 years ago, we are not making any progress or improvements. According to some of the "old IT/tech people" the number of programmers doubles every 5 years. About 10 years ago everyone went nuts with FB/PR/GitFlow. Doubling twice in 10 years means roughly three out of four programmers started within the last 10 years, so we can extrapolate that 75% of all programmers have never done anything but FB/PR/GitFlow, and so have no idea about Continuous Integration. I'm really passionate about this because I see the same thing repeating in every org I've been in. As long as our thinking doesn't change, it doesn't matter how much technology, tooling, languages, frameworks and processes we throw at software development - nothing changes.
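      To make the "build it once, record it, promote the same immutable thing" idea concrete, here is a tiny Kotlin sketch; the names (BuildRecord, promote) and the digest value are made up, and in real life this record usually lives in your artifact repository/registry and pipeline metadata rather than in application code.

      ```kotlin
      import java.time.Instant

      // The artifact is built exactly once; its identity is recorded and never changes.
      data class BuildRecord(
          val version: String,   // e.g. "1.42.0+build.137"
          val commit: String,    // the commit on main the artifact was built from
          val digest: String,    // content digest of the binary/package/image
          val builtAt: Instant
      )

      enum class Environment { TEST, STAGING, PRODUCTION }

      // Promotion points the next environment at the already-validated digest;
      // there is no rebuild, so what was tested is exactly what gets released.
      fun promote(record: BuildRecord, target: Environment) {
          println("deploying ${record.version} (${record.digest}) to $target")
      }

      fun main() {
          // Built once on the commit build; from here on it is immutable.
          val build = BuildRecord("1.42.0+build.137", "abc1234", "sha256:9f2e...", Instant.now())

          // Survival of the fittest: each environment tries to prove the build is
          // NOT fit for release; only survivors move on to the next one.
          for (env in Environment.values()) {
              promote(build, env)
              // run that environment's tests here; stop the line if anything fails
          }
      }
      ```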
  40. This assumes that the only work happening is the feature, and that we would know where conflicts are likely to happen. The thing about TBD and CI is that they really have nothing to do with "optimal isolation of features and workflow for individual developers to avoid conflicts". It is the opposite. There is this thing/strategy called "artificial constraints", which basically sets out guardrails and signs to push people's behavior in a certain direction. This can be used in a limiting sense (like feature branching, which generally arises from some kind of culture/trust issue and a need for the illusion of control), but it can also be used to change behavior and level up the entire organisation (like when your personal trainer pushes you and says "only 3 more minutes to go", even though "it was only 3 more minutes - 15 minutes ago!!!").

      Imagine the following scenario: You are working on some feature in your branch, you get a bit stuck, it won't work and you can't figure out why. You start to follow the code path, you tear your hair out, and after two hours you figure out that there is a function elsewhere that your code is calling, and your change causes this function to be called twice. It is named getTicket(username) and it obviously gives you a ticket, right, RIGHT? What you didn't know is that this function actually creates a new ticket every time, and that made you waste 2 hours. Now the question becomes - what do you do? Do you just fix your feature code based on this new understanding, commit, create a PR, move on to the next feature? What if you instead fixed the name of the function to its proper name, createNewTicket(), and also moved it to a different file/class/module because you realized it was in the wrong place? Or do you think "not in scope, feature first, don't touch it, too scary"? Think about it: you just spent 2 hours staring at this thing before you figured out that you were misled by the function name. You could fix it now, while the understanding is fresh in your head, and save everyone 2 hours in the future, every time they use the function. (A small sketch of that rename follows below.) What if everyone did that all the time; made some little change that improved the life of everyone, forever? How much better would everyone and everything have become after a year, after three years?

      Now for the kicker: if there is a whole bunch of feature branches and pull requests waiting to get merged and people are doing this opportunistic refactoring that continuously improves the code base for everyone, how long does it take before people start getting angry due to merge conflicts, and then the culture becomes "Don't refactor! Don't improve!"? Feature branching, when used dogmatically (often due to culture and trust issues), without understanding the long-term impact and taking a more holistic, systemic view on quality and culture, will actually inhibit *learning*. There is a difference between learning and being taught. And that is the nuance that is missed when all everyone is talking about is "optimal isolation of features and workflow for individual developers".
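      For concreteness, a small Kotlin sketch of the getTicket/createNewTicket situation above (the Ticket class and repository are invented for illustration): the behavior is untouched, only the lie in the name is fixed.

      ```kotlin
      data class Ticket(val id: Long, val owner: String)

      class TicketRepository {
          private var nextId = 1L
          val all = mutableListOf<Ticket>()
          fun newId(): Long = nextId++
          fun save(ticket: Ticket) { all += ticket }
      }

      class TicketService(private val repository: TicketRepository) {

          // Before: the name lies. Callers read "getTicket" as a lookup, but every
          // call persists a brand-new ticket - hence the two wasted hours above.
          fun getTicket(username: String): Ticket {
              val ticket = Ticket(repository.newId(), username)
              repository.save(ticket)
              return ticket
          }

          // After the opportunistic refactoring: same behavior, honest name. The
          // next reader of a call site no longer has to trace the implementation.
          fun createNewTicket(username: String): Ticket {
              val ticket = Ticket(repository.newId(), username)
              repository.save(ticket)
              return ticket
          }
      }
      ```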
  43. A pipeline does not replace the steps and tasks in a build script (npm, gulp, gradle, maven, etc). This is a common mistake I see, where teams replace everything they had in their scripts with a pipeline YAML that can only run on some build management system. When practicing CI you should be able to run the majority of the build steps, including unit tests and some component and functional testing, locally. Whole-system acceptance tests are difficult and often too slow to run locally. It also serves as feedback: if it is difficult to run, build, test or lint, you probably need to look into the structure and design of both tests and code. If it is very hard or slow to build/test locally, you may need to look into the technology choices made (like frameworks, etc). It is also an optimization: you should be able to compile, package and run 10000s of tests locally in minutes - enough to give the developers a high degree of confidence that it will work before pushing. (A small sketch of keeping the steps in the build script rather than the pipeline follows below.)

      The build management system will of course run the commit build as well, plus a bunch of slower, wider system tests, performance tests, etc. This also has the added benefit that if GitHub is down (like it is now as I write this) you can have a "break glass" process and build an emergency patch on a (monitored) local computer and push that to production. While the build system runs the commit build you are not allowed to walk away, go to lunch, or go home. If it breaks you have 10 minutes to fix it or revert the change. If it's still broken the team swarms around the problem and gets it back to green.
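      A minimal sketch of what that separation can look like in a Gradle Kotlin build file, assuming a Kotlin/JVM project (the localCheck task name is made up): the verification steps live in the build script, so a developer runs them locally with `./gradlew localCheck`, and the CI pipeline just invokes the same entry point instead of re-implementing the steps in pipeline YAML.

      ```kotlin
      // build.gradle.kts
      plugins {
          kotlin("jvm") version "1.9.24"
      }

      repositories {
          mavenCentral()
      }

      dependencies {
          testImplementation(kotlin("test"))
      }

      tasks.test {
          useJUnitPlatform()
      }

      // One aggregate task: unit tests now, plus whatever lint/component-test tasks
      // the project adds later. Developers run it before every push; CI runs the
      // exact same task on every commit.
      tasks.register("localCheck") {
          dependsOn("test")
      }
      ```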