Comments by "Daniel Sandberg" (@ddanielsandberg) on "Git Flow Is A Bad Idea" video.

  7. @OzoneGrif Why is it unacceptable? I don't mean to sound antagonistic, but you need to think outside the box of release trains, Gantt charts, milestone-driven release schedules, and manual test-and-stabilization phases. First of all, all code should be tested whether it is "feature complete" or not, and then you have an evolution:
     1. Early in development the feature may be excluded from or included in the build by using build profiles. This allows local development and testing while the team fleshes out the basics of the feature.
     2. Later on the feature may be included but disabled by default, and enabled by a deployment flag in different environments. Great for feedback and early testing. See the "branch by abstraction" pattern for one example of the implementation details.
     3. Once we reach the "this is a nice feature, polish it" stage we may choose to convert the flag to a runtime property or keep it as a deployment flag. Context and technology matter.
     4. When we are finally ready to release the feature to the world we can turn it into a user preference (opt-in/beta users), or even target it by geo-location or specific user groups.
     5. The business can then decide whether the feature should be available to all users, and we remove all the flags and gunk around it. Or we may decide that it's an "enterprise feature" and customers have to pay for it to be enabled.
     The point is: you can have a lot of unfinished changes in flight, in a single repo/branch, and still always be able to deploy a hotfix (a small sketch of such a flag check follows below).
     This is roughly how Facebook implemented their chat system. It was running (driven by bots in your browser) in production, in secret, behind users' timelines, and only Facebook employees could actually use it. It had been running in production for over six months before normal users could access it, and then it was rolled out incrementally to users - without needing to build/deploy/release a special version.
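     A minimal sketch (in TypeScript) of what stages 2-4 above could look like as a single flag check. All of the names here (FlagConfig, isFeatureEnabled, the fields) are made up for illustration; they are not from the video or this comment.

     ```typescript
     // Hypothetical flag that evolves from deployment flag to runtime property to user opt-in.
     interface User {
       id: string;
       optIns: Set<string>;   // features the user has opted into (beta users)
       country: string;       // used for geo-targeted rollout
     }

     interface FlagConfig {
       enabledByDeployment: boolean;   // stage 2: set per environment at deploy time
       enabledAtRuntime?: boolean;     // stage 3: runtime property, overrides the deployment flag
       optInOnly: boolean;             // stage 4: only users who opted in see the feature
       allowedCountries?: string[];    // stage 4: optional geo-targeting
     }

     function isFeatureEnabled(flag: FlagConfig, user: User, featureKey: string): boolean {
       const enabled = flag.enabledAtRuntime ?? flag.enabledByDeployment;
       if (!enabled) return false;
       if (flag.allowedCountries && !flag.allowedCountries.includes(user.country)) return false;
       if (flag.optInOnly && !user.optIns.has(featureKey)) return false;
       return true; // stage 5: once this always returns true, delete the flag and the gunk around it
     }
     ```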
  11. Wall of text, partially off-topic warning! Let's take this from the start.
      Releases: In the olden days we had something called RTM (release to manufacturing) - basically meaning (after months of testing and polishing) sending the software to a factory that put it on floppies, CD-ROMs, etc. and shipped it out to stores in pretty boxes. There was a really high cost of delivering and patching software back then. Then came tools like Maven that took the old concept of "cutting a release" and made it mean "make a final (re)build, stamp a version number and publish it for deployment".
      Later CI/CD/DevOps came into the picture and things started to change; we came up with terms like "separate deployment from release". What is meant by that is that deployment comes before release. Enabling new functionality or changing old functionality becomes a configuration change instead of an "all hands on deck at 3 AM" deployment. This also enables A/B testing, user opt-in and deploying many, many small changes all the time - reducing the blast radius if something goes wrong, and proving that every change actually works in real life. This doesn't mean that every change gets its own deployment; it's just that deployments become a choice driven by need instead of a planned date/deadline.
      How does it (ideally) work with CI/CD? With CI/CD we instead try to subscribe to the (impossible) ideal of Toyota's "one-piece flow". Instead of keeping track of the state of the software by using branches/merges (with all the management overhead), thinking in terms of "master represents production" or "let's build a release and a release branch" (a bit like the previously mentioned "cutting a release"), we commit to main, we build it once, we record the version/tag/commit/build number/digest of that build, we test and validate it, and then pass that immutable piece from environment to environment, from validation to deployment and finally release. It's a survival-of-the-fittest scenario: we try to prove that a build does not meet the criteria for deployment.
      Extrapolating and deeper context: As with everything there are contexts and compromises. But if we default to using branches for change control, separate departments for frontend, backend and testing, and huge ops teams getting software thrown over the wall from developers ("we made a release, your turn") because it feels easier and it worked 20 years ago, we are not making any progress or improvements. According to some of the "old IT/tech people" the number of programmers doubles every 5 years. About 10 years ago everyone went nuts with FB/PR/GitFlow, so we can extrapolate that 75% of all programmers have never done anything else and have no idea about Continuous Integration.
      I'm really passionate about this because I see the same thing repeating in every org I've been in. As long as our thinking doesn't change, it doesn't matter how much technology, tooling, languages, frameworks and processes we throw at software development - nothing changes.
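      A toy sketch (TypeScript, purely illustrative) of the "build it once, record the digest, pass that immutable piece from environment to environment" flow described above. None of the function or stage names refer to a real tool; they are stand-ins.

      ```typescript
      // Toy illustration of "build once, then promote the same immutable artifact".
      interface Build {
        commit: string;   // the commit on main that triggered the build
        digest: string;   // immutable identifier of the produced artifact
      }

      async function buildOnce(commit: string): Promise<Build> {
        // Stand-in for the real commit build; in reality this produces and publishes the artifact.
        return { commit, digest: `sha256:${commit}` };
      }

      async function validate(stage: string, build: Build): Promise<boolean> {
        console.log(`validating ${build.digest} for ${stage}`);
        return true; // stand-in for unit, acceptance, performance tests, etc.
      }

      async function deploy(stage: string, build: Build): Promise<void> {
        console.log(`deploying ${build.digest} to ${stage}`);
      }

      async function pipeline(commit: string): Promise<void> {
        const build = await buildOnce(commit); // one build, one digest, never rebuilt
        for (const stage of ["commit-tests", "acceptance", "staging", "production"]) {
          // Survival of the fittest: any stage may prove the build unfit and stop the flow.
          if (!(await validate(stage, build))) return;
          await deploy(stage, build);
        }
      }
      ```

      The release itself then becomes a configuration change (a flag flip) rather than the deployment step at the end of this loop.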
  12. A pipeline does not replace the steps and tasks in a build script (npm, gulp, Gradle, Maven, etc). This is a common mistake I see, where teams replace everything they had in their scripts with a pipeline YAML that can only run on some build management system. When practicing CI you should be able to run the majority of the build steps locally, including unit tests and some component and functional tests. Whole-system acceptance tests are difficult and often too slow to run locally.
      It also serves as feedback: if it is difficult to run, build, test or lint, you probably need to look into the structure and design of both the tests and the code. If it is very hard or slow to build/test locally you may need to look into the technology choices made (frameworks, etc). It is an optimization exercise: you should be able to compile, package and run 10000s of tests locally in minutes - enough to give the developers a high degree of confidence that it will work before pushing. The build management system will of course run the commit build as well, plus a bunch of slower, wider system tests, performance tests, etc. (A small sketch of keeping the build steps in a script that both developers and the pipeline call follows below.)
      This also has the added benefit that if GitHub is down (like it is now as I write this) you can have a "break glass" process and build an emergency patch on a (monitored) local computer and push that to production.
      While the build system runs the commit build you are not allowed to walk away, go to lunch, or go home. If it breaks you have 10 minutes to fix it or revert the change. If it's still broken the team swarms around the problem and gets it back to green.
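      As a sketch of the point above: keep the steps in a script that developers run locally and that the pipeline merely calls. The file name, npm script names and stage split below are assumptions for illustration, not from the comment.

      ```typescript
      // build.ts - build steps live here, not in pipeline YAML. Run locally with
      // "npx tsx build.ts" or on the build server with "npx tsx build.ts ci".
      import { execSync } from "node:child_process";

      const stages: Record<string, string[]> = {
        // Fast feedback every developer runs before pushing.
        local: ["npm run lint", "npm run test:unit", "npm run test:component"],
        // The commit build runs the same steps plus the slow, wide ones.
        ci: ["npm run lint", "npm run test:unit", "npm run test:component", "npm run test:acceptance"],
      };

      const profile = process.argv[2] ?? "local";
      for (const command of stages[profile] ?? stages.local) {
        console.log(`> ${command}`);
        execSync(command, { stdio: "inherit" }); // non-zero exit from any step fails the build
      }
      ```

      The pipeline definition then shrinks to a single call to this script, which is also what makes a "break glass" build on a local machine possible when GitHub or the build server is down.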
  15. I assume that you mean working from a pre-defined release plan, with Gantt-chart-like planning where "next month it's A and B, the month after that it's C and D, and by summer it will be E and F"? In CI/TBD/CD we do things differently. We associate the readiness of a release with a version instead of with where it lives in version control. We also try to separate "code fixes, improvements, etc" from "feature releases", meaning that we can get any change out at any time, in hours, regardless of plan, without hotfix branches and special processes. We then promote versions between environments. Having separate testing teams, hand-offs, approvals, etc. is pure death to CI/CD.
      Even if we plan to release features at specific intervals we can handle that using feature flags at multiple levels. In CI/TBD/CD we don't treat a release as a deployment in the same way. Instead deployments come *before* release, meaning that unfinished features are deployed, but disabled/hidden until ready.
      This is how Facebook released their chat way back in pre-historic times. They had "JavaScript bots" running behind the scenes on a subset of users' timelines; testing, trying, measuring and logging stability and security. Later on it was rolled out to a subset of users for trial, that subset grew, and after a few iterations (and performance fixes) it was rolled out to 1 billion users. Note that they did not do an "all hands on deck, deploy the universe and hope it can handle 1 billion users" on release day - it had already been running in production for over 6 months.
  16. It's more complicated than that. Humans tend to gravitate towards easy solutions to complex problems, while the best solutions tend to be counter-intuitive. That "lure of easy" often leads to local optimization, isolation and tribalism. Sometimes it goes so far that the system becomes optimized for developers sitting alone in a corner, with headphones, typing code and occasionally popping their heads up exclaiming "I'm done, sent it to QA" and then going back to typing. Meanwhile managers start measuring "performance" by the number of tickets closed, PRs merged, bugs found, story points delivered, etc. This is a death spiral where the best, most senior people leave; to compensate, management hires more people, and the only way to keep any order or structure is to impose even more rules, processes and managers.
      Because so many of us grew up with programming as a hobby, often alone, late at night, we also never learned how to collaborate. We mistake "cooperating by divide and conquer" for collaboration, and when we end up in an organization where we have to interact with other fickle people all the time we build walls (branches, functional teams, layers of management, rules and processes) to protect ourselves from those interactions. It is similar to the reason so many developers are "afraid" of TDD, pair programming, etc. We have to *change and unlearn* old behaviours and accept that we will be crap and slow at the new thing for a while, and that is hard to get past.
      My point is that XP/TBD/CI/CD is indeed hard. But instead of making the case for isolation by branch-driven development, PR/Jira-driven communication, etc. as the norm, what if people actually learned code hygiene, single-branch development, TDD, pair programming, etc., got good at them first, and then turned to branches and more complex processes as the exception, when needed, based on context, instead of as the default strategy "because it's easier"?
      We also have to take into account that we have 2-3 generations of programmers that have grown up with GitHub, FB/PR, the lone programmer, etc. and have never seen or tried TBD/CI. For them it has always been that way: branches have always represented environments and development has always been individual programmers communicating through tickets. Fear, cognitive biases and personal incredulity make these opposing practices feel offensive, wrong and unprofessional.
  24. @primevalpursuits Ok, I understand. In the olden days we had something called RTM (release to manufacturing) - basically meaning (after months of testing and polishing) sending the software to a factory that put it on floppies, CD-ROMs, etc. and shipped it out to stores in pretty boxes. There was a really high cost of delivering and patching software back then. Then came tools like Maven that took the old concept of "cutting a release" and made it mean "make a final (re)build, stamp a version number and publish it for deployment". This clashes with the notion of a release in DevOps circles, in which a release comes after a deployment.
      What we usually try to do in CI/CD is to stamp *every build* (deployable artifact) that comes out of the commit build (on push to main) with a unique version number (build number, etc.) and make it immutable. We then move that reference through the pipeline. Preferably we should treat deployment scripts and application configuration the same way if possible. There is an article on Hackernoon, "A Guide to Git with Trunk Based Development", that has some nice solutions. It does require us to invent some kind of manifest and control repo, and tooling to handle it (a tiny sketch of what such a manifest could look like follows below). If you're using Kubernetes then Flux CD is basically using the same solution, but based on some kind of GitOps branching pattern. Now, I'm not using K8s (we're using Fargate and Terraform + GitHub Actions) so I can't really speak for Flux.
      I recommend reading "Investments Unlimited: A novel about DevOps..." - it's written in the same form as The Goal and The Phoenix Project, but is based around a financial institution that gets a year to fix things before the government comes down with the hammer. Now, they chose FB/PR (which I dislike), but many of the other solutions they bring up are great.
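      A tiny sketch of what such a manifest in a control repo could look like, in TypeScript. This is my own illustration of the general idea, not the format used by Flux CD or the Hackernoon article; every name and value is hypothetical.

      ```typescript
      // One record per environment: which immutable artifact (and config version) is deployed there.
      interface EnvironmentManifest {
        environment: "test" | "staging" | "production";
        artifact: string;       // immutable reference, e.g. an image digest or build number
        configVersion: string;  // deployment scripts/config are versioned and promoted the same way
      }

      // Promotion copies the references from one environment to the next; nothing is rebuilt.
      // Tooling commits the updated manifest to the control repo and a deploy job converges on it.
      function promote(from: EnvironmentManifest, to: EnvironmentManifest): EnvironmentManifest {
        return { ...to, artifact: from.artifact, configVersion: from.configVersion };
      }

      const staging: EnvironmentManifest = {
        environment: "staging",
        artifact: "registry.example.com/app@sha256:3f2a",  // hypothetical digest
        configVersion: "142",
      };
      const production: EnvironmentManifest = {
        environment: "production",
        artifact: "registry.example.com/app@sha256:9c1b",
        configVersion: "139",
      };

      console.log(promote(staging, production)); // production now points at the staging build
      ```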
  29. By "Never" he means - don't branch for development purposes. There are still business-cases where we for example might want to support multiple versions of the software (for example if it is customer installed applications), but at the speed software moves/changes today we might just as well create a fork of the application and manually backport bugfixes. There is a reason that companies charges a lot of money for subscription licenses - maintaining old versions is expensive. To answer your question: Branch by abstraction and a feature toggle. That is; create a "variation" of the pages/auth/config/whatever and have a flag that can be turned on/off if the new auth system should be used or not. This way we can turn on the new auth in some test environments but have it off in production and we can test and perform audit/penetration tests until happy. We can even just have the auth flag turned on a subset of pages to begin with and expand from there. Later on we can remove the flag if we want so that the app/site always requires auth. In CI/TBD/CD we don't equate a release with merge and deployment in the traditional sense. Deployments comes *before* release, meaning that unfinished features are deployed, but disabled/hidden until ready. This is how Facebook released their chat way back in pre-historic times. They had "javascript bots" running behind the scenes on a subset of users timelines; testing, measuring and logging stability and security. Later on it was rolled out to a subset of users for trial, that subset then grew and after a few iterations (and performance fixes) it was rolled out to 1 billion users. Note that they did not do a "all hands on deck and merge and deploy the universe and hope it can securely handle 1 billion users" on release day - it had already been running in production for over 6 months.
  33. Copy-pasting my usual long rant regarding the belief that "we must have reviews to stop bad code from other bad programmers getting into the code base".
      I think there is this idea that we programmers only get one chance to implement something: everything is a one-off task/project and then we move on to the next thing. Nothing is ever perfect; learn to accept that and then do something about it. Life is messy, sh*t happens, deal with it and move on. Committing imperfect code, imperfect design and janky solutions (that nevertheless work) is actually OK. What it requires is that people actually learn to refactor, redesign, and do continuous improvement *all* the time. It's never perfect and it's never done, so get into the mindset of always fixing and improving things that look "off". It is a good idea for the team to have recurring reviews of the current state of the software and to fix issues as they are found. Keeping the code clean and working is more important than adding more features, or following a project plan or rules put down by people not doing the job, separated from it by 5 levels of management.
      The second problem is that so many programmers appear to believe that programming is a solitary activity. That is wrong; it's a social activity. In a company setting I see FB/PR as a symptom of missing teamwork and/or bad organizational leadership. If every developer wants to work alone in their corner with headphones, only intermittently "communicating" and "integrating" through PRs while "doing their own stuff", you do not have a team. Don't you talk? And yes, we love to use dumb excuses like "but I'm an introvert", which is just BS to avoid having to talk to other people while getting paid to do our hobby.