Comments by "Daniel Sandberg" (@ddanielsandberg) on "Continuous Integration vs Feature Branch Workflow" video.
-
Do you mean "trying TBD" the same way organizations "try Scrum" - replacing requirements with user stories, milestones with sprints, deadlines with release trains, still assigning devs by "percentage utilization" and doing annual budgets, writing 500 pages of "design" for a year before starting, developers communicating through Jira tickets instead of talking, and so on - and nothing really changes, culturally or behaviorally?
You can't just decide to "go CI/TBD" unless you have the practices and tools that make it possible: pairing, TDD, a good, fast build system, the ability to make changes in many small steps without ripping 23 modules apart for 3 days, and a culture that supports it.
CI/TBD is a skill, and just like everything else it takes time to get there. I would suggest starting by putting a few things in place:
1. Make sure there are high-level tests covering the most important happy paths (a minimal sketch of such a test follows after this list).
2. Make these tests fast and reliable, and make them/the system runnable on developers' machines.
3. No change gets committed without some kind of test covering the code being changed.
4. No feature branch lasts for more than 1 day.
5. Get a build system that tests main at every merge/commit.
6. If the build turns red - you have 10 minutes to fix it, or revert it.
7. If you can't get the build back to green, EVERYBODY in the team stops what they are doing and does whatever they can to get it back to green. There is no situation where a failing build is OK to just leave while you keep doing other work (which usually happens with branches - habits matter).
Getting these things in place, and making them a habit, will go a long way.
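To make point 1 concrete, here is a minimal sketch of such a high-level happy-path test. It is only an illustration, not something from the video or this thread: the create_app factory, the /orders API and the in-process test client are assumptions you would swap for your own system's entry points.

# test_happy_path.py - hypothetical example; all names are assumptions.
# A high-level test that drives the system through its public API and is
# fast enough to run on every developer's machine before pushing.
from myshop.app import create_app  # assumed application factory

def test_customer_can_place_and_fetch_an_order():
    app = create_app(config="in-memory")   # no external services, keeps it fast and reliable
    client = app.test_client()

    # Place an order through the public API.
    response = client.post("/orders", json={"sku": "ABC-123", "quantity": 2})
    assert response.status_code == 201
    order_id = response.get_json()["id"]

    # Read it back - the most important happy path works end to end.
    response = client.get(f"/orders/{order_id}")
    assert response.status_code == 200
    assert response.get_json()["sku"] == "ABC-123"

A handful of tests like this, runnable locally and on the build server, is what makes rules 5-7 above enforceable.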
Next we turn the heat up:
8. Pair program as much as possible. Not every change requires a review (and some changes can be reviewed after the fact - see "Ship / Show / Ask" on martinfowler.com).
9. Try to slice changes into small parts that take a couple of hours or less.
10. No feature branch lasts longer than a couple of hours.
11. TDD and refactor mercilessly. Make changes easy (a small test-first sketch follows after this list).
12. Make it hot: any change that has made it into main may be deployed 30 minutes after you commit, while you're at lunch. Now code hygiene, care and practices really start to matter.
13. Start doing some changes without a feature branch/PR. Evaluate, practice, habit.
14. Remove organizational/cultural issues: the entire team succeeds or fails together. As long as everything is driven by "individual tasks" (and associated rewards), stupid "sprint commitments", and a "done" that means "I coded it, it's done, right? Then a separate QA department tries to inspect quality in after the fact, and it's someone else's problem after that" - nothing you do will matter. Culture eats strategy and intentions for breakfast.
15. Expect things to go wrong, learn to deal with it and improve.
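As a tiny illustration of point 11, a test-first step might look like the sketch below. The split_name function and its module are made up for the example, not something from the video; the point is only the order of work - failing test first, smallest implementation second, refactor third.

# test_names.py - hypothetical TDD example; write these tests first and watch them fail.
from names import split_name  # this module doesn't exist yet when the tests are written

def test_splits_a_full_name_into_first_and_last():
    assert split_name("Ada Lovelace") == ("Ada", "Lovelace")

def test_keeps_multi_part_last_names_together():
    assert split_name("Ludwig van Beethoven") == ("Ludwig", "van Beethoven")

# names.py - the smallest implementation that makes the tests pass; refactor mercilessly afterwards.
def split_name(full_name: str) -> tuple[str, str]:
    first, _, rest = full_name.partition(" ")
    return first, rest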
It takes years for a team to get good at CI/TBD. It takes even more years for an organization to get good at CD.
-
You state that CI is based on knowing the right solution the first time. It's the opposite!
CI and most "agile" practices recognize that we can't know the correct solution or the right implementation, or even properly understand the problem to begin with, and they optimize for being able to turn around and adapt over time.
FB/PR, on the other hand, optimizes for doing it once: one chance, most of the time with an "all or nothing" mentality. Get it right the first time by working on it and polishing it for a while, merge it, then jump to the next feature and don't look back. With FB/PR the granularity of a commit/push is "a task".
With CI the granularity of a commit/push is "a small change that works and takes us closer to the goal, but sometimes we have to turn around and change it, even when it has reached production" (it still works, it's not broken sh*t code, it's just not quite right yet).
This way of working with CI practices scares people who have never tried it and who are often measured and rewarded on some kind of productivity metric, or who risk ending up on some kind of PIP.
-
What you just described is an argument *for* CI. CI's main purpose is to *expose* all these faults (code, technical, culture), and by extension XP's and agile's purpose is to expose all the *systemic* issues in the organization and give everyone a healthy dose of reality. If it's broken all the time and some project manager still thinks "it will be delivered on time, work properly and be of good quality", while developers just keep adding more code to a broken system, someone is lying to themselves.
Version control is not your personal backup system, it's an auditable, versioned, *publication* mechanism. As a developer your job is not to just implement some tasks, push to git when you go home for the day, and treat the rest as "someone else's problem" or "other developers' fault". Either the entire team wins, or the entire team fails.
1. You make sure to merge other people's changes from master into your local clone many times per day, and run the build, unit tests, linting, and functional tests *before* you push to master. That way you know if there is a conflict before you push. This is the prime directive of CI - check your sh*t before you push (a sketch of such a pre-push check follows below). And if it still breaks on the build servers, stop working, drop everything and fix it! Did it break because you made a change, or did someone else make a change that conflicted with yours? Doesn't matter - say "huh, that's odd?", then talk to your team members and fix it.
2. Someone deleted a database record/table/whatever in a test environment and broke something? Well, then everyone stops, figures out what happened and why, and then *talks* and solves the real problem (people, process and culture), not just the symptoms, instead of complaining "Aaah, it's broken again!!!".
3. "Algorithm works here but not there"? Not sure how that could happen. But here is a tip: build it **once**, deploy the same code/binary/package in the same way, by the same mechanism, to every environment, every time, and 95% of all the "works here, but not there problems" goes away.
4. Acceptance tests break but you haven't bothered to run them? How do I even respond to that!?
"If it hurts, do it more, and bring the pain forward." - Jez and Dave
-
@buddysnackit1758 Oh, I read it all right. Trying to disprove CI by arguing that, taken to the extreme, it isn't a millisecond-by-millisecond synced workflow is just a strawman argument. It's like saying that science is false because we can never prove anything with absolute certainty. So no, I didn't address it - it's just nonsense.
Second point. With CI we collaborate with other team members, pair program, even pair with testers and SMEs, instead of sitting alone in a corner with headphones for days on end only to share the result at the end (which is all too common). So the amount of "adding wrong code" isn't 50% to begin with. And even if it were, that is not the big deal you make it out to be, since the code still has to work even if it's not quite right. XP/CI embraces that we are not perfect, but instead of isolating work and people with gate-keeping we optimize for learning, feedback, outcomes and results.
-
Yes. Three things.
Assumption: we are talking about software that is run as a service, provided by your company (user-installed software is a different beast).
1. There is no such thing as a major refactor that's broken for days. No wonder managers get heartburn when developers mention the word "refactor", and then the XP people come along and say "refactor mercilessly" and the managers die of heart attacks.
2. When doing a big rewrite in a messy, badly tested codebase, a common solution is to implement the new version of the module in parallel with the old module, directly on the master branch. This is called "branch by abstraction" and has nothing to do with version control branches (see the sketch after this list). Perhaps we put the code under a new namespace/package and add a configuration flag so that the new code only runs locally while developing. Once the new module is "complete" we can expose the configuration flag so that the build and test environments can verify both implementations. Further, when we later deploy to production this flag can be used to turn the new implementation on or off at will. If things go badly - just turn it off and the old code runs instead. When the new implementation is deemed "good" it becomes the default; we remove the configuration flags, the old implementation and any other "complications" left in the code. This is properly managing risk by providing an escape hatch without needing to revert, rebuild, retest or redeploy anything.
3. Another way to do it is of course to make the changes in small increments and use automated tests and verification every step of the way. Jez stated it wonderfully: "...we have to work in very small batches. This is antithetical to the way lots of developers like to work: sitting off on their own going down a coding rabbit hole for days before re-emerging. The elevation of "flow" (by which I mean individual flow, not lean/team flow, which is actually inhibited by this behavior.) [is much at fault here]. Trunk-based development is about putting the needs of the team above the needs of individual. The premise of CI and trunk-based development is that coding is fundamentally a social, team activity. This presents a challenge to the mythos of the developer-as-hero which is still pervasive in our industry."
https://twitter.com/jezhumble/status/982988370942025728
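A minimal sketch of the branch-by-abstraction idea in point 2, assuming a hypothetical pricing module being rewritten; the class names, the discount rule and the environment-variable flag are all made up for illustration. The point is only that both implementations live side by side on master behind one switch.

# pricing/calculator.py - hypothetical branch-by-abstraction sketch.
# Old and new implementations coexist on master; a configuration flag decides
# which one runs, so switching back needs no revert, rebuild or redeploy.
import os

class LegacyPriceCalculator:
    def price(self, items):
        return sum(item["unit_price"] * item["quantity"] for item in items)

class NewPriceCalculator:
    """The rewrite, grown in small steps behind the flag."""
    def price(self, items):
        total = sum(item["unit_price"] * item["quantity"] for item in items)
        return round(total * 0.9, 2) if len(items) >= 10 else total  # assumed new bulk-discount rule

def make_price_calculator():
    # Off by default: production keeps running the old code until the flag is flipped.
    if os.environ.get("USE_NEW_PRICING", "false").lower() == "true":
        return NewPriceCalculator()
    return LegacyPriceCalculator()

The build and test environments can run the same test suite against both classes; once the new one has proven itself and becomes the default, the flag and the legacy class are deleted.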