Comments by "Daniel Sandberg" (@ddanielsandberg) on "Continuous Delivery"
channel.
-
What you just described is an argument *for* CI. CI's main purpose is to *expose* all these faults (code, technical, culture) and by extension: XP's and agile's purpose is to expose all the *systemic* issues in the organization and give everyone a healthy dose of reality. If it's broken all the time, and some project manager still thinks "it will be delivered on time, work properly and be of good quality" while developers just keep adding more code to a broken system, someone is lying to themselves.
Version control is not your personal backup-system, it's an auditable, versioned, *publication* mechanism. As a developer your job is not to just implement some tasks, push to git when you go home for the day and call the rest "someone else's problem" or "other developers' fault". Either the entire team wins, or the entire team fails.
1. You make sure to merge other people's changes from master to your local clone many times per day, and run the build, unit-tests, linting, and functional tests *before* you push to master. That way you know if there is a conflict before you push. This is the prime directive of CI - check your sh*t before you push. And if it still breaks on the build servers, stop working, drop everything and fix it! Did it break because you made a change or did someone else make a change that conflicted with yours? Doesn't matter; say "huh, that's odd?" and then talk to your team members and fix it.
2. Someone deleted a database record/table/whatever in a test-environment and broke something? Well, then everyone stops, figures out what happened and why, and then *talks* and solves the real problem (people, process and culture), instead of just patching the symptoms and complaining "Aaah, it's broken again!!!".
3. "Algorithm works here but not there"? Not sure how that could happen. But here is a tip: build it **once**, deploy the same code/binary/package in the same way, by the same mechanism, to every environment, every time, and 95% of all the "works here, but not there problems" goes away.
4. Acceptance tests break but you haven't bothered to run them? How do I even respond to that!?
"If it hurts, do it more, and bring the pain forward." - Jez and Dave
-
@OzoneGrif Why is it unacceptable? I don't mean to sound antagonistic, but you need to think outside the box of release-trains, Gantt charts, milestone-driven release schedules, and manual test-and-stabilization phases.
First of all, all code shall be tested whether it is "feature complete" or not, and then you have an evolution:
1. Early in development the feature may be included in or excluded from the build by using build profiles. This allows local development and testing as the team fleshes out the basics of the feature.
2. Later on, features may be included but disabled by default, and enabled by a deployment flag in different environments. Great for feedback and early testing. See the "branch-by-abstraction" pattern for one way to implement this (rough sketch below).
3. Once we reach the "this is a nice feature, polish it" stage we may choose to convert the flag to a runtime property or keep it as a deployment flag. Context and technology matter.
4. When we are finally ready to release the feature to the world we can turn it into a user preference (opt-in/beta users for the new feature), or even use geo-location or target groups to decide who will see it.
5. This allows the business to then decide if the feature shall be available for all users and we can remove all the flags and gunk around it. Or we may decide that it's an "enterprise feature" and customers have to pay for the feature to be enabled.
The point is - you can have a lot of unfinished changes in-flight, in a single repo/branch and still always be able to deploy a hotfix.
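A minimal sketch of what steps 2-3 can look like, assuming a Java codebase and branch-by-abstraction: both the old and the new code live on trunk, an abstraction sits in front of them, and a deployment flag decides which one actually runs. The class names, the checkout example and the "features.new-checkout" property are all made up for illustration, not taken from any particular project.

```java
// Hypothetical sketch: an unfinished feature lives on trunk but is hidden
// behind a deployment flag, using branch-by-abstraction.

interface CheckoutFlow {
    String checkout(String cartId);
}

class LegacyCheckout implements CheckoutFlow {
    public String checkout(String cartId) {
        return "legacy checkout for " + cartId;   // current, proven behaviour
    }
}

class NewCheckout implements CheckoutFlow {
    public String checkout(String cartId) {
        return "new checkout for " + cartId;      // work in progress, shipped disabled
    }
}

public class CheckoutFactory {
    // Deployment flag, read per environment (a system property here for simplicity).
    // Later it can be promoted to a runtime property, user preference or target group.
    static CheckoutFlow create() {
        boolean enabled = Boolean.parseBoolean(
                System.getProperty("features.new-checkout", "false"));
        return enabled ? new NewCheckout() : new LegacyCheckout();
    }

    public static void main(String[] args) {
        System.out.println(create().checkout("cart-42"));
    }
}
```

Run it with -Dfeatures.new-checkout=true in the environments where you want the new code exercised; everywhere else the legacy path keeps running, the new code still compiles and gets tested, and a hotfix can ship at any time.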
This is kind of how Facebook implemented their chat-system. It was running (driven by bots in your browser) in production, in secret, behind users' timelines, and only Facebook employees could actually use it. It had been running in production for over six months before normal users could access it, and then they incrementally rolled it out to users - without needing to build/deploy/release a special version.
-
@buddysnackit1758 Oh, I read it all right. Trying to disprove CI by arguing that at the extreme it isn't a millisecond-by-millisecond synced workflow is just a strawman argument. It's like saying that science is false because we can never prove anything with absolute certainty. So, no, I didn't address it - it's just nonsense.
Second point: with CI we collaborate with other team members, pair program, even pair with testers and SMEs, instead of sitting alone in a corner with headphones for days on end only to share the result at the end (which is all too common). So the amount of "adding wrong code" isn't 50% to begin with. And even if it were, that is not the big deal you make it out to be, since the code still has to work, even if it's not quite right. XP/CI embraces that we are not perfect, but instead of isolating work and people with gate-keeping we optimize for learning, feedback, outcomes and results.
-
Short answer:
1. You can commit incomplete code as long as it works and doesn't break anything else. If it's complex, very WIP and "not yet usable", then disable/hide it somehow (usually a build/compile flag). Improve and refactor as you go.
2. You don't "check if it works at the end of each day", you check it ALL the time, you make a full local build every 5-15 minutes and then check-in the code and let the Build Management System validate you didn't screw up. That's CI!
3. It works just as well for changing existing code as for adding new code (which will need to be integrated with the old code anyway). It will often be faster to deliver *because* you know it always works (whatever definition of working you're using). No "test phase", no "integration phase", no "long arguments about whether we can merge it yet", because it's already integrated, finished or not.
Nothing is free and everything we know how to do takes time to learn. Meaning, going from "sit alone in a corner, coding for days on a personal branch" to "everything always works and the code is checked in many, many times per day" is hard. Working with TBD and CI practices takes practice, discipline, and code hygiene.
-
Wall of text, partially off-topic warning!
Let's take this from the start - releases
In the olden days we used to have something called RTM (release to manufacturing) - basically meaning (after months of testing and polishing) sending the software to a factory that put it on floppies, CD-ROMs, etc. and shipped it out to stores in pretty boxes. There was a really high cost of delivering and patching software back then. Then came tools like Maven that took the old concept of "cutting a release" and made it mean "make a final (re)build, stamp a version number and publish it for deployment". Later CI/CD/DevOps came into the picture and things started to change; we came up with terms like "separate deployment from release". What is meant by that is that deployment comes before release. Enabling new functionality or changing old functionality becomes some kind of configuration change instead of an "all hands on deck, 3 AM deployment" thing. This also enables A/B-testing, user opt-in and deploying many, many small changes all the time - thus reducing the blast-radius if something goes wrong, as well as knowing that every change actually works in real life. This doesn't mean that every change gets a deployment; it's just that deployments become a choice driven by need instead of a planned date/deadline.
How does it (ideally) work with CI/CD?
With CI/CD we instead try to subscribe to the (impossible) ideal of Toyota's "one-piece flow". Instead of keeping track of the state of the software by using branches/merges (with all the management overhead) and thinking in terms of "master represents production" or "let's build a release and a release branch" (a bit like the previously mentioned "cutting a release") - we commit to main, we build it once, we record the version/tag/commit/build number/digest of that build; we test and validate it, and then pass that immutable piece from environment to environment, from validation to deployment and finally release. It's a survival-of-the-fittest scenario where we try to prove that a build does not meet the criteria for deployment.
Extrapolating and deeper context
As with everything there are contexts and compromises. But if we default to using branches for change control, separate departments for frontend, backend and testing, and huge ops-teams getting software thrown over the wall from developers ("we made a release, your turn") because it feels easier and it worked 20 years ago, we are not making any progress or improvements. According to some of the "old IT/tech people" the number of programmers doubles every 5 years. About 10 years ago everyone went nuts with FB/PR/GitFlow. So we can extrapolate that 75% of all programmers have never done anything but that, and so have no idea about Continuous Integration.
I'm really passionate about this because I see the same thing repeating in every org I've been in. As long as our thinking doesn't change it doesn't matter how much technology, tools, languages, frameworks and processes we throw at software development - nothing changes.
-
99% of all IT companies wouldn't know agile if it hit them in the face. Most companies think it's just a process, take all the "easy parts" from Scrum, skip the good parts of a structured SDLC and end up with the worst of both worlds. They think that burnups, burndowns, boards, Jira, standups, story points, user stories, sprints, etc. make them agile. They ignore all the technical practices from XP, CD, et al. (which are required to make it work) and then wonder why it didn't.
Then people come out on the other side saying "I hate agile". Like my grandpa - he refused to try pizza his whole life. A few weeks before he died we asked him why he never tried it - his answer? "It's just a pancake with ketchup and cheese!".
-
This assumes that the only work happening is the feature, and that we would know where conflicts are likely to happen. The thing about TBD and CI is that they really have nothing to do with "optimal isolation of features and workflow for individual developers to avoid conflicts". It is the opposite.
There is this thing/strategy called "artificial constraints" which basically sets out guardrails and signs to push people's behavior in a certain direction. This can be used in a limiting sense (like feature branching, which generally arises from some kind of culture/trust issue and a need for the illusion of control), but it can also be used to change behavior and level up the entire organisation (like when your personal trainer pushes you and says "only 3 more minutes to go" even though it was "only 3 more minutes" - 15 minutes ago!!!).
Imagine the following scenario: you are working on some feature in your branch, you get a bit stuck, it won't work and you can't figure out why. You start to follow the code-path, you tear your hair out, and after two hours you figure out that there is a function elsewhere that your code is calling, and your change causes this function to be called twice. It is named getTicket(username) and it obviously gives you a ticket, right, RIGHT? What you didn't know is that this function actually creates a new ticket every time, and that made you waste 2 hours.
Now the question becomes - what do you do? Do you just fix your feature code based on this new understanding, commit, create a PR, and move on to the next feature? What if you instead fixed the name of the function to its proper name, createNewTicket(), and also moved it to a different file/class/module because you realized it was in the wrong place? Or do you think "not in scope, feature first, don't touch it, too scary"?
Think about it, you just spent 2 hours staring at this thing before you figured out that you were misled by the function name. You could fix it now while the understanding is fresh in your head and save everyone 2 hours in the future, every time they use the function. What if everyone did that all the time; made some little change that improved the life of everyone, forever? How much better would everyone and everything have become after a year, after three years?
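A rough before/after sketch of that opportunistic refactoring, in Java; only getTicket/createNewTicket come from the example above, the Ticket and TicketStore classes are made up here for illustration:

```java
// Hypothetical sketch of the rename described above.

class Ticket {
    final String owner;
    Ticket(String owner) { this.owner = owner; }
}

class TicketStore {
    // Before: the name promises a lookup but hides a side effect.
    // Ticket getTicket(String username) { ...creates and saves a new ticket... }

    // After: the name says what actually happens, so the next reader
    // doesn't lose two hours rediscovering the hidden side effect.
    Ticket createNewTicket(String username) {
        Ticket ticket = new Ticket(username);
        save(ticket);   // the side effect is now obvious at the call site
        return ticket;
    }

    private void save(Ticket ticket) {
        // persist the ticket somewhere (omitted)
    }
}
```

On trunk, with CI, that rename lands on main the same day, so nobody else keeps building new code against the misleading name.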
Now for the kicker: if there is a whole bunch of feature branches and pull requests waiting to get merged, and people are doing this opportunistic refactoring that continuously improves the code base for everyone, how long does it take before people start getting angry due to merge conflicts and the culture becomes "Don't refactor! Don't improve!"?
Feature branching, when used dogmatically (often due to culture and trust issues), without understanding the long-term impact and without taking a more holistic, systemic view of quality and culture, will actually inhibit learning. There is a difference between learning and being taught. And that is the nuance that is missed when all everyone talks about is "optimal isolation of features and workflow for individual developers".
-
A pipeline does not replace the steps and tasks in a build script (npm, gulp, gradle, maven, etc). This is a common mistake I see, where teams replace everything they had in their scripts with a pipeline YAML that can only run on some build management system.
When practicing CI you should be able to run the majority of the build steps, including unit tests and some component and functional tests, locally. Whole-system acceptance tests are difficult and often too slow to run locally. It also serves as feedback; if it is difficult to run, build, test or lint, you probably need to look into the structure and design of both tests and code. If it is very hard or slow to build/test locally you may need to look into the technology choices made (like frameworks, etc).
It is an optimization technique: you should be able to compile, package and run 10000s of tests locally in minutes. Enough that it gives the developers a high degree of confidence that it will work before pushing. The build management system will of course run the commit build as well, plus a bunch of slower, wider system tests, performance tests, etc. This also has the added benefit that if GitHub is down (like it is now as I write this) you can have a "break glass" process and build an emergency patch on a (monitored) local computer and push that to production.
While the build system runs the commit build you are not allowed to go away, to lunch, or home. If it breaks you have 10 minutes to fix it or revert the change. If it's still broken the team swarms around the problem and gets it back to green.
-
While I will try to answer this without purposely being offensive, you may feel that way anyway.
Your job as a senior/lead is not to gatekeep, that is, to check everyone's work and approve of it.
You can't behave like a curling/helicopter parent and protect juniors from themselves all the time. The only thing you'll end up doing is holding back their learning and growth, and also teaching them to not rely on themselves.
You need to find a way of working where they can contribute, learn and make mistakes, without you handholding them, nor taking the company down in the process. I don't care what that is, as long as it's a temporary measure and not you gatekeeping everything (you might as well do all the work yourself in that case).
You should be able to find a way of working where, in the end, "the others" would make similar (or better) decisions as you would, even when you aren't in the room. After all, the job of a senior engineer is to create more senior engineers, not to dissuade people and have them leave within a year.
Dave has another video here "A Guide To Managing Technical Teams" in which he calls your style "programming by remote control".