Comments by "Mikko Rantalainen" (@MikkoRantalainen) on the "Git Flow Is A Bad Idea" video from the Continuous Delivery channel.
I don't think the full complexity of GitFlow is needed, but I definitely think that feature branches are the way to go for any complex change. Sure, committing straight to master may work if you're sure that the change you're working on is a good idea and the only question is how to implement it correctly. In my experience, however, many changes need prototyping, and the UI or UX design may be tweaked before the idea is even complete. Until then, you want to keep the feature away from the master branch.

The team I'm working in uses feature branches plus two long-lived branches called master and production. We try to keep production a delayed version of master, but if a simple hotfix is needed, it is implemented directly on the production branch and merged into master soon after. Basically, the production branch (usually just a label on an existing commit) is pushed towards master once testing and QA are completed.

The code is kept in feature branches and continuously run on test servers (one per active feature branch) so that the results can be continuously tested, even though the integration with master is not tested all the time. This allows a very clear version history in the end, because each feature branch is owned by one developer and can be rebased as needed. If everybody continuously commits to master, the full history of master will be a total mess of continuous fixes to incomplete patches already included in master.
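A minimal sketch of that workflow in a throwaway repo; the branch names master and production come from the comment, while "feature-search-ui" is a hypothetical example feature:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b master                  # -b needs git >= 2.28
git config user.email demo@example.com
git config user.name demo
echo base > base.txt && git add base.txt && git commit -qm "initial"
git branch production                  # production starts as a label on master

# One developer owns the feature branch and may rebase it freely until merge.
git switch -qc feature-search-ui
echo ui > ui.txt && git add ui.txt && git commit -qm "feature: search UI"

# Merge into master once the code is good enough to take responsibility for.
git switch -q master
git merge -q --no-ff -m "merge feature-search-ui" feature-search-ui

# After testing and QA, push production forward to master (fast-forward only,
# so production stays a delayed label on master's history).
git switch -q production
git merge -q --ff-only master
```

The `--ff-only` merge enforces the "delayed version of master" property: production never diverges, it only catches up.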
2
@ContinuousDelivery Sounds interesting, and I'm always willing to investigate the issue more. Googling for "State of DevOps Report" finds reports from multiple sources (and, I would assume, multiple separate documents). Could you give me more information so I can find the correct report?
1
I did read the Haystack article called "The Accelerate Book, The Four Key DevOps Metrics & Why They Matter" and I didn't find any evidence that the branching style you use makes any difference. The Change Failure Rate (CFR) was interesting in that less than 4% was already considered an elite team. Our team is well below 1%, so does that mean that our software quality is too high? Should we try to reduce the quality of our releases to improve cycle time?

Another interesting thing in that article was Mean Time to Recovery (MTTR), which makes me think that maybe the most important part of getting new features rapidly into production is fully automated recovery. If you can automatically detect failures seconds after a push to production, your failures cause a couple of seconds of downtime in the worst case. In that case you don't need to mind CFR that much, as long as your failures do not damage non-volatile data.

How do you guarantee that a failure cannot damage non-volatile data? That I don't know. For example, if you have an OLTP-like system and can detect a data corruption failure in 3 seconds, you could still end up with multiple thousand corrupted transactions stored in permanent data. And you cannot silently revert all those transactions as a recovery action either, because you have already confirmed the transactions to a third party.
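Back-of-envelope arithmetic for the OLTP point; the throughput figure is an assumed example, not a number from the discussion:

```shell
# At an assumed 1000 transactions per second, a 3-second detection window
# still leaves thousands of confirmed transactions touched by the failure.
TPS=1000            # assumed transactions per second
DETECT_SECS=3       # time to detect the corruption failure
echo "transactions committed before detection: $((TPS * DETECT_SECS))"
# prints: transactions committed before detection: 3000
```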
1
@1oglop1 Are you using feature branches that depend on other feature branches? The way I understand feature branches, a new feature is based on master, and the feature branch is merged into master as soon as it's considered final code (note that the feature may not be complete, but the code so far is considered good enough to take responsibility for).
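If branches have been stacked anyway, one way to untangle them once the lower branch lands is `git rebase --onto`. A sketch in a throwaway repo, with hypothetical branch names feature-a and feature-b:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b master                  # -b needs git >= 2.28
git config user.email demo@example.com
git config user.name demo
echo base > base.txt && git add base.txt && git commit -qm "initial"

# Stacked branches: feature-b was started on top of feature-a.
git switch -qc feature-a
echo a > a.txt && git add a.txt && git commit -qm "a: groundwork"
git switch -qc feature-b
echo b > b.txt && git add b.txt && git commit -qm "b: builds on a"

# feature-a reaches "good enough" and is merged into master.
git switch -q master
git merge -q --no-ff -m "merge feature-a" feature-a

# Re-parent feature-b onto master so it no longer depends on feature-a:
# replay only the commits in feature-a..feature-b on top of master.
git rebase -q --onto master feature-a feature-b
```

After the rebase, feature-b is an ordinary master-based feature branch again.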
1