Comments by "Mikko Rantalainen" (@MikkoRantalainen) on "Continuous Delivery" channel.
I think any design that is immutable and cannot adapt to customer demands should be a definite no-go for any serious software project. Having stable APIs instead means that once you create an API, you have to keep it running. To add new features, you introduce new APIs without breaking the old ones. Some of the old APIs may be implemented as wrappers around the new API, but the consumers of the microservice API don't need to care about that.
3
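The wrapper approach described above can be sketched as follows. All names and fields here are hypothetical, just to illustrate keeping an old API alive on top of a newer one:

```python
# Hypothetical sketch: the old API survives as a thin wrapper
# around the new one, so existing consumers keep working.

def get_user_v2(user_id: int) -> dict:
    # New API: returns a richer record (fields invented for illustration).
    return {"id": user_id, "name": "example", "emails": ["a@example.com"]}

def get_user_v1(user_id: int) -> dict:
    # Old API, now implemented as a wrapper over v2.
    # Consumers of v1 never need to know the implementation changed.
    full = get_user_v2(user_id)
    return {"id": full["id"],
            "name": full["name"],
            "email": full["emails"][0]}  # v1 exposed only a single email

print(get_user_v1(42))
```

The key property is that only the wrapper knows about the mapping; callers of `get_user_v1` are untouched when v2 evolves.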
I don't think the full complexity of GitFlow is needed, but I definitely think feature branches are the way to go for any complex change. Sure, committing straight to master can work if you're certain the change you're working on is a good idea and the only question is how to implement it correctly. In my experience, however, many changes need prototyping, and the UI or UX design may be tweaked before the idea is even complete. Until then, you want to keep the feature away from the master branch. The team I'm working in uses feature branches plus two branches called master and production. We try to keep production a delayed version of master, but if a simple hotfix is needed, it is implemented directly on the production branch and merged into master soon after. Basically, the production branch (usually just a label for an existing commit) is moved towards master when testing and QA are completed. The code is kept in feature branches and continuously run on test servers (one per active feature branch) so that the results can be tested continuously, even though the integration with master is not tested all the time. This allows a very clear version history in the end, because each feature branch is owned by one developer and can be rebased as needed. If everyone continuously commits to master, the full history of master becomes a total mess of continuous fixes to incomplete patches already included in master.
2
@НиколайТарбаев-к1к I'd argue that if you continuously modify the API in a way that requires consumers to be modified too, you don't actually have an API at all. You just have a fully custom protocol instead.
2
I mostly agree with this video. However, the biggest problem I see with microservices is that they very often increase the latency of the system: the end-user API typically sees bigger requests that some sort of gateway service splits across multiple backend microservices, and each hop in the sequence adds extra latency. In addition, some kind of access checking is needed in nearly every microservice, so many microservices end up querying one central access-checking microservice. And if you do that with a zero-trust design, many of those microservices run on different physical computers, so you also pay for an encrypted network connection on every hop in the whole system. If there's a generic overall microservice design that can avoid this latency (and the encryption overhead of a zero-trust design), I've yet to see one. Sure, if access is checked only once in the gateway and none of the actual microservices contain any security or access limits, implementation gets much easier. However, I think the security of the whole system would then be worse than with a monolithic design.
2
@ContinuousDelivery Sounds interesting and I'm always willing to investigate the issue further. Googling for "State of DevOps Report" finds reports from multiple sources (and, I would assume, multiple separate documents). Could you give more information so I can find the correct report?
1
I did read the Haystack article called "The Accelerate Book, The Four Key DevOps Metrics & Why They Matter" and I didn't find any evidence that the branching style you use makes any difference. The Change Failure Rate (CFR) was interesting in that less than 4% was already considered an elite team. Our team is well below 1%, so does that mean our software has too high quality? Should we try to reduce the quality of our releases to improve cycle time? Another interesting metric in that article was Mean Time to Recovery (MTTR), which makes me think that maybe the most important part of getting new features rapidly into production is fully automated recovery. If you can automatically detect failures within seconds of a push to production, your failures cause at worst a couple of seconds of downtime. In that case you don't need to mind CFR that much, as long as your failures do not damage non-volatile data. How to guarantee that a failure cannot damage non-volatile data? That I don't know. For example, if you have an OLTP-like system and can detect a data-corruption failure in 3 seconds, you could still end up with thousands of corrupted transactions in permanent storage. And you cannot silently revert all those transactions as a recovery action either, because you have already confirmed them to third parties.
1
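The OLTP example in that last paragraph is simple arithmetic. Assuming a hypothetical throughput figure (the 1000 tx/s below is invented for illustration; the 3-second detection time is from the comment):

```python
TPS = 1000       # assumed committed transactions per second (illustrative)
DETECT_S = 3     # time to detect the data-corruption failure

# Transactions already confirmed to third parties before detection:
corrupted = TPS * DETECT_S
print(corrupted)  # 3000
```

So even "elite" recovery speed does not shrink the blast radius below whatever the system commits during the detection window.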
@1oglop1 Are you using feature branches that depend on other feature branches? The way I understand feature branches, a new feature is based on master, and the feature branch is merged into master as soon as the code is considered final (note that the feature may not be complete, but the code so far is considered good enough to take responsibility for).
1
@ContinuousDelivery I think microservices make this problem bigger because, by definition, each service in the whole system should be smaller – hence there will be more interconnects between the services. How do you think access checks should usually be implemented in a system based on microservices? Should each microservice implement its own access checks, or use one "access validation" microservice? As an architecture, having a single microservice for access checking seems like the obvious way to proceed, but in my experience safe code requires a lot of access checking in the actual logic, and that would cause lots of traffic to the access-checking microservice, adding extra latency. Of course, one solution is to decide that you don't have any fine-grained access settings anywhere. That doesn't seem very agile to me, though.
1
@ContinuousDelivery If you trust firewall-style access checks, is the remaining system really microservice-based anymore? With a design like that, the microservices definitely cannot work safely independently any longer.
1
Many claims made for automatic parallelization totally ignore the caches in modern CPUs. All the cache-coherent synchronization primitives are simply too slow to parallelize fragments where the synchronization takes more time than the actual execution of the fragment. And the faster CPUs get, the more expensive even minimal synchronization becomes when the cost is expressed in potential instructions the current thread could have executed instead. The higher the IPC, the more expensive synchronization tied to the base clock of the CPU gets – and the synchronization must be tied to the base clock to be synchronous across all cores. As a result, automatic parallelization nowadays focuses more on SIMD than on whole threads, because SIMD doesn't require a similar level of synchronization: it all happens within one L1 cache on the local core.
1
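The "synchronization costs more than the fragment" point can be illustrated even at a very high level. This sketch uses a Python `threading.Lock` as a stand-in for a hardware synchronization primitive; the absolute numbers are machine-dependent and Python's overheads dwarf hardware ones, but the shape of the result (per-fragment locking costs a multiple of the trivial work it protects) is the argument above:

```python
import threading
import time

N = 200_000
lock = threading.Lock()

# The "fragment" is a trivial increment: almost no work.
counter = 0
start = time.perf_counter()
for _ in range(N):
    counter += 1
plain = time.perf_counter() - start

# Same fragment, but paying a synchronization cost every iteration.
counter = 0
start = time.perf_counter()
for _ in range(N):
    with lock:
        counter += 1
synced = time.perf_counter() - start

print(f"plain: {plain:.4f}s, with lock: {synced:.4f}s, "
      f"overhead x{synced / plain:.1f}")
```

When the overhead multiple stays above 1, splitting such fragments across threads loses time overall, which is why compilers prefer SIMD lanes (no cross-core synchronization at all) for fine-grained parallelism.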