Comments by "Michael Stover" (@michaelrstover) on the "Continuous Delivery" channel.
-
@ContinuousDelivery I guess I always thought of services as being this sort of independently deployable thing. The main issue I have with the current trendiness of "micro" services is the "micro" part. As far as I can tell, people's impression of the "right" size of these things is way, way too small. And, since microservices are inherently more difficult to work with, people are splitting things up into far too many, far too small chunks, which then form an extremely complicated ecosystem to work with and reason about. When I read or hear about the "modular monolith" idea, it makes far more sense to me, as it tries to keep the good parts of modularity and independent deployability while keeping some sanity about the number of moving pieces.
I don't know if you have thoughts about the "modular monolith" idea, but if you do, I would be extremely interested to hear them. Maybe you could make a video about this issue of granularity in these services. You've said in the past that the right size is what you can rewrite in about a week, and that seems much too small to me. My own thought was that the right size is what a normal-sized development team could be expected to handle and maintain, which would be a lot larger than what they could completely rewrite in one week.
-
I think what a lot of people get stuck on with this concept of "going faster results in higher quality" is that there are multiple different ways to "go faster". When most people think about going faster, they are thinking of ways of going faster that do indeed result in poor-quality software. They think: skip the tests! Don't waste time refactoring! Ship the prototype! Those are ways to go faster. But that's not what Dave means.
What we mean by "go faster" is picking the smallest bit of value we can safely deliver, building it, getting it in front of customers and end users right away, and then going on from there. We don't skip tests. We don't skip refactoring. We don't skip any principles of good code design. But we do skip anything that isn't needed for that bit of value, and we don't skip anything that is important for it. If the bit of value degrades performance, then performance design was in fact needed, for example.
But there's a big ol' BUT: picking the smallest bit of value, delivering it right away, and recognizing what can be skipped and what cannot is HARD. In communicating about this, it doesn't help to gloss over that fact.
-
@Resurr3ction If there are 8 PRs, each one runs a test of itself + main. They all pass. Then they all merge. Then main is broken! You didn't test main with all the PRs merged together.
Of course, you could test that. You could run all your PRs serially: test one, merge it, then test the next. That might work, or it might become a bottleneck. When I push my PR up, I might not find out that it doesn't work with someone else's changes until many hours, or even days, go by. What didn't work might have been something simple and stupid that the unit tests covered, but I couldn't discover that except by waiting hours and hours. Meanwhile, I had to move on to other things.
I can't really speak for Dave, but if I try, I'm guessing the attitude is: the tests we run locally have to be a very good, thorough, and fast suite. We run them before we push to trunk. Then, after every such push to trunk, the more expensive suite is run. If that fails, we stop and go fix it, and in reality this is not such a problem, because we find out about it very quickly, and since we only ever make small incremental changes, fixing it is not hard. The fast feedback is worth more than the occasional small breakage, which 99% of the time is easy to fix.
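For what it's worth, here is a toy sketch of the "each PR is green against main, but the merged result is red" situation. It is plain Python with invented function and test names, not tied to any real CI tool: PR A changes behaviour and updates its own test, PR B adds a test that still assumes the old behaviour, and only the combination fails.
```python
# Toy illustration: two changes that each pass their tests against main
# in isolation, but break once both are merged.

# --- state of main ---
def greet_main(name):
    return "Hello, " + name

# --- PR A: changes the greeting and updates the existing test ---
def greet_pr_a(name):
    return "Hi, " + name

def test_pr_a(greet):
    assert greet("Ada") == "Hi, Ada"      # A's updated expectation

# --- PR B: adds a new test that still assumes main's old greeting ---
def test_pr_b(greet):
    assert greet("Bob") == "Hello, Bob"   # valid against main as-is

def run(label, checks, greet):
    try:
        for check in checks:
            check(greet)
        print(label, "PASS")
    except AssertionError:
        print(label, "FAIL")

# Each PR tested against main in isolation: both pass.
run("PR A + main", [test_pr_a], greet_pr_a)
run("PR B + main", [test_pr_b], greet_main)   # B doesn't touch greet

# Both merged: greet is now A's version, and B's new test breaks.
run("main after merging A and B", [test_pr_a, test_pr_b], greet_pr_a)
```
Each "PR + main" run prints PASS, while the merged run prints FAIL, which is exactly why testing PRs in isolation can't guarantee a green trunk.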