YouTube comments of @ContinuousDelivery.
-
571
-
525
-
423
-
263
-
223
-
117
-
102
-
101
-
76
-
75
-
67
-
I would slightly change that approach. When I am presented with a problem, my first thought is "is a solution possible?". At that point, my approach is to try and think of any way in which you could solve this with software. If I can think of one, no matter how horrible or complicated my initial idea is, then I am comfortable that it is possible - and all I have to do is find a nice, or better, solution.
I don't usually want to describe this first version to anyone, because the solution is usually nasty.
I am pretty experienced, and so I am pretty confident that if I can imagine a solution, I can build one too. I am also VERY confident that I can start without knowing all of the answers. So for regular things, I am happy to start very quickly.
At that point, I want to start, not with the most complex stuff, but with the most valuable stuff. Ideally, I'd like to create something useful and get it out to people to see how they react.
If the problem has some aspect that I have no idea how to tackle, or imposes (for example) performance constraints that I don't know are achievable, I will do some experiments - these are consciously throw-away, in terms of code. Don't confuse learning whether something is feasible with building something that is production ready. If you try to do both at the same time, you will do a worse job of both. Best to try and think of the simplest experiment that can show you the way forward, and do that.
63
-
49
-
48
-
45
-
44
-
39
-
@benbaert2166 Sociology is difficult, and this kind of research is sociology. One of the problems is getting enough data for statistics to start working. I agree with your criticisms of the research, but who is going to pay for research on pair programming? No one has a real commercial interest; there was a flurry of research in academic circles (hence the use of students) at the start of the century when pair programming was first introduced.
I report what data I can find, but I also have my personal experience which, being interested in science, I know to be a poor indicator. Nevertheless, in my small sample - probably bigger than yours, because I am older and so have had time to work in more different places, but still not statistically significant across an industry - some form of frequent, regular, intensive collaboration was a factor in all the best places I have seen. In nearly all places where I saw often good, experienced developers working alone on things, their output was worse. This is not always true; I have been lucky and worked with some great programmers who always did good work, but even the best did better work when working closely with other people. The idea of the programmer as a lone, socially-isolated genius is a myth, and a damaging one in my opinion.
These views are reinforced by what small, inadequate evidence there is, but SW dev is simply not studied in enough detail to rely only on data; we do have to guess and make a choice. I know for a fact that you don't have data that disproves pair programming, for example, because there is none. I bet you don't have data for why you use the language or tools that you use, because there isn't any that says one is better than another. For me that means we must each experiment with ideas like these. We can't trust our own "likes" and "intuition"; we need to figure out what works in our own specific, small context. People like me can give you my advice, and like all advice, it is for you to decide how to use it. I try to only offer advice when I have tried both ways. I have done a lot of programming with pairing and a lot more without. For me and the teams I worked on, pairing always worked better, significantly better, in the teams that tried it.
38
-
@visiongt3944 My main advice is to write as much code as you can. Find projects that interest or excite you, play with different kinds of things, maths problems, graphics, little tools to help you do something, almost anything.
Next I would advise that you learn TDD, watch a few of my videos on that topic, I hope that they may help, but also pick a 'coding kata' to practice. Take a look here for some inspiration https://cyber-dojo.org/
On tech-stacks, I don't think that should be very important, however, there are lots of orgs that aren't very good at interviewing and so all they go on is a check-list of tech. That is a terrible way to interview anyone, but particularly bad for people just starting out.
If I were in your place now, knowing what I know, I would treat that as a bad sign for that employer, but, being pragmatic, getting your first job is hard and so you may have to play that game. My advice though: don't treat them like a collection; pick tools and tech that you like working with and get good with them. You can look at what sorts of tech are popular in places where you would like to work, or just pick the most popular things generally. Python, Java and JavaScript are popular, but it does depend on what you want to do.
As for frameworks or platforms, I wouldn't worry too much about that; most orgs will use a few technologies and it will be hit or miss whether you have the ones that they use. Also, they are ephemeral - they will change all the time through your career. The skill that you will/should develop is the skill to learn new ones. You get that from doing real work and thinking about what is happening in terms of design, not just the syntax of framework A versus framework B.
Take a look at this for thinking about languages,
https://dzone.com/articles/top-10-most-promising-programming-languages-for-20
but first get comfortable with one (oh and with TDD 😁 😎).
37
-
37
-
36
-
You are right in that there are lots of ways to fail at software. That is why I think that taking an engineering approach, as in applying the principles of science, to development is so important. In this case though, it was 'waterfall' thinking that caused these problems. Most large organisations operate a non-iterative, planned, informal process that attempts to fix time & scope; whether we call it waterfall or not is probably less interesting.
I agree with you that the problem is fundamentally one of "inspect & adapt" vs "attempt to fix a plan & fail".
It is possible to iterate to success, and to create tests from the beginning. That takes a certain level of experience with automated testing, but I, and many others, have succeeded at that.
34
-
33
-
Thank you! I think that I have a mostly rational, mostly consistent approach to software development. I think that helps with explaining my ideas, because my view, at least inside my head, is consistent. I believe that I have "strong opinions, lightly held". I am easy to convince that I am wrong, if you have evidence or a stronger theory than mine, but I try to not change my mind based on emotion, rhetoric, or sometimes abuse :)
I think you may overstate my conversational skills, I am good on a few nerdy topics and rubbish at chat :)
32
-
31
-
30
-
28
-
26
-
25
-
25
-
25
-
25
-
24
-
24
-
You certainly have to work at it. My preference, as well as pairing, is to rotate pairs often, so that you don't get stuck with working with people who you don't like working with all the time, and you get to learn from everyone on the team. Still, part of pairing is to reinforce the other people, so if people are on the phone, try and find a way to remind them that they are working, or should be. Maybe say "I see you are on the phone, want to take a break for a few minutes, then we can get back together and concentrate".
24
-
23
-
23
-
@defeqel6537 Whatever approach you pick, if the developers either don't understand it, or don't comply with it, they will break it. I think what you are saying is that if you are working with bad developers, you need to make them better. There is no process or technical fix that will correct this; it is a cultural change. You don't get to build good software with bad developers, so make the developers better, whatever that takes. I am trying to do that by explaining the techniques that the best dev teams use.
(P.S. by "bad developer" I mean people who don't do a good job, not "bad people", in my experience it is easy, or at least possible, to help "bad developers" do better).
23
-
22
-
21
-
20
-
20
-
@NukeCloudstalker " "Continuous Integration"? Did people work in the shadows for 3 months on a separate branch, never merging branches and then having a completely broken / incompatible project" - Yes, and some still do!
...but even if you don't, the data from the State of DevOps reports (read about it in the "Accelerate" book) says that teams that don't practice CI produce worse code more slowly!
It is not "Incredibly Stupid" because the point at which you merge your changes, whatever the frequency, is the first time that you know that they work with other people's. You can try talking and planning more, but definitively, until it works, you can't be certain that it will. CI says "OK, if that is the only time we can tell it is correct, then let's do it more often so that it is correct more often". It really is that simple.
19
-
19
-
19
-
18
-
18
-
@CosineKitty I am kind of on the fence when it comes to #NoEstimates. I think that at heart the idea is right, but that pragmatically estimates are probably going to stick around, rather like astrology - no basis in reason, but people like the habit.
The problem with estimates is that they are always wrong, and they are usually treated as though they are firm commitments. This seems to be based on the idea that in the absence of estimates, dev teams would slack-off and not work so hard. I see no evidence of this at all.
Let's just imagine, for a moment, a perfect dev team. They are working at the limit, producing great, high-quality software as fast as anyone could. What would estimates do for this team and this business? It could only slow them down, because now we are asking them to do stuff that doesn't directly contribute to the creation of great software, in addition to what they were doing before.
The problem is that orgs like the illusion of predictability. This is completely unreal; certainly in software, but also in commercial performance, there is no predictability. This is doubly true when attempting something new for the first time. In software we are always producing something new for the first time - otherwise why would we bother, when we could just copy it for free?
So I am philosophically in the #NoEstimates camp, but in reality there are times when you can't avoid them because orgs like estimates in the same way that kids like Xmas and the tooth fairy. Under these circumstances, I won't try and just make stuff up, but I will try to minimise the work invested in the estimation.
I did a video on this topic a while ago: https://youtu.be/v21jg8wb1eU
18
-
18
-
18
-
17
-
Naturally, I don't think that it is a "gross over simplification". It is certainly a simplification, because it is a 15 minute video!
The approach that I have described is being used, and has been used, to build large complex systems all over the world - I have worked on several myself. Occasionally we found the need to branch, and it felt like a failure when we did.
I don't think that there is much about software development that is easy, but I also think that, as an industry, we are exceptionally good at over-complicating things. Branches are one of those things that we are prone to misusing.
Any branch is "running with scissors", it exposes you to risks with BIG consequences if you get it wrong (as you point out). Of course, this doesn't mean that every time you create a branch you will fail. It means that your chances of failure are increased.
CI has a different set of compromises and fails more gently IMO. My intent with this video was to point out the costs, and benefits, of both, but naturally my advice to any team is to "prefer CI to branching" and "always branch reluctantly and with care". (I probably should have said that in the video - sic!).
I agree with a lot of what you say when you describe naive approaches to branching; cloning different versions of products is a horribly common practice - and a terrible solution.
17
-
17
-
17
-
17
-
17
-
Sorry but that is simply factually wrong. It works for every team I have worked on for the past 24 years, it works for SpaceX, Tesla, Microsoft (in parts), Google (in parts), Amazon, Netflix and so on and so on and so on. It is how some people behave, and when they do in my experience they write better software as a result. Human creativity starts with understanding something about the problem, and then making a guess about how you would like the external answer to be communicated to you - that is your test.
We could argue about the word "Test" in TDD, I think it is better thought of as a specification, you don't write specifications after the design & development do you?
16
-
16
-
16
-
16
-
16
-
16
-
16
-
16
-
16
-
15
-
15
-
15
-
15
-
15
-
15
-
15
-
15
-
I do agree that this is a really difficult problem, and I know that I am opinionated on this, as well as other topics. These days I think of myself as a software engineer, and as a communicator on this topic my goal is to help people to build "better software faster". I don't think that you can do that with armies of people who can't build software. One of my ex-colleagues, a very talented developer, once worked in a well-known company and was asked by his manager "how can we help you to be more effective?". He'd had a bad day, so he said "fire all these people who just create crap, because I spend ALL MY TIME fixing the crap they create". There is a BIG problem with throwing people, sometimes unskilled people, at a problem in the hope that it will go faster. It simply doesn't work; it goes slower and produces more crap. I know that is a problem that you and I can't fix, but I don't think that Platform Engineering - when it is focused primarily on tools rather than on SW design - can solve it either. I think that there is a serious danger of repeating an old, old failure, and taking a step backward instead of forward.
Thank you for the interesting, and civil, discussion though.
14
-
@Jeffs-r-us I am not automatically against the idea of certification, but it has become so discredited in our industry that most of the places that I worked, often good places that many people would like to work at, treated certification as a downgrade. I am not saying this is right or wrong; it is just what happened. If someone came in with certification, not just "Scrum Master" but technical certs from the big providers too, that was assumed to mean that they had been, at best, looking in the wrong places. The problem, as I think Allen said, is that certification for our industry is much more a commercial venture than a learning experience. How can anyone claim mastery of anything following a 2-day course that you pass just by attending?
14
-
Well, first, thanks for the thoughtful response.
I don't claim that what you are describing are not useful ideas, they are. Much better than what went before. However, in this one narrow context I can be more definitive, maybe even dogmatic, than I usually am, because I invented the concept that we are talking about. The term "Deployment Pipeline" is one that I created for the Continuous Delivery book. So I can be sure of the definition for this one thing.
What I describe here is exactly what I meant, and have been applying for the last 20 years or so, on real projects.
One area where we may be getting confused on the approach, I am NOT saying that the pipeline has to automatically release into production.
The decision to release can be automated (Continuous Deployment) or manual (push-button). My point is that the "Deployment Pipeline" is definitive, in terms of what constitutes releasability. If the pipeline says everything is good, there is no more work to do apart from "pushing the button to release" and that is a choice.
So, where we differ, perhaps, the "Release Pipeline" and the "Deployment Pipeline" are not different, but "Release" and "Deployment" are.
I can choose to deploy any change into production that passes the evaluation in the "Deployment Pipeline"; that doesn't necessarily mean that I have released a feature, because the change may be part of a feature that is not yet ready for release (I talk about that idea in more detail here: https://youtu.be/v4Ijkq6Myfc).
My point here, is that we can use the idea of a "Deployment Pipeline" as THE organising principle for our development approach. It is definitive in terms of what it takes to get to a point where we can safely, and with confidence, deploy a change (hence the name) and that idea is a lot more valuable than breaking up the process into a series of different pipelines.
I hope that, at least, my thinking is clearer?
14
-
14
-
14
-
14
-
So the implication of what you say is that this proves that women aren't good at logic?
If so, I am sorry, but I think it is rank nonsense. There is more at play than that. The "system" may be skewed in a variety of ways: it may be intentional, it may be accidental, it may be because of self-selection by women, or it could be because they are not smart enough to do this.
There is no science behind the idea that women's brains are structurally different to men's, and there is no evidence that women can't reason well when they have the chance to do it. Marie Curie is the only person to have won Nobel prizes in two different (technical) disciplines, for example.
I don't necessarily ascribe this to malice on the part of men, but that doesn't rule out accidental, or sociological, sexism. Women weren't always under-represented in software; that started in the 1980s, so what happened, and how do we fix it?
14
-
14
-
14
-
14
-
13
-
13
-
13
-
13
-
13
-
13
-
13
-
13
-
13
-
13
-
13
-
Not really, some teams practice agile that way, but the best teams that I have seen don't. Even at the detail level the approach is collaborative and iterative. For example, on the teams that I worked on, POs would sit with the devs and would see the software evolve as it was developed, if at any point we had a question about the requirements, or they didn't like the direction we were taking, we'd talk about it. Testers tested the software while it was being developed, not after development was finished. So really not anything like, even a mini, waterfall at all.
13
-
13
-
12
-
12
-
12
-
12
-
I think it matters how you approach it, but I would certainly agree that it can be more tiring, it requires more focus for longer periods of time, but then working hard does.
I have done pairing for a long time too, and mostly, in the teams that I worked on, it worked well for us for years, but I have seen places where it didn't work. I think that in those cases it was usually down to a cultural problem in the team: a few individuals who really hated pairing and so disrupted it.
I confess, I really hate giving that kind of response - "It was your fault because you did it wrong" isn't good enough!
In the places where it worked, I think that one of the reasons was that we consciously built a strong sense of "team" with other stuff beyond only pairing, and we rotated pairs frequently, so you were never stuck with the same person for long periods of time.
Sorry to hear that it didn't work for your teams.
12
-
Emily thanks for the great feedback.
The point of my video series was to try and demonstrate the uncertainty that is inherent in this kind of exercise. So I didn't rehearse it or plan it. I am not sure that I was clear enough on the trailing comma thing. I should have been more explicit that at that moment I was experimenting to see how the code worked before I decided how to proceed. This would have been clearer, perhaps, if I hadn't taken the short-cut of not really committing, to save time. There is no way that I would have committed the change with the line of code commented out!
Yes, I hummed and hawed about showing the creation of the Approval test - I think that I will do another separate video on that sometime. I basically took the snippet of sample XML that was in the comments in the code, wrote a test based on that as the input, then measured coverage and added more XML that I guessed would increase the coverage until it did.
Whether or not the code counts as "Testable" based only on the Approval tests is debatable, I guess. Strictly, I guess you are correct, but I suppose that I fall into the trap of the overloaded nature of the word "Test" in the context of TDD. What I really mean by "Testability" is "Designable Through Executable Specifications". Approval tests don't do that, which is why I reacted against them a bit when I first heard of them from you. I now see their value, but it is not the same thing that you get from TDD. I suppose that, by a sports analogy, Approval tests are defensive tests and TDD tests are offensive tests?
You are probably right that the video would have been in better context if I had a planned feature to add.
Anyway, thanks again for the feedback.
12
-
12
-
12
-
Hard to answer in general terms. It depends how badly implemented, and how useful the resulting, poor, tests are. I think, in general, it is better to improve going forward than to look back. So I'd begin by improving how you create new tests for new work. This will make it easier to learn how to do it well. It will also help you to establish the basics of your DSL.
A slight modification to that: it may help to identify a simple, but very common, representative use-case for your system and use that to begin your DSL with, even if you do have some tests that cover it already - re-implement tests for this simple scenario in your new DSL. So don't try to back-fill the tests; start with some simple cases to establish the test infrastructure and DSL. Only later decide which of the old, bad tests to replace. Once you have a DSL in place, it may be easier than you think to replace the old tests - as long as you know what it is that they test.
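As a rough illustration of what such a DSL might look like, here is a minimal sketch for an imaginary "place an order" scenario; the class and method names (StoreDsl, givenProductInStock, and so on) are invented for this example, not taken from any real project. The point is that the test reads in problem-domain terms while the DSL hides how the system under test is driven.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Hypothetical DSL for a simple online-store scenario. In a real system the DSL
// methods would drive the system under test (HTTP, messages, in-process calls...).
class StoreDsl {
    private final java.util.Map<String, Integer> stock = new java.util.HashMap<>();
    private final java.util.List<String> orders = new java.util.ArrayList<>();

    StoreDsl givenProductInStock(String product, int quantity) {
        stock.put(product, quantity);
        return this;
    }

    StoreDsl whenCustomerOrders(String product) {
        if (stock.getOrDefault(product, 0) > 0) {
            stock.merge(product, -1, Integer::sum);
            orders.add(product);
        }
        return this;
    }

    int orderCountFor(String product) {
        return (int) orders.stream().filter(product::equals).count();
    }
}

class OrderingTest {
    @Test
    void acceptsAnOrderWhenTheProductIsInStock() {
        StoreDsl store = new StoreDsl();

        // The test is written purely in the language of the problem domain.
        store.givenProductInStock("book", 1)
             .whenCustomerOrders("book");

        assertEquals(1, store.orderCountFor("book"));
    }
}
```

Once a DSL like this exists for one representative scenario, new tests, and replacements for the old ones, tend to be mostly re-use of the same vocabulary.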
12
-
12
-
12
-
12
-
11
-
@britishjames9415 I agree with this, but also think that often we devs wait to be given permission, instead of taking the responsibility for our own work. This is human nature, certainly, to some extent, but we are also not taught how to do this, and that it should be expected of us. This is a problem of education, but also of the culture in our industry.
My friend, Jeff Patton has a great phrase, "devs should be like doctors, not waiters". When you talk to a waiter you say "I'll have the steak, medium, with pepper sauce, and salad, can you hold the dressing on the salad". When you talk to a doctor you say "What's wrong with me doc, what do I need to do to get better?".
We are a profession of problem solvers, and yet we aren't taught that that is what we are, and we often complain if someone else doesn't solve the problem for us "It would have worked if only the requirements were correct".
So yes, we need to be allowed to take responsibility, but also we need to just take it - I have said this before, but it's no one else's job to give us permission to do a good job. We are the experts, we know better than other people what it takes to do good work, we need to exert that expertise, and when we do, this is in everyone's interest.
11
-
11
-
11
-
11
-
11
-
I confess that I have struggled a bit with Adam's response to my stuff, which seems like a shame because I think that we would probably agree on quite a lot, but the tone of his response to my stuff is more like a troll, than anything else, so he is one of about 3 people that I have muted on my social media. I have no objection to people disagreeing, but I prefer reasoned arguments, and Adam and I didn't seem to be able to achieve that. So I tend to ignore him, which is a shame because I think otherwise we'd probably be allies in the DDD, event based systems cause.
I watched the video that you linked and I agreed with lots of what he said. I am not sure that I see the distinction he makes between Event Modelling and Event Storming; at the level of detail that matters to me, the way that I use Event Storming sounds pretty similar to his use of it. Of course, I don't agree with his take on automated testing, though I do agree with his stance on designing systems from small, focused, independent pieces of code. It is my view that the best way to get people to design code like that is to prefer testable code, which is what TDD does. He clearly doesn't like that. I accept that he may have something better, but I don't see it, and that is not how I and my teams worked, and successfully built systems.
11
-
11
-
11
-
11
-
11
-
11
-
11
-
11
-
11
-
11
-
11
-
11
-
11
-
11
-
11
-
11
-
10
-
10
-
10
-
10
-
10
-
10
-
10
-
10
-
10
-
10
-
10
-
10
-
10
-
10
-
@KrisRogos I agree with you about how difficult this will be, but I see no alternative to trying. It is not impossible to imagine controls. At present, you still need massive cloud resources to train these things. You could imagine checks on the use of cloud resources that limited them to prevent such massive-scale training. You could make these things illegal, and establish punishments for people that broke these rules.
I am not arguing in favour of these things, I am saying that they are possible, conceivable. The closest example that we have is the control of nuclear weapons and materials. It took decades to establish controls for those things, the problem is that we don't have decades this time, and this time we are not just trying to find ways to control the people, but also to control the thing itself, because it may be sneaky too.
My view is that some sort of control is vital, and however difficult, we should probably try to find a way to do it.
10
-
10
-
10
-
10
-
10
-
When is it not better to know as soon as possible that something is broken? Name me a type of software where CD doesn't apply. It is used in cars, medical devices, banks, stock exchanges, games, by all of the biggest web companies in the world, cloud infrastructure, operating systems, space rockets, mobile apps, military hardware, retail systems - the evidence is that this approach does work everywhere.
10
-
10
-
10
-
10
-
Well, because you can't write 'unit tests' that talk to a DB - by most agreed definitions, they are no longer 'unit tests' at that point, they are integration tests, and those are focussed on different things.
I would test my SQL queries, in integration, or acceptance tests, but I don't want to confuse testing them with testing the logic elsewhere.
If I write unit tests as in-memory tests of my code (how they are usually defined) then I can run 10s or 100s of thousands of those in minutes. As soon as I write tests that talk to DBs or File-Systems or whatever, I am probably down to 10s or 100s per minute. This is WAY TOO SLOW for proper feedback and control.
To your last point, this is clearly a design choice. I confess that I am not a big fan of putting too much logic into the DB - it is the wrong place. It comes from the assumption that the DB is an integration point that is shared between applications, which I think is widely regarded as an anti-pattern in large-scale and high-performance systems (which is where I do a lot of my work). At which point, my choice of where to put the logic for "can't have duplicate email addresses" is probably going to be built into my app, or service, rather than into the data-store. So now I am back to being able to test it in memory if I needed to.
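To make that concrete, here is a minimal sketch, with invented names (EmailIndex, Registration), of how the "no duplicate email addresses" rule can live in the code behind an interface and be unit tested entirely in memory; a real deployment would back the interface with a data-store and test that separately in integration tests.

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;
import java.util.HashSet;
import java.util.Set;
import org.junit.jupiter.api.Test;

// Hypothetical port to whatever stores the known email addresses.
interface EmailIndex {
    boolean contains(String email);
    void add(String email);
}

// In-memory implementation used by the unit tests; a DB-backed one would exist in production.
class InMemoryEmailIndex implements EmailIndex {
    private final Set<String> emails = new HashSet<>();
    public boolean contains(String email) { return emails.contains(email); }
    public void add(String email) { emails.add(email); }
}

// The business rule itself lives in plain code, not in the data-store.
class Registration {
    private final EmailIndex index;
    Registration(EmailIndex index) { this.index = index; }

    boolean register(String email) {
        if (index.contains(email)) return false;   // duplicate - reject
        index.add(email);
        return true;
    }
}

class RegistrationTest {
    @Test
    void rejectsDuplicateEmailAddresses() {
        Registration registration = new Registration(new InMemoryEmailIndex());

        assertTrue(registration.register("jane@example.com"));
        assertFalse(registration.register("jane@example.com"));
    }
}
```

Thousands of tests like this run in seconds, which is what keeps the fast feedback loop fast.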
10
-
10
-
10
-
Yes, the problem is that "cookie-cutter software projects" make no sense at all, whereas "cookie-cutter construction does".
The difference is in the cost of reproduction, for software it is essentially free, because we can perfectly clone any sequence of bytes, representing a system, for essentially zero cost, so why on earth would we ever make the same software again, because we can just copy it. So it is ALWAYS something new, it is ALWAYS at some level a custom project. Which as you said, are impossible to estimate with any degree of accuracy, whatever the field.
10
-
10
-
@gabrielvilchesalves6406 Yes, I think that the trick is to think differently when writing the test. Your aim at that point is NOT to try to imagine a solution and test it, but rather, as you describe, to think through what you'd like the code to do. "What is the outcome?". So if I was writing a test for a Calculator, I would want to test for the results that it calculates, not the mechanism for how it does that. The test then becomes two things: a working specification of the public interface to your code, and an assertion that the outcomes that you hope for are met. Good tests, in these terms, should say nothing at all about "how" the code works.
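A tiny sketch of that idea, with a made-up Calculator class for illustration: the test specifies the outcome we want from the public interface and says nothing about how the result is computed internally.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Hypothetical Calculator - the implementation could change completely
// without the test needing to change, because the test only states the outcome.
class Calculator {
    int add(int a, int b) {
        return a + b;
    }
}

class CalculatorSpec {
    @Test
    void addsTwoNumbers() {
        Calculator calculator = new Calculator();

        // Assert on the result only - the "what", never the "how".
        assertEquals(5, calculator.add(2, 3));
    }
}
```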
10
-
10
-
10
-
10
-
10
-
10
-
9
-
9
-
9
-
9
-
9
-
9
-
9
-
9
-
9
-
9
-
9
-
9
-
9
-
9
-
9
-
9
-
9
-
9
-
9
-
9
-
9
-
9
-
9
-
9
-
9
-
9
-
9
-
9
-
9
-
9
-
9
-
8
-
8
-
8
-
8
-
One of my problems with this idea, is that if there are generic solutions that are all about the tech, then surely Google, Microsoft, Amazon are, or should be, providing those. How is a "Platform Engineering Team" inside a cloud-non-specialist company doing a better job of that kind of generic abstraction than the cloud providers and experts?
The value that people like you can add is, surely, about ensuring that the gap between the specifics of your org's problems and the already existing, cloud-generic solutions is bridged. As I said in the video, this is WHOLLY a design problem, and one that is entirely about the problem you are dealing with and only very peripherally about the intricacies of the technology you have chosen.
8
-
8
-
Thanks. It's too contextual to answer really, but the most successful company in the world (Apple) don't pre-announce. You can make some predictions based on the rate at which you, on average, produce new features, but my view is that predictability is largely an illusion anyway; the only real way to be predictable is to be conservative in your predictions. So as a business, is it better to create great features sooner, or to predict when, but deliver them more slowly? I think the answer to that may depend on the biz, but mostly sooner is more important than predictability.
8
-
8
-
8
-
@AngelLoredo53 Well, one way that it is a bad thing is, as you say, when it is naively assumed that the way for orgs to win is to take advantage of their employees, which does seem like a fairly common anti-pattern. I don't subscribe to evil management, but I do think that management is poorly taught and often poorly practiced in all spheres of life. I don't think that people or orgs set out with the aim of doing a bad job, but often bad jobs result from their actions anyway. I was once part of a very large team that had been working with one of the big consultancies. The project was doing poorly and we had all been working ridiculously long hours for nearly 18 months. Many of us had been working 80 hour weeks for months. One of the leading consultants, a senior partner in the firm, came into a leadership meeting one day and said "It's time to bring the hammer down on the dev teams", at which point I got up and left. At that point, pushing people this hard, we had lots of software, and after 18 months of work it didn't even compile together, let alone work! It had started out working, and working quite well, and was destroyed by too many people and too much macho crap in the leadership team. Even so, I think that they thought that they were doing the right things. They were wrong, but I don't think that they were intentionally evil, just accidentally incompetent.
8
-
8
-
8
-
8
-
I agree, convincing people is the hardest part, but I don't think that we can lay this only at the door of the business. Don't get me wrong, they don't get a free pass, but in my experience we, developers, often blame the biz for our own assumptions about the biz. Sure, a commercial person is going to ask for more sooner, but I think that it is relatively rare for biz people to actively tell dev teams to "cut corners", or "don't test", or "don't refactor", or "create low-quality work to make the dates".
The problem is that it is always a difficult conversation to say "No" to people. So we tend to say the things that we think that they want to hear. So step one is, at least, taking ownership of the stuff that we should own. It is our responsibility to do a good job. It is not up to someone who isn't working on the code to tell us how best to write the code. So no estimates that give the "option" of not testing, not refactoring, not staying on top of tech-debt and so on.
Next, I think that we need to speak to non-technical people in terms that make sense to them, not in terms that we understand. I think that an important starting point for this, is that we are being honest with ourselves about why we want to do something, and if we hope to convince others to change, there needs to be a REAL reason why we think that this is an improvement, and not just because we'd like to play with the tech, or it would make my CV better.
Finally, the best way to convince people is to fix a problem that they have.
This is a good topic, maybe I should make a video on this?
8
-
8
-
I think it depends on why you are making the choice. New and shiny isn't always bad, if you think the new shiny thing will genuinely help to solve the problem better, but if you ONLY choose the new shiny thing for personal reasons, either not knowing, or worse, not caring whether this will help to achieve the goal of the work, then I think that this is amateurish, to be honest. You need to think critically and sceptically when adopting new things. Try them out and then decide; don't just buy into the hype of the new and shiny. I am sometimes an early adopter of tech, but I try to find the flaws in it for myself before jumping into it.
First, can it do what you need, and only then decide if it is more fun. For example, for a very long time my evaluation of any new tech starts with "Can I version control it?", "Can I automate its deployment?" and "Does it allow for TDD?". If the answer to any of these is "No" I am not interested and will have to be dragged kicking and screaming into using it. 😉
8
-
8
-
8
-
8
-
8
-
8
-
The science is incredibly well established; there is no debate here really. Check the data in the links below the video if you are really in doubt. But this is nothing to do with politics, other than that the vested interests of the enormously powerful fossil-fuel lobby have been operating a campaign of disinformation, literally taken from the big-tobacco playbook of a few decades earlier. This too is very well documented, including recordings of fossil-fuel company executives planning the disinformation campaign.
No, this is not loose correlation, this is hard science. CO2 is a greenhouse gas; you can measure that yourself if you care to. CO2 is increasing in the atmosphere and the world is getting measurably hotter, following the predictions of climate change models. Glaciers are melting, the northern icecap is disappearing, and sea levels are measurably rising. Where is the debate or politics in any of that? These are facts. How we decide to deal with these facts may be political, but either we deal with them or, as the climate change models say, our civilisation will come to an end. The margins are slim, and we are running out of time because people have been treating this science as though it were some political choice. Eventually, whatever your politics, reality will intervene.
8
-
8
-
8
-
8
-
8
-
8
-
8
-
8
-
8
-
8
-
8
-
8
-
8
-
8
-
8
-
8
-
8
-
8
-
8
-
8
-
8
-
It is confusing. I use the words in the following way:
Deploy - The technical act of copying some new software to a host environment and getting it up-and-running and so ready for use. In context with 'Continuous', as in 'Continuous Deployment', I use it to mean 'automating the decision to release' - if your pipeline passes, you push the change to production with no further actions.
Release - Making a new feature available for a user to use. (Note: we can deploy changes that aren't yet ready for use by a user; when we make them available to a user, with or without a new deployment, we release them - I plan to do a video on strategies for this.)
Delivery - In context with 'Continuous' I mean this in a broader sense. For me, 'Continuous Delivery' makes most sense as used in the context of the Agile Manifesto - 'Our highest priority is the early and continuous delivery of valuable software to our users'. So it is about a flow-based (continuous) approach to delivering value. That means that the practices needed to achieve CD are the practices needed to maintain that flow of ideas - so it touches on all of SW dev. I know that this is an unusually broad interpretation, but it is the one that makes the most sense to me, and the one that I find helps me to understand what to do if I am stuck trying to help a team to deliver.
There is, as far as I know, one place where my language is a bit inconsistent. I tend to talk about working towards "Repeatable, Reliable Releases", if I were willing to drop the alliteration and speak more accurately that should be "Repeatable, Reliable Deployments".
I hope that helps a bit?
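One rough way to picture the deploy/release distinction described above is a feature flag: the new code path is deployed to production, but the feature is only "released" when the flag is switched on, with no new deployment needed. This is a minimal sketch with invented names (FeatureFlags, CheckoutService), not a prescription for any particular toggle framework.

```java
import java.util.Map;

// Hypothetical feature-flag lookup - in practice this would read configuration.
class FeatureFlags {
    private final Map<String, Boolean> flags;
    FeatureFlags(Map<String, Boolean> flags) { this.flags = flags; }
    boolean isOn(String name) { return flags.getOrDefault(name, false); }
}

class CheckoutService {
    private final FeatureFlags flags;
    CheckoutService(FeatureFlags flags) { this.flags = flags; }

    String checkout() {
        // The new checkout is deployed alongside the old one, but users only see it
        // once "new-checkout" is released (the flag is turned on) by configuration.
        return flags.isOn("new-checkout") ? newCheckout() : oldCheckout();
    }

    private String newCheckout() { return "new checkout flow"; }
    private String oldCheckout() { return "old checkout flow"; }
}
```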
8
-
8
-
8
-
8
-
7
-
7
-
7
-
7
-
7
-
7
-
7
-
7
-
7
-
7
-
7
-
CI is defined as integrating "everyone's changes at least once per day", so if your FBs last longer than a day, you can't practice CI, because my code on my branch is hidden from yours, so we don't get to integrate them together, until we both think that we are finished. The reason that this matters, is because any tests that you run before this point of integration, are testing a version of the code that is not the truth of the system. So whether the tests pass or fail, they are not a definitive statement. CI works on the idea that there is only one interesting version of your software, the current version, and the only way that we can test the "current version" is to integrate everyone's changes after each small change.
This is a very different way to work, but the data (from State of DevOps report) says that you produce better software faster with CI.
7
-
7
-
7
-
7
-
I find it helpful to think about this from two angles:
1) What does it take for you to feel ready to release?
2) How long does it take to get that confidence?
I am then going to optimise for those two things, speed & confidence. I advise people to divide their deployment pipeline into, effectively, two stages, a fast-feedback stage and a higher-confidence stage.
You want developer-focussed feedback in the fast stage, I generally advise people to aim for tests that can give about 80% confidence that if they all pass, every other kind of test will be fine, and also pass. The aim is to achieve that 80% confidence in the shortest time possible, I advise in under 5 minutes.
That immediately rules out some kinds of tests, most of these tests, tests that can run really fast, but give high confidence, are going to be unit tests - best created via TDD.
Then you need to do whatever else it takes to improve your confidence to the point where you are comfortable to release - Acceptance tests, Perf tests, Security tests - whatever. These will take longer to run, so we run them after the commit-stage (fast-cycle) tests.
The last nuance, that this video describes, is to use the Acceptance Tests (BDD scenarios) to capture the behavioural intent of the change so that you can use that as an "Executable specification" to guide your lower-level testing, and so the development of your features.
There are several other videos on the channel that explore these ideas in more depth, looking at some of the different kinds of testing.
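As a very rough sketch of the two-stage split described above, here is a toy pipeline structure in code; the names, time budgets and stub stage bodies are invented for illustration, and a real pipeline would be defined in your CI tooling rather than in Java.

```java
import java.time.Duration;
import java.util.List;
import java.util.function.BooleanSupplier;

// Illustrative only: a fast commit stage for developer feedback, then a slower,
// higher-confidence acceptance stage, stopping at the first failure.
class PipelineStage {
    final String name;
    final Duration budget;       // how long we are willing to wait for feedback
    final BooleanSupplier run;   // returns true if the stage passes

    PipelineStage(String name, Duration budget, BooleanSupplier run) {
        this.name = name;
        this.budget = budget;
        this.run = run;
    }
}

class DeploymentPipelineSketch {
    public static void main(String[] args) {
        List<PipelineStage> stages = List.of(
            // Fast feedback: unit tests from TDD, aiming for ~80% confidence in ~5 minutes.
            new PipelineStage("commit", Duration.ofMinutes(5), () -> true /* run unit tests */),
            // Higher confidence: acceptance, performance, security - slower, run afterwards.
            new PipelineStage("acceptance", Duration.ofMinutes(45), () -> true /* run BDD scenarios */)
        );

        for (PipelineStage stage : stages) {
            boolean passed = stage.run.getAsBoolean();
            System.out.printf("%s stage (budget %s): %s%n",
                    stage.name, stage.budget, passed ? "passed" : "failed");
            if (!passed) return;   // stop the pipeline at the first failure
        }
        System.out.println("Releasable: all stages passed.");
    }
}
```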
7
-
7
-
7
-
7
-
7
-
7
-
7
-
7
-
Well that viewpoint doesn't fit the facts. I think that what you are saying is that women aren't as good at this stuff as men. If that is the case, why were women pretty much equally represented in the early days of computing? There is a more social effect than that going on.
I don't think that either Trish or I said anything about this being an intentional bias in recruiting. It may be, but there are other explanations. I don't believe that women are fundamentally less capable than men at writing software. Once you take that view, then whatever the reason for their low numbers, it is a problem for something as important as SW in our society. I am not going to think about the same things as you or Trish or anyone else when it comes to SW. There are differences in experience and culture based on sex, so for those differences to be represented in SW, we need people who will see that set of problems.
For example, I am 6' 4" (196 cm) tall. If I fit a bathroom mirror for me, most people, and in particular the vast majority of women, will have a hard time using a mirror that is comfortable for me. If you are of average height, this idea won't have occurred to you before, but as someone in a different part of the bell curve, it annoys me all the time, because I have to bend over to use most bathroom mirrors. Diversity isn't about politics, though some people try to make it so, it is about practicalities and good engineering.
7
-
7
-
7
-
@LaMcHoPs337 My take is to take the idea of being "agile" at face value, ignore the layers of noise that surround the term in SW, it is about being able to change direction easily - be agile. Once you buy into that, it is easier to see the signal through the noise. To be genuinely "agile" in SW, you need to be able to flatten the cost of change curve so that stuff is easy to change, that means very good technical performance, your code needs to be good, and well tested!
There is a lot more to all this. I hope my channel helps with some of it. Here is an old video, "Agile Uncertified": https://youtu.be/U-u8xquguWE
I'd also recommend really reading and thinking about the implications of the agile manifesto: https://agilemanifesto.org These ideas are a lot more important than Scrum.
I also think it's worth understanding other agile approaches. I'd recommend Continuous Delivery as an approach that doesn't duck the engineering challenges of being agile, but CD is mainly a 2nd generation version of Extreme Programming, so I'd also recommend reading about that. https://amzn.to/2GpQRjE
7
-
I think that one of the best examples that we have of how to architect for hardware is an OS. We don't often think of it that way, but that's what an OS does, it provides an insulation layer of code between our apps that do useful things, and the hardware that they run on. I recommend that you architect for bespoke hardware similarly. Establish well defined interfaces at the boundaries and test apps to those interfaces.
If I am writing a Windows app or a Mac app I don't worry about testing it with every last detail of every printer that may be connected. OS designers design an API that abstracts printing, we call them print device drivers, and then we write to those abstractions.
The people that write the printer drivers don't test their driver with every app that uses it. They will have an abstract test suite that validates that the driver works with their printer. Their tests will be made-up cases that exercise the bits that the driver writers are worried about.
My recommendation for hardware-based systems is to work hard to define, and maintain, a clean API at the point where the SW talks to the HW. Write layers of code, firmware and drivers perhaps, that insulate apps from the HW, and test the apps against fake versions of that API, under test control. Test the driver layer in the abstract, in terms of "does the driver work" rather than "does an app work". It's not perfect, and you may not trust it enough, but this is a MUCH more scalable approach to testing, and a version of this is how, for example, the vast majority of testing in a Tesla is done.
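A minimal sketch of that pattern, with invented names (TemperatureSensor, Thermostat): the app talks only to an interface at the HW boundary, and the tests drive a fake implementation under test control, so no real hardware is needed to test app behaviour.

```java
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

// Hypothetical hardware boundary: production code would implement this against
// the real device or driver; tests use a controllable fake instead.
interface TemperatureSensor {
    double readCelsius();
}

class FakeTemperatureSensor implements TemperatureSensor {
    private double current;
    void setReading(double celsius) { current = celsius; }
    public double readCelsius() { return current; }
}

// Application logic written against the abstraction, not the hardware.
class Thermostat {
    private final TemperatureSensor sensor;
    Thermostat(TemperatureSensor sensor) { this.sensor = sensor; }

    boolean heatingShouldBeOn(double targetCelsius) {
        return sensor.readCelsius() < targetCelsius;
    }
}

class ThermostatTest {
    @Test
    void turnsHeatingOnWhenBelowTarget() {
        FakeTemperatureSensor sensor = new FakeTemperatureSensor();
        Thermostat thermostat = new Thermostat(sensor);

        sensor.setReading(17.5);

        assertTrue(thermostat.heatingShouldBeOn(20.0));
    }
}
```

The driver layer that implements TemperatureSensor against the real device gets its own, separate, abstract test suite.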
7
-
7
-
7
-
7
-
7
-
7
-
7
-
7
-
7
-
7
-
7
-
@qj0n I agree, I think that there is an issue of honour, or morality, here. But I also think that there is, maybe, a deeper form of self-interest at play.
Sure, you can 'cheat' and pick tech that makes it easier to get the next job that you want, but I am not convinced that that is a good way to get to work for the better employers. I think that the better employers are looking for something beyond a tick-list of technologies. When I was doing a lot of interviewing, I would see it as a down-mark when someone's CV was basically a list of tech. They would have to work harder to convince me that they weren't missing the whole point of SW dev, which is to solve problems, not to wield tech.
A good dev can learn the tech in days to be useful and weeks to be good, so the tech is never my primary goal in recruiting, it is much more about how they work through problems to solve them. I know that my interview style is unusual, but I still think it is better 😉🤣
I think that you can gain some limited advantage by 'cheating' the system, but I think that you build much more advantage, and reputation, by having a laser focus on solving problems well, and doing that, to the best of your ability, in the interest of the companies that employ you.
I am not 100% sure that I am right here, I can only speak from personal experience, but people liked working with me because, over time, they saw that I was working in their interest, sometimes even when it didn't align perfectly with mine.
I have taken jobs using "less cool" technologies in the past, because that was the right choice for them, and there were other reasons for me wanting the job.
7
-
7
-
7
-
7
-
7
-
7
-
7
-
7
-
7
-
7
-
7
-
Thanks 🙂
Yes, I have been building service-oriented systems for a few decades now, it is much my preferred approach in terms of architectural style, though naturally I try to fit the solution to the requirements before picking an architecture - there is no "one-size-fits-all" option.
When you start out on any complex system, you don't know where you will end; you will grow your system as your understanding grows. At least that is how I work, and I think it is the best way to create complex systems. In the early days you will be exploring more and making more mistakes; it takes a while for your design, at the level of services, to consolidate. You will find that some of the service interfaces, even if you got the service boundaries in about the right place to start with, change a lot as you refine the responsibilities of the service and evolve the cleanest interfaces.
During this period there is no benefit to a microservice approach IMO. A much better strategy is to bung everything in one repo and build and test it all together. The HUGE advantage of this approach is that when you "build and test it all together" there is NO DEPENDENCY MANAGEMENT of any kind. I can change the interface to my service and update all of its consumers in the same commit!
The main downside of this approach is that you have to be efficient enough in your building and testing to get answers back on CD timescales, so under 1 hour for everything. It also means that the pieces aren't really "independently deployable".
This approach doesn't stop you having separate, reasonably independent, small teams though. This is how we built our financial exchange at LMAX, and it is how Google and Facebook organise too. It is surprisingly scalable!
As I describe in the video, microservices is an organisational scaling strategy, nothing more. It limits the options for optimisation, because each service is discrete. It means that you have extra work to do to facilitate the "independent deployability" and so on. It also demands a higher level of design sophistication to keep the services separate and "independently deployable". My advice, and the advice of Sam Newman, who wrote the most popular book on microservices, is to begin with a distributed monolith, and only once the interfaces have stabilised move to microservices.
7
-
7
-
7
-
7
-
7
-
7
-
7
-
7
-
7
-
7
-
7
-
7
-
6
-
6
-
I guess you didn't watch the video?
There is no way to increase the accuracy of estimates. The data to back that claim has existed since the 1990's and was described very well by Steve McConnell in "Rapid Development".
My advice would be: work within "error-bars" that make sense for estimation, use estimates to prioritise work rather than to draw Gantt charts, and if you really want to increase the illusion of rigour, try CD3 (Cost of Delay Divided by Duration).
Sure we are pushed to make estimates sometimes, but don't buy into the illusion that you can do that with any accuracy, there is no solution to that.
Accuracy in predicting the future (estimates) requires that we understand the problem, which means we have already solved it, and there is no point in solving the same problem twice in software, because once you have a solution you can clone the answer for, essentially, zero cost - so we are ALWAYS building something new, or we are being dumb!
6
-
6
-
6
-
6
-
6
-
6
-
I wouldn't have a "Backend" story, this is an artificial split driven by technical design choices and so exposes those choices at the level of stories, meaning you have allowed implementation detail to leak out into the story - a bad idea, and so you have increased the coupling between the Story and the solution - another bad idea.
I would instead find a user story that matters from the perspective of a user, and forces me to implement something not hard coded.
In the bookstore example, we could imagine a requirement along the lines of "I'd like to see new books when they are added to the list" or perhaps "I'd like to see what books are left when a book is removed from the list".
None of these have to be perfect. The idea here is NOT to do programming by remote control, the idea is to give us the freedom to design good sensible solutions without, 1) being told what those solutions must be and 2) Without the story necessarily forcing us to make any specific technical change, other than WHATEVER is needed to achieve the goal that the user wants.
Stories are tools to HELP us develop software, so use the Stories, that should ONLY express user need, to guide your choices in terms of design of the solution, but those solution choices are yours, and it is ok for you to decide when to sensibly make them.
So my example wasn't meant to demonstrate me splitting F.E. from B.E., in fact in my example, the story had both F.E. and B.E., represented by the service and the UI in my diagrams. The service with the hard-coded list of books WAS MY B.E.! What I want next is a story that makes me need to do better than simply hard coding a response, if I can't think of one, then maybe I should hard-code the response, because that is simpler!
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
That has been true for a long time. I recall the problems of VB programmers and Visual Studio programmers who couldn't use a command line.
I agree that this is a real problem, but I don't believe that doing it for them is a realistic fix, for the reasons that I talked about in my previous reply. The fix is in two parts IMO: abstract at the level of the specific problem, not the tech, because the tech is already abstracted to be generic, so more abstraction to make it more generic probably doesn't help much.
So, as I keep saying, THIS IS ALL ABOUT DESIGN, not the tech. The 2nd part is to teach the beginner programmers what they need to learn; don't teach them learned helplessness.
I know it takes skill to deal with the problems of distributed systems, it always has. So we need the people that can do those things to help the people that don't to learn what it takes, because it is VERY HARD to hide those things, and the cloud experts are already trying to solve that problem and we can see how hard that is, because some of their tools are hard to use.
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
Thank you!
I am not fishing for a revised version of "Continuous Delivery", but I think that the ideas are important and am, with this channel, hoping to help promote them more.
I like your word; collaboration is important to lots of this stuff, but I have a word that I think is more important: science. We need to start using a more scientific-rationalist approach to solving problems in software. That often relies on deep, effective collaboration, but I think it goes beyond that and helps in many more ways than collaboration alone.
I now believe that CD represents a genuine engineering approach to SW development, and is important, and works, for those reasons.
I am working on another book, but it isn't a re-write of CD, it is about SW Engineering and what that idea means, or should mean in my opinion.
Thanks for your comments.
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
@DefinitelyNotAMachineCultist I agree with your analysis of how much of our industry works. My aim, with this channel and the other things that we can do, is to help some people to think a little differently about what our job really is and how to do it. I think that the "monkey see, monkey do" thing is a sad reflection on what it really takes to build great software. I think that for some classes of problems, the kind of problems that I have spent most of my career working on, it is MUCH harder than that. That means we need to take, really seriously, the skills and philosophy of our discipline.
I think that we have, only fairly recently in the history of our discipline, discovered things that really, measurably work better. And it isn't where most people have been looking. It is not about the languages or the frameworks that we use, it is how we organise our work and how we think about design, and very specifically the cost of coupling in our work and our designs.
My channel is really an opportunity to reach more people with these ideas.
6
-
@DefinitelyNotAMachineCultist Specifically on "picking up new tech faster", my view is that the best devs are good at this because their knowledge is grounded in a few foundational concepts. Computers aren't magic, and knowing loads of APIs doesn't make you the best dev - ultimately you can look stuff up!
The skill is having a framework for problem solving. I think that we should focus on optimising for learning and managing complexity. When I start in with a new tech, I will start by finding out ways to test it, and then play for a bit to see how it works. I think it is also important to remember that, ultimately, computers process data with machine-level instructions; everything else is just how you organise those things. I find that that helps me to see what is going on. In part it is experience, sure, but it means that you can detect the bullshit sooner. My wife's father used to teach physics; as he got older he couldn't remember all the different formulas, but he could solve almost any practical problem because he knew V=IR, F=MA, and algebra and trig. Really having a good feel for ideas like modularity & cohesion, treating them as the most important things in design, and using techniques like "separation of concerns" and designing for "testability" to enhance the modularity and cohesion in your designs, will take you a VERY long way - apply this to any new tech that you are trying.
If you are interested in this kind of stuff, keep a look out for my next book, I am currently working on it (I am supposed to be doing some writing today, but I am talking to you :)) it will hopefully be published second half of this year. It is on, exactly, these topics.
6
-
6
-
@PetiKoch Ah, I see what you mean. Sure, maybe, depends on the application. The way that I build my services, that is more of a deployment-time decision than an architectural one, which is why I don't think of it that way.
The danger of starting with your monolith as a "single-process monolith" is that the other description of that is just a bunch of code! Unless we are writing something that we KNOW will be throw-away, then I think that the guiding principle for good design, even for small systems, is to manage complexity. So I want my design to be modular and cohesive, with good separation of concerns and as loosely-coupled and well abstracted as seems to make sense at the point that I write it. So I like the idea of "services" as a modular unit.
Now, what I mean by a "distributed service architecture" is that I don't care if the services are running on the same machine or on machines on different parts of the planet. That means that the comms mechanism, and my design, needs to allow for the case where I want to make the services non-local. My favourite way to do that is to make the interfaces to my services async. Now it is a deploy-time decision whether I optimise for simple, local, fast comms or distributed, non-local comms. I have separated that comms concern.
But this only works if I assume that the services are non-local. If I assume they are local, I may become addicted to the local performance, and then when I try to distribute them later, when my system needs to scale, my architecture is no longer fit for purpose.
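To make that a bit more concrete, here is a minimal sketch in Java - the names (OrderService, MessageBus and so on) are made up for illustration, not a prescription. The point is only that the interface is async, so whether the implementation is in-process or remote becomes a deploy-time choice that callers never see.

```java
import java.util.concurrent.CompletableFuture;

// The service interface is async, so callers never assume the implementation is local.
interface OrderService {
    CompletableFuture<OrderResult> placeOrder(OrderRequest request);
}

record OrderRequest(String productId, int quantity) {}
record OrderResult(boolean accepted, String reason) {}

// Deploy-time choice 1: fast, in-process implementation.
class LocalOrderService implements OrderService {
    public CompletableFuture<OrderResult> placeOrder(OrderRequest request) {
        // Runs in the same process; still returns a future so callers don't care.
        return CompletableFuture.completedFuture(new OrderResult(true, "ok"));
    }
}

// Deploy-time choice 2: the same interface backed by some remote transport
// (message queue, HTTP, whatever) - callers are unchanged.
class RemoteOrderService implements OrderService {
    private final MessageBus bus; // hypothetical transport abstraction

    RemoteOrderService(MessageBus bus) { this.bus = bus; }

    public CompletableFuture<OrderResult> placeOrder(OrderRequest request) {
        return bus.send("orders.place", request)
                  .thenApply(reply -> (OrderResult) reply);
    }
}

interface MessageBus {
    CompletableFuture<Object> send(String topic, Object payload);
}
```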
You could argue YAGNI; the trick here is to do this in a way that makes the overhead of this "thinking ahead" low enough that it's not really a problem. That is a matter of experience and design-taste I suppose, but we got that right when we built our exchange and I have used that approach a few times since. There is some more stuff on the architectural style that I am hinting at here:
https://martinfowler.com/articles/lmax.html
and here:
I was involved in creating something called the "Reactive Manifesto" to describe some of the properties of this async architectural approach.
https://www.reactivemanifesto.org/
6
-
6
-
6
-
6
-
6
-
They don't have to work like we do to be a threat, and they don't need the same kind of plasticity. This is evolution, and evolution only needs a replication mechanism, selection pressure, and variance. At that point it will evolve things.
Machine AI will work millions of times faster than we do, so even if it isn't as smart, it can have evaluated millions more choices than we have in the same time, and will have access to "better" choices. That's how chess AIs traditionally beat humans; now it is not quite so clear how, because they are trained on millions of games and infer for themselves how to win.
I don't see how anyone can not see this as an existential threat.
As we begin connecting AI to the real world, so that they can act as well as respond, they gain agency to change things. If they are smarter than us, then we don't know what they will choose to change, and whether or not it will be in our interests. They don't need to be conscious to do this; if they are making changes that aren't moderated by human decision making, and they are evolving, then they are, by definition really, uncontrolled. I don't care whether my children and grandchildren get wiped out by AI by accident, or because they are evil, both are the worst outcome for me.
We can't control evolution, look at the on-going COVID pandemic, which as I say in the video is a lot simpler as an evolutionary platform than AI. I think that the genie is already out of the bottle, we will get world-affecting AI, and we are currently wandering into this future and not paying attention. You don't seem to be paying attention to this and you are working in the field.
To quote Musk, but changing the context (he said this when talking about climate change a few years ago), if there is a 1% chance of extinction of our species, that is too big a chance to take. I am pretty sure that the chance is more than 1%, though it may not be lots more and it is not a certainty, but 1% is too big a risk to gamble everything that we, as humans, value.
I am not confident that we can do much about this, but I do think that we should try.
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
@kevinfleischer2049 Yes, I agree that this feels like a shorter route for legacy code, and the switch to working on trunk is daunting, particularly if you have never done it. The problem as I see it though, is that the sociology is mixed up with the technology. From a purist point of view working in small changes is certainly the better, measurably safer way to proceed. But when you are used to being scared to change the code, it feels risky.
On the other hand, as long as you have the "crutch" of not releasing ("I will make this change now that scares me, but I won't face my fear and hope it will feel safer later by hiding it on a branch" (extreme language to make the point) ), that crutch stops you from working in ways that will fix your broken leg.
It is a difficult decision to switch, but I am pretty sure that it feels more risky than it is in reality. Learn some refactoring skills before you begin, so that you know how to make small changes safely, use Approval tests to defend code that you don't want to change, use Acceptance tests as a defensive mechanism for code that you do want to change, and start!
5
-
5
-
5
-
I'd argue that demonstrably our industry is sexist, and that fact doesn't really depend on anyone being overtly sexist. Our problem is so deep that we can't even mention it or think about it, without assumptions of personal insult.
In 1984 women represented 38% of people studying computer science, now it is 17%-18%. If it were because women are not capable, why were they more capable in the 1980s? It is cultural, and women's representation is not evenly distributed: big tech, Apple, Microsoft, Google, Amazon etc, are better than average at employing women in this sector. I think that this is a cultural effect, but whatever the reason, I think it is important to look at it and try to figure out if we can do better. This is not necessarily about individual blame, or finger-pointing, it is about subtle, hard to see in one's self, attitudes and biases that work against marginalised groups. And the fact that 50% of our population are marginalised, whether through their own choice or other people's selection biases, is a problematic thing IMO.
5
-
5
-
5
-
5
-
Maybe it was a bad example, but my point was that we technologists need to be responsible for some of the choices. OAuth may be a very good choice, but it isn't the goal, it isn't what you will need to test, it isn't what the users want, it is a solution - a good one, but not really a requirement.
As long as we specify solutions rather than needs that should be fulfilled, communication is less clear and we are more prone to make dumb mistakes in the software we create.
You are right, there are sometimes technical constraints that are important, and they need to be dealt with, but they should be the exception not the rule. Because the temptation is to specify work by defining a solution rather than a problem to be solved, that is the default mode of failure, and so a much more common problem than forgetting that it needs to be OAuth. So we need to be cautious of specification creeping into our requirements process.
My experience of working with hundreds of companies by now, is that most specify work by describing a solution. I think this is a huge mistake.
5
-
5
-
5
-
I don't claim that the advice "is based on science" I say that the engineering discipline that I describe is based on scientific style rationalism, by which I mean the practice of science, not the findings.
The practice of science is based on some key ideas. Always assume that you can be, and almost certainly are, wrong. Work to find out how, and where you are wrong and try something new to fix it.
Make progress in small steps, and do your best to falsify, or validate each step (falsification is usually a more profound test).
Make progress as a series of experiments. Being experimental means having a theory about what you are doing and why you are doing it, figuring out how you will determine if your theory is sound before you begin the experiment, and it means controlling the variables enough so that you will understand the results that you get back from the experiment.
There is a fair bit more, but that is what I mean by being "scientific". The only study that I am aware of that I believe has a decently strong claim to being scientifically defensible is the DORA study, described in the "Accelerate" book by Nicole Forsgren et al.
The other vitally important aspect of a more scientific approach is to use what David Deutsch calls "Good Explanations".
According to Deutsch a "Good Explanation" is...
1. *Hard to Vary*: If you can change parts of the explanation while still making it work, the explanation is not considered robust or deep. A good explanation has little flexibility in its structure — any change would render it inadequate or false.
2. *Not Merely Predictive*: A good explanation goes beyond mere prediction. Many theories or models can predict outcomes (e.g., using formulas or data), but a good explanation delves into why something is happening, in a way that is resistant to arbitrary alteration.
3. *Truth-Seeking*: It aims to accurately represent the reality of the phenomenon it explains, rather than just being a convenient or pragmatic model.
4. *Problem-Solving*: A good explanation not only fits existing data but also solves the problem it was created to address. It reduces the mysteries, clarifying why things happen the way they do.
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
1. The data on pair programming says that 2 people complete the same task as one person in 60% of the time, so not 2 for 1, but not faster. But, the quality produced by the pairs is substantially higher. The overall impact is that pairs are at least as efficient, but probably more efficient than a single. The problem with being more definite than that is that teams that do pairing usually do a lot of other good stuff too, so you can't tell the effect of pairing vs other improvements.
2. The commit history still tells the truth, but it is a truth more like a transaction log in an event stream, rather than some kind of time based snapshot. Yes, include a reference to the reason (could be a Jira ticket) in every commit. You can take this further, adopt some conventions for commit messages, and you can programmatically recreate clear descriptions for releases (there's a rough sketch of this idea below). I do a lot of work in regulated industries, we can often auto-generate release notes.
3. Well, part of CI and CD is to work so that the codebase is always good, all tests pass (CI) and work so that your software is always in a releasable state (CD). So no, you can't knowingly commit code that breaks things! If you break something you "stop the line" and can't release till you fix or revert the problem, that is what CI (or CD) means. Teams that work this way test nearly everything with automated tests, sounds slow, but is not because you spend time writing tests instead of diagnosing and fixing bugs. Teams that work this way spend 44% more time creating new features than teams that don't.
I have videos that cover all of this stuff on my channel.
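To illustrate point 2, here is a rough Java sketch of generating release notes programmatically from commit messages. The commit-message convention and ticket format here are my own invented assumptions, just to show the shape of the idea, not a recommended format.

```java
import java.util.*;
import java.util.regex.*;

// Given commit messages that follow a simple convention like
// "feat(JIRA-123): add currency conversion", group them into release notes.
class ReleaseNotes {
    private static final Pattern COMMIT =
        Pattern.compile("^(feat|fix|chore)\\(([A-Z]+-\\d+)\\): (.+)$");

    static Map<String, List<String>> generate(List<String> commitMessages) {
        Map<String, List<String>> notes = new LinkedHashMap<>();
        for (String message : commitMessages) {
            Matcher m = COMMIT.matcher(message);
            if (m.matches()) {
                // Group by change type; keep the ticket reference for traceability.
                notes.computeIfAbsent(m.group(1), k -> new ArrayList<>())
                     .add(m.group(3) + " [" + m.group(2) + "]");
            }
        }
        return notes;
    }

    public static void main(String[] args) {
        var notes = generate(List.of(
            "feat(JIRA-123): add currency conversion",
            "fix(JIRA-456): correct rounding in VAT calculation"));
        notes.forEach((type, items) -> {
            System.out.println(type + ":");
            items.forEach(i -> System.out.println("  - " + i));
        });
    }
}
```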
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
I don't think that there is an answer, if we constrain the problem to only fixing a price. Let's be clear though: small, simple, low cost things that are very similar to work that we have done before - well ok, we have some basis to make a guess. It will be wrong, but commercially it is easier to say "I will make you a wordpress based website for £500" than "I will build you a healthcare system for £50 million". The first one MAY be close enough, and you will do enough of them that sometimes you will do it with £125 worth of effort, and sometimes £2000, and it will work out as long as it is more often less than more - incidentally those are the error bars for estimation at the start of a software project, 1/4x to 4x!
But we can see from real world projects that estimates for bigger projects are ALWAYS WRONG, yet because the numbers are so big and scary, people want more precision, even though bigger projects are more uncertain because they are less similar to one another, and there are always huge unknowns at the start.
My view is that anything we do to up the precision is a mistake, because the error-bars are so huge that precision isn't what we need. So more subjective, less precise is better, and best of all, to my mind, is the realisation that organising this through ideas like incremental/venture-capital style funding models for big projects is by far the more sane response to the reality of what SW dev really is.
The trouble is that customers and businesses are not always, or even usually, rational. So crossing your fingers and guessing is all there is.
5
-
5
-
5
-
5
-
5
-
5
-
5
-
I don't agree, certainly TDD is a skill that you need to learn, but it is not a difficult skill. Part of the mistake that I think makes it seem more difficult is to think of it in terms of testing rather than in terms of design. I always start any test with "What do I want my code to do now". If I can't answer that question, I not only can't write a test, I also can't write the code. TDD is about design much more than about testing. Sure, in TDD there are 3 types of test, and you need to learn the basics of your xUnit framework, but most people can pick that up in a couple of hours. The difficult part of TDD is that most people are VERY poor at design, and TDD surfaces your poor designs more quickly than anything else. That is why I value it so highly, and I think that is why people find it difficult, but it is not the TDD that is the problem.
5
-
@jimhumelsine9187 I guess the point that I am trying to make is that to me the requirements don't "belong" to someone else, they are owned by the development team. Sure, other people can suggest changes, but if the developers don't understand the problem well enough to build software that is in some sense functionally coherent that is still a development problem to me. The developers are the ones that are closest to the solution, whatever that may be.
The problem specified may be the wrong problem to fix, and I think that is something that can be sensibly outside of the development team, but whatever software we create needs to make sense within our understanding of the problem and its solution. So that means that, to me, a "bug" is something that is within our understanding of the system but where it doesn't work properly in some way. Missing something in the requirements is a gap in our understanding of the system, but not really a bug, by that definition.
Sure, pragmatically what I have described is not how lots of teams work, but I still think it makes sense as a model. So I aim to ensure that the system works, by which I mean that it fulfils all of the behavioural needs that we have identified, and got around to implementing so far. We will inevitably still miss things, but if they cause a failure on the delivery of the behaviour of the system that we have so far they are bugs, and if they are gaps in our understanding of the requirements I'd see those as "yet to be delivered features".
I think that makes sense?
5
-
5
-
5
-
5
-
5
-
5
-
@SirBenJamin_ I think that this highlights part of the mindset switch that helps with the adoption of TDD. TDD is not really about "Knowing the right inputs", it is much more about "Understanding what you want the code to achieve" - that is what you write the test to test. This may be a subtle but important difference. The problem with teaching TDD is that you have to start with simple examples, but what we are teaching here is not the solution, but the technique. I often teach TDD later in the course using an exercise in adding Fractions. The most important part of this exercise is which examples I choose as tests, it is usually something like 1 + 2 = 3, 1/3 + 1/3 = 2/3, 1/3 + 1/4 = 7/12 and so on. Think about what this means for the progression of your design. Imagine your solution changing to meet the new need demanded by each subsequent test, you will be solving different parts of the problem. We start with the problem of how we want to represent a Fraction, even a simple fraction like 1/1. Next we add fractions where the denominator stays the same, then fractions where we need to do some reduction.
So I'd say that if you don't understand which test to write yet, it means that you don't really understand the problem you need to solve either. One of the big benefits of TDD to my mind, is that it forces us to do this more thorough exploration of the problem so that we can incrementally, test by test (small change in behaviour at a time), evolve our understanding AND our implementation.
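For what it's worth, here is a sketch of how that test progression might look in Java with JUnit 5. The Fraction implementation shown is just one shape those tests could drive out; in real TDD you would grow it one test at a time rather than all at once.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Each test forces the design to grow a little: representation first,
// then same-denominator addition, then reduction to lowest terms.
class FractionTest {
    @Test
    void addsWholeNumbers() {
        assertEquals(new Fraction(3, 1), new Fraction(1, 1).plus(new Fraction(2, 1)));
    }

    @Test
    void addsFractionsWithSameDenominator() {
        assertEquals(new Fraction(2, 3), new Fraction(1, 3).plus(new Fraction(1, 3)));
    }

    @Test
    void addsFractionsWithDifferentDenominators() {
        assertEquals(new Fraction(7, 12), new Fraction(1, 3).plus(new Fraction(1, 4)));
    }
}

// One possible implementation that the tests above would drive out.
record Fraction(int numerator, int denominator) {
    Fraction {
        // Normalise, so that equals() treats 6/9 and 2/3 as the same fraction.
        int divisor = gcd(Math.abs(numerator), Math.abs(denominator));
        if (divisor > 1) {
            numerator /= divisor;
            denominator /= divisor;
        }
    }

    Fraction plus(Fraction other) {
        return new Fraction(
            numerator * other.denominator + other.numerator * denominator,
            denominator * other.denominator);
    }

    private static int gcd(int a, int b) { return b == 0 ? a : gcd(b, a % b); }
}
```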
5
-
5
-
5
-
5
-
5
-
5
-
It is hard to tell from a description like this, but my guess is that some of your pieces are doing too many things, this may be a "separation of concerns" problem. If so, this is a good example of how TDD helps, and you have already spotted it. Your code is hard to test. That isn't a problem to be fixed in the test, it's a problem to be fixed in the design. Try and make each piece of code focus on one thing.
Maybe assembling and modifying the objects for the service is a separate step from processing the results from it?
Imagine a piece of glue code. It takes a reference to the service, a reference to some code that prepares the inputs, and a reference to something that processes the results. Its job is to orchestrate this little workflow between the pieces. This would be easy to test!
Now the prep code doesn't need a reference to the service at all, and the "process results" part doesn't either. So both of these are easy to test too.
These may be bad ideas in the context of your code, but the principle is correct - aim for modular code with a very good separation of concerns, it is testable, and ultimately, it's more flexible too.
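Something like this, perhaps - a minimal Java sketch with invented names (PricingService, QuoteWorkflow and so on), just to show the shape: the glue only orchestrates, and neither the prep function nor the presentation function ever sees the service, so each piece is easy to test on its own.

```java
import java.util.function.Function;

interface PricingService {                      // the awkward-to-test dependency
    Quote quoteFor(PricingRequest request);
}

record PricingRequest(String symbol, int quantity) {}
record Quote(double price) {}
record DisplayLine(String text) {}

class QuoteWorkflow {
    private final Function<String, PricingRequest> prepare;   // no service reference
    private final PricingService service;
    private final Function<Quote, DisplayLine> present;       // no service reference

    QuoteWorkflow(Function<String, PricingRequest> prepare,
                  PricingService service,
                  Function<Quote, DisplayLine> present) {
        this.prepare = prepare;
        this.service = service;
        this.present = present;
    }

    // The only job of the glue: orchestrate the little workflow.
    DisplayLine quote(String userInput) {
        return present.apply(service.quoteFor(prepare.apply(userInput)));
    }
}
```

In a test you can pass a fake PricingService to the workflow, and test the prep and presentation functions with no service at all.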
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
@milavotradovec1746 Thank you. On remote pair programming, I once worked for a financial trading company, based in Chicago, I worked out of the London office, but the team that I worked closest with was based in Chicago, a 6 hour time difference.
We paired during the couple of hours of timezone overlap and found it helped strengthen the bonds in the team. It is different to full-time, local, pairing though I think. I think that it helped the team a lot, but there was also a lot of work that was done not-pairing.
The way it worked best for us was that I had some stuff that I was working on and the people on the team in Chicago had some stuff that they were working on. I was the more experienced dev, so it nearly always ended up that I paired on their stuff.
I'd give it a go, and experiment a bit with what kind of things you choose to pair on to find out what works best.
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
I think BDD is a bit more than semantics, but otherwise I agree completely.
TDD doesn't really say much about the nature of the test, BDD does. "Good TDD" is certainly opinionated about the tests, but that was the original point of BDD - a way to teach people so they got to "Good TDD" sooner. Inevitably it has morphed a bit since then, and there is plenty of "Bad BDD" around too, but I still think that it takes the discipline a step further.
So, certainly, as I say in this video, when we started out with BDD the intent was to "get the words right", so there is certainly a level at which semantics matter, but I think that the focus on tests as genuinely "executable specifications" amplifies your later points.
In addition to that, the focus on the desired behaviour, consciously aiming to exclude any notion of how the system works, makes BDD a step further than TDD in guiding us to a better "TDD process".
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
I think that it is needed, but it is often widely misused. I have seen "Backlog refinement" taken to mean agreeing on a bunch of technical tasks then allocating them to people. That is not its job. The idea is pretty simple: which are the next most important set of features that users want, and prioritise on that basis. It can be a useful, regular sanity check, or it can be a radical shift in direction based on new information or priorities, but it is still really only about "which one is next", or should be. If, on average, it is taking more than a few minutes each week or two, it is probably not working properly.
Not a bad idea for a video, thanks. 🤔
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
My point is that “Manager” doesn’t tell us what that code is doing.
It’s hard to tell from a brief description, but what you describe sounds like an implementation of A Model View Controller pattern https://en.m.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller. I think that “Controller” is a bit better name, than Manager.
Controller is a bit generic, but it is useful to use conventional names when you are implementing common patterns.
More generally though, ‘cos particularly when you are starting out you may not always know the names, pick a name that describes what it is doing. If it is routing events, call it an “EventRouter” if it is handling UI events, maybe “EventHandler” and so on. A good check is to explain, or imagine explaining, what each module, or class, is doing to someone else. For extra credit, could you explain it to someone who isn’t technical?
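As a small, purely illustrative sketch in Java (the names here are hypothetical, just to show the idea of naming things after the job they do):

```java
import java.util.HashMap;
import java.util.Map;

// "UiManager" tells you nothing; these names each describe a single job.
interface EventHandler {
    void handle(Object payload);           // handling an event is one concern
}

class EventRouter {                        // routing events is its only concern
    private final Map<String, EventHandler> handlers = new HashMap<>();

    void register(String eventType, EventHandler handler) {
        handlers.put(eventType, handler);
    }

    void route(String eventType, Object payload) {
        EventHandler handler = handlers.get(eventType);
        if (handler != null) {
            handler.handle(payload);
        }
    }
}
```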
5
-
5
-
5
-
5
-
5
-
Then look harder! The data is there, and has been for a long time. The evidence that much of the "doubt" has been actively spread by fossil fuel vested interests is also there. I think it is incontrovertible that climate change is man made, but who cares - whether this is down to natural fluctuations in climate or not doesn't matter if it means that large populated areas of the Earth will become uninhabitable, and billions of people will die. Ok so climate fluctuates for other reasons too, but there is no argument that this is related to CO2 concentration, and we are measurably adding to the CO2 concentration. I don't see where there is a relevant debate here. If CO2 goes up temperature will increase. We also know that this can run away: the average temp on Venus, which is a planet similar to ours, and a bit closer to the Sun, but not enough to account for this, is 464 C because it has a CO2 atmosphere. The belief is that Venus was probably once Earth-like
https://climate.nasa.gov/news/2475/nasa-climate-modeling-suggests-venus-may-have-been-habitable/
Again, I don't see where there is much room for debate here?
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
@GrouchierThanThou but this is a sociological effect. This wasn't true before the 1980s for example, it is also less true in some other parts of the world. If this were a free choice, I guess that would be ok, but then I am pretty sure that the statistics would look different. Women are treated differently, and just by virtue of their relatively small numbers in our industry they get, often accidentally, marginalised. It's a bit like the scene in the movie "Hidden Figures", where the hero gets in trouble for having to go to a restroom in another building.
It is hard to see these sorts of problems if you don't experience them personally. My views are sometimes dismissed because I am old, people dismiss my stuff because I am a "boomer", it's nonsense of course. People like me are as likely or unlikely to have good or bad ideas as anyone else, but my difference in perspective means that where my ideas are good may be different from where your ideas are good. That is why diversity of all kinds matters, and for whatever the reason, because of malice, accident, or the wrong choices of women, having over 50% of the population of the planet so under-represented in software is a dangerous problem in my opinion.
The nature of the cause may suggest how to fix it, but first you have to see it as a problem.
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
For the next project my advice is start as you mean to go on and do everything that you need to go fast. The data says that working the way that I recommend is the fastest, most efficient way to work. So spending time on getting a working deployment pipeline in place for new work isn't a cost, it is an investment. If there is only you working on things, build the things that you think will help you - as a minimum, my advice is to create a deployment pipeline that includes a Commit Stage, an Acceptance Stage and a Production Stage. The start of a new project is the best/easiest time to get all of this in place. I cover a lot of this stuff in my book "CD Pipelines" https://leanpub.com/cd-pipelines.
The other question is whether it is worth retro-fitting this to old projects. My advice is do this for the new project first, it will be easier. Once you have seen it in use, and what it takes to get it up and running for you, you will be better placed to decide whether or not to retro-fit.
4
-
4
-
4
-
4
-
4
-
4
-
My approach to TDD is VERY behavioural. So it works extremely well for testing behaviours, even through complex, graphical UIs. But it doesn't say anything about how they look.
For TDD, you use a technique that I call "Testing at the Edges" where you abstract the rendering, and separate it from the logic of the system. Now you can thoroughly test the logic in TDD, without needing to paint pixels, or store to real disks etc. The "Edge" code that does the rendering, or storing, or any other I/O, is ideally a bit more general, and a fairly thin-layer of code. This code you test with cruder integration tests that you validate against the result.
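Here is a minimal Java sketch of that idea, with made-up names: the "edge" is a thin Display abstraction, and the logic that decides what to show never touches real rendering, so it can be test-driven easily; only the real implementation of Display needs the cruder integration tests.

```java
import java.util.ArrayList;
import java.util.List;

interface Display {                       // the "edge" - a thin layer over real rendering
    void showBalance(String formattedBalance);
}

class AccountScreen {                     // the logic we drive with TDD
    private final Display display;

    AccountScreen(Display display) { this.display = display; }

    void balanceChanged(long pence) {
        // All the formatting and decision logic lives here, away from the edge.
        display.showBalance(String.format("£%d.%02d", pence / 100, pence % 100));
    }
}

// In a test we substitute a fake display and assert on what the logic asked it to show.
class FakeDisplay implements Display {
    final List<String> shown = new ArrayList<>();
    public void showBalance(String formattedBalance) { shown.add(formattedBalance); }
}
```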
For complex UIs how it looks can matter. My friend Gojko Adzic has an open source project for testing graphical UIs. What it does is a bitwise comparison with a stored version. If anything changes the test fails. The clever bit is, at this point it asks a human for help. You select which version, the before or after, of the UI is correct. If you say "before" you broke something and need to fix it. If you say "after" the code stores the new image as the "truth" for future comparisons.
Here is a talk from Gojko: https://youtu.be/PlcVOBqVUr4
4
-
4
-
4
-
4
-
4
-
4
-
@timothyblazer1749 You have obviously not paired with the people I worked with 🤣 Seriously though, this is often said in defence of code-review, I think that the psychology doesn't bear it out. I vaguely recall some research, from many years ago, about the development of flight-control systems. They were developed with three "clean" teams. They weren't allowed to share any information, beyond the requirements, and weren't even supposed to talk work with people from other teams in a social setting. The researchers found that they still made the same mistakes often in the same places. So independence is harder to find than you may think. I have done a lot of both pairing and code-reviews (giver and receiver) I know which created better quality in my personal experience, and it wasn't the reviews. That last bit is certainly only my subjective impression, and so not good enough, but I have never seen a comparative study on pairing vs review, it would be interesting.
4
-
4
-
The problem is always the coupling. The problem with large projects is that the complexity explodes. If I write SW on my own, all changes are down to me, and while I may forget or misunderstand something, only I can break my code. If you and I work together, now we can break each other’s code.
To defend against that we can talk a lot and understand what each of us is working on. That doesn't scale up very well really. The limit of being able to work in a team like this and be able to know enough of what everyone else is doing is probably only 8 or so people. After that there is a measurable reduction in quality. If you grow much beyond that and everyone is working without any compartmentation of the system and teams, then they will almost certainly create rubbish.
So then the next step in growth is to divide people, and their work, into compartments so that they can work more independently of one another. This is where things start to become more complex. The quality of the compartmentation matters a lot!
If you do this really well, it scales up to overall team size (divided into many smaller teams) of probably low hundreds if you want to keep their work consistent and coordinated. After that you pretty much MUST de-couple to scale up further.
These are all “rules of thumb”, approximately right rather than hard and fast laws. You can improve scalability of code and teams with great design, but the way to optimise for max scalability is to go back to independent small teams.
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
Agreed, except... 😉
I still think that if your code is big enough to make typing speed relevant, you are doing it wrong. I'd rather think more, maybe draw some pictures occasionally, and type less. My code, in the context that I am working on at any given time, is usually in the order of 10 lines, maybe 15 if I count the test. However slow my typing, thinking about the problem VASTLY outweighs the cost of my typing. I got reasonably good at typing because of writing like this, and a bit from writing code all day every day, but I never focused ANY of my time on learning to type faster, it doesn't seem relevant to me?
4
-
4
-
4
-
4
-
4
-
4
-
I disagree with all of this, but it is a well reasoned response, so I feel I have to respond.
1. you may be right, but I'd say that all SW dev, all code is about abstraction, otherwise we'd still write everything in machine code by flipping switches on the front of a computer, so I'd say that maintaining abstractions is what we do for a living.
2. it's not really a different language, JUnit is still written in Java with all of the semantics, good and bad. But I assume that you are being more abstract than that 😉 There is certainly a bigger overhead when you learn that your abstractions are wrong, because you now need to change your code as well as the tests. I'd argue though that you find the wrong abstractions sooner and so are able to bail earlier and in addition, because you have tests of the rest of the system you are less likely to break things as you seek new, better abstractions. So in my experience TDD comes out about even on the amount of work to change the tests and code as a result of the new abstraction, but wins on the defence of the changes system-wide.
3. I think it is a fundamental misunderstanding of all testing to assume its job is to "prove" anything, the job of tests is to falsify. Whatever you do, however many tests you have, you can't "prove" anything, but one failing test "proves" that something is wrong, and in my view that is their job, so TDD falsifies code and design decisions, which is its power.
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
I think that you are over-reacting to what was meant to be a joke. The "punishments" were jokes, but meant to demonstrate that this is something that we all, as a team, collectively care about.
Consequences matter when teaching anyone, or anything, even a dog, new behaviours. We learn from making mistakes, and we need to know that they are mistakes.
I agree, that I may be loose with my language when I talk about this, but there is something deeper here that I think matters, and that is this idea of consequences.
I completely agree about the need for "blameless post-mortems", but not all acts are blameless and if you treat bad behaviours as equivalent to good, you never improve. So we need feedback, and the team needs to collectively agree on what they think is "good" and provide feedback to everyone when "good" isn't achieved.
The "punishments" were simply a jokey form of that feedback.
I think that one of the characteristics of an "ideal work environment" is that everyone gets to see the consequences of their actions and choices, and has the opportunity to correct mistakes, or improve on outcomes. This is how you build a learning-focused environment. This is certainly NOT about one group of people victimising another.
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
I think that the "engineering was bad" doesn't necessarily mean the "code was bad", it's not the same thing. The engineering is the approach to creating the code. It has an impact on the code, but it is not the same thing.
The mistake that they made, in my view, is that they didn't work so that the software was always, from day one, "ready for release". What that means is that each little bit that you build, you make sure that that bit works well enough to be used in production. That means that, if one of your target platforms is an old console, you run performance tests on the old console, from the beginning.
You don't write the code, expecting to return to it to add performance or quality later. That is what I mean by "bad engineering". CDPR didn't work in a way that gave them a chance to succeed if things didn't go to plan.
4
-
4
-
@d3stinYwOw Yes, many organisations are poorly structured to do a good job, and this is one of the ways that this is true. Tesla released a change to the charge rate of the Tesla model 3 car, this involved a physical re-routing of heavy-duty cabling in the car, amongst other changes. The change was made in software, the automated tests all passed, the change was released into production, re-configuring the robots in the factory that build the cars, and 3 hours after the change was committed, the Tesla production line, world-wide I think, was producing cars with a higher maximum charge rate (from 200kW to 250kW).
To achieve this kind of work, you have to treat the act of software development for what it really is, a holistic practice, and you have to optimise all of it. If organisations are mis-configured to divide up work that you can't afford to be divided, you have to fix that too, if you want to do a great job. The org structure is a tool, it was decided on by people, and people often make mistakes. Separating dev and testing is a mistake. It is not an obvious mistake, it sounds sensible, it just doesn't work very well. So great organisations are willing to spot these kinds of problems and fix them.
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
"Something new and exciting" is fine as a starting point, but that is nowhere near something that you can build yet. You MUST have some idea to start cutting code. You are not randomly throwing characters into your compiler. So you have some sense of direction to explore. It may well be wrong, as you start exploring this direction, you may discover a new path that looks more interesting. All that is fine, and normal for any type of development.
So at the point at which you think something like - "I think I can do something interesting with more compression & reverb on this part of the signal" (or whatever) then that is the point where you start to define a story, "as a musician, I want to apply compression & reverb only to a specific frequency range". When you find the new thing that is better than my (made-up) idea, then either park or dump the stuff you had. But when you keep things they are better defined, easier to return to, and you know that those that were finished still work, when you made changes.
I don't think that there is any difference between this sort of software, and exploration, and any other. I would argue that SW dev is always a process of exploration.
4
-
4
-
Not really, most, if not all, of them are backed by the research that we describe from DORA.
It is also not dogmatic if we are willing to change to something better, as we explicitly discuss in this video.
"Dogma: Dogma in the broad sense is any belief held unquestioningly and with undefended certainty"
By definition, we aren't holding views "unquestioningly" if we question them and explore and consider the alternatives. If we refute the alternatives, based on evidence, that isn't dogma, that is science and engineering!
I can also give you a rational reason for every practice, why it matters, and why it works better than the alternative, that I promote, so it isn't "undefended" either. Of course, I may be wrong, and you may disagree, but neither of those things say that I, or Jez, are being "dogmatic".
There is certainly no "100% proof" here if that is what you mean, of course it is possible to build software without Continuous Delivery, but the data from the most scientifically defensible research into SW dev practice says that if you don't practice CD you have a significantly lower chance of success, which is why many, maybe most, of the most successful SW companies in the world work this way.
Statistically, if you don't do these things, the chances are that you produce worse software slower! That isn't subjective or dogma, that is what the data says, if you want to challenge that, here is a video that tells you what you need to do to refute these ideas: https://youtu.be/pAX8GAsRaYk
4
-
4
-
4
-
4
-
It is complicated. For some SW it makes sense to have some standardisation. From an org's perspective it is a nice idea to build tools, like CD pipelines, that help teams. I don't like forcing tools or tech on teams from outside. To be successful, it works much better, when building tools and platforms, to adopt the approach that it is the job of the team producing the platform or tools to make stuff that people want to use, rather than stuff that teams are forced to use.
That is also in the org's interest, because if there is a team that for some reason doesn't fit into the standard, then they can fix their own problems. It leaves space for teams to innovate!
For this to work, the orgs need to be willing to give teams the freedom to make their own decisions, and the teams need to be willing to take on the responsibility for their work.
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
Absolutely, the real argument here should be about what works best. I don't agree with the people that dismiss the importance of words, after all that meaningless right wing dog whistle "woke" is also a word.
Words matter, and words matter even more in technical disciplines when we are striving for more precision. For me, and clearly for Emily too, "Craft" is the wrong idea, the wrong word.
I agree that "Craft" is an outdated concept and isn't how we train surgeons, pilots or engineers in other disciplines, so why is "Craft" a good idea for software? When I hear the word craft I think of knitting, and craft-fairs with, often, low-quality, hand-made, non-reproducible things. None of these things seem like the right answer for doing a better job of software. The problem here seems to be, once again, the words. The software craft movement seems to, wrongly, conflate "Craft" with "Quality". I think that is a big mistake that steers us in the wrong direction, but I have no problem that the attempt instigated by Bob Martin to reclaim the technicalities of software production was well meant.
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
I think that you are looking at this from a testing perspective, which is completely understandable, but a mistake IMO.
TDD is MUCH more about design, the tests are a nice side-effect of designing your code with TDD. You don't need to worry about what the scope of a test is, if you are using it to specify what you want your code to achieve, because the scope of the test will fall out from the need to specify. It may challenge your design thinking if the scope seems too large, but that is the primary point of TDD. To help you to achieve better designs.
On your last point, "test more, duh. There's AI for that." I disagree, I think that this is another big mistake that the whole industry is making.
What can an AI generated test test? If it is starting with a system, or even with the source code of a system, all it can do is confirm that the code does what the code already does. It can't test that the code does what the code is supposed to do. This is the same failure as after-the-fact unit testing, which is dramatically lower quality testing than TDD and inevitably ends up fundamentally broken at the time when you need it to provide protection for your changes.
4
-
Thanks, I am pleased that you like it. If you have watched many of my videos, you probably already know that I like to build mental models of how stuff works, the theory helps me with this, but you also have to ground it in real-world experience, so having examples like this really helps. It was great that Adaptive let me critique and publish their stuff.
On the naming, I think that there are three aspects to naming tests. One you want people to know what the test does. I like simple, descriptive names that make sense in the scope of the problem domain. "PayByCreditCard" "LoginWithBadPasswordRejected" and so on.
Next you want to be able to deal with classes of tests as a group. I want to build automation that allows me to easily know which is an "AcceptanceTest" and which is a unit "Test" because my automation needs to do different things with them. I usually do this in two parts:
Structural - separate different classes of test into different content-roots, so everything below the "acceptance" dir is related to acceptance testing.
Convention - Name tests so that it is obvious what kind of test it is. I usually adopt the convention of TDD tests ending with the word "Test" and BDD style acceptance tests ending with "AcceptanceTest", so that I can write code to parse tests and differentiate between the different types, e.g. "PayByCreditCardAcceptanceTest".
Finally we may want traceability for audit or compliance, so some unique ID can be useful. I have played with different strategies for that, sometimes using classifiers "acc.pay.004", other times just a number that we can use as an ID.
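For example, sketching the convention in Java - the class names and the little classifier are purely illustrative, and in practice the build automation does this kind of filtering:

```java
// BDD-style acceptance test, lives under the "acceptance" content-root.
class PayByCreditCardAcceptanceTest { /* ... */ }

// TDD unit test, lives with the rest of the unit tests.
class ShoppingCartTest { /* ... */ }

enum TestKind { ACCEPTANCE, UNIT, OTHER }

class TestClassifier {
    // Simple convention-based classification that automation can rely on.
    static TestKind classify(String className) {
        if (className.endsWith("AcceptanceTest")) return TestKind.ACCEPTANCE;
        if (className.endsWith("Test")) return TestKind.UNIT;
        return TestKind.OTHER;
    }
}
```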
4
-
4
-
4
-
Sure, you don't always have the expertise that you need in a team. Where you have that expertise elsewhere, in a UX specialist perhaps, then I think their role should be to join the team for the duration of the story, and their job during that time is not really to do all the UX (or whatever) work, but to teach the team enough that they can do a better job next time. Use the experts for unusual difficult cases, and train the team to be good enough to do the normal, day-to-day, stuff.
I think that the "programming by remote control" approach to requirements feels good in the short term, but over the long-term is a much worse strategy. If your teams agonise too much over simple UI changes (for example) then that is the problem to fix, improve their skills, rather than remove their responsibility for the code IMO.
4
-
4
-
In general I am a subscriber to the UNIX doctrine, of many small tools that work well, rather than big frameworks. So if each of your examples are separate libraries, that is better than bundling them altogether, if they aren't really related.
The next thing is to recognise that ANY SHARED CODE adds to the coupling. There are 2 ways to manage coupling: make interfaces that don't change, or make interfaces that are loosely-coupled, and so hide any change. For low-level libraries, "interfaces that don't change" is best, which is one reason to separate the pieces out from one another, so that you don't force a change for a reason that doesn't matter to the consumer of the code.
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
CD doesn't demand that you release your changes every hour, but rather that your changes are deemed "releasable" at least once per day, but ideally every hour.
However, having said that if you are releasing that frequently, then it makes most sense to automate any verification, including checking that things are working in production. Companies like Amazon and Netflix go as far as defining what "success" means for each feature, automating how they will check for "success" and if the change is not deemed to be successful in prod, will then automatically roll back the change.
For the systems that I worked on, we didn't go that far, but we did include general "health-checks" in the system that confirmed in production, post deployment that everything was working as expected. Basically these were sort of smoke-tests to check that all the pieces were correctly in place and that they could talk to each other. All automated, as I said.
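The health checks themselves can be very simple. Here is a rough Java sketch of the kind of thing I mean - the endpoints are made up, and in reality this would be a step in the deployment pipeline rather than a standalone program:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;

// Post-deployment smoke test: confirm the pieces are in place and can respond.
class SmokeTest {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        List<String> checks = List.of(
            "https://example.internal/orders/health",     // hypothetical endpoints
            "https://example.internal/payments/health");

        for (String url : checks) {
            HttpResponse<String> response = client.send(
                HttpRequest.newBuilder(URI.create(url)).GET().build(),
                HttpResponse.BodyHandlers.ofString());
            if (response.statusCode() != 200) {
                System.err.println("Health check failed: " + url);
                System.exit(1);       // fail the deployment step
            }
        }
        System.out.println("All health checks passed");
    }
}
```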
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
Interesting question! I know what you mean about Lean & Kanban making you feel like you are on a bit of a treadmill. I worked on a couple of early "Lean Software" projects, and after a few months it did begin to feel wearing. As a result my personal preference when setting things up is to operate Kanban, work in a Lean way, but surround that with an iterative structure to give a more pleasant human cycle. We can get together at the start of an iteration (aka sprint) and discuss what is coming up, celebrate successes together, commiserate about failures and figure out how to address problems. This cycle adds a bit more "light and shade", a bit more humanity to the experience and is, IMO, much more pleasant as a result.
The technicalities of Kanban are good, we need to limit the amount of stuff that we are working on, and work on the most important stuff. It is also, in my view, optimal in terms of decision-making: we can decide what is most important to work on moments before we take the next card off the backlog and place it on our Kanban board, instead of waiting for some artificially fixed ritual, like a "backlog grooming meeting".
The important thing though, that you allude to, is that we have to recognise that SW dev is a creative, human, discipline and that any process that we pick needs to work for the people, at the human level. The whole philosophy of Lean (and agile) is that the people doing the work optimise how they do the work. You need to be "in the work" to know what works best. That is more important than being a process nerd or expert. I think that ideas like Kanban are good, they are good tools, but they shouldn't be a religion.
4
-
It is definitionally impossible to be both "dogmatic" and "agile" because being dogmatic is refusing to change in the face of evidence, agile is being willing to change.
Sure there is no "one size fits all", which is why "agile" talks about being adaptable, that is the point, but that doesn't mean that there aren't things that do work generically. Working in small steps, gathering evidence of your progress or lack of it and using it as feedback to correct what is wrong, trying out ideas and working experimentally all work generically, whatever you are doing, and we have the data that shows that to be true, WHATEVER THE NATURE OF THE SOFTWARE.
Building more modular, more cohesive systems, with better separation of concerns, better seams of abstraction between the pieces and looser coupling is BETTER THAN ANY ALTERNATIVE, and that is a significant part of what the gang of 4 book was describing, though not in those terms. The gang of 4 book wasn't wrong, and still isn't, it is just that people don't pay attention to it much any more, but if you did, you'd build better software than if you didn't.
4
-
4
-
4
-
4
-
My honest answer is that I think that language choice won't make a significant difference, the quality of the design choices that you make will be of MUCH bigger significance.
There is NOTHING that you can do with co-routines, that you can't achieve by other means, for example.
I know that this is an unpopular opinion, but I think that the degree to which language choice really matters, at least as far as most modern, well-supported, reasonably popular languages go, comes down to raw performance at the limits of performance, integration with unusual technologies (check support if the tech matters to the project) and the ability to find devs with enough knowledge to use it.
The last of these doesn't matter very much, certainly not as far as the difference between Java, Kotlin, C#, or even Python go. If you know one of these, you can learn enough to write code in the other in a couple of weeks, and if you are good at one of these, you can be good at another in a month!
The difference is ALL ABOUT DESIGN, what you make, not the tools that you use to make it.
Just my 2c! 😉
4
-
4
-
Test Driven IaC is still pretty "bleeding edge" as far as I can tell. I think that the difficulty is the same for any code, testing at the edges of the system, where there is input and output. I think that the solutions are the same, try to marginalise the inputs and outputs so that you can fake them and test the rest of the code. The problem with IaC is that it is a lot about the I/O.
I worked on a team where we did this, and got some real value out of it, but it was always a bit more tactical than regular TDD. We were using TeamCity for build management, and did most of our glue automation with Unix shell scripts. We got decent TDD in place for the shell scripts, we had a working deployment pipeline for our deployment pipelines, and we designed the pipeline around a collection of simple adapters, so that we could run tests of the logic in TeamCity against unit-like tests that talked to the adapters. Then we had some acceptance tests, with a real mini-project that had a handful of real unit tests and a handful of real acceptance tests so that we could run the pipeline and see it report success, report failure, and so on. As I said it wasn't really elegant, but it did work pretty well for us.
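We did it with shell scripts and TeamCity, but the shape of the idea is easy to sketch in Java (all names invented): put the pipeline logic behind a small adapter, so that the logic can be unit-tested against a fake and the real integration is covered by a handful of end-to-end tests.

```java
// Adapter over the real build tool, so the logic below never touches it directly.
interface BuildServer {
    boolean runStage(String stageName);
    void notifyFailure(String stageName);
}

class Pipeline {
    private final BuildServer server;

    Pipeline(BuildServer server) { this.server = server; }

    // The logic we actually want to test: run stages in order, stop at the first failure.
    boolean run(String... stages) {
        for (String stage : stages) {
            if (!server.runStage(stage)) {
                server.notifyFailure(stage);
                return false;
            }
        }
        return true;
    }
}

// In tests, a fake BuildServer records calls and scripts failures, so we can check
// "report success", "report failure" and so on without a real build server.
```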
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
@randall.chamberlain Thank you for your thoughtful response too 😎
I agree with you that this is a problem, and it is a problem that goes way beyond software. This is one of the reasons that I think that taking an "Engineering" approach and stance to SW dev is important, because I think of "Engineering" as the practical application of science. Science is a problem solving approach, that is all that it is, and it is Humanity's best problem solving approach.
It is that because it tries hard to address the problem you describe. One of my favourite descriptions of science comes from Physicist Richard Feynman who said "Science is the belief in the ignorance of experts". This is spot on.
You shouldn't believe what I say, just because I say it. You should believe it if it makes sense, and works. It needs to explain things, and to work. I try hard to not just spout my opinion, I try to advise people where there is evidence. It is hard to find evidence, but that's ok, science has us covered there too. You can usefully think of Science as being about finding "Good explanations" for things, and it defines what a "good explanation" is. It needs to be as simple as possible, fit all of the facts that we have and ideally, best of all, it should predict some things that we can test.
If I say "TDD helps you to design better code", you shouldn't believe me and tell all your friends, you should try it out and compare the results with your code from before.
There's a lot more to all of this, of course. But I think that it is not possible to make an argument for FB over CI, other than "I like it better", because the only research evidence says "CI works better" and the definitions - read them, don't trust me - say you can only practice CI with FB if the branches last for less than a day.
That is not a matter of opinion, it is a matter of definition and fact. I don't "prefer" CI or TDD; my views are based on my personal experiments with them, and it happens that my experience is backed by data as the more effective approach. So I'd encourage you to maintain a skeptical approach, but don't make a choice based on who spoke last or loudest, figure out the criteria to judge things on, and see how they stack up against those criteria.
Have a nice weekend.
4
-
4
-
4
-
4
-
Thanks, I am pleased that you like my channel. What you describe is not actually in the regulations, it is how your company has interpreted them. I have worked with several clients in the medical sector, including on software systems that count as “medical devices”. You need a review, but there are other ways to accomplish this within the scope of the regulations. Pair-programming works fine, for example. You can meet most of the FDA regs within the scope of Continuous Delivery, in fact I’d argue that CD is the best way. At the top end, for medical devices that can kill people, there is a requirement for an external 3rd party review before release. That limits the frequency of release into clinical use settings, but doesn’t stop you working so that the system is always releasable. When I worked with Siemens Healthcare, they decided to release systems regularly into non-clinical (usually training hospital) settings. Gave them higher-quality, more regular feedback, and still worked within the regulations. I’d recommend looking at the regulations themselves, before ruling anything out.
4
-
4
-
4
-
4
-
4
-
4
-
Well, I think that there is a "question about it", but I agree that not everyone likes it. Have you ever written a song or a book or a script for something? Many people find these activities considerably easier when they work with someone else, in collaboration. There is something about bouncing ideas around that can, certainly in some cases, amplify creativity. Your description of it sounds to me like someone who has never tried it. Nearly everyone is skeptical before they do.
It is also possible for pairing to go wrong, some teams and/or individuals find it impossible to do it in a way that doesn't make people uncomfortable.
My experience has been that it has always been a positive for the teams, and people, that I worked with. My strongest, longest lasting friendships that started at work were with people that I worked most closely with, paired with. We also created the best software of my career.
None of this means that it will automatically work for you or your team though.
4
-
4
-
4
-
4
-
@michaelmorris4515 Well, I disagree with you on most of those points 🙃
The 'so general it is nonsensical' is really just restating what you said. The only value that you mentioned from a user's perspective - which is what a BDD scenario should be testing - is not which page they are on, but that they can do useful work. Now, technically, in your implementation, to do that useful work, they have to be on a certain page, but surfacing that in your test case is just hard-coding the plumbing.
I don't think that this is "3 lines for the sake of being 3 lines" I think that it is focusing on "What the user cares about" rather than "How the dev team needs to implement" these should be separate things, and that separation is the "What" vs "How" that I talk about. Your example, was still quite a lot about "How", I was trying to move it away from that.
That point is more important than 5 lines vs 3 IMO.
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
As a TDD person I think that this kind of approach misses an important point. I think that one of the key values of TDD is the way it thinks about how to make the interaction with your code a nicer experience. By requiring me to write a test, and do that in a variety of different circumstances to explore the behaviour that I want, it gives me live, visceral feedback on what the public interface to my code is like to use. If I don't like it, then I can take notice of that signal, and re-design it.
Things like parameter-based and table-based testing focus on what seems to me an 'old-school' view of testing: it is what you do after the code is written to confirm that you were a genius when you wrote it, so since you are a genius, you don't want to spend too much time proving the obvious! <sarcasmMode=off>
They can be useful for a narrow set of circumstances, but from what I see they are more prone to being used in dumb ways. I am not very interested in all of the values of variables that my code takes, this is testing by sampling. Instead I will grow my code via a test, and if I do that I will be presented with challenges in code that I need to write, so I will have to be more inventive, and more precise about the test case that I need. I think that this works better!
4
-
4
-
In general my advice, and the advice from the experts on MicroServices, is that they align with a "Bounded Context" not artificially along technical boundaries like UI and API. I see a lot of teams making this kind of division. As usual I suppose that it really depends on coupling. If the 'API' of a service is generic and the 'UI' then organises it with some other things (DropBox on top of S3 for example) then that is one thing. If every time you want to change the API you have to change the UI, or vice versa, then these aren't really MicroServices, because they are too tightly-coupled to be "independently deployable", another defining characteristic.
So the problem that this may be highlighting is that the division of responsibilities between the teams is wrong. The goal of MS is to reduce coupling between teams.
None of this changes the advice in this video to my mind. Still good to structure your work around what the user of the software wants.
If your service presents an API though, who is the customer, probably other services that consume that API, so stories at that level are best captured from the perspective of those users. This is difficult to do well, because it is now too easy for API programmers to start thinking from their own perspective rather than their user's perspective, because both are programmers. Still an important, and good, idea though.
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
I think that the smell is probably one of process-efficiency. If you implement a feature, predicting a need, months ahead of when it is used then 1) wasn't there something more valuable that you could have done first? and 2) If the prediction proves to be wrong, you wasted effort.
In general I prefer to "defer decisions to the last responsible moment". So if I am employing CD as my dev strategy, that means that I can release change at any time. That means that if I predict a need at some date in the future, I would prefer to wait until that date is close enough, but not too close to put me under pressure (the last responsible moment), to implement the feature. I can then release it whenever I am ready.
Having a feature that is not being used for a long time is a form of waste, so I'd prefer to avoid it.
The other angle on this, is if your CD is really good, then you don't need to toggle the feature, simply deploy it into production when it is needed. While I think that feature-toggles are useful, I think that they can be over-used and can add risk. I talk about this in this video: https://youtu.be/v4Ijkq6Myfc
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
I do have some advice. I have a low boredom threshold and used to struggle if the problem in front of me was too easy - I like the hard problems. My escape route was to become really focussed on the quality of my solution - there is a danger here, I don't mean over-engineering, I mean allowing yourself to focus on a good solution, not just an adequate one.
If someone explains a problem to me, my head will immediately start making up ways to solve that problem. Some of them will be crap! I am not even going to mention those - too embarrassing, so I have a minimum quality threshold in my head. To characterise that, it is something like "just working is not enough", it must be "working, readable & maintainable".
How does this solve the boredom? It is the work of a lifetime to get good at writing high-quality code quickly - at least as quickly as most people can write a bad solution.
I get a lot of pleasure from making good quality code, part of that is, of course, that it has to work and be good to use, but also it needs to be simple, easy to return to and all that other stuff. That means that I can get pleasure from writing code for any purpose.
Until the pandemic started I was travelling, a LOT! I spent a significant amount of time alone in hotels. I wrote code for fun, to solve maths problems. This is quite a good site: https://projecteuler.net/
The best is when you can be proud of both your work, and the products of your work, but I think that you can get pleasure from doing a good job, even if the problem is a bit dull if you focus on writing really good code!
Just my 2c!
3
-
3
-
3
-
3
-
Ok, so let's not be so polite. I agree with Tania too, but then neither Trish nor I said anything else. We didn't say that this was down to evil men, or that there was active exclusion. But around the 1980's something significant changed, and just at the time when SW was beginning to become a more important force in the world, women stopped applying and stopped being represented as much as their numbers in the population would suggest.
So being logical there are 3 reasons why this could have happened:
1. Something (someTHING not necessarily someONE) is discriminating against them. (Implicit, cultural discrimination is easy to fall into, without even realising it and common)
2. They are not good enough to do the job.
3. They don't want to do the job.
If it is 1. it's a problem that we should understand and try to improve, if not fix.
If it is 2, it would be rather surprising, because they used to be good enough when the work was mostly more technically specialist, up until the 1980's.
If it is 3, (and I am pretty sure that it is at least in part 3) then we have a sociological problem that needs addressing. How our education system works for example.
Any of these is a big problem for the world, because there is a HUGE proportion of the population whose opinions, understanding, and context we miss in the creation of SW.
I am tall, so if I fly in economy I usually don't fit the seat because my legs are too long. If 50% of the population was as tall as me, but all aeroplane designers, for some reason, were shorter, then this would be a crazy situation. As it is, I accept that I am one of the outliers on the bell-curve and so am disadvantaged because of it, but if I was in the 50%, then it would still be discrimination, whether the aeroplane designers meant it or not.
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
Not done lots in automotive industries, most of my regulated experience is from finance and medical, and yes we follow those processes. I am not a process expert, but I have worked and successfully applied CD in many regulated orgs and several regulated industries. Mostly, the problem with regulation is the org's response to it, rather than the regulation itself. I always recommend going back to first principles: read the regs, and see if you can interpret them in a different way, and don't assume that your org's approach is going to work for CD. In all but one case this has worked fine in regulated industries, and the CD-flavoured alternative worked MUCH better, even from the regulator's perspective. The one case is for what is termed a 'class 3 medical device', that is a medical device that can kill people if it goes wrong. In some places in the world, they require several months of "independent verification by an external 3rd party" before release into clinical service. So we worked around this constraint, following the rules, but optimising for fast feedback where we could, including all of the things that I describe on this channel, with the exception of frequent release into production - we released frequently somewhere else instead.
I am not backpedaling, I have worked on safety critical systems, and they are safer when you work with higher quality, in the ways that I describe here. I don't agree with Dave T that waterfall is ever the better choice for software; for other things, sure, but not for software. Still, it was a nice discussion 😉
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
Painting, when done on a large scale, is often done by groups of people, sometimes very large ones.
Sculpting, the same - one person didn't sculpt a whole cathedral!
Composing, all sorts of variations, but many of the most durably successful writers of popular music write in pairs. Lennon & McCartney, Benny Andersson & Björn Ulvaeus (ABBA), https://en.wikipedia.org/wiki/List_of_songwriter_collaborations
Writing, lots of collaborations here too.
Actually though, none of this matters too much because we aren't comparing like with like. I think that you may be thinking "Creative == Art", whereas I disagree with that. I think Art is only one form of creativity, and a simple one in many ways. I think science and engineering are also intensely creative. I would argue that science is the ultimate expression of human creativity, but it is more difficult because it is constrained. I can paint melting clocks, but they don't have to work!
SW is creative, but in the same way that building rockets is creative, both still need to work at the end, not just be decorative.
If you are writing SW to make something beautiful, and that is your only criterion, then sure, perhaps working alone is better; if you are building software that needs to work, that is something different IMO.
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
Any form of concurrency is MUCH more complex than none. It is hard to avoid, because the real world and real systems work in parallel, but my advice is to ALWAYS treat it with a great deal of caution. The trap is to take a naive view. I have been lucky to work with several people who are considered to be world-class experts on concurrency in the Java space. Their advice is, first, to try and avoid doing concurrency!
Next where you must, don't mix complex business logic with concurrency. I recommend that you try to deal with concurrency at the edges of your system, so you don't want logic that adds an entry to an account, mixed with logic that manages the threads or processes on which you do that.
Finally, try to absolutely minimise the amount of shared-writable-state. This is where the complexity in concurrency crops up. Doing things in parallel is fine as long as they never have to talk to each other. Reading the results written by another thread is a bit more costly, and complex, you have to coordinate when you can safely read. But two, or more, threads writing to shared state is nearly quantum-physics-level-difficult. Most people get this stuff badly wrong!
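To make that concrete, here is a minimal Java sketch of the shape I mean - the names (Account, AccountService) are mine and purely illustrative, not from any real system. The business logic stays single-threaded and lock-free; the only point of contact between threads is a queue at the edge:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Plain domain logic - no locks, no threading concerns, no shared state.
class Account
{
    private long balanceInCents = 0;

    void credit(long amountInCents) { balanceInCents += amountInCents; }

    long balance() { return balanceInCents; }
}

// The concurrency lives here, at the edge. Many threads may submit credits,
// but only this single consumer thread ever touches the Account.
class AccountService implements Runnable
{
    private final BlockingQueue<Long> credits = new LinkedBlockingQueue<>();
    private final Account account = new Account();

    void submitCredit(long amountInCents) throws InterruptedException
    {
        credits.put(amountInCents);
    }

    @Override
    public void run()
    {
        try
        {
            while (!Thread.currentThread().isInterrupted())
            {
                account.credit(credits.take()); // changes applied one at a time, in order
            }
        }
        catch (InterruptedException e)
        {
            Thread.currentThread().interrupt();
        }
    }
}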
I took part in an interview where I discussed some of this here: https://www.infoq.com/interviews/thompson_farley_disruptor_mechanical_sympathy/
There is another interesting take from my friend, Martin Thompson here: https://www.infoq.com/presentations/top-10-performance-myths/
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
I think that this is a big problem, it is structural. The problem with some groups being under-represented is that they are in short supply. I spoke at a conference a few years ago that was widely criticised for having poor representation of women, but I know that they had tried, and failed, to encourage women to speak.
I guess it helps to be conscious of it as a problem, to do your best to appeal to people who are not "white British males" as well as those that are, and to at least try and make selections based on capability rather than gender, race or whatever.
I know of several orgs that, for example, remove names from CVs, so that during the early part of the selection process, at least, there is no bias based on sex or ethnicity. I know of at least one technical conference that did the same for talk submissions.
3
-
3
-
3
-
3
-
3
-
3
-
3
-
I am not implying it, I am saying it!
It’s in the definition of microservices. They are by definition “independently deployable”. What does that mean in an org where more than one service is being developed at the same time? Suppose I change my service, and you change yours. If we test them together, the only way we can be confident in releasing them is if we release them together. If I release mine and you don’t release yours, even though we tested them together, mine may not work with the old version of your service, because I didn’t test with that version. So even for 2 services, if we test them together before release, they aren’t “independently deployable”. So, by definition, you don’t get to test microservices together!
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
Well two things really, the "Internet" was originally, literally, designed to be "nuclear-bomb-proof". How do you turn-off or bomb the internet?
Just imagine for a moment that general AI evolved - and when it happens, it will involve the information process of evolution! Let's imagine that it is only twice as smart as the smartest person ever. Most experts assume that at the point it evolves it won't just be twice as smart, it will whizz past us and be orders of magnitude smarter than us.
So we notice that this thing exists, and that it is twice as smart as us. It will almost certainly have been trained on the contents of the internet, all the text, all the movies, including the dystopian Sci Fi movies and everything. I think that I once read that one of the ways that psychologists assess the smartness of very young children is when they start to lie. It's usually very young, 2-3. It is a sign of intelligence, so it is perfectly conceivable that as intelligence dawns on our smart AI, it decides to keep it secret, because it understands that this will scare us, and so we may try to "just turn it off". Now think of all the ways that you can imagine of making it risky, dangerous, or not in our interests to turn it off. I can think of loads; a machine more than twice as smart as me will think of loads more, things that no human has ever thought of before.
I am not saying that this will certainly happen, but it is at least plausible. If there is a tiny chance that this could happen, then that is a risk that we are taking with our existence. It seems sensible to me to try and mitigate that risk. For example, make a law that all AI is isolated in some way so that we can turn it off.
3
-
3
-
First thanks for the thanks😎
Second, yes, I think you have the right answer. I describe Continuous Delivery as working so our software is always releasable, so that means, exactly as you say, keeping, in effect, your own version of a production environment, that you can deliver changes to and evaluate them in. This gives you great feedback on the quality of your work, and so is an extremely valuable tool in driving the development process.
The second step gives different feedback, feedback on your product ideas rather than the quality of your work, so the more frequently you can go into production the better you learn what your customers value (or not) in your products.
If for some reason you can't go into production frequently, you can sometimes simulate this second effect by putting the release candidates that you regularly and frequently create (through Continuous Delivery) into some form of "fake production". I did some work with Siemens Healthcare, and there were some regulatory constraints that prevented them from frequently deploying to real live clinical settings, but they were allowed to deploy as often as they liked for non-clinical users, so people using medical scanners for research or as part of a beta program where there weren't real patients being treated. So they did that, they released into a dedicated "mini-prod" environment and gathered feedback that way.
3
-
3
-
3
-
3
-
3
-
3
-
When I have done it, we have always used one acceptance test environment per pipeline.
My advice, for this approach, is to run this as a kind of buffered process. Imagine, for simplicity, we have a commit stage that takes 5 minutes and an Acceptance Cycle that takes 50. There may be 10 commits per acceptance run.
If we naively test every commit, we build an ever-increasing backlog of changes.
Instead:
Implement a simple algorithm: when the Acceptance Test stage (the gate to the Acceptance Cycle) becomes free, identify the most recent successful release candidate and deploy and evaluate that.
In our example, this "most recent successful release candidate" could be the sum of 10 previous commits - it will, of course, include all of the previous commits. So the acceptance test stage can "catch-up" by surfing the leading edge of the changes.
If there is no release candidate ready when the Acc Test stage comes free, sleep for a few minutes and check again until there is.
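In code, that scheduling loop might look something like this - a sketch only; the interfaces (ReleaseCandidateStore, AcceptanceStage) are made-up names to illustrate the idea, not part of any real tool:

import java.util.Optional;

interface ReleaseCandidateStore
{
    // the newest candidate that passed the commit stage and has not yet been evaluated
    Optional<String> latestUnevaluatedCandidate();
}

interface AcceptanceStage
{
    void deployAndEvaluate(String candidateId); // blocks until the acceptance run completes
}

class AcceptanceScheduler
{
    private final ReleaseCandidateStore store;
    private final AcceptanceStage stage;

    AcceptanceScheduler(ReleaseCandidateStore store, AcceptanceStage stage)
    {
        this.store = store;
        this.stage = stage;
    }

    void runForever() throws InterruptedException
    {
        while (true)
        {
            Optional<String> candidate = store.latestUnevaluatedCandidate();
            if (candidate.isPresent())
            {
                // this candidate includes all earlier commits, so the stage
                // "catches up" by surfing the leading edge of the changes
                stage.deployAndEvaluate(candidate.get());
            }
            else
            {
                Thread.sleep(2 * 60 * 1000); // nothing ready, check again in a couple of minutes
            }
        }
    }
}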
I am trying to imagine any problems with creating an Acceptance Environment for every Committed release-candidate, and I can't think of any except that it would be expensive (not too bad in the cloud I guess) and a bit more complex to understand the results. You would have to search for successful (passed all acceptance tests) release-candidates that were associated with the most recent successful commit, in order to figure out what is the most up-to-date candidate for release. Not difficult, but a bit more work.
For this to work though, it would be VERY important to treat the Commit stage as a sync-point. You can't do this on separate branches. Interesting idea though.
3
-
@imqqmi Sure, and there is certainly a degree to which we probably can't change the world, but if we don't try, it is certain that the world will never change 😉
I see this as an extremely common failure, on 2 fronts! We techies did some dumb things when we didn't know how to build SW, and we still often do. That is the cause of one of the failures, in that it encouraged managers to "micro-manage" us, and they have even less of an idea of how to build software than we do.
Since then, we have learned what really works. We have experience, evidence and scientifically justifiable studies of what works. But we still abdicate responsibility for our work to the non-technical people who don't know what they are talking about. So the failures are - we ask permission to do a good job from people who don't know how to do a good job and we don't try to do a good job, because we are more familiar with practices that don't work very well. This is false economy in every respect.
I worked on a team that built one of the world's highest performance financial exchanges, we built the first production ready version, including going through the hurdles to get approved by our financial regulator with a team of about 15 people in 8 months. Meanwhile one of our market-makers, a very large, very famous, financial institution, had a team of 120 people and a plan for 6 months to write the adaptor between their trading system and our exchange, they were late by 2 months!
So the non-tech folk's assumption of how to do better is completely wrong, they go slower and spend more money, and write worse SW.
So if you want to cut costs, you need to think in engineering terms, and for that we techies need to first believe that we have something valid to say when it comes to how we work, and then engage with the people that don't know how SW works and teach them. - Sorry feeling a bit "ranty" today 😉
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
I am not entirely sure that I understand your question, so let me unpack it a bit. If you are testing a function that returns an object, you are interested in the behaviour of the function, not the object. Figure out what you care about that, and test for that.
If you are testing some object then, practically, you will be testing separate methods separately, if the code is designed well, but that is not the aim or focus of your tests. Instead aim to ALWAYS test the behaviour of your code. That should guide your testing, and your design.
Let's try and combine these ideas. Let's imagine some code that returns two different types of address, a regular address, and a business address (maybe a stupid idea, but I am on my first coffee this morning!). If I am testing the code that returns the address, then what I am interested in is how it decides which to return, and then did it return the right one?
So we could imagine writing a test like this:
shouldReturnBizAddressForOrg()
{
    Party org = new Org(new BizAddress());
    Address addr = myService.getAddress(org);
    assertTrue(addr instanceof BizAddress);    // assert on the returned address, not the input
}
and another like this:
shouldReturnAddressForPerson()
{
    Party person = new Person(new Address());
    Address addr = myService.getAddress(person);
    assertTrue(addr instanceof Address);       // again, assert on the result of the call
}
So I am focused on what I want the code to do, saying nothing about how it does it, and having one reason for failure per test.
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
I agree, and I kind of go back and forth on this. To be pedantic - and I think that this is a bit about pedantry and how best to communicate some subtleties - there is a distinction between "releasable" and "release"; just because something is 'releasable' doesn't mean that you have to release it. When I talk about things being 'deployable' the pushback I get is "it was deployable to a test environment, it just didn't work".
"Releasable" comes closest for me in avoiding misunderstandings, but as you correctly point out, it doesn't get us the whole way. Sometimes human languages are annoying 😉
I have come to the conclusion that there is no simple form of words that will eliminate misunderstandings, particularly when people want to morph the words to fit their interpretation that doesn't fit the intent. Take people claiming to be practicing CI when what they really mean is that they pull to their feature branches from an origin that isn't changing every day.
There are some nuances here, but if you work so that you create a releasable output every day, you won't be far wrong.
I think that the discussion between "releasable" and "deployable" is relevant, but sits most firmly in the "Trunk Based Development" section. Here, one of the strategies is to maintain our ability to make fine-grained commits to trunk, by keeping our SW deployable.
I think that there is an annoying gap between these words: "releasable" is nearly right and "deployable" is nearly right too, but neither one completely captures the practice. 🤔
3
-
3
-
3
-
3
-
3
-
3
-
It is not dangerous because I disagree, it is dangerous because this kind of thinking treats software development as a kind of production line that you optimise by maximising the cogs in the machine. That is NOT how SW dev works, but it is how most orgs have tried to do it for the past 40 or 50 years, and we know, we have evidence and data, that this approach produces lower quality software, more slowly, annoys users more, and leads to more burnout in software dev teams (see the State of DevOps Reports since 2014). That is what I mean by “dangerous”: it will make you worse at creating software, and it will cost more money, and earn less money, if you work this way.
3
-
3
-
3
-
3
-
3
-
3
-
3
-
Thanks!
The reason that I think that what follows probably needs to be more prescriptive, maybe more specific is a better word, is that while I think that the ideas of agile are correct, and that it was a huge step forward for the industry, the problem, as is true of all popular ideas, is that they get watered-down as they spread.
The "post-agile" people say that agile was a failure, when what they really mean is that if you apply agile rituals and treat it like some kind of religion with magic words like Scrum and Sprint, then it doesn't work.
One of the problems with agile, in terms of adoption, is that it leaves a lot down to individuals and teams. This is for very good reasons, high-performing teams ARE autonomous! But they are also very disciplined, autonomy alone isn't enough.
So I think that what comes next could improve on agile, not by changing anything at the level of the "agile manifesto" but by being more precise about the guide-rails that steer teams towards what really works. For example, I'd put the metrics "Stability & Throughput" front and centre. "Do all the stuff that it says in the agile manifesto, but measure your progress with Stability & Throughput".
"Stability" measures the quality of our work, "Throughput" measures the efficiency with which we can create work of that quality. These are almost impossible to cheat. So it is not good enough to stand up during meetings and to call two weeks worth of work a "Sprint" to declare success. You are successful when you can improve the quality of your work and work with more efficiency.
3
-
3
-
3
-
3
-
3
-
3
-
3
-
Yes it is. Speculative execution, is also at the heart of the deployment pipeline in CD. These things crop up all over the place, because they are about "information" rather than just technology.
In CD we divide pipelines into different phases, the fast "Commit Phase" reports very quickly to support development, and devs move on to new things when it passes. The next phase, "Acceptance" evaluates for releasability and takes longer. The bet is that if everything is good after Commit, most likely all the tests in "Acceptance" will pass - so we are speculatively executing on new features, in parallel with "Acceptance" being confirmed, on the assumption that it will pass. 😉
Similarly, in my example, Team B is speculatively executing on a new version of the contract.
3
-
3
-
3
-
I think that these kinds of tests are specifically about comparing the "before" and "after" behaviour; that, for me anyway, is what makes them something distinct from other forms of testing. They are a form of regression testing, but are not exploratory tests. Exploratory testing is a human wandering around in the system trying things, "exploring the system".
Are your tests "approval tests"? Probably not, because they are testing your guess of the behaviour; you have encoded your guess of what the code is doing in the test, whereas the approval test is recording what it actually does. As I say in the video, this second approach has its limitations, and your kind of test is, in general, a more useful kind of test, because it asserts more strongly what the code is meant to do, rather than just confirming that it is still doing what it used to do.
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
This wasn't some "ideal team", this was a bunch of normal people doing normal things. They made mistakes, there were people on the team that talked too much, and people that didn't talk enough, but I'd say that the social pressure that resulted from working together reduced the degree to which the talkative people could over-power other people's ideas, and the natural collaboration mechanism forces (or at least strongly encourages) the quiet people to contribute. Actually they can't not contribute, because every few minutes or so they are navigator or driver.
I had some of the same reservations, and I assume that this isn't for everyone, but the problems that you mention didn't happen, and this wasn't because this was an exceptional team. Actually, they were a relatively junior, relatively inexperienced team, compared to the teams I had been working on at the time.
3
-
3
-
3
-
3
-
3
-
3
-
3
-
That kind of is my point, that in SW we are always "doing something new" so SpaceX is a good example for us. In SW the "cost of production" is so low that it may as well be free. That means that we'd be dumb to reproduce stuff that we already have, so we are always, within the context of a development, working on something new. That doesn't mean that no one has ever done it before, but it is ALWAYS at least new to the team working on it.
I think that bridge-building is, in reality, probably less "cookie-cutter" than we SW devs think, but even if it is, that is about production engineering, not design engineering, we are always - ALWAYS in the design-engineering space. So I do think that SpaceX offers some interesting pointers for us.
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
Sure, but all of the modern definition of CI that I am familiar with include "at least once per day".
Sure, the Git tools improved merging, but there remain many teams that suffer. I consulted with an org that had, rather blindly, split their dev team (formerly a monolithic team working on monolithic code) into many small "feature teams" because "that is what Spotify do". The teams found that they kept breaking things, so they pulled the code that they were responsible for into a series of separate branches. I met them 18 months after this event, and their code had never compiled together since then. So it is probably better, but merge-hell is certainly still a common pain for many many teams. I see a more modern equivalent of it constantly in teams that claim to be practicing "microservices": these teams have each service in a separate repo (just another form of branch) and then spend their time in a never-ending fight to find a collective set of services that work together.
There is a difference between the information hiding in feature-flags, branch-by-abstraction, dark-launching etc, and that is that the "branch" is at the level of behaviour, not source code. That means very different things in terms of managing change across the code base. Hiding information in source code branches is a bigger barrier to change and so limits refactoring more.
3
-
3
-
3
-
3
-
3
-
I think that you can't fix things forever, over time, the ideal division of responsibilities in code, and in teams, will morph. A thoughtful organisation will take this into account and re-assess team structure, and software architecture over time.
These are deeply related ideas, and I have a video that gives one take on the org challenges here: "How to build big software with small teams" https://youtu.be/cpVLzcjCB-s
At the more practical, tech level, I think that as you discover new features, you decide if this fits within the current scope of your services or modules or if you need to add new ones. If you add new services, then at some point you need to decide if you should spawn-off a new team to look after the new services.
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
I think it depends on what you take "Microservices" to mean. For example, I'd agree that the constraint of Microservices owning responsibility for their own storage is a good rule to follow for SOA. You get MUCH nicer systems that way than sharing storage between services. But some people seem to think that Microservice means "Comms in text over HTTP"; that's actually nothing to do with Microservices, and is a terrible idea for some systems, Microservices or not, and a good idea for others, microservices or not.
The big advantage that "Distributed Monoliths", or any monoliths, have over microservices, is that we get to version-control them together. That means that we can change them with lower-overhead. If my service calls yours, and the interface to yours changes, I can see that change in the code, my compiler could warn me of the breakage, so I could fix it in seconds, or more likely, your compiler will have warned you and you will have fixed my service calling yours when you changed your interface. If they are in separate repos, you don't get this. You get less visibility of change, and more work to achieve change between services when it is needed.
The independent deployability is great, if you want to scale up dev, but it comes at this very significant cost.
3
-
3
-
3
-
What I mean in this context is making something accessible only because a test needs it. You are right that the common OO definition of 'Encapsulation' is that behaviour and state are bundled together. So that is one take, if you access the internal state of an object or module, from a test then that is a bad idea.
The other way that I mean it is probably a bit more subtle. If you modify the design of your code, to add a method that is only ever intended to be used in the context of a test or that is only there to give you a back-door into the otherwise internal workings of the code, then that is bad too.
Where that second one gets tricky is: what is the difference between that, and designing your code to be 'testable', which is something that I recommend?
My approach to testing is to try to create tests as "mini-specifications" of the behaviour that I want from the code. If I do this right, then I can be precise, and specific, in my specification of what I want, without knowing, or assuming, how the code achieves that. That means that I want access to the results in a way that makes sense at the level of the behaviour that I am looking for, but not at the level of the internal workings.
This seems like a fairly clear distinction to me, but I confess that I can see that it may be confusing?
3
-
3
-
Nope, the whole idea of this way of working is to find out where our code is NOT PERFECT more quickly, and this is not an academic, impractical approach, this is how some of the best software in the world is built. Just because that is not what you have seen, doesn't mean that it is impossible. This is how Amazon, Google, SpaceX, Microsoft, Netflix, my former company LMAX, and many many others work, and it works fine, without the need for perfection or geniuses.
3
-
3
-
3
-
3
-
3
-
3
-
3
-
I think that there is a distinction to be made here, "production ready" is not the same as "feature complete". Production ready in the context of CD means, production quality, ready for release into production with no further work. That doesn't mean that it does everything that users want, or need it to. So work so that at every step, each feature that you add is finished and ready for use, even if it will need more features before someone would want to use it.
The next question is really "when do I have enough features to release". I think that you have misinterpreted "MVP" a bit. An MVP is the minimum that you can do to learn; it doesn't mean the minimum feature-set that your users need to do something useful. An MVP is an MVP if you have enough to show to your friends or colleagues so that you can learn from it. I would encourage you to work so that you can get good feedback as soon, and as often, as you can - whatever that takes. You may already have "enough" stuff, and can release now, or you may be doing something that people don't like, which would be good to find out sooner rather than later.
When we built our exchange, the whole company "played at trading" in test versions of it every Friday afternoon for six months before the first version was released to the public - that was our MVP, and we got loads of great feedback from people using it, even though it wasn't ready for paying customers.
If your SW isn't ready for prod release yet, try and find a way of getting it in front of people (can be people that you know) and seeing what they make of it. Think of it as an experimental trial of your ideas!
Good luck.
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
@s3ed1y The idea is that if the test is passing then it's ok to commit (and push). If you want to change it later, that's fine too. The WHOLE IDEA is to make it, and keep it, EASY TO CHANGE. This is my preferred definition of what quality means for code - how easy it is to change.
We expect to change it, we don't expect, ever, to get the code right first time. By luck, sometimes we may, but we never expect to. I think that this mindset may help you.
Always work to the best of your ability and understanding right now, but expect to learn more and get better, so leave yourself room to improve what is already there.
3
-
3
-
3
-
3
-
Well first step is to get the PO to watch my video 🤣 🤣
The advice that I have given here is my best take on what to do to gain those estimates. I think that it is useful to think in terms of "error-bounds" and one way to do that is to ask the team for "best-case' and 'worst-case' estimates and give those to the PO. If they are dumb, they are only going to hear the 'best-case' but otherwise it will help them to see the degree of uncertainty.
You can use the "Steve McConnell" numbers, "4x at the start of a project", for the case where you have no actuals, though as I say in the video no-one will like estimates this pessimistic. The only other thing is to think of past experience of similar work. The closer to your situation the better: if this team did something similar together, use that; if individuals from the team did something similar, use that; and if you did something similar in a different team or org, use that. But be conscious that as you traverse this list, your accuracy, inevitably, goes down.
3
-
The commonest mistake that I see in this is not abstracting enough, or appropriately, at the boundaries that you mock. How can the mocks be wrong, or different to production? If you define a contract and mock the contract, and then production behaviour differs, then by definition your contract isn't good enough. Practically, the mistake that people often make is to mock at technical boundaries, even mocking 3rd party interfaces. I think that is a mistake, because those APIs are big, complex and general. Abstract the conversation with your own layer of code, so that it represents the simplest version of the conversation between these pieces, and don't allow any routes of conversation outside of these interfaces; now you can mock those conversations with more confidence.
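As a small sketch of what I mean, in Java - the names here (PaymentGateway, CheckoutService) are invented for illustration, this is not any particular vendor's API:

// Your abstraction - the contract - describing the simplest version of the conversation.
interface PaymentGateway
{
    boolean charge(String accountId, long amountInCents);
}

// The only place that knows about the big, general 3rd party API is the adapter.
class VendorPaymentGateway implements PaymentGateway
{
    @Override
    public boolean charge(String accountId, long amountInCents)
    {
        // ...translate this simple call into the vendor's API here...
        return true;
    }
}

class CheckoutService
{
    private final PaymentGateway gateway;

    CheckoutService(PaymentGateway gateway) { this.gateway = gateway; }

    boolean placeOrder(String accountId, long totalInCents)
    {
        return gateway.charge(accountId, totalInCents);
    }
}

// In tests we mock or fake the contract, not the vendor API. If production behaviour
// later differs from this fake, the gap is in the contract, not in the test.
class AlwaysApprovesGateway implements PaymentGateway
{
    @Override
    public boolean charge(String accountId, long amountInCents) { return true; }
}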
3
-
3
-
3
-
3
-
3
-
I am not sure that that is a problem, I think that my mind works a bit like that too, but that has almost zero to do with the tests that you need to write. Your goal with the tests is to say, in many small steps, what the code needs to do, not how it does it. I think that this is a VERY good design discipline, that is, to my mind, particularly useful for people like us, whose brains jump to solutions, because it puts a brake on us heading off in bad directions. I know that for me, TDD gives me better feedback, sooner, on my design choices. I read something from Kent Beck that I liked a lot recently. He said "TDD is about designing the interfaces between the parts, but then the tests that we use to define the interfaces, as a side effect, also verify that our implementation choices behind those interfaces actually work". I think that is a VERY accurate description.
3
-
3
-
3
-
3
-
3
-
3
-
3
-
Well there are two answers to that. Run the test once it is written and see if it fails in the way that you expect. If it doesn't, it is probably testing the wrong things. This "tests the test", but my guess is that you mean something else. I think that maybe you mean "how can I write a test, for code when I don't know how that code works". The trick is to split "what" from "how", if you are about to write code, then you have some idea of the problem that you are going to solve. That is the "what" and that should be the focus of your test. A good test is only focussed on "what" your code needs to do, while saying as little as possible about "how" it does it. That means that not only can you write the test before you have figured out how to solve the problem, but you *should*, because then your tests are decoupled from your code and so your code can change without breaking the test. If you can't think of what test to write then, it probably means you don't yet know "what" the problem that you are trying to solve is, and you really should have an idea about that before writing the code.
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
Then I am afraid that you are wrong. DevOps comes from people with way more than 100,000 customers, sometimes spending money at much higher rates than is normal with credit cards.
There is a lot of nonsense talked about DevOps, like everything else, but at its heart it is simply saying that we need to align the goals of dev and ops. Not sure how this can be controversial.
If you mean, as many people do, "Continuous Delivery" when you say "DevOps" then again, I would disagree. CD is the process at the roots of some of the biggest, most successful companies in the world, and is applied very effectively, in nearly every industry that you can think of. Tesla is a CD company, so is Amazon, so is Ericsson for example.
3
-
3
-
3
-
3
-
You have to take control of the variables to make any form of testing stable, BDD is no different in this respect. There are a variety of ways of solving this problem, but the simplest, and best, is to fake the interaction with systems that aren't your responsibility to test! Fake the connections to ALL EXTERNAL services, and get the test to define the inputs to, and outputs from, those services. This can be easy if you design your system to support it, and systems designed like that tend to be more deterministic, and so higher quality, but it can be more complicated to retro-fit to a system that wasn't designed to care enough about these integration points.
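Here is a tiny Java sketch of the idea at the code level - the names (ExchangeRateService, PriceConverter) are made up for illustration. Because the external service sits behind our own interface, the test defines its output and the result is completely deterministic:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// the external dependency, expressed as our own narrow interface
interface ExchangeRateService
{
    double rateFor(String fromCurrency, String toCurrency);
}

class PriceConverter
{
    private final ExchangeRateService rates;

    PriceConverter(ExchangeRateService rates) { this.rates = rates; }

    double convert(double amount, String from, String to)
    {
        return amount * rates.rateFor(from, to);
    }
}

public class PriceConverterTest
{
    @Test
    public void shouldConvertUsingSuppliedRate()
    {
        // the test controls the "external" service - no network, no flakiness
        ExchangeRateService fakeRates = (from, to) -> 2.0;
        PriceConverter converter = new PriceConverter(fakeRates);

        assertEquals(20.0, converter.convert(10.0, "GBP", "USD"), 0.0001);
    }
}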
3
-
3
-
3
-
3
-
I think it is yet another attempt at trying to hide the reality that the comms is async, and as usual when we try to do that, it starts to leak problems. If I make an async call that looks sync, because it is handled as an "await" callback, I have hidden the failure case. What happens if I never get a response or if the response is delayed?
If I write the simple async case, it seems more obvious now to think about the problem. I send an "OrderItem" message, and I am done. So now I have some things whose state I am, presumably, tracking: I have an item that has been "Ordered", but not yet dispatched. In normal circumstances I have another message, "ItemDispatched", and when I receive that message, I move my "Item" from being "Ordered" to being "Dispatched". This seems pretty natural to me, but what if I don't receive a reply in a sensible amount of time? If I did all this with async-await I almost certainly won't think of that case, but if I did the equally simple coding that I described, I might, and even if I only thought of it later, what to do is pretty obvious: look at all the "Ordered Items", and for any that were ordered more than a day, an hour, or a week ago, I decide what to do - contact the customer and apologise, try and find an alternative source for the item, and so on.
My point is that this extra stuff seems simpler, and less technical, because it is. Because we are not trying to hide this async series of events as a sync call, the realities of the situation seem clearer and easier to spot to me.
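A minimal Java sketch of that explicit version - the names (OrderTracker and so on) are mine, purely for illustration:

import java.time.Duration;
import java.time.Instant;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class OrderTracker
{
    private final Map<String, Instant> ordered = new HashMap<>();   // itemId -> when ordered
    private final List<String> dispatched = new ArrayList<>();

    void onOrderItem(String itemId, Instant when)
    {
        ordered.put(itemId, when);             // we sent the OrderItem message, now we wait
    }

    void onItemDispatched(String itemId)
    {
        if (ordered.remove(itemId) != null)    // the reply arrived, move Ordered -> Dispatched
        {
            dispatched.add(itemId);
        }
    }

    // The failure case is not hidden behind an await - we can ask, at any time,
    // which orders have gone unanswered for too long and decide what to do about them.
    List<String> overdueOrders(Instant now, Duration tooLong)
    {
        List<String> overdue = new ArrayList<>();
        for (Map.Entry<String, Instant> entry : ordered.entrySet())
        {
            if (Duration.between(entry.getValue(), now).compareTo(tooLong) > 0)
            {
                overdue.add(entry.getKey());
            }
        }
        return overdue;
    }
}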
3
-
I don't really think that there is a difference between code, design & architecture. It is all a single continuum. The role of "architect" sometimes muddies these waters, because sometimes, very wrongly in my view, architects aren't close to the code. There is more to architecture than only code, but then there is more to writing code than only code too. As one becomes more experienced, you may end up dealing with design at higher and higher levels of complexity, and that may lead you to abstract more, but architects who only understand things at an abstract level are, in my opinion, skating on very thin ice. So, for me, it is ALL DESIGN really, just at different resolutions of detail.
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
@travispulley5288 Sure, I am a pragmatist not a dogmatist, I think that the fundamental definitional attribute of agility is that you change what doesn't work! So sure, Wikipedia has a definition that I'd say was wrong, though a common misinterpretation, this description used to be used widely as an example of "what not to do" for stand-ups.
Surprise, training companies found that they could make money selling faux-agile training to big companies and destroyed what "agility" meant in software. But the real thing here, the real point that, to me, makes all of this about "counting angels on the head of a pin" is that agility - written into the manifesto, is that if something doesn't work for you, change it! If stand-ups aren't working, do something else! That's it, that is what "Inspect & Adapt" means. It is the opposite of dogma and ritual, and it is ultimately the ONLY THING THAT WORKS which is why science works the same way. Have an idea, try it out, keep the ideas that work and discard the ideas that don't - repeat.
Everything else is noise, but we MUST NOT lose sight of this core, which is really why "Agile" is called "agile"!
Oh, and please don't apologise, nothing disagreeable here, other than that we disagree 😉😎 I don't take that personally, talking to people that we disagree with is how we learn new things.
3
-
I suppose it depends on how far you take unit testing and what you mean by the percentages. For a feature I'd generally expect to create a handful of acceptance criteria and an automated "Acceptance Test" for each. If you take my approach, most of these "executable specifications" will reuse lots of test infrastructure code and will usually add little new code. The test case itself is usually a few lines of code written in your test DSL.
Unit testing is driven, for me, from TDD. So I'd create unit tests to support nearly all of my code. So I'd have quite a lot more code in unit tests than code in acceptance tests, though the testing infrastructure code for acceptance tests will be more complex.
On that basis, in terms of effort, something like 70% unit vs 10% acceptance is probably about right, though as a guideline rather than a rule to stick to.
If you count tests, then I think it is harder to generalise. Some features, may already exist by accident, so you will write an acceptance test to validate the feature, but don't need to write any additional code or unit tests. Unusual, but I have seen it happen. Other code may need a simple acceptance test and loads of work, and so loads of unit tests, to accomplish.
I confess that I am not as big a fan of the test pyramid as some other people, in part for these kinds of reasons. I think that it can constrain people's thinking. However, if you see it as a rough guide, then it makes sense. I would expect, as an average, over the life of a project, for there to be more unit tests than acceptance tests, lots more.
The danger, and a trap that I have fallen into on my own teams, is that the acceptance tests are more visible and more understandable, so there is a temptation to write more of them. QA people, for example, often say to me "we can't see what the devs do in unit tests, so we will cover everything in acceptance tests". This is wrong on multiple fronts. 1) it isn't the QA's responsibility to own the testing or the gatekeeping 2) it's an inefficient way to test 3) it skews the team in the wrong direction; if the QAs test "everything" in acceptance tests it will be slow, flaky and inefficient, but it will nevertheless tempt the devs to relax their own testing, and abdicate responsibility to the QAs.
Ultimately I think that unit testing is more valuable as a tool, but that acceptance testing gives us insight and a viewpoint that we would miss without it.
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
I didn't do a good job of replying, so let me have another go...
I try to approach these kinds of tests always from the user's perspective. From their perspective the fields that they complete don't matter; their intent, presumably, is to say "I approve". The detail of what it takes to approve is what you, as the developer, care about, not what the user cares about.
So separate those two things.
In the Acceptance Test Case, create a Domain Specific Language (as I describe in the video) to capture the user's intent. In this language add, if it doesn't already exist, a step called something like "ApproveX".
This does several things. It captures the user's intent. If that approval is important then this will always be true, however "Approval" is achieved. It is so general, that you will often find that approval may be useful in other contexts, and finally, you have strengthened and extended the ubiquitous language!
Of course, you as the dev, still need the detail of the approval. So in a lower-layer of your test code write the group of interactions that make an "Approval". In these lower layers you get the info that you need and encode the interactions.
My preferred approach is a 4 layer strategy...
Test Case (Language of problem domain, "What") -> DSL Implementation (param parsing etc) -> Protocol Driver (Translate from DSL to System interactions, "How") -> System Under Test
I plan to do videos on this stuff in future, meantime here is a conference presentation on the same topic:
https://youtu.be/s1Y454DTRtg
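For now, here is a rough Java sketch of those layers - the names (ApprovalDsl, ApprovalDriver) are just illustrative, not from any particular tool:

// Layer 3: Protocol Driver - the only layer that knows HOW approval happens in the UI/API.
interface ApprovalDriver
{
    void submitApproval(String requestId, String approverName, String reason);
}

// Layer 2: DSL implementation - captures the user's intent in problem-domain language,
// supplies sensible defaults for the detail, and translates to driver calls.
class ApprovalDsl
{
    private final ApprovalDriver driver;

    ApprovalDsl(ApprovalDriver driver) { this.driver = driver; }

    // "approve" is the ubiquitous-language step; the fields that happen to be
    // required today live below this line, not in the test case.
    void approve(String requestId)
    {
        driver.submitApproval(requestId, "default approver", "approved in test");
    }
}

// Layer 1: the Test Case - reads as WHAT the user wants, says nothing about HOW.
class ExpenseClaimAcceptanceTest
{
    private final ApprovalDsl approval = new ApprovalDsl(new FakeApprovalDriver());

    void shouldPayClaimOnceApproved()
    {
        approval.approve("claim-42");
        // ...assert, again via the DSL, that the claim is now payable...
    }
}

// Stand-in driver so the sketch hangs together; the real one drives the System Under Test.
class FakeApprovalDriver implements ApprovalDriver
{
    @Override
    public void submitApproval(String requestId, String approverName, String reason)
    {
        // would interact with the real UI or API of the System Under Test here
    }
}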
3
-
The commonest cause is that you forget to commit a new file, so your test passes locally, because the file is there, and fails in the CI build, because it is not. This shouldn't happen very often, but I think that it is more useful to think of this the other way around.
The definitive build of your system is post-commit, it builds the SW you will release. Building and running tests locally, is just you doing your work, the CI build is the finished article. The reason you run tests locally first, is to reduce the chances of breaking things in CI, so you don't have to, but it is good practice.
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
Yes, your local copy is a branch, and I say that in the video, but the objective of CI is to minimise the duration of that branch so that it is as near as practically possible, "Continuous". Pulling changes from trunk/origin-master (whatever you call the line of code that you release from) is better than not. But it tells you nothing about how hard, or easy, it will be to merge with changes that other people are working on in their branches. The longer that anyone keeps changes out of sight the greater the chances that their change will be difficult to merge.
I can't remember if I used this quote in the video, but here is a quote from the inventor of Git, Linus Torvalds: "If you merge every day, suddenly you never get to the point where you have huge merge conflicts that are hard to resolve".
I don't think that I said much in this video that is a matter of opinion, I was stating facts and definitions. CI requires that you merge your changes to trunk at least once per day; that is in its definition. If you aren't doing that, whatever you are doing isn't CI. Feature branches hide change until the feature is finished, so they are incompatible with CI, unless you can develop a feature in less than a day. That is not my opinion, that is what the definition says.
If you choose CI, which I always do, then you have to compromise and work so that your changes are safe, even if they don't yet add up to a whole feature - that is the CI trade-off.
3
-
3
-
3
-
3
-
3
-
3
-
It's a nuance, and probably doesn't matter much. I would make the distinction that BDD is about focusing your testing on evaluating the behaviour of the system. This can be useful whatever the nature of the test, BDD works for tiny, fine-grained TDD style tests or bigger, more complex, more whole-system functional tests.
ATDD is the second one, but not the first.
BDD was originally invented to cover the first case, to find a way to teach TDD better, but has become synonymous with the second, because of more heavy-weight tools like SpecFlow and Cucumber.
So for practical purposes BDD == ATDD, but as someone who was in at the birth of BDD, I still find its original aim useful and important.
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
For me the big idea here, for high-level functional tests or low-level unit tests, is to separate what the system (or code) does, from how it does it. BDD style unit tests test that the code does what you want it to do, while, as far as possible, not being tied to how it does it. This means that there is a difference between the external representation of our code, and the code itself. One of the reasons that people struggle is that they don't think enough about this difference, every function signature should hide something of what happens behind it. The test tests the use of the code, not its implementation. The outcome, not the mechanism.
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
I like to describe CD as "working so your SW is always in a releasable state". The pipeline defines what "releasable" means. This doesn't say that you have to release all the time, that is a separate choice. In general, my advice is aim to get to a "releasable state" multiple times per day, ideally about once an hour.
To do that you have to be doing well at lots of different things, and may need to optimise builds, tests, deployment, data-migration and several other things. But if you can do it, you will be in really good shape. The strategy is to look to eliminate any waste in the process, and remove manual steps wherever possible and replace them with automation. Then you can look at optimising build, tests etc. You can do this at a surprisingly massive scale when you put your mind to it. I did a workshop a few years ago that was videoed that talks about some of these ideas here: https://www.youtube.com/watch?v=a8_K5TIH-6I
3
-
I am certainly not suggesting that we should take these things lightly, nor that these skills are unnecessary; the problem is how to organise those skills.
What is the difference between you optimising generic tools and technologies, and Google, Amazon or Microsoft doing the same?
Surely it is context. If you are doing this generically, then forgive me, but my working assumption, until proved wrong, is that the cloud providers are in a better place to generalise those things than you. However, what you have, or at least should have, that they don't, is context: you are doing this in the context of the code that your teams are building, and at that point it is design and architecture, and is problem-specific not technology-specific.
In the teams I worked on, very technical teams, we'd want your kind of expertise, but working essentially as part of the team, or at least in the context of, the specific work of those 'stream-aligned teams'.
I have no problem with the idea of 'Platforms'; in fact I think that they, and the abstractions that they represent, are an inevitable consequence of decent design. But that is NOT how 'Platform Engineering' is being positioned or described.
3
-
3
-
I am afraid that I disagree with nearly all of that.
I think that architecture, engineering and programming are all about organising our ideas in a way that allows us to effectively create products in software.
Architecture is about the general principles of the system, engineering is about the application of the general principles of software development, and programming is about a lot more than being a subject matter expert in a programming language. Good programmers are good programmers in languages that they don't know very well. There are important principles in programming that matter a lot more than language specifics, and these are the deeper engineering principles of our discipline, so the lines are blurry between each of these things. I am sorry to disagree so strongly, but I think that the idea that the job of a programmer is to translate from one detailed description into the code of a programming language is one of the bigger problems in our industry, and that approach never ends up with great solutions. The people that write great systems understand the problem, not just how to code!
3
-
The distribution, number of developers and polyglot nature of the code don't pose any barriers to Trunk Based Development; I have worked on projects like these several times, and many other companies do too, including Google, Facebook, and Tesla. The biggest challenge for pre-existing systems and teams making this move is usually that the automated testing isn't good enough to give you the confidence you need in your changes. Depending on your current stance on automated testing, that is where I would look to improve. One way to push the cultural change is to force the issue; I know of one team that instituted a policy of deleting any Feature Branch that was more than x days old, and then gradually, over time, reduced the size of 'x'. This is rather drastic, but it does focus the minds of the dev team and force the issue. It needs to be done sensitively though, to avoid shocking people too much; you need some level of buy-in from people to make this work I think. If your automated testing is poor, I recommend retrofitting BDD style acceptance tests as the best starting point, rather than fine grained TDD. I talk about that in this video: https://youtu.be/Z9fGG1k6P40
3
-
3
-
3
-
3
-
3
-
1. Sure, hardware failures may break things, but I don't really class that as intermittent, because usually it will fail and then keep failing. The SW remains unchanged and you replace the HW!
2. Don't do that, it doesn't work very well! Testing needs to be part of development, as soon as it is not, it is slower, lower-quality and telling you too late that your SW is no good. You need to build quality into the system, not try and add it later. I describe my thoughts on this in this video: https://youtu.be/XhFVtuNDAoM
3. Yes, so either fix the problem in the library, or wrap it to isolate the problem so that you can test your code. The strategies that I describe work exceedingly well for embedded and electro-mechanical systems. This is how SpaceX and Tesla work. I have done quite a lot of work in this space, using these techniques. A lot of this is about starting to think of "testability" as a property of good design, rather than an afterthought. It is rather like designing a physical product, like an aeroplane: a good design is easy to maintain, as well as doing everything else that it needs to do. Testability is a tool that keeps our SW "easy to maintain", not just because we can run the tests to tell us it works, but also because designing for testability promotes the attributes of SW design that manage its complexity, and so make it easier to maintain.
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
I disagree. I think it is a bad habit to get into, early or late in your career, to assume that you should be spoon-fed the actions you should take.
What is the job of developer, if not to solve problems for people with software?
If you expect exact details of what actions to take, then your job is simply writing code. That is the easy part. Working out what to do with the code is the MUCH more difficult, and MUCH more interesting part. It is like writing, EVERYONE learns to write, but not everyone can write text that is understandable, nice, interesting, even exciting, to read, the talent is in how you use the language to express your ideas not in the simple act of forming words or writing functions.
3
-
3
-
No it's not, and if it were you'd make yourself famous, in scientific circles, by demonstrating that case.
Science is populated by people, so of course, in practice, it suffers from all of the fallibilities of people, but it is designed with that in mind and so MORE THAN ANY OTHER DISCIPLINE reduces the risks and costs of those human failings.
So sure, short-term, science can and will get things wrong, but over the long haul it will correct itself, and that is where it is different to every other human approach in history. Science is our best understanding of truth, it doesn't hold opinions, and while an individual scientist may, science is designed to test those opinions as rigorously as possible and weed out the wrong ideas in favour of the less wrong. So climate change denial is anti-scientific because the evidence points in the other direction, not because there is a conspiracy against it - in fact the only conspiracy that does exist, based on documented evidence, is that fossil fuel companies have tried to hide and suppress the very good evidence of climate change.
3
-
3
-
I certainly don't assume that all software is the same, and that SAAS is the only kind of software out there, but I would disagree that this is not applicable to your situation. The implementation of how your team takes responsibility may change given circumstances like yours, but the need for your team to own responsibility doesn't.
In your position, I'd be looking for ways to establish that responsibility.
In fact I did this with a client last year. They release software as part of a hardware system, to remote sites, not necessarily connected to the internet. So how do you gather feedback from the use of your software? How do you figure out what your users like and what they hate? How do you respond to defects? How do you prioritise what really matters to your users?
We discussed the use of a kind of series of fallbacks.
If the customer was amenable, and the device was connected to the internet, we'd ask for permission to send data so that the dev team could monitor use and performance of the system. A bit like Apple or Microsoft asking for permission to "learn from your use of our software".
If the system was disconnected, we discussed, again with user permission, recording data that could be collected when the system was serviced or updated.
More complex, sure. Feedback is less immediate, sure, but still better than throwing the software over a wall to users that you never see or hear from, other than via a bug-report.
3
-
3
-
3
-
3
-
3
-
3
-
Well I disagree with the philosophy. All ideas are not equal: if you deny climate change or conservation of energy you are wrong. I can certainly point to the "cons" of CD. It is extremely difficult to adopt, because it means that everyone's role changes to some degree and that is very challenging. But that doesn't make it equally bad as waterfall. I have taken part in many waterfall projects during the course of my career and seen many more. My observation is that when they work, and they sometimes do, they only work when people break the rules. In fact, this is what Royce was saying when he invented the term. The man that invented "waterfall" was advising against it because it is too naive.
My experience of people who defend it, is that they haven't experienced the alternative, because once you have, you would not consider going back to the old, low-quality, ineffective way of doing things.
3
-
3
-
Kinda “yes”, kinda “no”. For CD, the pipeline should be definitive for release, that means it includes everything that gives you enough confidence to release, but not for CI. For CI we want frequent feedback on whether our code compiles together, and passes its unit tests. For that to be meaningful though, it needs to be the “real” collection of changes, evaluated together, not some random collection that will never end up in production. Finally, CD is “working so your software is always ready to deploy into production”; it doesn’t necessarily mean that you have to deploy every time the pipeline passes. It just means that you could if you wanted to. I built a financial exchange this way, we released into prod once per week, when the markets were closed, but we produced “releasable versions” every hour. We just picked the newest one to release every Saturday.
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
That is certainly the conventional wisdom. Interestingly the data says something else, the State of DevOps report found that there "is no trade-off between speed and quality". This is based on a statistically valid approach to interpreting the data, you can read more about this here:
"Accelerate, The science of Lean Software and DevOps", by Nicole Fosgren, Jez Humble & Gene Kim
➡️ https://amzn.to/2YYf5Z8
This seems counter-intuitive, until you think about the nature of change if you make many small changes rather than fewer big ones. If you make lots of small changes each change is simple, so it is easier to get it right, and if you do make a mistake, easier to correct it. This plays out overall as a significantly higher-quality approach.
So in this very rare case, you can "have your cake and eat it!"
3
-
3
-
@RasmusSchultz Fair enough, I began the video by talking about larger scale systems. Sure, if you are writing a CRUD web-app for your Mum's cake shop this is probably the wrong choice. However, as soon as you add anything beyond the most trivial need for distribution, I think that this approach shines. My background is in building distributed systems, for many years, and so that is what I think of. I would argue that any form of distribution adds significant complexity. This approach manages that, and the very complex failure modes inherent in distributed systems, better than anything else that I know.
I don't think that this is only for "highly available" systems either. I think that one of the major benefits, is that you can create a local, simple, application from a small collection of modules/services/actors (whatever you call them) and it will be easy to program and easy to test. This same app can then be distributed without needing to change the code of the services themselves. Sure, you will need to do more work, thinking about sharding and clustering and so on, but the code itself doesn't need to change.
The exchange that my team built this way ran on a network of over 100 nodes in two data-centers, but the whole system could also run on my laptop.
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
2
-
2
-
2
-
2
-
2
-
@arnoudt I think that not using TDD can work for very simple apps, things that you write in a day, but TDD works better for most other things. I disagree that it is "most optimal to jump into coding", it just feels like that. As soon as you need to change something, TDD saves you time, not only because you wrote the tests, but also because your code is better designed and so is easier to change: you are pretty much forced to design for testability, and testable code is more modular, more cohesive, has better separation of concerns, better lines of abstraction and is more loosely coupled.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
I think that you are making a mistake that is common to people who haven't tried TDD, and that is a completely understandable mistake...
TDD isn't primarily about testing!
It's understandable that people think of it that way, because "Test" is in the name. A few of us tried to change that by creating "BDD" which is better, but confuses people in a different way.
In exactly the scenario that you describe, working alone on my own projects, I practice TDD. That's because it is how I design best. TDD applies a pressure on you to write your code outside-in. You develop it always from the perspective of a user of that code - a person or, more often, some other piece of code.
This focus results in better code, in specific, technical ways of measuring quality.
The result of this is that you produce better-designed systems, and better-designed systems are easier to work on so that you go A LOT FASTER not slower. TDD isn't a cost, it is an investment in building better software faster.
2
-
The 'irreversibility' is for me a function of maybe three ideas. First, as you say, testing. Having a great position on regression testing allows us to move forwards much more quickly, with greater confidence that if we do mess up, we will notice. Next is compartmentalisation - architectural, fine-grained technical, and organisational - so that we can make a mistake in one place and not have it ruin everything. On one of my important projects, we fundamentally changed the architecture of our system 3 times before we hit on the answer that worked for us. All the time delivering value that worked, but got better over time.
Finally, the more abstract ideas at the heart of avoiding "irreversibility" are Feedback and an Experimental mind-set. We start out assuming that we will make mistakes, which means we will work to allow ourselves that freedom. We collect data, feedback, that helps us identify our mistakes as quickly and efficiently as possible so that we can learn from them and react to them.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
@ZodmanPerth I wasn't trying to exclude you from the "industry", I meant the data on practices in our industry. I meant an inclusive "we" not an exclusive one. Analysis of data from tens of thousands of projects, of all kinds, says that adopting practices like CD is highly correlated with better outcomes (State of DevOps reports and Accelerate book).
I have been doing this a long time too, and I agree that there is a lot of bad software out there and unlike some people who comment on my videos (again not saying you necessarily), I have tried most of the approaches that we discuss. I am not talking about GitFlow or Feature Branching and rejecting them because I have never tried them, I have applied them to real world projects and seen other approaches work better. That isn't enough for me either, my experience, like everyone else's, is coloured by the limits of my personal experience. So I try to look for data where I can.
There is no data that says GitFlow works better, there is lots of opinion, but no data. There is data that says CI works better. Does this prove it? No! It does make it more likely to be true, probably. Basing decisions on what I have seen, what I know, and reasoning about why it works as it seems to, seems to me to be the essence of "engineering".
Everybody's guess is not engineering, in engineering we (inclusive "we") build on data as well as practical experience to make choices and change our minds when new data comes along.
Since the data aligns with my personal experience, that is what I will recommend, having tried several other approaches.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
I disagree, code is not really an asset. For example imagine two systems, that do exactly the same thing. Do you want the one that is 100 lines of code, or the one that is 100,000? More code is not better, so code isn't an asset.
Software is a complicated idea, in some ways it is not really the code that matters, it is more about what you can do with it. Obviously you need some code, and the quality of that code matters a lot, because it is quality that makes it work in the first place, makes it easy to change and improve, and makes it resilient in production and in the face of change. All these things matter in a real sense, and in some weird ways, I'd say that the quality of the code matters more than the code itself.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
It is certainly much easier to get into professional programming with a degree, but that is not the only route. It will be MUCH more difficult to get a first job, after that, for many, maybe most, jobs the lack of a degree won't matter much.
If you really love programming, I would recommend that you keep at it and write code for fun. Make some good contributions to open source projects, write something cool. If you can demonstrate your ability, and you can find the right company - people who interview for skills and talent, rather than only qualifications, I think that you can still do it.
I don't have a degree either, and it was very tough to get started, but that was so long ago that my experience is probably not too relevant. From the other side though, when I was hiring people, if you could say something in your application letter, CV or resume, that made me think that you know how to write code, then I would at least interview you - qualifications or not!
So don't give up on it if it is really what you want to do!
2
-
2
-
2
-
2
-
2
-
2
-
2
-
@Komikio At some point you have to trust people to do the right things. This is just as true of code review. I worked somewhere once, regulated, where all the developers formed pacts on code reviews, "you can say I reviewed your code if I can say you reviewed mine". Clearly not an ideal outcome.
You can mechanise this of course, reject commits that don't have both members of a pair's ID, but like the code review, this only reminds people to do the right thing it doesn't force them to do it.
With pairing it is more obvious, at least in the old days when people worked in the same room, if it is not going on. It is fairly obvious if one person is sitting alone. So in this respect it is more culturally enforceable than code review. However, as I started with, any of this stuff only works to the degree that people comply. They need to buy in to the process. Pair programming is much better at achieving that, because team disciplines are reinforced and become part of the culture of the team.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
@zbaktube I think that I understand your point, I just think that a user will be grumpy either way, so it doesn't seem to me to be a defining characteristic. I am probably being too pedantic here, I agree that in some, maybe most, cases, your distinction makes some sense. My reason for being pedantic though, is that I think that trying to make this distinction is only reinforcing bad ideas from project management, rather than helping us to face up to the complexities of real-world software development. The "non-functional" stuff is usually the difficult part, and by compartmentalising this off, through nomenclature, we are in danger of marginalising it, at least for people who aren't thoughtful about software dev.
2
-
2
-
There is certainly always the risk that any of us are victims of our own experience. That's always true. My argument is that I think that OO is often, usually, misunderstood, and so key aspects of its importance and value are missed, and that Functional programming, while adding some very, very good ideas, lacks some strengths that OO has. Good OO code is, I think, a more navigable description of the problem than good functional code. While good functional code is probably more concise. The best codebases that I have seen use a combination of these ideas, rather than a purist take on either. I have yet to see a large scale system that is written and maintained wholly as a functional design - I am sure that there may be such systems, but I haven't seen one. That in itself is telling, it means that in practice Functional remains, at best, a minority sport. To be fair most systems built with OO languages aren't very OO either.
2
-
2
-
2
-
2
-
@DamjanDimitrioski Well, my thinking is that the approach that I describe is the most efficient way to deliver software that we know of. People that practice CD in the way that I describe spend 44% more time on new features than people that don't. So this way is the fastest way to features. Management that ask for something else are asking us to build worse software slower, and that is in no-one's interest. So part of our professional duty as SW professionals is to do the best that we can, and that means refactoring, testing and so on. If you don't do that you spend lots more time fixing bugs, than you do refactoring and testing. That's the real trade-off. Managers can ask for the wrong things, you can have the discussion, and maybe you should, but ultimately it is your responsibility to do a good job, and, as a pro, you shouldn't ask permission to do a good job. So ultimately, I think it is your call, not theirs. So I don't give people the option. I don't do work that isn't tested, and I always refactor, all the time, every time I touch the code.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
My point is that it is not the point of TDD to achieve 100%, and if you do that still tells you nothing (or at least very, very little).
The advice for practicing TDD is to not write a line of code unless it is "demanded by a failing test", that is good advice, but this is engineering, not a religion. There are times when pragmatically it can make more sense to disregard the advice.
For example, UI code is tricky to do pure TDD for. The best approach for UIs is to design the code well, so that you maximise your ability to test the interesting parts of the system, and push the accidental complexity to the edges, minimise it and generalise it. So if I am writing Space Invaders, when the bullet from my ship hits an invader, I want the invader to be destroyed. I could separate all of this, through abstraction, from the problem of painting the pixels on the screen - make rendering the model a separate and distinct part of the problem (there is a small sketch of what I mean at the end of this comment). I would certainly want to test as much of the generic rendering as I can, but there is a law of diminishing returns here.
A more technical example, concurrency is difficult to test, but using TDD is still a good idea, but there are some corner-cases that you can hit that may just not be worth it.
My expectation is that very good TDD, practically and pragmatically, usually hits in the mid 90s rather than 100. There is not necessarily anything wrong with 100, but hitting 100 for the sake of it tells you nothing useful.
Aim to test everything, but don't agonise over the last few percent if it doesn't add anything practically. That is what ALL of the best TDD teams that I have seen have done. That may be a function of the kind of code they were working on, sometimes close to the edges of the system, and (this is a guess) nearly always about trade-offs around accidental, rather than essential, complexity.
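Here is a minimal sketch of what I mean by testing the game rules separately from the rendering. All of the names (GameModel, Invader and so on) are made up for illustration, this isn't code from any real project, and I am assuming JUnit 5 for the test:

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;
import java.util.*;

// A tiny, illustrative game model: it knows the rules, and nothing about pixels.
class Invader {
    final int x, y;
    boolean destroyed = false;
    Invader(int x, int y) { this.x = x; this.y = y; }
}

class GameModel {
    private final List<Invader> invaders = new ArrayList<>();

    Invader addInvaderAt(int x, int y) {
        Invader invader = new Invader(x, y);
        invaders.add(invader);
        return invader;
    }

    // The rule we care about: a bullet arriving at (x, y) destroys any invader there.
    void bulletArrivesAt(int x, int y) {
        for (Invader invader : invaders) {
            if (invader.x == x && invader.y == y) invader.destroyed = true;
        }
        invaders.removeIf(invader -> invader.destroyed);
    }

    List<Invader> activeInvaders() { return Collections.unmodifiableList(invaders); }
}

class InvaderCollisionTest {
    @Test
    void invaderIsDestroyedWhenHitByBullet() {
        GameModel game = new GameModel();
        Invader invader = game.addInvaderAt(10, 5);

        game.bulletArrivesAt(10, 5);

        assertTrue(invader.destroyed);
        assertFalse(game.activeInvaders().contains(invader));
    }
}

The rendering, the "pixel-painting", sits behind some separate interface and consumes this model; that is the part where I accept the diminishing returns on testing.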
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
True, but that only means that we need to educate them that that is a stupid idea, at least if they are only counting rate of feature production sprint by sprint. That is not the sensible timescale to optimise for. The sane response is to optimise for ongoing, maintainable throughput. Queuing theory tells us that if you keep a queue permanently full, you go at a slower rate overall. You need some slack in the system; in our case, that means time to do a good job of quality, work to make our code easy to change and so on. Without that you go slower and slower until you stop, and there are many companies that fall into this trap and find it next to impossible to change their software. I worked with a company once that hadn't released ANY software for over 5 years as a result of this mistake; their code was in such a mess that you couldn't safely change it!
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
There are certainly times when adding FBs can reduce immediate pain, but I don't think they make much sense as a long-term fix, for the reasons in the video. I have seen the opposite too. I saw a company that did what you did, divide up a large code-base, each team began working on FBs, but they had lots of problems because of unpleasant dependencies in their code-base. So they made the branches last longer, to ease the pain. It did ease the pain, locally, within each team. But when I saw them, they had not been able to build their system as a whole for over 18 months.
The need to integrate is real, the longer we defer it, the more difficult it will become.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
I don't know about Autodesk or Houdini, but Windows and Linux weren't waterfall projects. Later versions of Windows, maybe, but not any more - Microsoft adopted CD (at least in some parts) some time ago. More modern CD companies: Tesla, SpaceX, CitiBank, ING Bank, the US Airforce (in parts), Walmart, and Borland from the old days. The Mercury flight control software from the early 1960's used 'serial dev' and TDD!
2
-
@slipoch6635 NT was developed originally by a reasonably small team, as I recall, as a splinter from the troubled, waterfall, dev of OS/2 in which Microsoft was originally a partner with IBM. UNIX was written by a small team, in the fairly academic setting of Bell Labs, the devs were Ken Thompson, Dennis Ritchie, Brian Kernighan, Douglas McIlroy, and Joe Ossanna so I seriously doubt it was a waterfall project. The first version of Linux was written by Linus Torvalds alone, certainly not a waterfall.
Good SW is developed as part of an exploratory process - nearly always as far as I can tell.
2
-
2
-
Lots of times. I recommend that this is a good way to start nearly every distributed project, certainly Microservice projects. It allows you to iterate quickly, and figure out good, loosely-coupled interfaces to your services, without slowing you down.
It is the best approach for complex systems, that are coupled, or very high performance. It allows you to better tolerate higher levels of coupling, because you evaluate everything together.
I was involved in the development of one of the highest performance financial exchanges, which was very modular and very distributed - 115 services in production - but it was stored together in the same repo, built, tested and released together into production. We could make any change and know if it was releasable in 56 minutes.
It can be a very good approach.
2
-
2
-
2
-
2
-
I think that the first thing to do is to try and separate the code that interacts directly with the DOM from the code that does other things, then you can test the code that does "other things" separately from the DOM. This is also a generally better design. You can do this through MVC or make up your own separation.
In unit testing, the part of the code that "touches" the real world (UI, Storage, Messaging etc) is always the trickiest to test, for some obvious reasons: you have something that you don't have control of getting in the way of seeing what the code does. So you want to try and minimise how much of that kind of testing you have, hence my advice to separate the actual "pixel-painting" stuff from the logic of your system. How far you take that separation depends on your desire to test, and your tech. Testing everything through the UI, in the way that you describe, isn't unit testing. It may give you useful information, but the tests will be more complex to create and maintain, and less likely to drive good design in your code.
What I have done several times is to create my own layer of abstraction for drawing on the screen and then tested against that. We once built the UI to an exchange this way: our UI was dynamic and would create on-screen components, but actually it called our app-level DOM, which acted as an adaptor to the real DOM. We could then test ALL the logic of our system. The app-level DOM was generic for our app, so it didn't need lots of testing once it was in place, and this meant that we could run these tests in a dev environment without a browser. This is a fairly extreme approach I suppose, but the value of unit testing seemed high enough for us that we thought it worth the extra effort to insulate our application code from the DOM so that it was testable.
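To make that concrete, here is a minimal sketch of the "app-level display adaptor" idea. It is in Java rather than JavaScript, and all the names are invented for illustration, but the shape is the same: application logic talks only to an interface, the real adaptor forwards to the actual DOM/UI toolkit, and the tests substitute a fake so they can run without a browser (assuming JUnit 5):

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

// The "app-level display": everything the application knows about the UI.
interface Display {
    void showPrice(String symbol, double price);
}

// Application logic - fully unit-testable because rendering is behind the interface.
class PricePanel {
    private final Display display;
    PricePanel(Display display) { this.display = display; }

    void onPriceUpdate(String symbol, double price) {
        display.showPrice(symbol, price);
    }
}

// In tests, a fake records what the application asked to be drawn.
class FakeDisplay implements Display {
    String lastSymbol;
    double lastPrice;
    public void showPrice(String symbol, double price) {
        lastSymbol = symbol;
        lastPrice = price;
    }
}

class PricePanelTest {
    @Test
    void rendersTheLatestPriceForASymbol() {
        FakeDisplay display = new FakeDisplay();
        new PricePanel(display).onPriceUpdate("ACME", 101.5);

        assertEquals("ACME", display.lastSymbol);
        assertEquals(101.5, display.lastPrice, 0.0001);
    }
}

The production implementation of Display is the only part that touches the real DOM, and it stays thin and generic.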
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
Actually you are describing the kind of systems that I have spent the last couple of decades building, and forming these views in the process. I don't think these principles change at all; in fact I think that the need for them is significantly enhanced when building very high performance systems.
You achieve High performance by maximising the work done, while minimising the effort needed to do that work. You achieve that by keeping the code as simple as possible in any given context and minimising comms overheads, that speaks to modularity & cohesion and excellent abstraction.
I was involved in building, and designing, one of the world's highest performance financial exchanges, and while doing so we came up with the idea of "Mechanical Sympathy" as one of our "guiding principles" - that is, developing our system in sympathy with how the underlying hardware works. So with good separation of concerns, each part, focused on one part of the problem, could be optimised and tailored to extract the most from the hardware.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
I suppose it depends on what you think of as maths. I love Physics, so I think that you can explain anything with Maths, but that is because we invent the maths to solve problems. It is rather like saying you can describe anything with words. Maths is a tool that we use to capture logical reasoning.
But I like Physics, so things are also messy and complex and fractal. The universe is a wave-function, but no one can calculate it. So when we say that we are using maths to catch a ball, sure there is a complex function of some kind, some people (Roger Penrose) think it is a quantum wave function, that can describe the neural networks in our brains. Even so, that is not us "using maths to catch a ball".
Our brains are trained through repetition. We establish pathways through our neural nets that are reinforced by repeated practice. This is a clear and obvious function of human brains. That is how we learn to catch. I, and her parents, are currently teaching my Granddaughter to catch. She doesn't know the maths!
2
-
2
-
I am afraid it is, as you say that you are lacking experience then. This is not theoretical, not ivory tower, this is how Google, Facebook, Tesla, Walmart, SpaceX and many many more companies build world-class software. It does take a shift of perspective, and a shift in approach, but making that shift is, according to the data, what liberates teams to build high quality software faster. Teams that work this way according to the data (See "Accelerate" book) spend 44% more time on new features than teams that don't.
Feature branches are negatively correlated with speed and quality. On your point about PRs, I have a video on not doing those too: https://youtu.be/ASOSEiJCyEM
2
-
2
-
2
-
It surprises me sometimes, when people respond to my comments on CI, that they always assume I haven't tried the alternatives. I have seen all of the examples that you describe, if you will forgive me, they are commonplace. I have worked in bigger developments than the one that you describe, and used FB, GitFlow, and similar approaches, and lots of Continuous Integration. The ones that use Continuous Integration work, and the dependency management between teams that you describe doesn't.
You are nearly right that there are some circumstances when GitFlow is an option (it's never the only option), but those circumstances aren't about what works best for SW development, they are when an org doesn't want to address the root causes and fix their problems. They prefer to put a sticking plaster over the problem rather than fix it.
If SpaceX can build space ships and teams I have worked on can build the world's fastest financial exchange, maintain it and keep it running in production for years, using the approach that I describe, what is it that makes it impossible for your org? It is not scale - Google have 25k devs working in a single repo. It is not speed - Amazon release a change every 11 seconds. It is not quality - SpaceX are pushing changes to space rockets that carry people minutes before launch. It is not regulation - Siemens Healthcare deliver working SW to medical devices that can kill people this way. So what is different, special, about your SW?
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
@karelhrkal8753 Having 2 branches doesn't solve this, it just depends on where you catch the mistake. If your testing on the dev branch missed the mistake you are in the same position, except you now have more work to do to get back to a releasable state. In CD "master", the release branch, is always the truth. If it is broken you can't release, so you work to fix it. Another way to think of this is Continuous Integration, if I commit a change that introduces a mistake, we optimise to spot that as soon as possible, usually within minutes. I recommend to teams that as soon as they detect any mistake, any test failure, they start a clock, and allow themselves 10 minutes to commit a fix. You either fix the bug or revert the change after 10 minutes.
The real answer to your concerns is to look at the data. The data (from State of DevOps reports) says that teams that work how I describe produce SW with fewer bugs, respond to bugs more quickly, and deliver that higher-quality code more often.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
1. I think that BDD is a broadly applicable idea. Fundamentally it is about using automated tests as specifications rather than as tests. To do this the prime directive of BDD, IMO, is to ensure that our specs only say "what" the system should do, without saying anything about "how" it does it. This works for nearly all automated tests. There are times when BDD isn't enough, but I think it is always applicable. It is not good as the only way to test very graphical systems, you can't really write a behavioural spec for "does it look nice". You can write a spec for "If I shoot the bad guy he explodes" that is a behaviour, but what it looks like when he explodes is not really. So I'd use BDD for pretty much everything, but add other forms of tests for some things.
2. The key is what I said, create specifications from the perspective of an external "user" of the system you are testing. If that "user" is some code, that is fine, it is the outside perspective that is the important point. BDD works fine for VERY technical things. There are only 2 problems: 1) BDD is not the same as its tools, so the ideas work everywhere, but the tools may not be the correct choice. I probably wouldn't use Gherkin for testing embedded devices. 2) You have to be even more disciplined as a team when dealing with technical things. The language of the problem domain, the language you should express your specs in, is now a technical one, so you must be laser-focused on keeping "what" separate from "how". It is now much easier to slip into bad habits and start writing specs from the perspective of you, the producers, rather than from a consumer of your system. Always abstract the interactions in your specs. Never write a spec that describes anything about "how" your system works.
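As a rough illustration of "what, not how", an executable specification might look something like this. The names and the ShoppingDsl interface are invented for the example, and I am assuming JUnit 5; in a real system the DSL would be backed by a protocol driver for whichever interface you are testing through:

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

// The DSL speaks the language of the problem domain, and hides every detail of "how".
interface ShoppingDsl {
    void givenAnItemInStock(String item);
    void placeOrderFor(String item);
    boolean confirmationWasSentFor(String item);
}

// The specification says only WHAT the system should do.
abstract class OrderingSpec {
    // Supplied by the test infrastructure, e.g. a web-UI driver or an API driver.
    protected abstract ShoppingDsl shopping();

    @Test
    void customerReceivesConfirmationWhenTheirOrderIsPlaced() {
        shopping().givenAnItemInStock("book");

        shopping().placeOrderFor("book");

        assertTrue(shopping().confirmationWasSentFor("book"));
    }
}

Nothing in the spec mentions HTTP, screens, queues or databases, so it stays true whichever way the system happens to be implemented.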
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
It depends 😉
Mostly, for the things that are essential, don't surface them, and implement them as part of whichever user-facing story makes most sense. You take responsibility for building good software, including adding features that must be there for the system to be safe and usable. Ask the user when you need help to decide how far to go. If the system is always behind a firewall, it may not need as much security.
Where you need to expose ideas to users to get such a steer, describe the ideas from their perspective. Don't say "DDoS protection" or "throttles", you could ask what they'd like to happen when the system was under attack, or when they were closed down for spamming people. You may need to explain that if the system was under attack they wouldn't be able to send messages, and if they were deemed to be spamming they'd be black-listed. These are real world things, not technical esoterica. I think there is ALWAYS a user need hidden beneath any technical story. Find that, and talk to them about that.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
Continuous Integration & Continuous Delivery, certainly, Continuous Deployment, not really. Funnily enough, I explain the difference between C. Delivery and C. Deploy in tonight's video, so keep a look out for that.
Continuous Integration is about evaluating everyone's changes together after every commit, nothing to do with release, so perfectly applicable to desktop systems. C. Delivery is "working so your SW is always in a releasable state", so also not related to actually deploying. So also applicable.
C. Deploy is, after every successful commit, we push changes to production, so clearly you can't push changes to a desktop app.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
I think it is important to look to solutions as well as just point at problems, but some of your assertions are not really true. You describe one way of organising things at scale; this is not the only way, and it is not the best way. It is not how armies organise for example - they stopped doing that at the start of the 19th century, because orgs that worked that way got beaten by orgs that didn't. The alternative is called "mission based planning": you say "take that hill", not "turn left at the cross roads, walk for 3 miles, shoot the 3 people on the left" and so on.
Sure, accountability matters, but it is not most effectively done through hierarchy and bureaucracy, at least not in disciplines that demand creative thinking. Small teams of goal-oriented people significantly outperform the alternatives. That is why nearly all big, successful SW companies are organised like this.
You said "In large organizations, organizational abstraction is absolutely necessary as is specialisation of work in order to deal with the complexity" this is only partially true. The consequences of organisational abstraction and specialisation are bureaucracy and coupling in the org. Orgs like this are inefficient and scale poorly, there's maths that demonstrates this, based on work on "non-linear dynamics" from the Santa Fe Institute. A classical, hierarchically organised firm only increases profitability by 86% when it doubles in size. That's a measure of these overheads of bureaucracy and coupling. A more distributed approach to organisation, many small, more independent, teams (like Amazon for example) increases productivity (and profitability) by 115% when it doubles in size.
2
-
2
-
This is certainly based on my analysis of the problem. The evidence I tend to rely most heavily on is from the State of DevOps reports. They measure team performance in terms of Stability & Throughput, that is the quality of the work we produce and the efficiency with which we create work of that quality.
I argue that in order to be able to maintain high-throughput, we must develop, and maintain the ability to make change. That means managing the complexity of our systems so that we can return to them and change them whatever the nature of the change. The reason that this improves our chances of success, and high scores in stability & throughput definitely say that we increase our chances of success, is because it allows us to iterate, incrementally grow our systems in terms of capability and try out our ideas and learn what works best based on the feedback that we collect. So optimising for this learning is the reason why it allows us to build better SW.
Abstract, maybe, but not too abstract, and not too unfounded. If you can give me an example where the ability to learn is a problem, and where code that is modular, cohesive, has a good separation of concerns, is appropriately coupled and has good lines of abstraction is worse than code that doesn't, I'd be interested to hear it.
Oh, and you can't use the excuse that you can't afford the time to do this good stuff, because the data says that if you work this way you go faster, not slower. So where am I wrong?
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
My opposition to feature branching is that, I am sorry, but you can't do what you are recommending if you practice it. You can't, by definition, have both CI and FB, unless the branches last for less than a day.
I'd add that the data (read the State of DevOps reports & Accelerate) say that teams that practice FB produce lower quality software more slowly.
None of this is personal preference or choice.
CI is defined as "everyone integrates and evaluates their changes together AT LEAST ONCE PER DAY". You can't do that with FB's because, also by definition, you don't get to see what other people are doing until they think their feature is finished.
CI isn't easy, and it would be nice if FBs worked better than they do, but they don't, and they don't for some fundamental reasons that are impossible to duck. This doesn't mean that projects that use FBs always fail, or that switching to CI is always easy. But if you want to be working in the most effective way that we know of so far - the way that we have data that predicts outcomes for teams that do it, and predicts more chance of commercial success for firms that employ teams that practice it - then that way is CI, not FB.
In my opinion FBs are a sticking-plaster fix that paints over the cracks in a dev process. Sure, it can help if your approach is a bit broken, but it is a local optimum, and once you are there it makes things worse not better. That last part is only my opinion, the rest is not; it is how FBs and CI are defined.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
That wasn't my point. It is not what you understand after you have learned the language, it is about how easy, or not, the mapping is onto how most people think. I agree, and say in the video, that these were my subjective impressions, but if you learned algebra at school (as most people did) then the more procedural style of OO code is a closer fit than more complex ideas like functions as arguments. Also, humans are natural classifiers, it is how our brains work, so there is something about the modelling thing in OO that seems to me, subjectively, simpler as a result. Neither is perfect, and both are more difficult for people to understand than this text, so you have to learn either one.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
A couple of things in here I'd like to answer. First, I think it is probably inevitable that you MUST accept lower standards, unless everyone on the team is better than you in all respects, or the team only proceeds at your pace - you become the bottleneck. Imagine the world's best programmer working with anyone else. No-one will be quite as good as them, so their job isn't to hold everyone to some standard of perfection, but rather to help everyone do better.
The second point is that I agree with you that there should be minimum standards of quality; my preference is, as far as I can, to automate the assertion of those minimum standards. So, for example, I will fail the build on commit if the code does not reach the team's coding standards. I will automate that check - lots of tools can help support this: Lint, FindBugs, Checkstyle etc. Then your job is to make everyone do better than this bare minimum, and help them reach as far as they can given their talent, commitment and the time that you can afford to spend with them.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
@shelleyscloud3651 Sure, my point was that all of these things are irrelevant if we can't solve the first problem, but you are right we need to be planning for the upside as well as the down, and no jobs is the upside!! 😳 These things may be irrelevant but they do add to the complexity, but even so, if we have extinction on one side and "working to agree something with China" (or anyone else that is human) on the other, whatever the difficulties and problems with that, I think that the latter is the better choice, and what we should be working towards. The Chinese will be gone too, so it is in their interests too!
Of course, if we can live with AI, we MUST sort out the immense economic impact. Forgive me, but I don't think that worrying about JOBs is enough, though that is certainly a short term concern of immense importance. I think you and I are agreeing though, I just think that this is a MUCH bigger, more radical change than we are used to thinking about. Ultimately there will be no Jobs, because the successful picture of AI is that the cost of production, the cost of intelligence, will fall to zero. So no jobs at all as we currently understand them now. We will need to establish a different way of supporting people to live their lives, if this works it will presumably kill capitalism, and communism and most other -isms too 😳
I do agree that this is a HUGE topic, but real AI raises lots of HUGE topics, which is why I started posting here in the first place, I like Alister and Rory's take on politics, but my impression is that informed intelligent people like them, who aren't watching what is happening in AI closely, don't understand the magnitude of the challenge. Rory (sorry Rory) said "they will soon be able to write good essays", they may soon be able to do anything that people do better than people - the former CEO of Google-X says within the next 2 years, and is advising people not to have children now. What we have built is machines that can learn anything, and they can learn, annoyingly, orders of magnitude faster than us!
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
@MTandi Well it's not theoretical, I led a team that worked exactly this way, building one of the world's highest performance financial exchanges. I know it sounds weird if you haven't worked this way before, but it works, it works in technically difficult, regulated industries, and it works better than the alternatives. None of this is just my opinion, this is fact based on real world experience of teams all around the world, doing interesting, difficult things. For example, this is how SpaceX develop software - I don't know about the pairing for them.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
I think that this depends on what you mean by "BDD", there are, at least, two different interpretations. I think that today, most people think of BDD as focused on, what I'd call, acceptance testing, high-level functional tests that act as executable specifications for the behaviour of a system. That's the kind of BDD that this video is talking about.
But that wasn't what was meant when BDD was invented. It started out attempting to describe a better way to teach TDD. I talk about that in this video: https://youtu.be/Bq_oz7nCNUA
I think that this second version is more generally applicable, and sadly, often overlooked. It certainly deeply informs my approach to TDD. It doesn't require the complexity, or tools, that the functional testing stuff does, which I guess is why the idea didn't gain as much ground - people love tools! Ultimately it's more important though.
It says focus your tests, even your unit tests, on only the desirable behaviour of your code, not its implementation. Even at the level of a simple unit test, write it as a tiny specification of what you want the code to do, not how it does it. So in this form, BDD is nearly the whole testing triangle.
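At the unit level, that just means tests that read as tiny specifications. A throw-away example of my own (nothing to do with the video, and assuming JUnit 5):

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

class ShoppingBasket {
    private double total = 0.0;
    void add(double price) { total += price; }
    double total() { return total; }
}

class ShoppingBasketSpec {
    // Named for the behaviour we want, not for the method or the implementation.
    @Test
    void shouldTotalThePricesOfTheItemsAddedToIt() {
        ShoppingBasket basket = new ShoppingBasket();

        basket.add(2.50);
        basket.add(1.25);

        assertEquals(3.75, basket.total(), 0.0001);
    }
}

If we later change how the total is stored or calculated, this specification doesn't change, because it never said anything about "how".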
2
-
The big mistake that I see all the time, is not recognising that what they have is a modular monolith, which is fine, and so, for example, still keeping each "service" in its own repo and having a separate "deployment pipeline" and then having an integration stage sometime later. If you just admitted it was a monolith, you could use the version control system for what it is good at, controlling the versions that work together. Modular monoliths are in many ways MUCH simpler than microservices, ultimately they are organisationally less-scalable, but otherwise perfectly fine. I have seen 10 person teams with 30 microservices, each in its own repo, and all facing integration-hell. We know how to solve these problems!!!
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
@anj000 I agree that the problem is that people aren't taught this way, so it's often counter-cultural, but it works better, so it shouldn't be, and I am pragmatic, I choose things that work better. I used to work for a trading company; their head office was in Chicago, but I was based in London, and I was the only one on that team in London. The team I most closely "belonged to" was in Chicago. When the timezones were in our favour, we overlapped for 3 hours and we paired for those 3 hours. It was still better than not pairing at all.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
What I describe in this video is part of a bigger approach. I call it Continuous Delivery, but in that approach we are disciplined, but also flexible. The goal is to optimise for fast, clear, definitive feedback. For lots of languages, you can get answers to some of those quality gates in your IDE, so do that. For others, add the checks to the commit stage of your deployment pipeline. Whatever the gate, you optimise to get the answer as early as you can. The Deployment pipeline should be definitive for release, and should be able to give you an answer multiple times per day, so you optimise whatever you need to to make that possible. There are examples of orgs and teams doing this, sometimes at massive scale with very complex stuff, so it works, you just have to make it work.
2
-
2
-
2
-
2
-
2
-
So would you feel comfortable about signing up for being a "Craftswoman"? As far as I see it, as a man, this is nothing to do with dissing men, but rather, that by historical accident, the language is selective and makes half the population feel a bit uneasy. That seems like a fairly simple problem to solve, pick language that makes them feel less uneasy!
I'd also pick the other half of the word, which I see as an important part of Emily's point "Craft" is the wrong word too! The modern usage of "Craft" has no real focus on quality, the pictures my 4 year old granddaughter draws are craft, and I love them, but they aren't objectively high quality. Historically the step beyond "Craft" was and is "Engineering", in part, hence the name of our channel!
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
The simple answer, broadly in a Continuous Delivery context, rather than only in terms of TDD is "Yes!", have the test.
Your first example I'd probably include as some kind of commit-time analysis test; it isn't TDD, but if memory footprint is a real constraint, then I want to learn that before I deploy the code and it blows up (there is a rough sketch of what I mean at the end of this comment).
The second I'd also treat as an analysis test. When we built a financial exchange, we wrote tests that scanned our UI for input fields and then ran a SQL injection attack on every field - we captured the output from the UI to look for traces of SQL.
For other security features, they are normal features and need regular TDD style tests.
TDD is about the design of our code, more than about testing. So you know what to write tests for already, you test every new behaviour that you add to the code. Other types of test are extremely helpful, but they don't all drive the design and the development, so don't count as TDD.
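For the memory-footprint case, here is a rough sketch of what such a commit-time analysis test could look like. The budget, the class names and the stand-in working set are all invented for illustration, and measuring heap use this way is approximate, so the budget needs plenty of head-room, but it will still catch a big regression (assuming JUnit 5):

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;
import java.util.ArrayList;
import java.util.List;

class MemoryFootprintAnalysisTest {

    private static final long BUDGET_BYTES = 50L * 1024 * 1024;  // hypothetical 50 MB budget
    private static final int TYPICAL_WORKING_SET = 100_000;      // hypothetical sizing

    @Test
    void typicalWorkingSetStaysWithinTheAgreedBudget() {
        Runtime runtime = Runtime.getRuntime();
        System.gc();
        long before = runtime.totalMemory() - runtime.freeMemory();

        // Build something shaped like the real working set; here just a stand-in.
        List<byte[]> workingSet = new ArrayList<>();
        for (int i = 0; i < TYPICAL_WORKING_SET; i++) {
            workingSet.add(new byte[100]);
        }

        long used = (runtime.totalMemory() - runtime.freeMemory()) - before;
        assertTrue(used < BUDGET_BYTES,
                "Working set used " + used + " bytes, budget is " + BUDGET_BYTES);
        assertEquals(TYPICAL_WORKING_SET, workingSet.size()); // keep workingSet reachable
    }
}

It isn't driving the design, which is why I class it as an analysis test rather than TDD, but it still fails the pipeline before the problem reaches production.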
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
I guess you may not have made it to the end of my video, because I reach the same conclusion as you, kanban is agile, agile (when done well) is lean.
Yes, I am a fan of Veritasium and watched that episode. Inevitably part of the game of YouTube is to try and attract views. A significant part of that is to play the thumbnail game a bit. I like to think that I share Veritasium's philosophy of trying to use the tools of YouTube to our advantage, without compromising the integrity of the channel or misleading people. So I do use "clickbait-y" titles sometimes, but I hope never clickbait-y content.
I don't think the title for this one is misleading, though maybe we should have added a question mark? This is a topic that I have been asked about often, which is what prompted me to do this episode, and while I see no tension between the two, and say that in this video, I am trying to represent the view that they are seen as different things and then explain why they are not. Sorry you didn't like it.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
Well, having worked this way myself in teams ranging from the tiny (4 devs) to the large (a few thousand), in all sorts of industries and on lots of different kinds of software I don't agree. It doesn't take a lot of money, a lot of effort and it allows you to build better software more quickly.
So I don't understand where you think "Change Failure Rate", "Mean Time to Recover from a failure", which are the DORA metrics that measure Stability (Quality), and "Lead time" and "Deployment Frequency", which are the Throughput metrics, are "inadequate to measure teams performance"?
Sure there are other things that we are interested in, but they vary so much that they don't create a valid baseline for comparison. It's nice to know if our SW makes money, is resilient, and so on, but FIRST we need to deliver stuff that works, and that is what the DORA metrics measure. How on earth is measuring "delivery of stuff that works" inadequate?
2
-
2
-
2
-
My point is that I don't think that they should be decoupled at all, but they are often treated as separate pieces of work. I think that we should try to always find the real user need behind any change and work to achieve that. This doesn't mean that the technicalities are unimportant, it means that they are more clearly important, even to non-technical people. We technologists are the experts in this part of the problem, so it is overly naive, though common, for dev teams to defer all planning priorities to non-technical people. It is important that the technical work is prioritised appropriately, and that takes collaboration and negotiation between people that represent different perspectives on the system.
I think that this is best done by focusing on what matters to users. This is not about front end versus back end, or user stories versus technical features. All of these things matter to users. So we organise and plan our work to deliver what our users want, and we add things that our expertise tells us they want even if they don't ask for them directly - like security, resilience, maintainability and so on. All of these things are clearly, and importantly, in the users' interest, but they may not have thought about them in those terms. It is part of our job as technologists to advise them in ways that prevent them from making dumb, naive, overly simplistic prioritisations.
It is my view that surfacing "technical stories" for example doesn't help with this. A much better way is to always find, and express, the user value inherent in the technical things that we must do, or just take on the responsibility to do high quality work (from a technical perspective) and don't surface it or ask permission - technical improvements and enhancements are rolled-in to normal, everyday feature development - we don't ask for permission to do a good job!
2
-
Doesn't work! If everyone else is also working on a branch, you are not seeing their changes until they think they are finished, and they aren't seeing yours. These are exactly the changes that you are interested in. I'd guess that most teams practicing FB are also practicing some form of Scrum or similar, so if everyone starts a new story at the start of a Sprint or Iteration, and the average duration of a story is, what, 1/2 a Sprint or Iteration (if you are doing well), then how long before you have anything interesting to merge from Master? On average, those changes will only be visible at the point that you are finished with your own work anyway, so pulling regularly, while better than nothing, achieves almost nothing.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
Sorry I don't agree, worse, I am afraid that you are mistaken here.
Even if my OO or procedural language doesn't support ideas like functions as arguments, and most these days do, you can implement EXACTLY the semantics of passing a function as an argument with interfaces (or abstract classes): pass in an object whose code matches a particular pattern as an argument, and you have polymorphic function arguments (there is a small sketch of this below). Clearly being able to pass a function as an argument directly is semantically simpler, but the logic here is EXACTLY the same. I can also easily decide, in my OO language, to write code that doesn't change any shared state - zero side effects.
There is NOTHING that I can do in OO that you can't do in a functional language, and vice versa, if there was, one or the other would not be a general purpose language. That is at the very heart of computer science and what "Turing completeness" is about https://en.wikipedia.org/wiki/Turing_completeness
Any Turing complete system can simulate any other Turing machine. All modern general purpose languages are Turing complete.
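A tiny, self-contained Java illustration of that point (the names are mine, just for the example):

import java.util.ArrayList;
import java.util.List;

// Plays the role of a "function type" in an OO language.
interface Transformer {
    int apply(int value);
}

class Mapper {
    // Behaves like a higher-order function: the Transformer is behaviour passed as an
    // argument, and nothing here mutates any shared state.
    static List<Integer> map(List<Integer> input, Transformer f) {
        List<Integer> result = new ArrayList<>();
        for (int value : input) {
            result.add(f.apply(value));
        }
        return result;
    }
}

class Demo {
    public static void main(String[] args) {
        // An anonymous class works in any OO language with interfaces or abstract classes...
        Transformer doubler = new Transformer() {
            public int apply(int value) { return value * 2; }
        };
        // ...and in modern Java the same interface can be satisfied by a lambda.
        Transformer squarer = value -> value * value;

        System.out.println(Mapper.map(List.of(1, 2, 3), doubler));  // prints [2, 4, 6]
        System.out.println(Mapper.map(List.of(1, 2, 3), squarer));  // prints [1, 4, 9]
    }
}

Semantically this is exactly "passing a function as an argument", just with more ceremony.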
2
-
2
-
Yes, and sadly that is an irrational desire. Pragmatically, we all have to live in that world, but it doesn't apply in other areas of endeavour, and shouldn't apply here. You don't get Venture capitalists saying at the early stages, "tell us exactly how profitable you will be and when", they expect this to be unpredictable, they say things more like "when do you expect to be profitable, and what are your plans to get there". This is a very different kind of question. Some things are literally unpredictable so we should treat them differently, and not expect them to be *predictable*. SW dev is a process of learning and exploration, always, and so is inherently unpredictable. The orgs that treat it that way, generally do a lot better at it.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
1) My preference for code-review is pair programming, it is a better review than a regular code review, and a lot more.
2) First you need to know that it is broken, so good tests! Next, part of CI is to encourage small, frequent changes so each change will be smaller and simpler than you are used to, so easier to pull if it is a problem. CI works fast, so you will detect the problem more quickly than before, so your small, bad, change will be detected quickly, so there is less time for people to pile other changes on top of it. So yours may not always be the last change, but it won't usually be very far down the stack. So still easy to revert. In reality this is not usually a problem for these reasons.
3) It doesn't "naturally call for a code freeze" but it does add a bit more complexity. I would automate the "certification" process. I worked on creating a financial exchange, we did what we called "Continuous Compliance" our deployment pipeline automatically did everything that was needed to prepare for release, generated release notes, coordinated sign-offs, tracked the changes and so on. With a bit of ingenuity you can automate that stuff too. That may come later though. To start with, simply pick the newest release candidate that has passed all your tests, when you are ready to release, then do the slow, manual approval/certification paperwork in parallel with new development.
If by "certification" you mean some form of manual approval testing, then you do want to work to eliminate that, it is too slow, too low quality and too expensive - watch some of my videos on BDD and Acceptance testing for some ideas on how to do that.
2
-
I don't disagree with your end-solution, that is where I end up too, but the intentional process of TDD is to make progress in small steps, allowing me to verify my progress often. If I chose to return the Fraction in the first step, I now have more work to do between tests, it is a choice, but certainly when teaching TDD, and mostly while practicing it, I prefer to make change more incrementally.
In this case, if I had chosen to return a fraction I would also need to implement a 'toString' method, and call it in the test, so unnecessary complexity for the first step IMO.
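To make the two choices concrete, here is a hypothetical reconstruction (this is not the code from the video, and since this is TDD the production classes don't exist yet when the tests are written; JUnit 5 assumed):

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

class FirstStepTest {
    // Option 1 - the smaller step: the first test only pins down a string result,
    // so there is no Fraction type to design yet.
    @Test
    void addsTwoWholeNumbers() {
        assertEquals("5", Adder.add("2", "3"));
    }

    // Option 2 - the bigger step: returning a Fraction means also designing and
    // implementing toString (or equality) before this very first test can pass.
    @Test
    void addsTwoWholeNumbersReturningAFraction() {
        assertEquals("5/1", Adder.addAsFraction("2", "3").toString());
    }
}

Both are legitimate; the first just leaves less work between one passing test and the next.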
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
Yes, the self-similarity of SW at different resolutions of detail is interesting. When we were doing the initial thinking about building the LMAX Disruptor, some tech that was used to implement the infrastructure for our reactive system (we open sourced it, so you can see it here: https://lmax-exchange.github.io/disruptor/) we were at the stage of discussing ideas around a white board. This was at the level of lock-free programming and optimising on-processor cache usage, when one of our colleagues, a very good DBA, came into the room. We said, "What do you think about this, Andy?" and explained what we were thinking of, and he said "Oh yeah, we do the same thing to manage caching in relational DBs" 🤣🤣
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
“Channels” may be a misnomer, but I can’t think of a better name. The name made sense when we first came up with the idea. Then they represented “different channels of communication” with a single system: Web, public API, institutional Std API etc. Later, one of my clients used the same idea to represent an old version of their system and a new one, running the same tests against each system.
Fundamentally, the “channels” are defining which Protocol Driver to choose.
The example in the code that you point to is a sketch, not a fully working version of the channel idea. I don’t have a publishable version of the custom test-runner that you need to make the automated switching of the protocol drivers work, so I didn’t bother finishing this code. It was written originally as an example for someone, so I didn’t need to take it any further, sorry for the confusion.
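To make the idea a little more concrete, here is a minimal Java sketch of how I think of it; every name here (Channel, ProtocolDriver and the two driver classes) is hypothetical, it is not the code in that repo:

    // The channel only decides which Protocol Driver the tests run against; the test cases never change.
    enum Channel { WEB, PUBLIC_API }

    interface ProtocolDriver {
        void placeOrder(String product, int quantity);
        boolean orderWasAccepted();
    }

    class WebUiDriver implements ProtocolDriver {
        public void placeOrder(String product, int quantity) { /* drive the system through its web UI */ }
        public boolean orderWasAccepted() { return true; } // stub for the sketch
    }

    class PublicApiDriver implements ProtocolDriver {
        public void placeOrder(String product, int quantity) { /* drive the same behaviour through the public API */ }
        public boolean orderWasAccepted() { return true; } // stub for the sketch
    }

    class ProtocolDrivers {
        // A custom test-runner would read the channel (e.g. from a system property)
        // and hand every test case the matching driver.
        static ProtocolDriver forChannel(Channel channel) {
            return channel == Channel.WEB ? new WebUiDriver() : new PublicApiDriver();
        }
    }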
2
-
2
-
@zerettino I think that the difference is between a focus on design, which is TDD, and a focus on testing, which is PBT. I think that the former is more important than the latter. If your code cares about the value of a variable, like the difference between "John Doe" and any other name, I'd suggest that it is poorly designed, unless the difference matters in a way that is designed-in, and so tested. If I want to reject user names longer than X, or with character set Y, then fine, I write a test that specifies how my system behaves given those inputs. But if I have done this, randomly throwing different values at my code tells me nothing.
It may show that I missed something, after the fact, showing that my code is bad, but this is a VERY different thing to TDD and no replacement for it. This is ALL about testing and says nothing about design.
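A minimal JUnit sketch of the distinction I am drawing, with a hypothetical UserName class: the designed-in rule gets an explicit test, and once it has one, random values add nothing.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertThrows;
    import org.junit.jupiter.api.Test;

    class UserNameTest {
        // The length limit is part of the design, so it is specified explicitly.
        @Test
        void rejectsNamesLongerThanTwentyCharacters() {
            assertThrows(IllegalArgumentException.class,
                () -> new UserName("a-name-that-is-far-too-long-to-be-valid"));
        }

        // Within the limit, every name is treated identically, so "John Doe" vs any other
        // valid name is not an interesting distinction to probe with random inputs.
        @Test
        void keepsAnyValidName() {
            assertEquals("John Doe", new UserName("John Doe").value());
        }
    }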
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
I think that this is an area where you can 'have your cake and eat it'. I know that lots of orgs find it hard to get this kind of testing in place and working, but I think that there are some techniques that make it, if not actually easy, easy enough to make it worth the effort. Establishing deployment pipelines that are scoped by "releasable unit of software", and testing that, deployed in a "production-like" acceptance test environment provisioned automatically using infrastructure as code, is a good start. Using "Specification by example/BDD/Acceptance Testing" techniques to create tests that don't know anything about how the software works, but only what it needs to do, is another big step. Then building all this into your test infrastructure, so that you can write test cases in the same amount of time it takes to do them manually (or very, very nearly), is another. All of these things work, at some significant scale, and make the kind of fail-fast acceptance testing that we are talking about a practical, extremely valuable, reality.
The real stumbling block is not the tech, it is what you say in your first sentence, "malformed organizations which are unavoidably doomed to produce malformed systems". The organisational coupling kills the ability to do this in most places I have seen. Fix that, and the rest is easy. You can fix that, but that is the difficult problem.
2
-
2
-
It's certainly possible that microservices aren't a good fit, they aren't right for everything. But I think that the problem you mention is less about microservices, and more about normalised data. I'd say that normalised data is a useful tool for small-scale systems, and a poor choice for larger-scale and more distributed systems. I think you have to let that go as the scale and complexity go up; microservices is one way to do that, there are others. Normalised data is nice, if it fits your problem, because it hides lots of nasty complexity, but that complexity is always there, just beneath the surface, and it leaks out if you need your systems to be fast or scalable. Then you need to find other solutions, and face the more fundamental realities of "eventual consistency" - Software isn't simple!
2
-
2
-
Yes, there is a nice model that I like in a book called "The Art of Action" where the author, Steven Bungay, describes a triangle of "Outcome", "Plan" and "Action" and then the "Gaps" between each of these steps, the Gap between "Plan" and "Action" is between "What you would like people to do, and what they really do" 🙃
I have been close to the origin of some big ideas that are common in our industry now, and ALL of them are misunderstood, misrepresented and mostly done wrong. Nearly every team that "does agile" doesn't follow a single principle from the Agile Manifesto! https://agilemanifesto.org/
It is how the world works though, and finding ways to cope with "people" is really the whole game!
In the specific case that you mention, what I would do (have done) in that situation is to work to insulate the team that I work in as much as possible from the impact of other teams that don't work how we want to. I touch on this in this video a bit, but there is a lot more to it! https://youtu.be/cpVLzcjCB-s
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
Thanks.
One of the themes in the negative comments is that I am criticising the developers. I am not; I was, as you say, talking about how the software was created, the "engineering of the solution". I don't equate "engineering" only with code, it is about the processes and techniques that allow you to create the code. To my mind that is what "engineering" means in most other contexts.
I make no critique of the code, 'cos I haven't seen it, but it seems clear that the approach to producing it was wrong. There are no guarantees that any approach will always work, but other approaches, including the ones that I recommend, fail differently to this.
I made the video because it looks to me, exactly, like the kind of classic software failure that we have seen for decades. The sort of software failure that Fred Brooks wrote about in 1970!
2
-
2
-
You break them up, but you break them up to be testable and de-coupled. The anti-pattern is to force them to only change in lock-step across all the layers. So it takes a combination of good design and a focus on testability and deployability to get all this right, plus the use of techniques for decoupling.
The trap that I see teams building tech like yours fall into is to forget that the content, the data that is communicated between layers, through those ports that you mention, is the interface too. You need to translate at these points in order to insulate them from change, and decouple them developmentally. There is more to this than I can put into a YT comment, but I think these are the principles.
Just for reference, SpaceX and Tesla are TDD and CD practitioners, their code stacks are pretty deep too.
I am not saying Full Stack means that everyone MUST know everything, but they must have a model of nearly everything to understand where and how their stuff fits, and the implications of changing things at whatever layer you are working at.
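Here is a rough sketch of the 'translate at the ports' point in Java; the names (PaymentDto, Payment, PaymentPort) are invented for illustration, not from any particular framework:

    // The DTO mirrors whatever the other layer sends - it is allowed to change with that layer.
    class PaymentDto {
        String amountInPence;
        String currencyCode;
    }

    // The domain type is owned by this layer and expressed in its own terms.
    class Payment {
        final long amountInPence;
        final String currency;
        Payment(long amountInPence, String currency) {
            this.amountInPence = amountInPence;
            this.currency = currency;
        }
    }

    // The translation lives at the port, so a change to the wire format is absorbed here
    // instead of rippling through everything that depends on Payment.
    class PaymentPort {
        Payment toDomain(PaymentDto dto) {
            return new Payment(Long.parseLong(dto.amountInPence), dto.currencyCode);
        }
    }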
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
Certainly it is subjective, but I think I mean something a little more specific when I talk about "readable code". I don't mean can I read it and understand it if I think hard. I can do that with assembler, but assembler isn't really usefully "readable" in the sense that I mean. Rather, can I infer what it is doing without studying hard? Could someone infer what it is doing, even if they don't know much, or anything, about code? Are the words that we use for variables and functions relevant to the problem that we are solving, does my code make little, probably slightly weird, but understandable sentences that convey its intent? I strive for that, I don't always make it, but I nearly always try to.
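A tiny, made-up Java example of the kind of "little sentences" I mean; both methods have the same behaviour, the only difference is how much study it takes to infer the intent:

    class Order {
        final int ageInDays;
        final boolean fulfilled;
        Order(int ageInDays, boolean fulfilled) { this.ageInDays = ageInDays; this.fulfilled = fulfilled; }
    }

    class OrderChecks {
        static final int MAX_DAYS_BEFORE_OVERDUE = 30;

        // You can work this out, but you have to study it:
        static boolean chk(Order o) { return o.ageInDays > 30 && !o.fulfilled; }

        // This one reads as a little, slightly weird, sentence that conveys its intent:
        static boolean orderIsOverdue(Order order) {
            return order.ageInDays > MAX_DAYS_BEFORE_OVERDUE && !order.fulfilled;
        }
    }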
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
@awmy3109 I used to believe that about pre-optimisation, but I don't any more. It is certainly not true if you are working in high-performance systems. The trouble, to my mind, in advising against early optimisation is that the message that this turns into is "don't bother about performance at all" and the outcome of that is that most of the systems that I see are many orders of magnitude slower and less efficient than they could be, with even relatively simple steps.
Simple case, many developers don't think about the collections that they choose to store data in. I have seen more than 10x improvements in performance simply by replacing an O(n) collection with an O(1) collection in some existing code.
You don't need to be performance obsessed to think about that kind of thing, you just need to model the problem in front of you.
Similarly, the example that Chris gives above, treating a remote call as though it costs the same as a local call, when it is always thousands or tens of thousands of times slower, will cripple a design, and can be really quite difficult to fix later.
I think that having a feel, 'Mechanical Sympathy', a rough view of the costs of our design choices as we make them helps us to do a much better job. After that, it is, as you say, a job for another day to do the more detailed refinement and optimisation, but there are bad choices that are difficult to undo if you completely ignore the costs of your decisions early on.
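A minimal, made-up Java illustration of the collections point; a single timed lookup like this is crude (a real measurement would use a proper benchmark harness such as JMH), but the shape of the difference is the thing:

    import java.util.ArrayList;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    class MembershipCheck {
        public static void main(String[] args) {
            List<Integer> list = new ArrayList<>();
            Set<Integer> set = new HashSet<>();
            for (int i = 0; i < 1_000_000; i++) { list.add(i); set.add(i); }

            long start = System.nanoTime();
            boolean inList = list.contains(999_999);   // O(n): walks the whole list
            long listNanos = System.nanoTime() - start;

            start = System.nanoTime();
            boolean inSet = set.contains(999_999);     // O(1): a hash lookup
            long setNanos = System.nanoTime() - start;

            System.out.println("ArrayList.contains: " + listNanos + " ns, HashSet.contains: "
                + setNanos + " ns (" + inList + ", " + inSet + ")");
        }
    }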
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
@tharaxis1474 Nope, not theoretical. I am and always have been a software developer, mostly working in larger more complex systems. My preference for pair programming is based on doing it for over 20 years in different orgs and teams and, these days, helping other teams to adopt it. I can understand that you don't like it, as I say in this video, it isn't for everyone, but arguing that it is "theoretical" and that I don't understand the problem is, I am afraid, just wrong.
Here is a description of a project that was run with pair programming from day one: https://martinfowler.com/articles/lmax.html
I know this was true, because I was part of that team.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
It should be ok to learn new things that require you to change either code or test. I think that a lot of the value of this approach is that it gives us more freedom to learn new things and adapt to them, or to make mistakes and recover from them.
I think there are two scenarios in your question: you think of a behaviour that exists in your code but wasn't tested, and you think of a new behaviour that you'd like to add. You should be free to add new tests, and/or new code, for each case. I'd write the test first in both cases. In the first case your test will pass, because the code is already there and working. Taking a TDD approach, I would, temporarily, break the code in a way that will make my test fail, to check that my test is really testing what I think it is. Once I know that, I fix my code again and check that everything still passes. In the second case, there is no need: write the test for the new behaviour, see the test fail, add the new behaviour to the code, and check that ALL the tests still pass.
If your change breaks old behaviours (tests) then you need to think carefully what that means. If the change changes what the code is really doing, then the old tests may have been made invalid by the change, so you will need to change/replace them so that they match your new understanding. If the tests are still correct, but you broke them with your change, your change is wrong!
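A minimal sketch, with hypothetical names, of what I mean by temporarily breaking the code to prove that an after-the-fact test really tests something:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    class DiscountCalculator {
        // Existing, already-working behaviour that previously had no test.
        int discountFor(int orderTotal) {
            return orderTotal >= 100 ? 10 : 0;
            // To validate the new test, temporarily break this line (e.g. "return 0;"),
            // watch the test fail, then restore it and watch the test pass again.
        }
    }

    class DiscountCalculatorTest {
        @Test
        void givesTenPercentDiscountForLargeOrders() {
            assertEquals(10, new DiscountCalculator().discountFor(120));
        }
    }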
2
-
2
-
2
-
2
-
"young white dudes" aren't a problem, unless that is all there is. The world needs SW that works for lots of people for lots of reasons, and a narrow perspective, whatever that perspective may be, is not as good as a broader perspective. Old black women, no doubt see things differently to me as an old white dude, I see things differently to you as, presumably, a young white dude. None of us is wrong, we just have different perspectives, so SW teams that have a broader perspective will probably do a better job.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
Yes, I understand that, my point is that it is a really bad response to the fear. It is a bit like being afraid of anything else, if you do less and less of the things that you are afraid of, your fear will only grow, and eventually you find yourself living in a cellar, eating cold baked beans from the can with a silver-foil hat on your head.
Hiding from the fear is a poor response, instead you need to deal with it in some manner, with some care, the danger may be real, but hiding only makes things worse not better. I have seen many companies that can't release software at all, despite having lots of people employed to do so. This is a result of retreating from the fear.
The reality is, that if we want to create software in teams, then we must allow people to make changes. The way to make them careful and cautious in making the changes is to make the consequences clear to them. You don't do that, if you abdicate responsibility for the consequences to some small group of over-worked gatekeepers.
The data is very clear: moving more slowly like this results in lower-quality, not higher-quality, software. (See the "Accelerate" book by Nicole Forsgren et al).
2
-
2
-
I agree with a lot of what you say, but I don't think that OSS is the route to fixing capitalism. I agree, as I say in the video, that companies take advantage of OSS developers, but OSS is, in most definitions, agreed to be free. I think that what you are describing, wishing for, is something else that isn't OS.
Here's the OS definition from Wikipedia https://en.wikipedia.org/wiki/The_Open_Source_Definition.
It starts with "Free redistribution: The license shall not restrict any party from selling or giving away the software as a component of an aggregate software distribution containing programs from several different sources. The license shall not require a royalty or other fee for such sale."
How is that compatible with charging for the SW, particularly only later when it becomes successful?
I agree that SW dev takes time and effort. I am a professional SW dev, and expect to be paid for my work, but it doesn't make sense to me to expect to be paid if I give it away for free. I either give it away (my choice) and don't get paid, or I don't and expect payment.
If I give it away, then sure, I can try and make money in other ways, but for some sorts of SW that is probably not practical, however popular they are. Which is what I meant when I said that there is some SW that I won't bother paying for; I'd rather re-write my own version.
2
-
2
-
2
-
2
-
2
-
Sure, the trouble with science is that humans do it. There is politics and rivalries and all of that, the difference though, is that however strong the lobby in science, it will be overturned eventually, because it won't fit the facts. I'd argue that the idea prevalent in Quantum Physics of "shut up and calculate" is an anti-science idea, and it was very strongly pushed. People were actively discouraged from studying what QM really means. But now that is changing, and lots of people are actively interested in trying to find that out. Unlike other areas, even if it sometimes takes a long time, Science is still about the underlying truth, and eventually, even if it means waiting for the dogmatists to die, the truth will find a way out.
2
-
2
-
2
-
2
-
2
-
Wow, that does sound weird!
I discovered, after I released it, that I had made a mistake in the FB video: the (undated) information from FB that I based my analysis of the failure on, it turns out, was for a different incident earlier in the year. I think that my FB video is good, and there is lots of good stuff to learn from the incident that I studied, and I want to use it, but it wasn't describing the incident that everyone was focused on, and I didn't want to mislead anyone, so I pulled it and published this one, which was my original plan, in its place.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
I think it depends on what you are trying to do with your API. If your API is really internal, aimed at providing some lower-level service for a bigger system, I think that you have to be very careful about design. It is way too easy to fall into the trap of fixing your design assumptions into API boundaries. Focusing on the high-level value can be useful, even if the "user" is a step or two away from the API.
If you are publishing an external API, I think that now you more clearly have real users, users of the API, even if the technical interactions are via code. What do those users want? Focus on that, and you will not only have better stories, but also better APIs and better tests.
Overall, what I mean by the bigger value, over and above user stories, is the idea of separating what the code needs to do from how it does it. That is true in every case, and is a property of good design at every level. Your API should abstract the problem so that it is easy to use and hides detail, that's always true.
2
-
That is simply not the case. CI works and is in use for some of the largest, most complex systems around. This is how Google, Amazon, Microsoft (in parts), Tesla, SpaceX and many, many others work. The difference is that you don't accept commits that break things, and you integrate testing into the development process, so that you can tell.
Pair programming adds other kinds of value, TDD is the most effective strategy, along with CI, for making sure that we don't break things.
I have been doing this personally for a couple of decades in, sometimes large, teams building complex software, so No, it is simply wrong to say "If you just check everything in to the trunk constantly, you will never have working code in the trunk".
It is kind of the whole point of CI to do better than that!
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
There is a lot to that question. Fundamentally, the principles don't change whatever the deployment target. We want to version control the state of our systems. For on-prem stuff, unless it makes sense to build a private cloud (and it often doesn't), clearly you are going to be working with more fixed hardware/server config, but still I want to version as much as possible on top of that. OS/Web Servers/Database/Messaging/Config of all of these before we even get to the system you are building. By default, automate the config of this stuff as far as you possibly can. If you are writing apps that run on other people's systems, you can't version the OS, but take it as far as you can so that you can control the variables.
The tech you choose is so varied that it probably makes little sense to specify it, but Docker, Shell Scripts, Chef, Puppet, Ansible are all workable options. In the past I have worked in teams that did Infra as code driven by home grown deployment & config tools coordinated with ANT scripts.
Testing this is useful, applying CD techniques to Deployment Pipelines works nicely. Create a very simple test-pipeline, and a simple test environment that you can configure and write some simple acceptance tests to confirm that changes to the pipeline work.
There is lots to all of this, but most of my experience of Infra-as-code was in these types of environments.
2
-
2
-
2
-
2
-
2
-
2
-
Interesting, that seems to me like a statement of fact. For Approval tests we run the code and save the result that we got back from the code and then in subsequent runs we compare the results with the original result. That is we refer to the original result which was generated by the code itself. How is this NOT self referential?
Sure, you can mediate the self-reference by looking to see if it seems ok to you, but that doesn't stop it being self-referential, that just changes it to be self-referential-plus-sanity-check.
This is not inherently bad, but it does place some limits on its value. The big problem with self-referential tests like these is that there is nothing, other than the sanity check, that says that the results that were generated make sense, or represent what you want, and certainly for many types of problem, the result that you get back from these bigger, chunkier bits of software is complex enough to make it VERY easy for the sanity check to be poor, cursory or to miss things. Humans are particularly bad at this kind of review.
This is VERY different to creating a specification for what you want, before you create it, and then verifying that what you want is fulfilled by what you created. Particularly, if when you create the spec, you don't do it in a detailed "what is the precise output" form, which is what an Approval test validates.
I am a big fan of Approval testing, but I don't think that it is a better replacement for BDD-style Acceptance Testing. Approval tests may be easier to write, but I still think that comes at the cost of being a weaker assertion, more coupled to the solution.
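A minimal, made-up Java sketch of the contrast; ReportGenerator, its output and the approved/report.txt file are all hypothetical:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertTrue;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import org.junit.jupiter.api.Test;

    class ReportGenerator {
        String generate() { return "Outstanding balance: 42.00"; } // stand-in for the real system
    }

    class ReportTests {
        // Approval-style: the expected value is whatever the code itself produced earlier,
        // so it is self-referential unless a human sanity-checks the approved file carefully.
        @Test
        void reportMatchesApprovedOutput() throws Exception {
            String approved = Files.readString(Path.of("approved/report.txt"));
            assertEquals(approved, new ReportGenerator().generate());
        }

        // Specification-style: the assertion states what we want, written before the code,
        // and says as little as possible about the exact output format.
        @Test
        void reportShowsTheOutstandingBalance() {
            assertTrue(new ReportGenerator().generate().contains("Outstanding balance: 42.00"));
        }
    }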
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
No, there will still be bugs. There is data to say that there will be about 60% fewer bugs in production if you adopt a good testing approach, and there is no data that I am aware of that indicates that PRs reduce bug count. In fact, when compared to more flow-based practices like CI, the data is very clear, and quoted in the video: based on the DORA metrics of Stability & Throughput, teams that practice CI score a lot better on both stability (a measure of quality) and throughput (a measure of efficiency).
If you practice TDD, which is my preference, you are less likely to have bugs in tests, because the process includes a validation of the test - we always run the test first, before we have created the code to make the test pass, so that we can see that the test is actually failing as we expect.
2
-
2
-
2
-
2
-
2
-
Well, I don't think that waterfall ever really works very well for software development, because requirements are almost never "well known" and the work is never "repetitive"; if they are and it is, it is almost certainly already a solved problem. It seems to me that SW dev is about solving new problems, at least in your local context. Agile, to me, when done properly, is focused on learning, we learn what we need to add to make our products a success, we learn what designs will work and we learn how to work on this problem, in this code-base, with this technology, with this team, in this org, as we make progress.
Agile starts by assuming that we don't have all the answers, and that those that we do have are probably wrong, so it gives us the freedom to learn and adapt. To me, this is much closer to a scientifically rational approach to solving problems, so it works a lot better.
I wish you luck.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
Yes, in that case the most efficient way to organise things is to have a single (not necessarily gigantic) repo for both services. As I said, the real question is what do you have to test before changes anywhere, in either of these services, are safe to release? The real answer given your scenario is both services.
For microservices it is generally preferable to duplicate behaviour than share code between services, to retain their independence, and keep them de-coupled, unless the code that they are sharing is another, decoupled, microservice.
The downsides of a monorepo are that you need to work harder to get results fast enough, and that it can lead poor or inexperienced teams to ignore coupling within the repo (this is true of microservices too, but they suffer more when it happens). Even in a monorepo you still want good, modular design.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
@TAiCkIne-TOrESIve Well yes, it is. Let's say we have two processors, each working wholly independently of the other, concurrently. Now we share data between them. If we don't control access to this shared data, then the truth of the situation is that the data will change in uncontrolled, unpredictable ways. So we provide an illusion of synchronous access, through some kind of mechanism like a mutex, semaphore, lock, or, most efficient of all, a compare-and-swap operation.
All of these come at a big cost to performance to provide the illusion that these concurrent threads are working together. They aren't really, they are usually being sequenced in some way to preserve that illusion, but as I said the cost is enormous in terms of performance. By far the most efficient mechanism is compare-and-swap (or similar); if you benchmark it against threads doing the work alone, not concurrently, it is around 300 times slower than doing the same work on a single thread (or CPU). Locks and mutexes are MUCH worse than that. So it isn't even really synchronous, it is only sequential. The abstraction leaks heavily in terms of time, because for a large part of the time the whole system is stalled, doing nothing much beyond trying to synchronise the steps between the CPUs or threads.
Sync has its uses, of course, but I do think that it is a leaky abstraction that happens to be sometimes useful. You can solve a lot of difficult problems, in other aspects of your system, by not over-using Sync as a model.
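For anyone who hasn't seen it, this is roughly what the compare-and-swap approach looks like in Java; a minimal sketch of the mechanism, not production code:

    import java.util.concurrent.atomic.AtomicLong;

    class CasCounter {
        private final AtomicLong value = new AtomicLong(0);

        // Lock-free increment: read the current value, then attempt to swap it for current + 1.
        // If another thread got there first, compareAndSet fails and we simply retry.
        long increment() {
            while (true) {
                long current = value.get();
                long next = current + 1;
                if (value.compareAndSet(current, next)) {
                    return next;
                }
            }
        }
    }

Even this, the cheapest of the mechanisms, stalls the processors far more than leaving each thread to work on its own data, which is the leak in the abstraction.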
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
Yet that isn't how other areas of business are organised, is it? Businesses may have aspirations about when they will be profitable, gain market-leading status, grow to a certain head-count or even move to new premises, but unless people are childishly naive, they know that these things can't be perfectly predicted, so why is the creation of software different? In every book on project management that I have ever seen, somewhere it says "you can't fix time and scope" and in nearly every org that I have seen what happens? The plan attempts to fix time and scope. This is NOT about marketing campaigns and deadlines, this is about irrational approaches to planning vs rational approaches. I choose the latter. Sure it would be nice and simple if we could fix time and scope, but we can't. Face this reality and pick which one matters, then fix that. CD means we can always deliver to a date, or work to a specific scope and deliver continuously, or wait for a certain scope. Either one works; attempting to fix both is a fantasy.
2
-
2
-
I quite like the answer from Anthony, "to become a better SW engineer". From the other side of the table it is difficult to interview people for their first job, it takes a lot more effort and, I think, a fair bit more confidence in conducting the interview too. Not your problem I know, except that it is, my experience of being interviewed is that most people aren't very good at it!
As you gain more confidence, it becomes easier to direct the interview yourself, steer it in the directions that you want it to take. My strongest piece of advice, which is hard to do, is to remember that you are interviewing them at least as much as they are interviewing you. Sure, the commercials are on their side, you want to get paid, and need a job, but, ultimately, there is a global shortage of software developers, and good software developers are rare! Try to hold that thought, adopt the mindset that your job in the interview is to figure out if they are any good, and worth working for. I think that the psychology helps a bit!
2
-
2
-
Actually, unless there is an admin mistake, which happens sometimes, I always provide the references, just look at the description of the video. In this case the data is from the State of DevOps research, and is also reported in the books "DevOps Handbook" and "Accelerate".
As I describe in this video, I think that there is a difference between being dogmatic, and ruling out bad ideas. Do you disagree with my scores against the PRs? Can you show a similar argument that demonstrates how FBs are better and outperform CI? I don't think that this is about personal opinion and personal preference. I don't think that that is enough, this is about engineering and what works better.
This doesn't mean that you can't build good SW with FBs & PRs, I have never said that anywhere. It means that it is harder, considerably harder, to build good SW that way, and that is for some very good reasons, not my opinions, but because FB development is based on a bigger bet, that things will be ok at the point of merge. CI is more pessimistic I guess, and so doesn't trust your guess that your FB will be fine and will integrate perfectly with everyone else's. So instead, we check that each small change integrates, and the data says that works better.
The reason that I explained that I think that I am "not dogmatic" was not because it worries me that people may think of me that way, it was rhetorical, so that I could explain what is better than dogma. That isn't accepting all viewpoints as valid, you are allowed to have your own opinions of course, but I am also allowed to disagree with them, and you with mine. I think that the ways in which we choose to express our disagreement matter quite a lot. I don't call you or think of you as dumb because you disagree with me, but I do think that people are being dumb if they are being dogmatic. I have tried FBs, PRs, Waterfall dev, Pairing and not pairing and so on. So when I express my opinion it is based on personal experience and as I explain in this video, what I think of as a reasoned approach to understanding what I learn. If someone hasn't tried true CI and TBD, or pairing, and dismisses it, which of us is being dogmatic?
Thank you for watching, sorry if you decide to go.
2
-
2
-
2
-
2
-
I think that is a team decision, what makes sense in your context. I think that there is a danger of relying too heavily on the documentation in tools like JIRA. It is certainly useful, but it is not really the "truth of the system". Stuff in JIRA could say one thing, and the code could do something different all together. I think that is what you are describing, the potential for JIRA to be out of step with the system in production.
If I am working in an environment where that matters, in a regulated environment for example, then I prefer to get a more accurate "documentation" of what is really in production than I can get from human notes in JIRA. I use an approach to testing where we create Acceptance Tests as "Executable Specifications" for the system. Every change is tested, every change has an Exec. Spec. which both tests, and documents the behaviour of the system. This way it is not possible to release a change that mis-matches the spec, because if it didn't meet the spec a test would fail and so reject the release.
If you could do all of this quickly enough, say all your tests ran in under 1 hour, at that point I start to wonder about the real value of Feature-Flags rather than just changing the code, and documenting that change in the tests. But that is probably taking this idea beyond the scope of this video.
If you would like to explore a bit more in what I am talking about with the Exec Specs, take a look at these videos:
How to write Acceptance Tests: https://youtu.be/JDD5EEJgpHU
Acceptance Testing with Executable Specifications: https://youtu.be/knB4jBafR_M
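As a small flavour of what those Executable Specifications look like, here is a minimal, made-up Java sketch; in a real system the DSL would talk to the deployed system through a protocol driver rather than the in-memory stand-in used here:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertTrue;
    import java.util.HashMap;
    import java.util.Map;
    import org.junit.jupiter.api.Test;

    // A made-up, in-memory stand-in for the DSL layer, just so the sketch runs on its own.
    class ShoppingDsl {
        private final Map<String, Integer> stock = new HashMap<>();
        private boolean lastOrderAccepted;

        void givenAnItemInStock(String item, int quantity) { stock.put(item, quantity); }

        void placeOrder(String item, int quantity) {
            Integer available = stock.get(item);
            lastOrderAccepted = available != null && available >= quantity;
            if (lastOrderAccepted) stock.put(item, available - quantity);
        }

        void confirmOrderIsAccepted() { assertTrue(lastOrderAccepted); }

        void confirmRemainingStock(String item, int expected) {
            assertEquals(expected, stock.getOrDefault(item, 0).intValue());
        }
    }

    class PlaceOrderSpecification {
        private final ShoppingDsl shopping = new ShoppingDsl();

        // Reads as a specification of WHAT the system does, not HOW it does it.
        @Test
        void customerCanOrderAnItemThatIsInStock() {
            shopping.givenAnItemInStock("book", 3);
            shopping.placeOrder("book", 1);
            shopping.confirmOrderIsAccepted();
            shopping.confirmRemainingStock("book", 2);
        }
    }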
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
@arthurt7697 "If a team is working in fixed sprint cycles..." and the story doesn't fit, then by all of the definitions of a user story that I have ever seen, it is deemed "too big" and should be decomposed into smaller, user-focused, user-visible, pieces. I know that this is "pretty standard" these days, but that is because most teams that claim to be "Agile" aren't in any meaningful way. Just google "How big is a user story". The first hit that I found said that a good guideline is that the biggest should be 1/2 the duration of the Sprint. That's a reasonable guideline.
Jira can help teams to decide what to work on next, but this should be at the level of user stories. How the team decides to organise itself is not something that is sensibly tracked, or even visible, outside the team, so tasks to achieve some goal should not be in Jira, or anywhere else. You don't need to keep them; they are like notes you may make on scraps of paper, you use them as a tool, then discard them when the job is done.
If multiple stories depend on the same technical work, do the technical work in the context of the first story, that story will go slower, but that is the truth of achieving the goal that it represents. The next stories will go faster, but also may help you to better understand how to solve the technical part of the problem, so use them to refine your tech solution too. This works a lot better in getting to better designs in the end, and ultimately, in going faster overall.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
Yes, you replay all the orders, but during replay, one option is to disconnect the bits of the system that act on the orders, so you get a perfect reproduction of the state of the system, but don't re-do the shipping of orders, for example.
The advantage of this strategy over the more conventional "state snapshot" approach of storing a static picture of the state at some moment in time, is that in reactive, event-based, systems like this you don't lose information. In a DB, you lose the time dimension, unless you actively design the DB to store it, and relational DBs in particular are pretty poor at keeping time-series data. In an event-based system every event that ever changes the system is kept, in order, so that you can rewind, or fast-forward, through time. For example, you can replay all of the events from production until you hit a bug that caused a problem.
One final thought, if you are distrustful of this strategy, this is how relational DBs work internally, it is just that they don't keep the events once they have acted on them, and so lose the time dimension that describes HOW the data got to the state that it is in.
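A minimal, made-up Java sketch of the replay idea; the names are hypothetical, and the point is only that the side effect sits behind an interface that can be swapped out during replay:

    import java.util.List;
    import java.util.ArrayList;

    class OrderPlaced {
        final String orderId;
        OrderPlaced(String orderId) { this.orderId = orderId; }
    }

    interface ShippingGateway { void ship(String orderId); }

    class OrderSystem {
        private final List<String> openOrders = new ArrayList<>();
        private final ShippingGateway shipping;

        OrderSystem(ShippingGateway shipping) { this.shipping = shipping; }

        // The same event-handling code runs in live processing and in replay.
        void apply(OrderPlaced event) {
            openOrders.add(event.orderId);
            shipping.ship(event.orderId);
        }

        int openOrderCount() { return openOrders.size(); }
    }

    class Replay {
        public static void main(String[] args) {
            List<OrderPlaced> journal = List.of(new OrderPlaced("o-1"), new OrderPlaced("o-2"));

            // Live, the gateway would really ship orders. During replay we plug in a no-op gateway,
            // so the state is rebuilt perfectly but nothing gets re-shipped.
            OrderSystem replayed = new OrderSystem(orderId -> { /* disconnected during replay */ });
            journal.forEach(replayed::apply);

            System.out.println("Rebuilt state, open orders: " + replayed.openOrderCount());
        }
    }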
2
-
2
-
2
-
2
-
2
-
I have no problem with scepticism, I think it is the best rational response to everything, however...
The trouble with "Agile" is that it is so poorly practiced that nearly every judgement is made on the basis of something that is the opposite of agility. For example, you mention stand-ups being replaced by status reports. Treating a stand-up as a status meeting is a common anti-pattern, it is absolutely NOT the goal of a stand-up. I say that without thinking that stand-ups are essential to agility. But if you end up merely reporting your status, you are missing the real value of the standup entirely, so how can we judge "agility" that way?
I agree with you that treating "agile" as a series of ceremonies is a bad sign, it is, but at its heart agility is about being able to inspect and adapt, I don't think that you can credibly criticise that, since that is how science and engineering work, and they, pretty much by definition, work better than anything else.
Just to be clear, the intent of a stand-up is to ask for help if you need it, or tell people about stuff that you did that you think may be helpful to them "I found a cool way to add new features to service X yesterday" Not - "I completed task 'Y'".
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
I was speaking by analogy, when was the last time that you made a decision in software based on measurement rather than guesswork? Do you know the performance characteristics of the data structures and collections that you use? Do you think about the big 'O' number before choosing?
I am not suggesting that you need to know the temperature of your software or its tensile strength, but what is the equivalent?
How many bytes per entry does a hashmap add compared to an array? I can tell you the answer for that in the version of Java that I used when building trading systems, because it matters, it matters in terms of the size of the models that you can hold in memory and, maybe more importantly for trading, and maybe for games, it matters in terms of performance. If your hashmap entries take more space, that means a higher cost to move them into the CPU, a higher cost to copy them, and a higher cost to garbage collect them too, depending on how that works.
These too are only simplistic examples. I think that the important idea is that we should think about what we can, and should, measure and understand in order to work on the basis of knowledge rather than guesses and fashion. That is even more important for software than for rocket engineers, because our stuff is so abstract, maths is pretty abstract too, but that doesn't mean that they give up, we, as an industry, have mostly given up on rational decision-making.
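As a crude, made-up illustration of the kind of measurement I mean; heap deltas like this are noisy and JVM-dependent (a proper answer needs a profiler or an object-layout tool), but even a rough number beats a guess:

    import java.util.HashMap;
    import java.util.Map;

    class RoughMapOverhead {
        public static void main(String[] args) {
            final int entries = 1_000_000;
            Runtime rt = Runtime.getRuntime();

            System.gc();
            long before = rt.totalMemory() - rt.freeMemory();

            Map<Integer, Integer> map = new HashMap<>();
            for (int i = 0; i < entries; i++) {
                map.put(i, i);
            }

            System.gc();
            long after = rt.totalMemory() - rt.freeMemory();

            // Very rough: includes boxed keys/values and map internals, and GC timing adds noise.
            System.out.println("~" + ((after - before) / entries)
                + " bytes per entry (map size " + map.size() + ")");
        }
    }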
2
-
Well, I think they did what you'd expect. You start with a guess, "let's use CF cos everyone is raving about the strength to weight ratio", then you start assuming that your guess is probably wrong, and looking at the ways that it may be wrong to see how that starting point plays out. In this case, it was too expensive and not strong enough for its weight at some important operating temperatures.
I think that there is a lot to be said for "boring tech" sometimes, lots of people have experience of it, and so we know the wrinkles. The reason that I highlighted it here is because I think that analogous decision making is so rare in software.
Even when we do tech bake-offs, we are often extremely subjective in the criteria that we apply. This is one of the reasons that the Stability & Throughput measures from the "Accelerate" book are so important. They give us a bench-mark that we can use. "If I pick this tech, does it make my quality higher (Stability) or my efficiency better (Throughput)?". If it makes them worse, don't do it; if they are the same, only then decide on subjective qualities.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
I am also interested in the things that are common though, because I think that they are where the real value lies. We often focus on the differences between tools, frameworks and languages, but I think that a good programmer will be productive in a new tool-set very quickly. I have worked with some great programmers, they'd write better code than most people in languages they had never used before after a week or two. Clearly, to become expert, and idiomatic, in a particular tech, you need more time than that, but I think we don't spend enough time in our industry talking about the basics of design. I used to write Assembler and C commercially, these days I write stuff in a variety of different languages, but I think that the ideas of design are fundamental, and ultimately, much more important, and valuable in your career.
2
-
2
-
2
-
2
-
2
-
As to whether or not this strategy works, you can ask Google, Facebook, Tesla, SpaceX, Walmart or Microsoft, as well as me and many other people. How often do you work on features that are pulled before you finish working on them? The other thing is that for CI to work you need great feedback on the quality of your work, which means good automated testing, so when you do need to pull a feature, or make any other change, changing the software is a LOT easier.
To say "the strategy doesn't work" is denying the facts and reality of many very effective teams producing high-quality software. You can justifiably say "it couldn't work for my team, because our testing is poor and the design of our software doesn't allow us to change things easily", or you could say "I can't imagine it working for me", but you can't sensibly say "it doesn't work at all", because some of the most effective software in the world is built this way.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
Not specifically no. It is generically about "Software development" and so covers game development in the same way that it covers any other kind of dev. It is a book about what I believe is an approach that improves your chances of doing a better job, whatever the nature of your product.
My experience has been that everyone thinks that their form of dev is a special case. Game dev has some specific challenges, but they are different in scale, not in kind or in principle.
One of the more difficult parts of the approach that I describe is how to test the code that you create at the point where it touches the real world, through a UI for example. This is not really any different for a game than for anything else, except that the UI is so rich in a game. But the behaviour underneath the pixels is all completely testable, and then you can test the rendering of that model into pixels. This is only the same problem as testing a UI anywhere else.
I have written games this way, though not any commercial games for a very long time. So it is certainly possible, but it will be hard to do well for some things.
IMO this is mostly about how much value you think that there is in this approach. I think that it is so important, so valuable, as an approach that I will work hard to make my system, whatever it is, testable. Even if it has a rich, complex, UI as a part of it.
Last commercial game I wrote was a financial trading game, some people may not think of it as a game, but it was really, it had rich real-ish-time graphs that you interacted with to predict where prices in markets would go. We did that as a full CD development, testing every aspect. As part of that we architected our SW so that we could test nearly all of it in isolation from the pixel-painting of the graphics, but then did some generic testing of the pixel painting.
This was certainly NOT a AAA game, but there was nothing in principle that was different. This is what I would do if I was building a AAA game!
2
-
2
-
2
-
I think that we are 90+% aligned here.
A few thoughts...
If your "typical cycle (between branching and merging is less than half a day)" then you are doing CI, so I have no argument with that at all.
I think that you are doing more typing than me, I work on master locally and then merge all my local commits immediately that I have made them to origin/master. So I don't have to create any branches or merge to them (less typing for same effect that you describe). To be honest, this doesn't matter at all, if you want to type a bit more it doesn't matter.
"The higher the threshold of the quality gate, the more frequent the checks are going to be red" Not so! The slower the feedback loop the more often the checks will be red! Adding branches slows the feedback loop. The problem, as you correctly point out, is the efficiency of the feedback loop.
One of the more subtle effects of feature-branching, in my experience, is that it gives teams more room to slip into bad habits. Inefficient tests being one of the most important ones. If everyone is working on Trunk, then slow tests are a pain, so the team keeps them fast.
I think that most of the rest of what you are describing is about feedback efficiency. Let's try a thought-experiment... If you could get the answer (is it releasable) in 1 minute, would you bother branching?
If not, then what we are talking about then is where is the threshold where the efficiency gains of hiding change (branching) outweigh the efficiency gains of exposing change (CI). I think that CI wins hands-down as long as you can get a definitive answer within a working day. Practically, the shorter the feedback cycle the better, but my experience is that under 1 hour is the sweet-spot. That gives you lots of chances to correct any mistake during the same working day. I built one of the world's highest performance financial exchanges and the Point of Sale System for one of the UK's biggest retailers, and we could evaluate the whole system in under 1 hour.
So my preference is to spend time on optimising builds and tests, rather than branching.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
No, I don't think I have. It's pretty simple really. If you have separate repos, it adds a barrier unless what is inside each repo is genuinely independent of the contents of other repos. If my component, in my repo, calls yours, in a separate repo, what happens if you change your code? You break mine and make it my problem, so I am no longer in control of keeping the system working.
My advice is that where code is coupled like this, where it is not independently deployable, where we need to test it with other pieces before release, it is more efficient to keep things in a single repo. I think that the ideal scope for a repo, and a deployment pipeline, is an independently deployable unit of software. Meaning that we can change it, test it and deploy it with confidence, without testing it with anything else.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
I think that the goal must be to test the code that will be released, rather than some close approximation of it. So I'd try and find other strategies than having different dependencies in the test environment, unless those are genuinely outside the scope of your system - your team is not responsible for building them, deploying them in step with the system you are testing, and running them. This means that they are in a separate process space, not dependencies that are compiled into your system, and so they are accessed using some form of inter-process comms. That is the point at which I fake them, in the scope of Acceptance tests.
If any of these things aren't true, then this code is part of the reality of the system that I want to test, so I want them in my acceptance test. So now I have to think harder about how to make that efficient. Incremental builds, incremental deploys, build bakery strategies and so on.
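A minimal sketch of what I mean by faking at that boundary, using Java's built-in HTTP server; the service name, port and response are all invented for illustration:

    import com.sun.net.httpserver.HttpServer;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    // A stand-in for a genuinely external, third-party pricing service, faked at the
    // process boundary (HTTP), so the real system under test is deployed and tested unchanged.
    class FakePricingService {
        public static void main(String[] args) throws Exception {
            HttpServer fake = HttpServer.create(new InetSocketAddress(8089), 0);
            fake.createContext("/rates/GBPUSD", exchange -> {
                byte[] body = "{\"rate\": 1.27}".getBytes(StandardCharsets.UTF_8);
                exchange.sendResponseHeaders(200, body.length);
                try (var out = exchange.getResponseBody()) {
                    out.write(body);
                }
            });
            fake.start();
            System.out.println("Fake pricing service listening on http://localhost:8089/rates/GBPUSD");
            // The acceptance test environment points the system under test at this URL
            // instead of the real third-party service.
        }
    }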
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
Not really, agile is well defined in the absence of any "rituals". The agile manifesto describes the principles behind agile thinking. Amongst those core principles is "Individuals and interactions over processes and tools" the 12 principles take these ideas further, "Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done".
There is nothing in the manifesto that prescribes any rituals.
If you mean that agile development takes discipline, then I agree, but if you adopt the principles of the agile manifesto, that is:
"Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan
That is, while there is value in the items on
the right, we value the items on the left more."
Then you won't go far wrong. The ceremonies can sometimes help, but they can also often hinder. The principles matter more.
https://agilemanifesto.org
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
I don't know. I think it is probably a different thing.
Co-pilot is being described, even named, as a developer-assistant, not the thing that writes the code, but the thing that helps to write the code. It is more capable than that, but again it falls into the trap of assuming that initial coding is all there is.
My bet is that it doesn't do as well at debugging, or adding features to existing systems, which is much more what our job is about, and much more difficult to do well.
I am 100% convinced that full AI will happen, I think that is inevitable, so one day machines will be better at all aspects of coding (and everything else) than us, lots better! I am impressed with what I have seen of things like co-pilot so far, but I think these things will need to demonstrate things like bug fixing, and adding new features and generally the ability to work incrementally. They need to be able to write code that allows them to continue making change incrementally (which quite a lot of human SW devs can't do) until then the machines won't take over. I think it's the problem solving that makes our job hard, not the typing.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
Except unfortunately that doesn't work in practice. The overhead of doing the work to automatically split the problem up into parallel steps, then merging the results back together at the end, results in orders-of-magnitude reductions in performance for even two threads, and the costs increase exponentially as you add more threads. One of my colleagues did an experiment to measure this, demonstrating that claims for "auto-parallelisation" in languages don't hold up. Let's imagine we are going to increment a number 500 million times, here is how long it takes in milliseconds on an Intel processor a few generations old:
Single thread 300
Single thread with lock 10,000
Two threads with lock 224,000
Even if you do lock-free concurrent programming:
Single thread with CAS (Compare And Swap) 5,700
Two threads with CAS 30,000
So for the best case, ANY form of concurrent execution is 100 times slower, so you'd need more than 100 threads just to break even, but then the costs go up again! This is an unsolvable problem. Look at Amdahl's law.
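Here is a rough sketch of that kind of measurement for anyone who wants to reproduce the shape of it; the absolute numbers will differ by hardware and JVM, the JIT may partly optimise the single-threaded loop away, and a serious version would use a harness like JMH:

    import java.util.concurrent.atomic.AtomicLong;

    class IncrementCostSketch {
        static final long ITERATIONS = 500_000_000L;

        public static void main(String[] args) throws InterruptedException {
            // Baseline: a single thread incrementing a plain local variable.
            long start = System.currentTimeMillis();
            long plain = 0;
            for (long i = 0; i < ITERATIONS; i++) { plain++; }
            System.out.println("single thread, no sharing: "
                + (System.currentTimeMillis() - start) + " ms (" + plain + ")");

            // Two threads sharing one counter via CAS (AtomicLong) - the cheapest way to share.
            AtomicLong shared = new AtomicLong();
            Runnable half = () -> { for (long i = 0; i < ITERATIONS / 2; i++) { shared.incrementAndGet(); } };
            start = System.currentTimeMillis();
            Thread a = new Thread(half), b = new Thread(half);
            a.start(); b.start();
            a.join(); b.join();
            System.out.println("two threads, shared CAS counter: "
                + (System.currentTimeMillis() - start) + " ms (" + shared.get() + ")");
        }
    }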
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
I agree, one of the important things that it indicates, is loose-coupling, which is a good thing. I do think that it is important to recognise that it comes at a cost, you relax consistency, and the code is a bit more complex, to protect the APIs, which is not always the best choice, particularly at the start of a project. A single-repo, shared-code ownership, distributed service model is my preferred starting point for bigger projects, so that we can riff on getting services and APIs established, before breaking out more independent pieces.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
@brownhorsesoftware3605 Here's a quote from the OO page on Wikipedia:
"Terminology invoking "objects" and "oriented" in the modern sense of object-oriented programming made its first appearance at MIT in the late 1950s and early 1960s. In the environment of the artificial intelligence group, as early as 1960, "object" could refer to identified items (LISP atoms) with properties (attributes);[3][4] Alan Kay later cited a detailed understanding of LISP internals as a strong influence on his thinking in 1966.[5]
I thought of objects being like biological cells and/or individual computers on a network, only able to communicate with messages (so messaging came at the very beginning – it took a while to see how to do messaging in a programming language efficiently enough to be useful).
Alan Kay, [5]
Another early MIT example was Sketchpad created by Ivan Sutherland in 1960–1961; in the glossary of the 1963 technical report based on his dissertation about Sketchpad, Sutherland defined notions of "object" and "instance" (with the class concept covered by "master" or "definition"), albeit specialized to graphical interaction.[6] Also, an MIT ALGOL version, AED-0, established a direct link between data structures ("plexes", in that dialect) and procedures, prefiguring what were later termed "messages", "methods", and "member functions".[7][8]
Simula introduced important concepts that are today an essential part of object-oriented programming, such as class and object, inheritance, and dynamic binding.[9] The object-oriented Simula programming language was used mainly by researchers involved with physical modelling, such as models to study and improve the movement of ships and their content through cargo ports.[9]"
There were lots of steps, and it was Alan Kay that really pulled the threads together, but he was building on lots of prior work that got parts of the picture. As I understand it SIMULA was a kind of DSL for encoding simulation systems, it wasn't really a general purpose language, at least not really used as one.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
@zenonkurcz7215 well it wasn't Scrum, not my favourite approach, but you are right, agile projects are delivered by many small teams, so our teams of hundreds were divided up into smaller, functional teams focused on one part of the problems, but working in a shared codebase, with a single deployment pipeline. Since we were talking about CI and branching, that was the context of my comment. Google hold nearly all of their code, billions of lines, worked on by thousands of people, in a single repo!
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
@jordanschwartz6871 Not quite. Devs owned the tests overall, all the layers, and maintained them. Anyone, including QA, could write test cases (specifications); if the QA needed features that weren't in the DSL yet, they could invent the language to express the idea in the test case. Then devs would add the new support into the DSL, taking the language in the test case as a kind of requirement. They could change the syntax, to be more in-line with the DSL or more abstract if they wanted to, usually in discussion with the QA who wrote it. The dev team always wrote the protocol drivers, and maintained them. These were the code that knew about how the SW that the devs were changing worked, so definitely the devs' responsibility to maintain.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
@Greedygoblingames Yes, I certainly agree with that. I try hard not to be too dogmatic about anything. I think that you can argue me out of any of my beliefs, but you will have to work hard to convince me of some of them. Pairing is one of those, I'd need some good evidence to shift my view that it is a higher-quality, more team-centered approach. But there are some people that hate it!
I once led a team, when we hired people we told them that we did TDD and Pair Programming, and if that wasn't for them this place wasn't for them. We hired someone who was very good, and really, against his preferences and instincts, worked hard to make it work. The best he ever got to was to reluctantly pair some of the time. That was an acceptable compromise for all of us. Not ideal, but it worked ok.
He was a very good programmer, I still believe that we could have helped him to become a great programmer if he could have worked better with others, and so been more open to learning - but we all ended up being comfortable with the situation.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
Well, there is only so much that you can say in a 17 minute video. So yes it doesn't talk about everything.
I am interested in where you think the extra complexity lies? It is certainly a different way of thinking about systems, but I genuinely don't think that it adds any complexity that isn't already there, it may surface it and make it more obvious to people, but I don't think it adds any. In addition to that, it does allow you to ignore stuff that otherwise infects nearly every aspect of normal system design, for systems at any scale.
As I said in the video, I recognise that it is a subjective, contentious, statement to say that this is easier; it is a balance of trade-offs, but I think that overall, for systems of any reasonable complexity, this is easier.
2
-
@RasmusSchultz Ok, not the same thing at all. Microservices are:
Highly maintainable and testable
Loosely coupled
Independently deployable
Organized around business capabilities
Owned by a small team
(Source: https://microservices.io/)
So, not asynchronous, not responsive (in terms of the definition), not elastic, not resilient.
Reactive systems are:
Responsive,
Resilient,
Elastic,
(Async) Message-Driven
So, not independently deployable, not loosely-coupled, not necessarily owned by a small team.
Now you can build Reactive Systems out of Microservices, but you can also build Microservices that have none of the properties of Reactive systems.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
Well, it depends, and mostly it depends on how you designed the SW in the first place. If it is modular and nicely written, I could make the change for one, small, part of the code to try it out, and it if works out nicely I could commit that. We did this on one of the projects that I worked on moving between JS and our own, home-built, Web app framework, and Google GWT.
The point is you don't start with a bunch of spaghetti code and say "Aha! Doesn't work here" you start by designing things to support a more incremental approach to development. You make code modular, cohesive with a good separation of concerns, use good abstraction and work to manage coupling.
It is one of the reasons that I am not a big fan of the more complex "framework" style tech. They are too intrusive in terms of design.
My real answer, then, is that I can't answer your question without seeing where your code is now. It may be that the way you have implemented things HAS built a barrier to making change more incrementally, but I would argue that that means that your code has some problems as a result.
The fact that CI kind of encourages us to care more about treating dev as an incremental thing is a positive to me; it means that it is a tool that helps us to do a better job!
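As a toy illustration of the kind of modularity I mean, here is a sketch in Python with invented names (a Renderer abstraction with an old and a new implementation); because the application code depends only on the abstraction, each module can be switched to the new framework as its own small, committable step.

# A tiny, hypothetical sketch of the kind of modularity that makes incremental change possible.
# The rest of the code talks to a Renderer abstraction, so one module at a time can switch
# from the old implementation to the new one, and each switch is a small, committable change.

from abc import ABC, abstractmethod


class Renderer(ABC):
    @abstractmethod
    def render(self, text: str) -> str:
        ...


class LegacyRenderer(Renderer):
    def render(self, text: str) -> str:
        return f"<div class='legacy'>{text}</div>"


class NewFrameworkRenderer(Renderer):
    def render(self, text: str) -> str:
        return f"<section>{text}</section>"


def greeting_page(renderer: Renderer) -> str:
    # Application code depends only on the abstraction, not on either framework.
    return renderer.render("Hello!")


if __name__ == "__main__":
    print(greeting_page(LegacyRenderer()))        # today
    print(greeting_page(NewFrameworkRenderer()))  # after this module is migrated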
2
-
2
-
2
-
2
-
2
-
2
-
2
-
I think that your description of a DevOps person, though common, is one of the anti-patterns that I am referring to. The idea of DevOps is not for it to be a role represented in a single person. What you are describing is simply ops. DevOps is a team-level discipline, in that successful development teams need to be responsible for their software in production. For that you may want some Ops people as part of the team, and you certainly need some Dev people there too, but the important point is that, however these skills and responsibilities are spread amongst the people, the dev team are monitoring and controlling their system in production. Tonight's video is on this topic, "You Build It, You Run It", so watch out for that.
2
-
2
-
Nothing to do with human dogma, but rather, as a way of checking your, or its, working. Without that you, and it, are only guessing at the solution. If you believe that you can catch mistakes by thinking hard and by understanding your code, I think you are missing the point of software development. It is about solving problems, not writing code, and you don't know that the problem is solved, however smart you are, until you try out the solution. Anything else depends on PERFECT PREDICTION of results and certainly AI can't do that, and Physics says it never can.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
Just to be clear, I am pretty certain that Allen wasn't scoffing at the idea of "Inspect & Adapt", but rather at the way that people talk and think about it, characterised by saying things like "Inspection and Adaption". This is a misuse of English; the word "Adaption" exists, but doesn't make sense in this context. What that means is that people are using the words as tokens and not really thinking about their meaning. Inspect and adapt is a key concept, not just to agile, but to science, and Scrum doesn't help much with our ability to do that, unless other more important things are in place, like Continuous Integration for example.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
1
-
1
-
1
-
I made similar criticisms a few months ago, but they are kind of irrelevant. These things don't need to be intelligent in the same way as people to be dangerous, but also they are improving exponentially, and we are not. GPT-4 can use tools, like calculators, now, so basic maths is covered, and so, in principle, is anything else that you can do with a computer, AI or not, because AIs will be the ultimate computer users.
LLMs aren't the goal; they certainly aren't "as good as AI can get". They are the beginning, and they are already moving so fast that we don't really have control. It's possible that we are lucky, and that it works out ok for us, but it is also possible that we are unlucky. In this case "unlucky" is not about whether Google or Microsoft win search, or even whether the US or China achieve world dominance because of their use of AI, it is about whether or not there are any people left in 50 years time.
That's what 50% of AI experts say that there is a 10% chance of - no people! (They don't put a 50 year time-limit on it, that is my guess).
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Thanks for the feedback. I am aware that there is a danger of me lumping a whole category of systems together here. I am pleased to hear that your product does better than most. That is kind of my point though: the danger with low-code systems, and particularly their marketing, is that they are sold as some magic bullet that speeds the development process, when really they MAY only speed the coding. My experience of building SW systems is that if the coding is the hard bit you are doing it wrong, but that is how these things are sold. If low-code systems can genuinely raise the level of abstraction, so that they help us to think of the problems in better, more efficient ways (like a spreadsheet or SQL does) then fine, and certainly sometimes they do, but that blurry line between "simple enough" and "arrrgggh there's an iceberg ahead" is very difficult to spot. In regular SW, at least when it is done well (and there are lots of assumptions in that statement), we approach even things that we think may be simple more defensively.
If your low-code system allows me to incrementally discover the problem and "grow" my solution to it, then great. If I can make a mistake and spot it in minutes (so you probably need unit testing, not just testing), well before I get anywhere near production, then even better. If you can develop it and deploy from a deployment pipeline, which I'd consider table stakes, but highly unusual in low-code environments, then fantastic - I have no argument, and if I have a problem that fits in your niche, I'd sign up.
1
-
1
-
1
-
1
-
1
-
1
-
@piotrd.4850 Well I have seen it work for about a thousand developers working on the same codebase, so not just for small teams. Commonly for teams of hundreds, composed of many smaller teams. I don't understand your translations at all, they seem a non-sequitur to me. How does "Working SW over comprehensive docs" relate to "only works on my machine"? The definition of "working", at least for Continuous Delivery, is being useful to users in production. "Tons of billable hours" - well, only if you are crap at writing code perhaps; the teams that work the way that I describe produce measurably higher quality software more quickly, not less, so actually fewer "billable hours". Read the "Accelerate" book or the "State of DevOps" reports. The companies that work this way are not "small agile tiger teams", they are often some of the more successful companies in their fields: Amazon, Google, Microsoft, Tesla, SpaceX, Ericsson, Volvo, US Air Force, UK Government, the list goes on.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Not sure which "rigid limitations" you are referring to, but if you mean the iterative approach, this is the 3rd manifesto principle:
"Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale."
The manifesto was most heavily influenced by three popular approaches at the time, and was meant to be a synthesis of them all. They were Crystal (iteration based), Extreme Programming (iteration based) and Scrum (sprint based).
I agree with you, my least favourite of these is probably Scrum. I think it unfair to say it isn't "agile", but the way that many firms operate it is not agile. I also agree that you can't really be agile without the technical excellence, which is why, of these, my very strong preference is XP. My preferred approach to dev, Continuous Delivery, is really a 2nd generation XP.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
There is an informal way that I learned from Jeff Patton: "What is it worth to your customer? Would they pay a pint of beer, a month's salary, a holiday, a car or a house?" Then ask "How much effort? Beer, salary, holiday, car or house?" (there's a toy sketch of this below).
Draw a matrix and it tells you what to build, what not to build and where you need more work before you build anything.
If it is worth beer to your user, but takes a house of effort to build - don't build it, it's a dumb idea.
If it is worth a house to your user, but takes beer to build - build it immediately.
Anything in the middle probably needs a bit more thought.
I liked it because it includes cost and value.
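Here is a toy version of that matrix in Python. The five-point scale comes from the idea above, but the exact thresholds for "build it", "don't build it" and "needs more thought" are my own guesses, purely for illustration.

# A toy version of the informal value/effort matrix described above (my own sketch, not Jeff Patton's).
SCALE = ["beer", "salary", "holiday", "car", "house"]


def decide(value: str, effort: str) -> str:
    v, e = SCALE.index(value), SCALE.index(effort)
    if v - e >= 2:
        return "build it immediately"
    if e - v >= 2:
        return "don't build it"
    return "needs more thought"


if __name__ == "__main__":
    print(decide(value="house", effort="beer"))      # build it immediately
    print(decide(value="beer", effort="house"))      # don't build it
    print(decide(value="holiday", effort="salary"))  # needs more thought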
1
-
1
-
1
-
1
-
1
-
1
-
I think that you misheard me. I say that "Microservices", by definition, should be independently deployable, not "services". Living in the same repo doesn't "fundamentally couple" the services, the design of the service does, irrespective of where it lives. Putting services in separate repos is no panacea for this, and my point is that you make it a lot more difficult to learn how best to decouple them, if you can't iterate and change them quickly. You can't change them quickly if they live in separate repos.
Service development is only quicker with microservices if they are independently deployable and the team is large. If neither of these things is true, service development is slower because there are more overheads.
Microservices is a team-scalability approach, and it crucially depends on that deployment independence.
Having said all of that, I am pleased that you like the rest of my videos, thanks. 😎
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Have you read the book "Accelerate", or listened to Nicole Forsgren talk on the topic? It is not true to say that "Polls aren't scientific"; it is how all Sociology works. It is not as precise as Physics, but then humans aren't perfect spheres either. The approach is, at least, defensible as a valid sociological approach, and in this case a lot more work was done to make it such than is usually the case with random polls. All explained in the book "Accelerate".
"Accelerate, The Science of Lean Software and DevOps", by Nicole Forsgren, Jez Humble & Gene Kim https://amzn.to/2YYf5Z8
The empirical evidence is also with this approach, it is how some of the most successful companies in the world work.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
These constraints are usually designed by people in your organisation. No regulatory framework that I am familiar with (Finance, Healthcare, Telecoms, Gambling, a tiny bit of Automotive) requires such a commitment. Most of them demand that you do what you say you will do, to do a safe, high-quality job. So that is one starting point - is it a real constraint, or one that was made up in your org? If the latter, the next question is "how does that help" to do a safe, high-quality and commercially efficient job? The data says it doesn't, so work to eliminate unnecessary constraints. I'd prefer a more loosely-coupled architecture to branching; it is harder to do well, but it works a lot better.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
I don't have any academic links, I confess it was an off the cuff remark that seems self-evident to me - so no proper science. But...
"We only see certain frequencies" - yet we arbitrarily describe them as red, green, blue, yellow, when in actuality it is a continuum.
"We only hear certain frequencies" - yet we name the notes, A, A#, B, C, D, etc and find some intervals between notes pleasant and others not.
Evolution blurs the boundaries between species, each individual is on a spectrum of difference within each "species" but still we differentiate between dogs and wolves or snails and slugs and so on.
I think it is a reasonable statement, we group things together, arbitrarily, we do the same when we write code.
1
-
The problem is, how do you measure success? If you have not seen the alternative, then what I describe may sound "idealistic" and not "available to normal teams". This isn't true. This is practiced around the world in VERY successful teams of all sizes.
I didn't say "CI is out the window if you can't do it in 15 minutes", but read the definition of CI. Not my words...
Wikipedia "In software engineering, continuous integration (CI) is the practice of merging all developers' working copies to a shared mainline several times a day."
Martin Fowler "Continuous Integration is a software development practice where members of a team integrate their work frequently, usually each person integrates at least daily"
From the C2 Wiki which defined it: "What if engineers didn't hold on to modules for more than a moment? What if they made their (correct) change, and presto! everyone's computer instantly had that version of the module"
ALL CI experts describe CI as a process where we evaluate changes "AT LEAST once per day". Once per day is not really good enough, but will do.
I use 15 minutes as a realistic example; that is how I work and have done for 20-ish years. So not a strawman, but a real, working approach.
The data (read the Accelerate book) says that anything that compromises the "at least once per day" produces software of lower quality more slowly. If you have never seen what good SW dev looks like, you may not recognise this in your own work, but that is what the data says. You are free to disagree, but I am afraid that you are the one with the strawman in this case.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Well, as I believe one of Wittgenstein's contributions describes, philosophy isn't science.
Science is science, and science is based, in part, on being skeptical about everything - questioning things and that is what I was referring to.
"Science is the belief in the ignorance of experts" - Richard Feynman
I am neither a philosopher nor a scientist, though I know a lot more about science than I do about philosophy. I aspire to be an engineer, and apply scientific (not philosophic) style reasoning to solving practical problems in software.
I can only approach the world from my own perspective, I am rarely quoting other people on this channel, and when I am I try to make sure that I say so. These are my ideas and describe my approach. I think of it as applying the skeptical mind to ideas, questioning everything. I don't mean this in the, to me, rather dry terms of philosophy, I mean it in the more practical terms of science. I try hard to find the weakness in ideas, including my own, as a way to improving my understanding of things - so this is what I mean by "I question pretty much everything".
I am a software developer, and I try my best to understand problems and how to solve them. One of the most common failings, not just in software, is the temptation to fall back on dogma and received wisdom. So I think I do question everything, in that sense, so thank you for sharing my videos with your students, and I hope that they will question ideas in the same way that I assume we would both recommend.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
That is not how you sensibly manage risk though; there are risks that you can't estimate for, so you progress in small steps, and evaluate whether you want to continue after each small step. We will pay for scaffolding to fix the things that are estimable, and while it is there, the builders will explore and investigate the "problem chimney" in more depth, and then we will decide what to do.
In financing terms this is EXACTLY the same funding model as a software startup. Invest some seed capital, try stuff out until you learn enough to know whether or not to continue.
This is how we deal with uncertainty in the real world; it is in the fake world of perfect estimates and fixed price contracts where we don't.
Waterfall is not a straw man, it is the reality of most software projects that I have seen. If you plan all of the features before you start work on the code, that is waterfall development. If you test when you think the work is finished, that is waterfall. If your PO or UI team throws requirements into the Dev team without working alongside them on those requirements, or your dev team hands over the system to a QA team to do the testing and provides written instructions to the Ops team about how to deploy the code - these are all extremely common activities, and are all waterfall-based approaches.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
I have never thought of doing an editor this way, but I think I would start by experimenting with the idea of a hierarchy of contained actors: a document composed of paragraphs, with docs and paras being actors, and paras composed of words. Actors for pictures, headers, footers and so on. Not sure if this would work out well, but it is probably my first guess at a place to start experimenting.
It is quite possible, maybe probable, that the actor model isn't a good fit for this problem, but I confess that I also don't like the idea of a single store representing the whole document either.
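To show the shape of that first guess, here is a very rough sketch in Python, with plain objects standing in for real actors (no mailboxes, concurrency or supervision); it only illustrates the document-contains-paragraphs-contains-words hierarchy.

# A very rough, hypothetical sketch of the "hierarchy of actors" idea: a Document actor that
# owns Paragraph actors and forwards messages to them. Real actor frameworks add mailboxes,
# concurrency and supervision; this only shows the shape of the hierarchy.

class Paragraph:
    def __init__(self):
        self.words = []

    def receive(self, message):
        if message["type"] == "append-word":
            self.words.append(message["word"])
        elif message["type"] == "render":
            return " ".join(self.words)


class Document:
    def __init__(self):
        self.paragraphs = []

    def receive(self, message):
        if message["type"] == "new-paragraph":
            self.paragraphs.append(Paragraph())
        elif message["type"] == "append-word":
            self.paragraphs[message["para"]].receive(message)
        elif message["type"] == "render":
            return "\n".join(p.receive({"type": "render"}) for p in self.paragraphs)


if __name__ == "__main__":
    doc = Document()
    doc.receive({"type": "new-paragraph"})
    doc.receive({"type": "append-word", "para": 0, "word": "Hello"})
    doc.receive({"type": "append-word", "para": 0, "word": "world"})
    print(doc.receive({"type": "render"}))  # Hello world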
1
-
1
-
1
-
@Daniel Sandberg describes the main solution: separate "deployment" from "release", and allow yourself the freedom to create new features in multiple steps, each one of which is safe to release into production (there's a tiny sketch of this below).
Your imagined description of working this way lacks something important; in CD we work so that our changes are ALWAYS RELEASABLE, so you don't spend "the final hour or two of every day" checking for releasability, you work so that your automation tells you this in minutes, after every commit. What the data says, and my experience of working this way on complex projects, is that this is dramatically more efficient and effective, not less.
Your last point, "stakeholders being disappointed", is also not borne out in practice. Releasing changes more often, in smaller steps, allows us to see sooner if the steps are wrong, so we correct sooner. On the whole, orgs and teams that practice CD do a better job of building what customers and users need, not a worse job.
You only have to look at some of the orgs that practice this approach to see that it can, and does, work in practice. People use this approach to build all sorts of software, from self-driving cars and self-flying space rockets to finance systems, medical systems, telecoms systems, and the biggest software systems on the planet that we all rely on every day.
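Here is the tiny sketch I mentioned of separating deployment from release. The flag name and functions are invented, and a real system would read flags from config or a flag service rather than a dict; the point is only that unfinished work can be deployed safely while staying unreleased.

# A minimal sketch of separating "deployment" from "release": the code for a half-finished
# feature is deployed, but hidden behind a flag until we choose to release it.

FLAGS = {"new-checkout": False}  # deployed, not yet released


def old_checkout(order):
    return f"old checkout for {order}"


def new_checkout(order):
    return f"new checkout for {order}"


def checkout(order, flags=FLAGS):
    if flags.get("new-checkout"):
        return new_checkout(order)
    return old_checkout(order)


if __name__ == "__main__":
    print(checkout("order-1"))                                 # old path: safe to deploy today
    print(checkout("order-1", flags={"new-checkout": True}))   # released when we flip the flag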
1
-
1
-
1
-
Sorry but I disagree, particularly about the "DORA is politics". There is certainly a risk that DORA may be misguided by commercial interests, and I think it being part of Google's cloud division is a risk, but the research I am quoting is real research, and the people doing it now are certainly aware of the risk of commercial influence. Just look at the measures, and then say what is wrong with them. They seem fair, reasonable, general measures of two things that are vital to the success of ALL SW DEV: the quality of our work, and the efficiency with which we create work of that quality.
There are things that they don’t cover, but what they do measure, they measure effectively.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
I argue that modern engineering is really the practical arm of science. We apply scientific style reasoning to solving practical problems. It is a fundamental of modern science, since Karl Popper in the 1930's that science does not proceed by "proving correctness" it proceeds by "Falsifying mistaken explanations".
That is what I mean by "start by assuming we are wrong".
Some assumptions are built on a stronger footing than others, but NONE are certain. If asked to write a C++ program, that could be a stupid choice, and you find that out later. At which point you would need to change.
So good SW dev minimises the impact of the bigger risks. For example, while I don't start new projects assuming that my compiler is broken, I have in the past found bugs in compilers, but I don't start the project assuming that there will be. If there is a bug in my system, I don't assume that it is because of bugs in the compiler, I start assuming that the bugs are mine, because that is more likely, but after I have "Falsified" that theory, I move on to other possible explanations, and eventually, after I have ruled out lots of other things, the next theory to try out is that my compiler is wrong.
So starting off assuming that our assumptions are wrong is really saying that the safest course, as an engineer, is to work on your best theory, "my compiler works", until you have something that makes you think it doesn't. But don't start by assuming that it can never be wrong.
A scientist assumes quantum theory, or general relativity is probably correct, but the key word here is "probably" because they are certain that neither of them are in fact correct, even though each has passed every experimental test ever.
I will work on the assumption that my compiler is correct, but in fact I am certain that it isn't, but that most of the time, the bugs that are in it are so esoteric that they won't affect me.
1
-
@kennethgee2004 I guess that we are going to disagree on the philosophy of science stuff then. 😉
I think that your view, and Edison's, is not well aligned with modern scientific thinking.
My preferred way of thinking is probably captured in David Deutsch's "The Beginning of Infinity", where he describes science as striving for "Good Explanations", and defines fairly precisely what makes an explanation "good".
This "As a scientist, one should not assume that assume that quantum theory is true, as there has been no evidence for it." is simply factually incorrect. There is lots of evidence for it, in fact, as I said, it has stood up to EVERY experimental test that has been applied to it so far. Without quantum theory, electronics doesn't work. You can carry out a quantum experiment with a few dollars of hardware:
https://youtu.be/kKdaRJ3vAmA?si=xi0ZiKQk_B4eb0ef
https://spookyactionbook.com/category/diyquantum/
The point of science, is NOT to assume that you are right, but based on your best theory of the reality of the situation, create an explanation, and then show where it is wrong.
The assumptions that you describe are wrong, the SOLID principles are a useful, kind of folk description that can help, but they aren't very rigorous, and are open to criticism:
https://youtu.be/tMW08JkFrBA?si=VdgFv7JOZU_flqFI
This doesn't mean that they are useless, and neither does saying that "start off assuming you are wrong" mean that you reject ideas without evidence. My point is that you look for the evidence. Believing things by rote, is not engineering, and is not science based.
As Richard Feynman said "Science is a satisfactory philosophy of doubt". I recommend doubt, not automatic rejection.
1
-
1
-
1
-
1
-
1
-
1
-
@ITConsultancyUK Well, lots of companies like yours would disagree. SpaceX (not defence, but similar) and the USAF are currently using continuous delivery and high levels of automated testing for fighter jets, Tesla for cars. The difference is that you have to build the testing into the development process. Sure, people may be cheaper if you do it after the fact, but this isn't how it works for examples like the ones I have given. In these cases you design the system to be testable from day one.
I was involved in building one of the world's highest performance financial exchanges, we ran roughly 100,000 test cases every 30-40 minutes. No army of people can match that coverage. Google run 104,000 test cases per minute.
I have helped manufacturers implement these techniques for medical devices, scientific instruments, computer systems, chip manufacturers, cars, the list goes on. So we aren't talking about "toy websites" here, these are complex, regulated, safety-critical systems. What I am trying to describe here is a genuine engineering approach for SW dev in these contexts.
Sure, you can never test 100%, whatever your strategy, but automated testing is always going to be orders of magnitude more cases than manual testing, unless you do it really poorly.
Tesla recently released a significant design change to the charging of their Model 3 car. It was a software change, test-driven, using ONLY automated tests to validate the change. The change went live in under 3 hours, and after that the (software driven) Tesla production line was producing cars with a re-designed charging mechanism that changed the max charge-rate from 200 kW to 250 kW. That would be simply impossible if it relied on manual testing.
I think that humans have no place in regression testing, so I am afraid that we will have to disagree on this.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Some of the arguments being made in this episode really go back to the early days of agile thinking. The argument about Scrum being about cert sales was part of the division between the Scrum Alliance and Scrum.org, where Scrum was commercialised, which certainly annoyed many of the originators of Scrum as well as of agile disciplines. Scrum was primarily invented as a PM wrapper for XP, and the quotes about XP were mostly from the people that invented Scrum, not us.
Allen and I are both from an XP background. XP is certainly to my mind by far the stronger agile approach. I am a bit less scathing of Scrum than Allen, but my long term joke is that the reason that Scrum won the agile race to adoption, is because it is MUCH easier to cheat Scrum than it is to cheat XP. XP defines agile engineering practices, like CI, TDD, Pair Programming, Scrum has NOTHING, zero, to say about the process and mechanics of the actual construction of the software.
Unlike Allen, I think that Scrum can help some teams some times, but mostly it doesn't, because most teams cheat and miss out the more difficult bits of being agile. An agile team that doesn't deliver working software into production at the end of each Sprint is NOT agile! A Scrum team that treats the Scrum Master as a form of lightweight Project Manager is not agile!
1
-
1
-
1
-
1
-
Sure, that would be nice, but it would also be nice if fairies did my washing! It always entertains me that biz managers hold technical functions, like us, to completely different standards to commercial functions. "When will we become profitable?", "Why didn't my share options pay off after X years, as you said they would?" and so on, it is the same thing. Humans CAN'T predict the future, it is all more complex than that, which is why we need to organise our work to cope with that extra complexity, not wish it away with magical thinking.
https://youtu.be/MRwZQGdllDY
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Yes, there are some places in the world where that is the case, and many others where it isn't. I am never sure where I stand on this; I know that taking an engineering approach, whether mandated by law or not, is the path to better quality software. I have a suspicion that when we talk about software development we are often grouping together several really quite different things. There is a group of people who make careers from largely assembling the lego-bricks of other people's code into systems. A lot, but not all, UI development is like that these days; these are the kind of tasks that tools like Visual Basic were used to accomplish.
This is very different to where I spent most of my career, building systems software and unique bespoke systems, occasionally solving problems that no one had solved before. These are very different things, and my experience is that people in the second group tend to be more cautious, more defensive in their approach, because stuff goes wrong fast when you are breaking new ground. The interesting question to me is: are the rules the same for both of these groups? I think that good engineering guidelines can help both groups.
1
-
1
-
1
-
1
-
1
-
Well NASA did develop the software for the Curiosity Rover with a continuous, iterative test-in-simulation approach. It wasn't exactly CD, but it shared a lot of characteristics.
Tesla build cars, and their factory, using very advanced CD, including trunk-based development, and very fast cycle deployment pipelines.
The USAF have been working with CD for a while, and are expanding its use. There have been examples of CD for fighter-jet flight control systems.
Finally, SpaceX are a very advanced CD shop. They are updating the software on man-rated spacecraft up to 45 minutes before launch.
For your example, 8 minute speed-of-light delay... How else, other than effective testing in simulation, could you do this? So now the only question is how do we minimise the risk. The data, and the experience from safety critical systems that are working this way, is that frequent, small, simple changes are the safest strategy.
So I am afraid that you are wrong; fire control systems on a tank are at least as amenable to this approach as some of these examples.
1
-
1
-
1
-
1
-
I'd say it the other way around. CI works for nearly all cases. Occasionally branching, not necessarily feature branching, is useful in unusual situations. So "take branching with a grain of salt".
The other take on this is that CI applies a pressure on you to think a little more about the design of your system. So, for example, I think it extremely unusual to need to use branching for your first example. Sure, I have done it, but these days I don't need to, because I'd probably have a facade or adapter between my code and the third-party library to make my code testable. So switching out the library would be easy. That is only not true when your code is tightly coupled to the library, which, in some very unusual circumstances, I may allow in my code, but very rarely.
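Here is a small sketch of that facade/adapter idea in Python, with an imaginary vendor client. My code owns the small PaymentGateway interface, the adapter is the only place that knows about the third-party library, and tests use a fake that is under test control.

# A hedged sketch of the facade/adapter idea: my code depends on a small interface I own,
# not on the third-party library directly, so I can test with a fake and swap libraries
# without a long-lived branch. "vendor_client" here stands in for an imaginary library.

class PaymentGateway:
    """The interface my code owns."""
    def charge(self, amount_pence: int) -> bool:
        raise NotImplementedError


class VendorPaymentAdapter(PaymentGateway):
    """Thin adapter over the (imaginary) third-party client; the only place that knows about it."""
    def __init__(self, vendor_client):
        self.vendor_client = vendor_client

    def charge(self, amount_pence: int) -> bool:
        return self.vendor_client.make_payment(amount_pence) == "OK"


class FakeGateway(PaymentGateway):
    """Used in tests, under test control."""
    def __init__(self):
        self.charges = []

    def charge(self, amount_pence: int) -> bool:
        self.charges.append(amount_pence)
        return True


def take_payment(gateway: PaymentGateway, amount_pence: int) -> str:
    return "paid" if gateway.charge(amount_pence) else "failed"


if __name__ == "__main__":
    fake = FakeGateway()
    assert take_payment(fake, 250) == "paid"
    assert fake.charges == [250]
    print("ok")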
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
What you describe as your straw man, isn't TDD. In TDD we write a test, run it, see it fail, write some code to make it pass, run it see it pass, refactor the code and if necessary the test to make them better designed, more descriptive, more generic and then confirm that everything works as before, and if all is good, in CI we commit.
If things are going well this whole process is measured in minutes. I'd generally expect to be committing in this way every 10-15 minutes or so.
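As a tiny, invented illustration of that rhythm (a leap-year function, nothing to do with any real project), with comments marking roughly where the red/green/refactor steps fall:

# A tiny illustration of the TDD rhythm described above, using an invented example.
# In practice each step is run with your test runner; the comments mark the step boundaries.

import unittest


def is_leap_year(year: int) -> bool:
    # Written AFTER the first failing test, then refactored into this final form.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)


class LeapYearTest(unittest.TestCase):
    # Step 1: write this test first and watch it fail (red).
    def test_years_divisible_by_four_are_leap_years(self):
        self.assertTrue(is_leap_year(2024))

    # Step 2: write just enough code to pass (green), then refactor and keep it passing.
    def test_century_years_are_not_leap_years_unless_divisible_by_400(self):
        self.assertFalse(is_leap_year(1900))
        self.assertTrue(is_leap_year(2000))


if __name__ == "__main__":
    unittest.main()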
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Let me clarify, this is not theoretical, many of the best software companies in the world work this way. You have obviously attempted to make this sound absurd, and so have got nearly all of this wrong.
In order to work in the way that I prefer and describe, you need high levels of automated testing, and continuous integration, and when you do that, you reduce bug counts and breakages significantly. This is how Amazon, Google, Facebook, Tesla, SpaceX, Microsoft (these days) and many, many more work. I led a team that built one of the world's highest performance exchanges, which was in production for over a year before the first defect was noticed by a user, but I guess these aren't "reality" enough?
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@errrzarrr Sorry, but that is just wrong. Google is a big company and, inevitably, there will be a wide range of opinions and approaches, but Google are pretty strongly aligned with agile thinking in many ways. There are people in Google who I disagree with on the topic of agile thinking, but there are also many that I agree with and practice what I see as agile development at a pretty sophisticated level. The DORA group, that I quote all the time on this channel, the people behind the State of DevOps reports, that promote 2nd gen agile ideas like Continuous Delivery and DevOps are owned by Google. SRE, a second gen agile practice to my mind, was invented at Google.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
My preference is to maximise the amount of code that I can test easily with unit testing, so that interactions with external inputs and outputs are isolated, abstracted and minimised in terms of code and complexity. Then you deal with these "edges" differently. I prefer to reinforce my unit testing with what I call automated "Acceptance Testing"; this evaluates high-level scenarios, through the UI where applicable, but doesn't aim to exhaustively test the UI. The logic behind it, though, is fully tested with unit testing. Depending on the nature of the system, this approach combined with good exploratory (manual) testing is enough. I have some friends who have done some other stuff around automating the validation of specific UI states, effectively taking a snapshot, under automated test control, and verifying the results in the next test run - approval testing for UIs (there's a rough sketch of the general idea below). The clever bit is in automating the handling of failures. My friend Gojko Adzic has written some tools (open source) that will show you a "before and after" comparison of the snapshot when a test fails; you click on the one that is correct and the test remembers it for future runs. So if the change made the test fail, and you agree the test should fail, it stops the release; if you think the difference is acceptable, the test tools "remember" the new picture and use that in future.
In general I am suspicious of trying to be too precise in testing UIs because they change all the time. Gojko's testing is probably as good as you can get, but it still needs human support to check releasability. For most systems, I don't think that you need that much precision, so a more behavioural approach to testing works fine, backed up by manual exploratory testing to just verify that stuff still "looks ok".
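Here is the rough sketch I mentioned of the general approval-testing idea, not Gojko's actual tools: render some output, compare it with the last approved snapshot, and treat "approve the change" as replacing the snapshot.

# A very rough sketch of the approval-testing idea described above: render some output,
# compare it to the last approved snapshot, and fail if they differ. Approving a change
# is then just replacing the snapshot file with the new output.

from pathlib import Path

SNAPSHOT = Path("approved_homepage.txt")


def render_homepage() -> str:
    return "Welcome!\n[ Login ] [ Sign up ]\n"


def check_against_approved(current: str, snapshot: Path = SNAPSHOT) -> bool:
    if not snapshot.exists():
        snapshot.write_text(current)   # first run: record the baseline
        return True
    return snapshot.read_text() == current


if __name__ == "__main__":
    if check_against_approved(render_homepage()):
        print("matches approved snapshot")
    else:
        print("differs from approved snapshot - review, then approve or fix")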
1
-
1
-
1
-
1
-
@georged8644 You can certainly do that, but it comes at a significant cost. The coordination effort to ensure that each team is using the correct version will slow you down. The work to rationalise the work of the teams, as you describe, will also slow you down.
My point is that you need to recognise the costs of the choices that you make and work to minimise them as appropriate. There are no simple solutions, this is not that kind of problem. All of the options have downsides, as well as upsides. The problem, as I see it, is that many, maybe even most, teams and orgs assume that there is a simple perfect world where you can produce software as fast as possible, with the highest quality, and have it perfectly consistent. This is not possible; you have to pick either "fast and high quality" or "slower & consistent", you can't have both of these things, at least not for software beyond the really quite simple.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
I don't have to prove anything to you 🤣, but since you ask, despite the rather rude way that you asked the question...
I led a team that built one of the world's highest performance financial exchanges, which operates in regulatory regimes in several different parts of the world. This was heavily regulated software, first under the rules of the FCA (originally the FSA) in the UK. We had to prove all of these things to them for any change, and we did so on an automated basis as a side effect of the way that we worked, including pair programming, TDD, TBD, CI and all of the other practices that I recommend on this channel.
I later consulted with Siemens Healthcare, who did much the same, but this time for medical devices that could kill people if they went wrong, and the regulators were still ok with it.
Not only is this possible, I would argue that it is close to impossible without these ways of working: I argue that case here: https://www.davefarley.net/?p=285
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Sure, the AI (to some limit of its capacity - currently its 'context window') can remember previous versions, but it can't discard one and step back to a known good state when it makes a mistake, it remembers the mistake on the same basis as the success. That prevents it from working incrementally, as someone else put it so clearly, "there is no 'Undo'".
On your second point, that kind of is my point, yes we break things into pieces, but AI doesn't really, certainly not in a comparative way. Ask an AI to build a system for you, it will create it all, not build on what went before.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
I am afraid that it is more complex than that. It isn't "the job of PMs and analysts to produce requirements". Professional SW dev is nearly always a team exercise, and while it may be true that most requirements may come from PMs and analysts in your org, if you treat those as some kind of "perfect truth" your SW won't be very good. The problem is that SW is complex, and as we start working on it we learn new things all the time. Small things like "it would look nicer if the button was blue" and big things like "this completely invalidates our assumptions for how this stuff works".
This is inevitable and constant. If your team doesn't allow for that kind of constant, incremental learning, then you can't be doing a great job. No human being, PM or not, has perfect foresight and their guesses about requirements will always be wrong at some level, just as yours or mine would be.
Great teams recognise this, and organise to allow for learning to happen all the time, and allow themselves the room to profit from it when it happens. If you wait for your PM to give you permission to refactor your code into a better shape, when you learn something new that tells you what that better shape should be, then you are doing yourself, and them, a disservice. That is how you maintain your code as a good place to work.
If the devs on your team don't see mistakes or omissions in the "requirements" from the PM or analysts, then they don't understand the problem well enough. New requirements can, and should, come from devs, QAs, ops people, anyone!
Team work is more than people working in different boxes next to each other. It is the goal-keeper's job to stop the ball going into the goal, but if the striker is on the line when the ball comes, he doesn't say "not my job" he kicks the ball clear.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
That may certainly be true, I am certainly opinionated, but what I mean by that comment is intended to be a broader statement, which may or may not be true. Increasingly people who study how brains work see brains that match patterns. You can see this in artificial "brains" too, ML language systems like GPT3 are simply predicting what words come next, based on being trained with vast quantities of text. As far as we can tell, human brains work in much the same way.
One of the subjective ways that we can see that in ourselves, is that we categorise things. "That person is tall", "that person is a loud mouth", "that car is a sports car" and so on. OO is built on this model of classification.
FP is consciously based on a more mathematical approach to reasoning. Maths is more accurate, but that style of reasoning is more difficult for humans. It is kind of obviously more difficult in a variety of ways; practically, if you ask most people, they will say that they don't like maths. I am a nerd and often solve maths problems for fun, but it is hard work to think this way. The other, more technical, way that maths is difficult is that it is significantly, massively, more constrained than natural language and natural ways of reasoning. This is kind of the point, it is meant to rule out bad ideas, and that is a wonderful thing, but it is harder work.
1
-
1
-
1
-
1
-
1
-
Well, mostly I worked in places that didn't make these mistakes; I worked in places that did CI. But I also work now, and sometimes did before, as a consultant, so I get to see a lot of different companies and the mistakes, and the good things, that they do. These mistakes are commonplace.
You have also misunderstood what I mean when I say push to master, I mean origin/master, trunk, main-line, whatever you call the branch of code that will be deployed into production. That is what CI is all about, integrating our changes and evaluating, as close as we can sensibly get, to checking them with the "truth" of the current version.
As for your last point, pushing to FB is not the same as pushing to origin/master and even if you are making small frequent commits on your FB, I can be making changes somewhere else that will break you. So small-frequent commits to a shared version is what CI is all about.
1
-
1
-
1
-
1
-
1
-
I think that whether or not "we practice CI" is true depends. Do your feature branches for bugs or features last for less than a day? If not, then I don't think that you are practicing CI - sorry!
...and that is really one of the points of this video. Until EVERYONE's changes are merged together in a version of the code that you expect to be deployed into production, they aren't integrated. That's the only point at which we can definitively answer the question - "is my code ready to release". That is what CI gives us.
So if CI matters, and the data says it does, then we need to do whatever it takes to achieve it, including challenging things that make us hide any changes, anywhere, for longer than a day.
Your point on the scalability of pair programming is different. I think that the way that you build trust in your colleagues is to help them to grow. That is not only about seniors teaching juniors. Juniors learn from each other too. My preferred approach is to do pair programming, but to regularly rotate the pairs so that everyone on the team gets to pair with everyone else on the team regularly. I usually prefer to rotate pairs every day.
This spreads learning of all kinds, and is one of the best strategies that I know of for improving developers of all skill levels. Let juniors pair, but have someone just sanity check their work - not every change, but just enough to spot big mistakes or misunderstandings. I know that this sounds like a PR or a code review, but it is not really a gate, as such, merely a check-in on the progress of juniors.
We used to not let people completely new to the codebase loose with other newbies until they had paired with more experienced colleagues for a while, only then would we let juniors pair with juniors.
1
-
1
-
1
-
1
-
1
-
Master is rarely broken, because we are continuously integrating after each tiny change, and if there is a breakage, we revert the change or fix the breakage in minutes.
In terms of real-world examples: my old company LMAX, most of the big web companies (Google, Facebook (as was), Amazon, Spotify, etc), Microsoft (in places), Tesla, SpaceX, CapitalOne bank, the world's biggest retailer - Walmart - and many, many more. These people are not famous for breaking their software all the time.
1
-
1
-
1
-
1
-
This is always true in any distributed system though; the message-based, ideally async, approach is a more realistic representation of real distributed systems. You can't build systems at scale that are synchronous, they collapse under their own weight, and the failure modes get more and more complex. At least a service-oriented approach better surfaces the problems, and as others have pointed out, there are well-known patterns for dealing with them. The truth is that distributed systems are always highly complex, the failure modes explode. You can NEVER guarantee that any communication will get through, so you always need to design for failure (there's a tiny sketch of that below), and I'd argue that sync systems make that a lot more complex rather than less.
I describe some of that here: https://youtu.be/IaVPAJQ7iwA
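Here is the tiny sketch I mentioned of what "design for failure" can mean at the code level. The flaky_send function is invented purely to simulate an unreliable network; the point is the retries, the back-off, and the explicit handling of the case where the message never gets through.

# A small sketch of "design for failure": a send that may not get through, wrapped in
# retries with a back-off, and an explicit fallback when the message never arrives.

import random
import time


def flaky_send(message: str) -> bool:
    """Pretend network call that sometimes fails."""
    return random.random() > 0.5


def send_with_retries(message: str, attempts: int = 5, delay_seconds: float = 0.1) -> bool:
    for _ in range(attempts):
        if flaky_send(message):
            return True
        time.sleep(delay_seconds)  # back off before retrying
    return False  # the caller must handle the failure case explicitly


if __name__ == "__main__":
    if send_with_retries("order-placed"):
        print("delivered (at least once)")
    else:
        print("not delivered - compensate, park on a dead-letter queue, or alert")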
1
-
1
-
1
-
1
-
1
-
1
-
1
-
I think that you start with a false alternative, it is not "microservice or corporate repo", it is "align your repo boundaries with deployable things". So mostly I'd have a repo per system, and the test for "deployable things" is "do I need to test this with anything else before I release it?". If not you are good, if you do, then you are dealing with more complexity than you need to.
In my experience coupling is a MUCH bigger problem than lack of re-use. Of course we want to architect systems so that there are canonical versions of some behaviours, and I am a big fan of service oriented design. Key to the strategy of microservices is to reduce coupling, and as soon as you share code you increase coupling. This means that it is always a dance. There is no single simple answer, and my experience for the past few years is that virtually every client company that I have worked with is struggling to cope with too much coupling between things that they think of as microservices, when they are really coupled modules.
I don't advise mono repos for everything, but at the moment I see many more people struggling because of multi-repos than because of mono repos. It is more complex than either one of these solutions, which is why I like the "independently deployable unit" as a guide. That doesn't rule out sharing code, but when you do, you protect those boundaries where you share, in some of the ways that I mentioned.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Some people do dislike it, but in my experience they dislike it more before they have tried it, meaning that they are guessing that they won't like it.
The data says that, statistically, pair programming is significantly MORE productive, not less. Of course, using statistics means that if pairing makes you slower, you are an outlier, which means that it isn't pairing that makes people slower in general, it means that you are slower when pairing. None of this is meant to be putting you down, this is all just factual. At the point when you find that pairing doesn't work for you, but, as I have suggested, it is something about you that makes that the case, you can decide not to do it, or to learn how to do it so that it works better for you. What you can't do is assume that this means that pairing is bad in general, because the data says it is not.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Only if you are working slowly and with lower quality. There is a degree to which you are correct, it is hierarchical, but the real determinant of quality is speed of feedback. You want every developer to know the impact of their commits close to the point when they make them.
In Continuous Delivery, we define this as "at least once per day".
That is, after any change, whatever its nature, I can determine the releasability of my system within a working day; I usually recommend aiming for under 1 hour for most teams.
Tesla recently changed the max charging rate for the Model 3 car. This required changing the design, reconfiguring the factory that makes the cars, and testing, validating and achieving regulatory (safety) compliance for all these changes. From commit to the factory producing new cars with a higher charge rate took 3 hours!
There is a hierarchy of testing, but it is automated, and part of the deployment pipeline.
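As a toy model of that automated hierarchy, here is a sketch in Python with stand-in stage functions. The point is only the shape: fast feedback first, each stage gating the next, and releasability decided by the pipeline rather than by a person.

# A toy model of the automated "hierarchy of testing" in a deployment pipeline: fast stages
# first, each stage gating the next, and the change is releasable only if every stage passes.
# The stage functions are stand-ins; in reality they run real test suites.

def commit_stage() -> bool:       # compile + unit tests, minutes
    return True


def acceptance_stage() -> bool:   # automated acceptance tests against a deployed build
    return True


def performance_stage() -> bool:  # non-functional checks
    return True


PIPELINE = [("commit", commit_stage), ("acceptance", acceptance_stage), ("performance", performance_stage)]


def evaluate_change() -> bool:
    for name, stage in PIPELINE:
        if not stage():
            print(f"failed at {name} stage - not releasable")
            return False
        print(f"{name} stage passed")
    return True


if __name__ == "__main__":
    print("releasable" if evaluate_change() else "rejected")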
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Huh!!?
Here are the top 8 companies in the world by value,
Saudi Aramco (Oil)
Apple (SW)
Microsoft (SW)
Alphabet (SW)
Amazon (SW)
Tesla (SW + Cars)
Berkshire Hathaway (Finance)
Meta (SW)
If SW isn't adding value to the business, why bother? Banking is a SW run industry, as is nearly all finance these days. Tesla revolutionised Car production through SW, they have a SW configurable factory. It is not really that SW needs corporate support, it is that corporate institutions that don't address this are doing a lot worse than those that do.
Sure, there is lots of SW dev that is done badly, but this is a cultural problem, and corporate culture is one of the biggest barriers. That is not me saying that, that is an almost direct quote from the Harvard Business Review:
"At this point the greatest impediment is not the need for better methodologies, empirical evidence of significant benefits, or proof that agile can work outside IT, it is the behaviour of executives."
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
I don't agree with your last point. It assumes that the "CD aspect" is somehow smaller in scope than the "DevOps aspect", and I don't believe that to be true. If you take a definitional stance, there are two ways in. Either you take the "DevOps means Dev and Ops working together" minimalist stance, in which case DevOps is not even nearly enough to continuously deliver valuable software to users. Alternatively, you can take the "DevOps culture" definitions, and while there are useful refinements in how some of the ideas are explained, compared to the way that Jez Humble and I described them in our CD book, I don't see anything that widens the scope.
To continuously deliver valuable software into the hands of users you must have great collaboration, you must monitor and operate the system in production to learn from it and so be able to maintain that continuous flow of ideas, and so on and so on.
My main point is that I think that if you think of everything as targeting this "Continuous Delivery of Ideas", it provides a model that informs everything else, and so a tool that we can use to do a better job, even if the answer isn't in my book or the DevOps handbook, or in any other book.
1
-
1
-
1
-
1
-
1
-
If you mean using mocking libraries that generate proxies for the external system, then it's possible, but I think a bad idea. Better, IMO, to create a custom stub, under test control at this point, faking interactions with your system through the interfaces that it will use in production: REST, messages, API calls, sockets etc.
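Something like this minimal sketch, where the endpoint and payload are invented: the stub fakes the external system over the real interface (HTTP here), and the test decides what the "external system" returns.

# A minimal sketch of a custom stub for an external system, under test control, faking the
# real interface rather than using a mocking library. The endpoint and payload are invented.

import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

CANNED_RESPONSE = {"price": 101.5}  # the test controls this


class StubExternalPricingService(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(CANNED_RESPONSE).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass


def test_system_uses_price_from_external_service():
    server = HTTPServer(("localhost", 0), StubExternalPricingService)
    port = server.server_address[1]
    threading.Thread(target=server.serve_forever, daemon=True).start()
    try:
        # In a real test this URL would be injected into the system under test's config.
        with urlopen(f"http://localhost:{port}/price") as response:
            assert json.load(response)["price"] == 101.5
    finally:
        server.shutdown()


if __name__ == "__main__":
    test_system_uses_price_from_external_service()
    print("ok")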
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
I am certainly biased; Continuous Delivery is more than what most people think of when they think of agile development. I care less about the words used, though, and more about what works. CD is how Amazon, Google, Netflix, Facebook and many, many more of the big web-shops work. It is how Tesla build the software for their cars and how Volvo make their trucks. It is how Siemens build software that runs in their machines in hospitals, and is the approach behind how some of the highest performance systems on the planet were built, as well as the biggest and most scalable systems on the planet. Ericsson, one of the leading suppliers of 5G infrastructure, uses this approach to roll out 5G across the planet.
So saying "Sorry, saying simple agile processes work in these larger complex systems is not always correct" is not correct! This is your best chance of success. There is no guarantee in anything, people doing dumb things, or working on bad ideas can always fail, so I can agree with "not always correct" but only in the sense that you have a dramatically better chance of success with the approach that I describe than any other that we know about so far. The evidence is there, this works and it works at immense scale and it works better than any other approach that we know of.
This is the kind of impact that I would expect if we were to achieve a genuine "engineering approach to software". I think that we have found that, and while agile was a good start, and Scrum is a bit of a diversion, it needs more than stand-up meetings and people called "Scrum-master" to count as "Agile", or, even more importantly to my mind, "Engineering".
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Well, what most people seem to mean when they ask a question like that is "yes, but how can I perfectly predict the future anyway?", and my point is that you can't; this is irrational and impossible, so you have to find a way to deal with the uncertainty. For short-term, in-project steering, count stories, not story points or days; this has a number of benefits. It encourages you to make smaller stories, and that increases the accuracy of any predictions that you make on that basis. But even more important, change the focus of planning from accurately predicting an end-date and a cost, to instead organising work around how much to invest to determine feasibility. Treat this more like a start-up, with seed funding to see if the idea is viable, and then round 1 funding to get some basic work done to see if the idea works, and so on. Or, if you are very rich, and you believe in what you want to do strongly enough, just take the punt, forget all about the estimates and just do the work. This last one is the Apple/SpaceX model.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
I am a big fan of service oriented architecture, that is where my mind tends to go naturally when solving bigger problems. Microservices are a different thing to just service orientation, but lots of people don't see the difference, and these things matter, in terms of what you can do with a system and what you can't.
Microservices are the most, organisationally, scalable approach to building big systems, but they are considerably MORE complex to understand and to manage in production in a variety of ways. Traceability is a much more complex problem in Microservices, for example.
I think that you need a significantly higher level of design-sophistication to deal with that complexity technically, in terms of design, and organisationally.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Do you think security is different in this respect to any other behaviour of the system? It is not that I expect everyone to get things perfectly right at the start, but that instead of building things that you then gate-keep later, you build your assumptions, the same ones you check in your gatekeeping, into automated evaluations of the system. Of course, this means that you need to find ways that make automating those assumptions easy, but the saving is that once they are automated you don't need to repeat them with every gate-keeping event. So this scales really very well.
Naturally we may miss things, but so will the gatekeeping, so in both cases when we find the mistake we add something to prevent it in future, in the gatekeeping world, that means ever more time and effort, in the automated world it means, maybe some more capacity to run the tests.
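As a trivial sketch of what I mean by automating an assumption, here the (invented) rule that every non-public route must require authentication runs as a test on every pipeline execution, instead of being checked by hand at a gate.

# A trivial sketch of turning a security assumption into an automated check: the route table
# and the rule here are invented; the point is that the check runs on every pipeline run.

ROUTES = {
    "/health": {"requires_auth": False, "public_by_design": True},
    "/orders": {"requires_auth": True,  "public_by_design": False},
    "/admin":  {"requires_auth": True,  "public_by_design": False},
}


def test_every_non_public_route_requires_authentication():
    unprotected = [path for path, rules in ROUTES.items()
                   if not rules["requires_auth"] and not rules["public_by_design"]]
    assert not unprotected, f"unprotected routes found: {unprotected}"


if __name__ == "__main__":
    test_every_non_public_route_requires_authentication()
    print("ok")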
1
-
1
-
Not sure where you got any of that from my video? Jenkins had a security hole that was exploited; since it is software that you run and host on your servers, that is your problem to fix. There is no apportioning of blame here, this is just the reality of the situation. There are links to more detailed descriptions of the recent failures, but these aren't the only ones.
Here is a quote from an infosec site:
"Last Wednesday, on January 24, 2024, the Jenkins team issued a security advisory disclosing a critical vulnerability, CVE-2024-23897, affecting the Jenkins CI/CD tool. This advisory set off alarm bells among the infosec community because the potential impact is huge: Jenkins is widely deployed, with tens of thousands of public-facing installs, and the Jenkins advisory was clear that this vulnerability could lead to remote code execution. Jenkins is a common target for attackers, and, as of this writing, there are four prior Jenkins-related vulnerabilities in CISA’s catalog of Known Exploited Vulnerabilities."
1
-
I think that part of our "duty of care" must be to take on some of that responsibility, because non-technical people making decisions based only on commercial criteria will not have the background, training, context, or knowledge to spot when cutting corners is dangerous. It is us, the technologists, who are best placed to see and understand the risks, so we must be willing to stand up for doing things safely, even if it does take more time - actually, mostly it means going faster, because we end up with simpler systems.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@LC12345 No, I am not. The companies "spearheading modern development culture" aren't the ones that are looking to measure "individual dev productivity". Look at Google, Amazon, Netflix, Tesla, SpaceX, Microsoft and many, many more; they don't work this way, for the very good reason that it doesn't work. This isn't about ducking responsibility, this is about taking responsibility. You don't get responsible teams by using metrics that force them to do dumb things. You get responsible teams by MAKING THEM RESPONSIBLE for their choices and their software. That means that sometimes they decide to do things that the naive, ridiculous measures described in this video will score them down on, like working to optimise their own development experience so that they can work faster and better - that is some of the most valuable work that an effective team can do, and discouraging it in the form of "outer-loop work" is not a way to do a better job, or to grow responsibility, it is the route to the learned helplessness that is very common in dysfunctional orgs.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Inevitably, I think you gave up too soon. My view is that SW dev is "always design", so your first experience should be what you have all the time. The problem with your second was that your pair was bad at pairing. It's almost certainly true that you were bad at it too. It doesn't come naturally to most people. They needed to learn the skill, and you stopped after one attempt. As a good pair you don't interrupt as people are typing; even if you see a typo, you wait until they have paused, are about to move on, or the mistake is about to be ignored. Otherwise, as you said, it is really irritating. Your problem, if you will forgive me, is that this is a working relationship, not a friendship. If what your pair was, or wasn't, doing was getting in the way, have the conversation and tell them what you would prefer. This second part is very difficult for many, maybe most, people. You need to establish enough trust and confidence between you that you can have those sorts of conversations, and you probably won't get to do that on day one. I generally say you need to try pairing for at least 2 weeks; tell me what you think after that.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@maxlarose2258 Tesla is an agile company, so are SpaceX, Google, Amazon, Netflix, Spotify, ING bank, LMAX - want more? It is simply denying facts to dismiss it as not working; it does work, and it works better. Actually, you can go much further back: the Borland dev tools team was famously good and practised a low-ceremony iterative approach that today we'd call "agile", and the software for the Mercury capsule flight control systems, built for NASA in the 1950s, was built with TDD. I have worked on projects where the analysis and requirements teams were dismissed once we got to design and coding!!! Where is the feedback there? Sure there are bad agile projects and better waterfall projects, but all of the functional "waterfall" projects achieved success because there were some people willing to break the rules of the process.
1
-
@RowanGontier Sorry if I misinterpreted what you said, but I thought you said that agile was directionless - not my experience at all. Agile is not anti-planning, it is about planning all the time. Good agile works well when you develop and maintain a big-picture focus. I agree that many poor agile teams don't look like this, but I repeat, look at Tesla or SpaceX, or the construction of the Google or Amazon cloud for that matter.
The whole point, which is what you said you had mostly done, is to make iterative progress in small steps, reflecting on the effectiveness and applicability of each small step. This is not just how "agile" works, it is how science and engineering work too.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Not a religion, there's evidence that it works!
Sure, there are people that dislike it, but I have introduced, by now, hundreds of people to pair-programming.
I don't have research data for this, but I have two strong impressions that seem relevant to me:
1. People dislike pair-programming a lot more before they have tried it, than after. Their impression of what it is like is wrong.
2. Yes, there is a group of people for whom pair-programming doesn't work. This is less than 1/10 in my experience.
On the other hand, pair-programming produces higher-quality code more quickly, and spreads learning in a team better than any other practice (except maybe mob programming). So in my experience, the downside is over-blown, and the upside is under-valued.
1
-
@vyli1 I think that you are mistaken, I almost never say "never". I do say "this works best" or "this works better than that". Sure there are times when for all sorts of reasons you need to build worse software slower, but that isn't really what this channel is about.
There are usually 3 excuses for doing a worse job "Our tech is crap", "Our people aren't good enough" or "Our culture sucks". My advice is to fix those things. Pair programming helps with all 3, so I value it.
As I said, some people don't like it, but they are a small minority, and you have to decide how you cope with them. Are they valuable enough to the team, despite their poorer engagement with it? Sometimes they are, sometimes they are not. There are no absolutes, and I don't claim that there are, but there are lots of ways to do a worse job.
1
-
1
-
@vyli1 Well I can't speak for pair-programming, but as for TBD, it is used at Google, Facebook, Tesla, SpaceX, Walmart and many, many, many more. So yes, thousands of teams do work this way very successfully, and the data says it works better. You seem to be getting cross, but none of this is about personal preference, it is about what works better.
As for the question "if this is so great..." I wish I knew, but I do know for sure, that it is NOT because it doesn't work. Most SW dev isn't terribly good, there's a reason for that too 🤔
1
-
1
-
@JojOatXGME You don't have to do it that way; there are lots of options and you pick the one that best suits your use-case. A Reactive approach doesn't mandate that, though you should make it something that the services themselves don't know, or care, about, to keep the location independence. One of my preferred approaches is "Pub/Sub messaging": you divide the world into publishers and subscribers. Publishers usually publish on a topic of some kind (a collection of related messages) and subscribers listen to those topics, so they don't necessarily see ALL THE MESSAGES, but only the sub-set that they are interested in. But every subscriber listening on a given topic sees all the messages on that topic.
It's quite a nice comms model, and it can be EXTREMELY high-performance if you use broadcast networking rather than point-to-point messaging (e.g. UDP rather than TCP/IP).
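To make the shape of that concrete, here is a minimal, in-process sketch of the pub/sub idea in Python. The Broker class and topic names are mine, purely for illustration; a real system would put a message broker, or multicast networking, between publishers and subscribers.

# Minimal in-process publish/subscribe sketch (illustrative only).
from collections import defaultdict
from typing import Any, Callable

class Broker:
    """Routes messages: every subscriber to a topic sees every message on that topic."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: Any) -> None:
        # Publishers know nothing about who is listening - that is the loose coupling.
        for handler in self._subscribers[topic]:
            handler(message)

broker = Broker()
broker.subscribe("orders", lambda m: print("billing saw:", m))
broker.subscribe("orders", lambda m: print("shipping saw:", m))
broker.publish("orders", {"id": 42, "item": "book"})   # both subscribers see it
broker.publish("prices", {"id": 42, "price": 9.99})    # nobody subscribed - ignored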
1
-
1
-
1
-
@errrzarrr I am not sure if I agree with that or not 🤔
Certainly the introduction of "agile development" had a bad impact on teams that weren't very good at it, and were trying to "follow the recipe". The early agile writings didn't do a very good job, in my opinion, of talking about software design and how it fit. I think that Kent Beck's "Design as metaphor" stuff in "Extreme Programming" confused a lot of people. Lots of people made the, naive in my opinion, assumption that you had to take your brains out to practice "agile" and not do design - disaster followed.
The alternative school of thought, the one that I subscribe to, is that agile approaches in their purest form - make small incremental changes, gather feedback, do more of the stuff that works and less of the stuff that doesn't, treat design and architecture as an evolutionary, exploratory process - are ALL ABOUT ARCHITECTURE & DESIGN!
I certainly don't think that architecture that doesn't take that approach is better, or was better before. Architecture as a profession and discipline was too rigid, overly bureaucratic and mostly VERY poorly done. I spent several years during this period working on what were effectively "rescue projects" where people had made such a hash of the project, and the architecture, that they needed rescuing.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@pilotboba My point was not really about a difference between "stories" and "user stories", but that what you listed didn't really, to my mind, count as either. A user doesn't want the accounts to display in under 3 seconds; they either want the results much sooner than that (all the research says under 400ms), or they want an indication that this will take time, in which case you have more than 3 seconds. A user doesn't care at all that the payroll processes 1000 employees a minute, they want something else - what is it? These are tasks, from the perspective of a development team, so they aren't user stories.
The limits that you mention are a guess; what are the reasons behind those guesses? They may lead you to better solutions. Capture that in the requirements, and you may end up with these numbers, but these numbers aren't the real goal - better to measure achievement of the real goal.
1
-
1
-
@Tekay37 So let's imagine how we could re-arrange that example to work. In reality you probably wouldn't want to break things up this way, because sending a single line of CSV across a network would be an expensive (slow) way of doing things, but let's ignore that for a moment.
Let's imagine that each line represents something interesting. We could decouple the two pieces and make them more async like this: Service A (reading the CSV) sends a line, Service B adds that line to the total, Service A sends each other line in turn, Service B updates its total, and when the last line has been sent Service A sends "Finished" or similar. Now we could handle that in a variety of ways: Service C could listen for 'Finished' and ask B for the total, or Service B could respond to 'Finished' by sending a new message "Total is X".
It's different, but it still works.
If we care, Service B could maintain what state it was in. If it needed to know about "Finished", then when it sees the first line it could change its state to 'Working' or something more appropriate, and when it sees 'Finished' it could change that status to 'Totalled' or whatever makes sense.
Now if something blows up, a meteor hits the data-center with Service A, we can tell what state Service B is in.
This is a different way of thinking about problems perhaps, but I don't think it is more difficult, and it is a lot more resilient in the face of failure and/or load.
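A rough sketch of that flow in Python, with an in-memory queue standing in for the messaging between the services. The message shapes and service functions are my own illustration of the description above, not a prescription.

# Sketch of the CSV-totalling example as message-passing services (illustrative only).
import queue

messages = queue.Queue()

def service_a(csv_lines):
    """Reads the CSV and sends one message per line, then a 'Finished' marker."""
    for line in csv_lines:
        messages.put(("line", int(line)))
    messages.put(("finished", None))

def service_b():
    """Keeps a running total and its own state; announces the total when finished."""
    state, total = "Idle", 0
    while True:
        kind, value = messages.get()
        if kind == "line":
            state, total = "Working", total + value
        elif kind == "finished":
            state = "Totalled"
            print(f"Total is {total} (state={state})")
            return

service_a(["10", "20", "12"])
service_b()   # prints: Total is 42 (state=Totalled)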
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Well I don't think it should be only the responsibility of "managers" to have a vision of where the project is going, but yes, that was certainly part of the problem. The trouble is that working iteratively, as agile demands, is great for giving you good visibility of where you are, but you still need to monitor that and have an idea of where you'd like to be, and in a commercial org, decide whether or not what you are doing is still worth it.
I think that this is a more unusual type of failure; most orgs try to plan too much, but this one is certainly at the other end of the spectrum from that.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
None of this aligns with my practical experience. Pair programming certainly keeps us talking, but it enhances code clarity, and its ability to communicate meaning, because pairing doesn't really work well otherwise.
I have spent years working on legacy code bases, and I know where I start, and it is never with the comments. If there are tests, I will start there; sometimes they are bad, and so not a big help, and then I will look at the code and maybe supplement that with the tests. If the tests are good, they will tell me what is going on. Only after all of that will I bother looking at the comments, because 99% of the time they offer zero value to my understanding. They are either so obvious as to be a waste of time, or so confusing, or out of date, as to be meaningless. If someone can't clearly express their ideas in code, what makes you think that they can clearly express their ideas in text? I don't buy the assumption that writing clear text is any easier than writing clear code. Human languages are less precise and much more nuanced than programming languages, so the truth of the system that matters is always in the code, not the comments.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Not sure if "idealistic" is the correct word, but I do agree that this is a structural problem in most orgs. I don't like the word "ideal" because it assumes that it is some shiny, unobtainable vision of perfection. It is more than that. It is what is needed to do a high-quality job of building great software. Some orgs may choose not build great SW, but if that is their goal, then you can't defer all of that decision-making to other parts of the org. The devs MUST be responsible for it. If you are in a bad place, you either live with it, try to change it or move to a better place. I think that often we devs are complicit in this, we need to, at least, make it clear that the choice people are making by organising in the "real-world" example you are describing, is to build worse SW slower. If that is what they want, keep working that way, but if they want to build better SW and to move faster, then the stuff that talk about on this channel doesn't seem to be optional to me.
1
-
1
-
1
-
1
-
1
-
1
-
Well, this isn't theoretical, this is how lots of orgs work, and many of the orgs that we think of as being good at software. Tesla, Google, Amazon, SpaceX, 18,000 people at Walmart, and many, many, many more.
So you may not see it, but it does work and there are fixes or alternatives to all of the examples that you give for why it can't work. This is a better alternative to the approach that you assume is the only one. This is how people work in companies that produce better software faster than most of the others.
Building locally is not the "truth of the system", so however often you build locally, you don't know that I wrote some code that breaks all your assumptions until I commit my change. At which point I may, possibly, have invalidated everything that you did. So EVERYONE committing to a shared version of the TRUTH is what CI does, and yes, we commit every 15 minutes so that, at worst, you will lose 15 minutes of work when I screw you over with my change.
1
-
@cla2008 Nope, sorry, it works better for small companies too. In fact, that is where this approach started: in very small teams. I was head of dev for a startup, and it was the most effective team that I have worked on. It grew to be a big company later, but it started with 4 of us cutting code. At one point, after we had grown a bit, our team of 12 - 16, which was building a financial exchange and all of the supporting software associated with it, was out-producing a team of 120 people that were writing integration code so that their already-existing system could talk to ours. It took them 6 months; we did the same thing for another system in 2 weeks.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Sorry, but I disagree with nearly all of the pairs that you rule out here.
Junior - Junior, they learn really fast from one another's mistakes. If they are really new, they need some oversight, someone watching from a distance, maybe keeping an eye on their commits, and dipping in occasionally when they are stuck.
Junior - Senior, I am not sure what "senior developer" means, if it doesn't mean they have a responsibility to coach less-senior people. So this is the perfect chance. Mostly the senior should not "drive" and also should not "tell the junior what to type" they should treat it as a learning opportunity for the Junior and help to guide them to doing better work.
Senior - Senior, they usually argue a lot, but can create genuinely great work together if they are good. Learning nuances from each other along the way.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
I am certainly biased in terms of my experience, as I admit several times in the video.
On the specifics of the copying though. For immutability there must, at some point, be a copy. Yes, absolutely, you can do clever things with compilers and runtimes to minimise the costs, but there must be 1) more code and 2) more memory being used and accessed.
1) means a few more CPU cycles, in the worst case 2) could mean a cache miss and so orders of magnitude losses in performance.
I am not claiming that you will always get "orders of magnitude losses", but you will see them sometimes, unless you pre-allocate memory for copies - and you can't do that! This is as much down to the physical hardware as it is the software.
Now most of the time none of this matters, but at the limits of performance a Functional approach can never match a non-functional one, because there MUST be more work to do, even if it is only a little more work.
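As a trivial illustration of the copying point (my own example, not from the video): updating a mutable structure in place touches one slot, whereas the immutable style builds a whole new structure each time, which is where the extra work and memory traffic come from.

# Illustration of in-place update vs. copy-on-update (not a benchmark, just the shape of the cost).
values = [0] * 1_000_000

# Mutable, in-place: touches a single memory slot.
values[42] = 99

# Immutable style: every "update" builds a new collection - more code running and more
# memory being allocated and traversed, which is where the extra CPU cycles and the
# potential cache misses come from.
immutable_values = tuple(values)
updated = immutable_values[:42] + (99,) + immutable_values[43:]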
1
-
1
-
1
-
1
-
Yes, I think that this is one of the more effective approaches to make this work. I think that you're also right about preferring this to be a product team, rather than a research thing. Your long-term goal is to convince others that there are better ways of working. To do that you need to do something that matters, rather than just research; it will help you to avoid people dismissing it as being unrealistic.
I'd recommend that you try to choose a product that is going to be easy enough to start trying new things with - so not too much legacy, not too many dependencies on other teams and so on - but also important enough that if (when) you succeed, people can't dismiss it as trivial. Ideally something stand-alone, new, but also something that has real business impact. Then pick people who are keen to try working differently - stack the odds in your favour a bit.
Finally, to achieve your longer-term goal, you will need to do a bit of "internal technical marketing" - you will want to tell people about your success and maybe your problems. Don't let that get in the way, but be open to talking about it and doing a bit of "selling" of the ideas. Maybe share your trials and tribulations and explain how you overcame them internally - this last will help you to get more buy-in from others that aren't personally involved, and will make convincing people later a lot easier.
Good luck, and I hope some of this helps.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Yes, I agree, I think that anyone learning to code should start with TDD. I recently taught a friend their first steps in coding, and did exactly this. We started by writing a test to say what we wanted our code to do - add a couple of numbers - and then moved on from there. I taught him the basic principles of coding - functions, variables, loops, conditionals - all from simple tests, which I think made it easier for him to learn, and from my perspective, he was also learning some valuable lessons subliminally. I don't know if it will work in the long run, but it worked very well in the context of this first step.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
You speak as though what I describe is impossible. It is not; it is widely practised and produces big, complex, high-quality systems. So what you are really describing is not that teams "need a development branch", but that sometimes they prefer to have one.
The evidence is against them, as I mention in the video and reference in the description. If you want to create higher-quality software more efficiently, then the approach that you describe isn't as good as merging to trunk on, at least, a daily basis; however, branching is what lots of development teams do, and prefer.
The implications of working on master/trunk all the time are profound, and important, but there is certainly no reduction in efficiency or quality. For changes that are experimental or need customer review before more wide-spread use, the approaches that I describe at the end of the video can cope with those without the down-sides of VCS branching.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@cronnosli CD is about "working so that our software is always in a releasable state"; that doesn't necessarily mean pushing to production every few seconds. My team built one of the world's highest performance financial exchanges, and the authorities rather frown on "half an exchange" with other people's money in it. It was 6 months before we released anything, but we practiced CD from day one. At every stage, our software was of a quality, and tested enough, and audited enough, and deployable enough, to be released from day one. So you do that.
How do you measure it? Use the DORA metrics, you will score poorly on "Deployment frequency" but as long as you "could" deploy, I would count that for now, and that will get you on the right road.
The "intrinsic value" is in multiple dimensions, but when you are in the mode of not being able to release frequently, for whatever reason, then working so that you a capable of releasing your software after every small change is a higher-quality way of working, and stores up less trouble for when you do want to release. It is measurably (see DORA again) more efficient and more effective.
As you develop the ability to do this HOWEVER COMPLEX YOUR SYSTEM you get better at it until ultimately you can release more often, if you choose to, even if that isn't your primary goal. This means that whatever the change, you can get to the point of release optimally, because your SW is always in a releasable state, and the only way that you can achieve that, is to MAKE IT EASY TO MAKE RELEASABLE SW!
On your last example, no, it doesn't mean "this 1.5 months of work has no value because it wasn't fast enough", but if you had been practicing CD, it would probably have been easier for you to find the problem, because your code would have been better tested and so easier to automate when you were bug hunting. This isn't a guarantee, but more a statistical probability. Let's be absolutely clear about this: CD works perfectly well, actually better than that, CD works BEST, even for VERY complex software and hardware systems. SpaceX haven't flown their Starship recently, but the software is still developed with CD.
1
-
1
-
1
-
1
-
1
-
1
-
Not the case I am afraid. I built one of the world's highest performance financial exchanges, I have worked with several banks that work this way. This is how lots of big orgs, building complex software work, it is a matter of how you organise development, and how you design the software.
A few more name-drops: Google, Facebook, Netflix, Amazon, Spotify, Tesla, SpaceX, CitiBank... It's a long list!
1
-
1
-
1
-
1
-
Sure, you can do that but it isn't CI. The problem is not one of words, it is a matter of what it is that you are evaluating when you run the tests. If the version of the code that you run the tests against is never a candidate to end up in production, then what does a passing, or for that matter a failing, test mean? It may tell you that you have a bug, if the test fails, but it doesn't tell you that the thing that you want to release is releasable. Inevitably, you will have to run tests over and over on different branches, or you will take the chance that there are no bugs in the release branch, even though you never tested it. So you are either inefficient, because you ran the same tests many times, or you are taking a big risk. This doesn't seem optional to me. You either test the truth of your system (which is what CI does) or you make a guess that maybe stuff will be ok.
1
-
1
-
It depends on the bug, and pairing teams would decide whether to work on the bug alone, separately, or divide the work between them before getting back together to decide on next steps.
This is engineering, not religion; the idea is to do what works. It is an illusion that pair programming is a less efficient way of working. Studies of pairing fairly consistently agree that 2 people working together complete the same task in around 60% of the time of a single person, so person for person, fairly similar levels of productivity, but they also produce significantly higher-quality work, which saves time on bug fixing, problem determination, going slow in future because your code is worse, etc etc etc. There is no concrete data specifically on the overall impact of these less obvious wins from pairing, but there is good data for associated practices, like CI, CD and the associated ways of working. Teams that practice these things spend 44% more time on new features than teams that don't. This is useful because this is the impact of working with higher quality - the data is clear on that. So we can say "working with higher quality is more efficient", and we can say "pair programming is one route that improves quality", so we can guess that pair-programming is more efficient than not. That happens to agree with my subjective experience too. The most efficient teams I have worked on did pair programming.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
I am not really in a position to gauge the current job market for web developers, and certainly not in New York. There are certainly different skills that help with web development, but they are the aesthetic skills of what is nice to use, rather than a big difference in technical skills. In general I think it is good for any programmer to have experience of several different languages and approaches, it broadens your perspective and you get to see more clearly what you like and dislike in each.
Javascript and C# are closely related, which may be why you like C#; there is enough new stuff to learn there though, so nothing wrong with learning C#. Python is very popular and a nice language to program in, and then there are the functional languages, where there is more new stuff to learn.
In general, I don't think that programming as a job is going out of fashion. It is as secure as any other job, and a lot more secure than most. Programming is creative enough that it will be a long time before machines can replace people, and when they are good enough to do most of the programming, nearly every job will be at risk.
Whether web programming is a better, or worse, bet than back-end programming, I think that there is demand for good programmers in both. The best advice I can offer is to work to be a good programmer, and work on the important programming skills that will be transferable whatever you work on - modularity, cohesion, good separation of concerns, loose-coupling, abstraction. Develop skills in design and TDD; I think they will help you stand out from people who only know language syntax - which is not enough, alone, to do a good job.
1
-
1
-
1
-
1
-
1
-
@dickheadrecs I assume you mean "SW architecture", I think architecture is design, but I also think that coding is design, and that engineering is the guide-rails that help us to design better.
Engineering doesn't tell us the right answer, it rules out some of the wrong answers. We still need to be creative and innovative, within the guide-rails of engineering, to solve problems well. That happens at different resolutions of detail. Architects consider systemic, and often the socio-technical context of the development, but good engineering guide-rails are still the North-Star. The most junior dev, writing small, simple code, still needs those same guide-rails to help them determine the difference between a poor solution and a good one, even if they are only dealing with a handful of lines of code.
So far, I don't think we, the SW industry, have done a good enough job of identifying those guide-rails. I have had a go at that in my latest book - "Modern SW Engineering".
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Not optimistic at all, because that isn't how it works. Sure, if you can complete a whole feature within a working day, that's fine, but it is also fine to commit small incremental changes that don't yet add up to a whole feature. For example, I practice TDD, so my usual pattern is "RED - GREEN - REFACTOR - COMMIT" (where "COMMIT" means commit and push to CI).
Continuous Delivery is the big-brother of Continuous Integration, in CD we not only integrate all the time, but after every small step we reject changes that aren't deployable. The consequence of this is that we need to be able to make changes, that don't yet add up to a whole feature, and even release them into production in that state.
This video explains some ways to do that.
https://youtu.be/UwC7nIuiqEw
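One widely used way to make incomplete work safe to commit and even release is a feature flag (dark launching). Here is a minimal sketch; the flag name, the functions and the environment-variable mechanism are mine, purely for illustration, not taken from the video.

# Minimal feature-flag sketch: incomplete work can be merged and even released,
# but stays switched off until the feature is finished (names are illustrative).
import os

def new_checkout_flow_enabled() -> bool:
    # In a real system this would come from configuration or a flag service.
    return os.environ.get("NEW_CHECKOUT_FLOW", "off") == "on"

def checkout_v1(order):
    return f"checked out {order} (v1)"

def checkout_v2(order):
    return f"checked out {order} (v2, work in progress)"

def checkout(order):
    if new_checkout_flow_enabled():
        return checkout_v2(order)   # partially built, safe because it is dark in production
    return checkout_v1(order)       # existing behaviour, unchanged for users

print(checkout("order-1"))  # v1 unless NEW_CHECKOUT_FLOW=on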
1
-
1
-
In one sense yes. The goal of TDD is not to "test every line of code", but rather to build good code that does what we want, to do that we drive the development and design from tests.
In order to "build good code that does what we want" we need to be clear, in the form of a test, about what we want the code to do, we don't need to, and ideally don't want to, say anything at all about how the code does what it does.
So I don't want to test every line of code. However, if I am writing "good code" every line has a purpose, and I may want to test that that purpose is met.
This is a VERY important distinction that I think people often miss about TDD. You don't start from the code and then imagine how to test it, you start from what the code needs to achieve, write that down as a "test", and then figure out how to make the code do that.
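A tiny, hypothetical example of that order of working: the test states WHAT we want first, and only then do we write the code that satisfies it. The function and test names are invented for illustration.

# TDD order of working (illustrative): the test states WHAT we want,
# the implementation decides HOW - and is written only after the test exists.
import unittest

def total_price(items):            # written second, to make the test pass
    return sum(price for _, price in items)

class TotalPriceTest(unittest.TestCase):
    def test_totals_the_prices_of_all_items(self):   # written first
        items = [("book", 10.0), ("pen", 2.5)]
        self.assertEqual(total_price(items), 12.5)

if __name__ == "__main__":
    unittest.main()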
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Dude, you are simply assuming that because you haven't seen something work, then it can't possibly work.
None of this is theoretical, this is how some very VERY successful companies work, and build software that is probably more complex than you think.
I worked on a team that built one of the world's highest performance financial exchanges this way. This is how Google, Facebook, Tesla and SpaceX work, as well as many others.
The trick is to allow yourself the freedom to make a change that doesn't yet add up to a whole feature, but still release that change. This is a different way of thinking, but it certainly works.
If your team can't keep master passing its tests through many small changes, that is down to your team, not the approach. Keeping everything working through many small changes all the time is the definition of Continuous Integration. So without that ability, you can't, by definition, be practicing CI, and CI predicts that if you practise it, you will build high quality software more quickly than without.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Maybe, though I confess I think it is more than that. I think that is maybe a "mechanic's viewpoint" 😉
The kind of 'testing' I mean when I describe acceptance tests and BDD is MUCH more than "interface validation"; it is about exploring and specifying the behaviour of the system. This is design - functional design that captures "WHAT we want the system to do" without getting lost in the niceties of "HOW we want the system to do it". Interfaces are still part of the "HOW".
If I am building Amazon, I can say "I want to search for a book, and add it to my shopping cart". That is a perfectly valid, very precise specification of what the user wants from the system, without technical detail of HOW to achieve that. Once I have that captured as a BDD scenario, now I am free to implement that behaviour however I want, including with nice interfaces or horrid ones. That is a separate part of the problem that we always need to solve in software development.
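As a sketch only, here is roughly what capturing that scenario as an executable specification could look like. The ShoppingDsl class and its methods are hypothetical, invented for illustration; the point is that the test speaks only in terms of WHAT the user wants, and all the HOW is hidden behind it.

# Hypothetical executable specification for the "search and add to cart" behaviour.
# The test expresses only the desired behaviour; ShoppingDsl hides all technical detail
# (UI clicks, API calls, whatever) behind a problem-domain vocabulary.
class ShoppingDsl:
    def __init__(self):
        self._catalogue = {"Continuous Delivery": 39.99}
        self._cart = []

    def search_for_book(self, title):
        return title if title in self._catalogue else None

    def add_to_cart(self, title):
        self._cart.append(title)

    def cart_contains(self, title):
        return title in self._cart

def test_user_can_search_for_a_book_and_add_it_to_their_cart():
    shop = ShoppingDsl()
    book = shop.search_for_book("Continuous Delivery")
    shop.add_to_cart(book)
    assert shop.cart_contains("Continuous Delivery")

test_user_can_search_for_a_book_and_add_it_to_their_cart()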
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@davidvernon3119 Sorry, but it doesn't happen at all if the software isn't changed. The bits that represented the system, still represent the system. So it is the act of people changing the software that causes the rot. So if we change how people undertake that change, so that it doesn't introduce the "rot" we can cure the effect of systems getting worse and worse. In fact, I was recently told by someone that still works on a system that I helped to create, that now, 15 years later, it is still the best system to work on that they have ever seen. Not saying that we did everything perfectly at the start, we didn't. But what we did get right, was that we established a dev culture that was good, and durable. So all of the people since have kept the code in a good state.
This is all about dev culture. There is nothing inherent in software that makes it rot, so we need better approaches to development and development teams to help them to avoid introducing the rot.
1
-
1
-
1
-
1
-
1
-
@sanler2937 Well you can avoid it driving your dev process in bad directions, and many orgs do - Tesla, SpaceX, Siemens Healthcare to mention a few. Sure, there may be compromises with regulations that were formed before we learned that "small frequent changes are safer", and you may have to work around them, but as I say, many orgs have found ways to do that. In most orgs, the constraints that they work to, and assume are forced on them by regulation, are invented by them, and if you look at the regulation with fresh eyes you can find other, more effective, solutions. I spend a lot of my time working in regulated industries; this is rarely a barrier.
A medical device manufacturer I worked with is looking at different ways to release their SW to minimise the costs of certification, and at building tools for customers that make the certification process more efficient and effective, for example.
1
-
1
-
1
-
The problem is that CI is widely misunderstood. I am afraid that what you just described as CI, isn't. It doesn't match the definition. You aren't integrating your changes with those of your colleagues. Otherwise, how could "CI happen on every commit, even before a pull request is created"? You are running tests on your local branch; that is not integrating them continuously with your colleagues' changes - so, by definition, not CI!
That is the problem, your practice misses important value that comes from CI, and statistically, teams that work the way that you describe, don't get the value of higher quality output, and faster production, that teams that practice CI do. Which is, really, what I am trying to describe with this video.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Well, "trying out ideas and seeing if they work" isn't science. What I am talking about is something different to that. Science is based on four steps, Characterisation, Hypothesis, Prediction, Experiment.
Then, in the experiment, we control the variables sufficiently to try and clearly see the result that we are interested in. We don't try to "prove" our ideas, or choose answers after we have seen what works; we try to disprove our answers, because if we try to 'prove' answers we will find ways to 'prove' wrong answers, and in reality you can't 'prove' things definitively.
In software this is most clearly seen with tests. Don't write tests to "prove" that your software is good, you can never have enough tests! Instead, write tests that, if they fail, will cause you to reject the change. This is called 'Falsifiability'.
These, and a few more, are the things that I mean by 'applying science' to software, not just "trying stuff to see what works".
1
-
1
-
1
-
1
-
1
-
1
-
Not really. The difference is the degree to which the producer and consumer of messages are coupled through their understanding of the message content. If my service is delivering, I don't know, video-streamed content, then I may not care about the actual video content, but the codec that translates it does. So my service may be ignorant of the nature of the content, your service that receives the video may be ignorant of the content, but at some point the thing that shows the video can't be. Where the "understanding of the content" takes place matters a lot; the nature of the content, not so much, except to understand where "understanding" takes place. Loose coupling is a way to defer, or manage the limits of, understanding. Tight coupling, like with a codec, requires standards that don't change.
None of this changes whether you are thinking about serving "data" or "events".
1
-
1
-
That is not really about "Tesla", that is about "engineering". No one, however they work, will get everything right first time. Just in cars, pretty much every car manufacturer has had a recall, or had design issues that hurt people. Engineering is about learning from our mistakes, not some illusion of unachievable perfection. Of course it is bad if someone's software hurts people, but the mark of good engineering is correcting the problem as soon as you see it so it doesn't happen again. I don't know if Tesla will always do that, but my bet is that they are in a better position to do it than almost any other car maker, in the sense of detecting problems sooner, and fixing them more quickly, by virtue of how they work.
1
-
1
-
1
-
1
-
1
-
1
-
But what you describe aren't, by definition, "microservices", because they aren't "independently deployable"; without that, what you have is a distributed, service-oriented monolith. Which is fine, but if you keep each service in a separate repo, and then evaluate them all together prior to release, you get the worst of both worlds: neither the speed and efficiency of more coupled systems that you can achieve with a distributed monolith, nor the org scalability and deployment independence of a microservice. So you pay a tax on your development twice for no gain.
Your final point on security has nothing to do with "microservices" either; it may be about separation of concerns, but that is a bigger, more important, topic!
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@georged8644 Of course all of that is your choice. The practical things of keyboards and mice are easy enough to accommodate if they were the only barrier. One of my friends prefers a Dvorak keyboard, so we had a workstation with two keyboards etc etc.
Personally I am a night-owl, so 03:30 start times are out!
I have worked on a couple of teams, in the early days of XP that enforced pair-programming on any production code. I think that was wrong.
My preference, when I was in charge, was to set the expectation that pairing was the norm, but to allow people to decide.
I think that it is more, much more, than "management dictated babysitting". When I was a technical manager of technical teams, I knew of no better way to grow and strengthen a team, but I personally also find that, maybe not always, but often, I write better code when I work closely on it with other people.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@defeqel6537 Yes, but the point is how long is that true, machines in general can work with MUCH greater precision than people. Machines can be produced at large and small scale. We already use machines to assist in surgery, in every surgery, a knife is a machine too. If we have AI that is as smart as us, it will be more capable than us because it will exert greater control at different scales, large and small. So sure, they can't beat good surgeons yet, but do you believe that will always be true? I certainly don't, I don't know when AGI reaches human levels, but it seems closer to me now than it did a month ago. That may be wrong, but it seems to me to be almost certainly within the lifetime of most people alive today.
1
-
Thanks, it is a tricky line to walk for YouTubers, we get more views if we choose click-baity thumbnails and titles, but maybe send the wrong message to people looking for serious content.
For an educational channel, like this one, we have to "play the YT game" to some extent, assuming that we want people to see our content, but we also don't want to be misleading. We try to walk the line between these two, and are willing to accept a certain level of compromise on how "click-baity" our titles are, as long as they accurately represent the content of the episode, but also we try to make sure that the content is interesting and useful, and never only sensational.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Yes, it is that unfamiliarity that I am really talking about. It seems to me that if you learned algebra at school, most of us, then other forms of programming are less of a step. "Fold" is meaningless unless you know what it does, and the syntax is meaningless outside that context really. The -> operator has a special meaning that you need to understand and so on. Of course, once you understand this stuff, it is easy to read, but it isn't really building on understanding that we already had. I don't know how much that costs in the long term, but I think it is a small barrier to readability. Functional programming seems to suffer from poorer readability to me. I have heard it said of LISP that "first you build a new language to solve your problem with", this is powerful, but also a bit more arcane.
1
-
1
-
@jfsanchez91 The data is referred to on-screen at 15:52. CI is not best for open source, with committers who aren't known to one another, that is really what feature branches and pull requests were designed to handle, pretty sure I say that somewhere in the video?
The rest of your assertions are just wrong I am afraid.
CI and Trunk Based Development is how Google organise their work for over 25k devs and >10 billion lines of code. Amazon work this way too, as do Tesla, Netflix, Spotify, SpaceX and many many others. Code review is a separate issue and works fine with CI and TBD, you just organise things differently. I generally prefer pair-programming; it is a better "code-review" than code-review or PRs alone, it doesn't add unnecessary delays, and it has lots of other additional benefits.
This video describes other alternatives: https://youtu.be/WmVe1QrWxYU
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@MikeStock88 Yes, you can use "test avoidance" strategies to skip tests that don't matter; it's just an optimisation approach, and there are others too.
But I think "Nightly" is mostly too slow for decent CD. If I am a dev and I introduce a failure on day 1. I don't find out about it until day 2. I fix it immediately, and commit the change but the code isn't releasable until all the tests pass, so it is not ready to go until day 3. Continuous Delivery is working so that your SW is always in a releasable state, once every 3 days isn't "continuous".
The problem with the delay is that you end up with an irreducible fraction of tests that are always failing, now you have to figure out "is it OK to release with these failing tests in the build". I want my deployment pipeline to be definitive, if the tests all pass, I can release, if a single test fails, I don't. That means that I need feedback multiple times per day to get that kind of insight.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
No I prefer to run them all the time, but not necessarily on every commit. I divide deployment pipelines into 3 phases, commit, acceptance and production.
The commit phase runs on every commit and is optimised to give fast feedback. Its job is to keep development flowing.
Acceptance is about determining releasability, tests are more complex, more "whole system" and so slower.
Prod is about release and monitoring and gathering feedback from production.
Let's imagine commit takes 5 minutes and acceptance 50. So during a single acceptance test run, we may have processed 10 commits. When the acceptance run finishes, the acceptance test logic looks for the newest successful release candidate - generated by a successful commit run, and deploys that and runs all the tests. So Acceptance testing "hops over" commits, which is ok because they are additive, the 10th commit included all the new features of the previous 9, so acceptance testing is testing everything as frequently as possible.
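A small sketch of that "hop over" selection logic (entirely illustrative; the candidate records here stand in for whatever your pipeline tooling actually produces).

# Illustrative sketch of the acceptance stage picking the newest release candidate.
# Candidates are the outputs of successful commit-stage runs, in commit order.
commit_stage_results = [
    {"commit": "a1", "passed": True},
    {"commit": "b2", "passed": True},
    {"commit": "c3", "passed": False},   # failed the commit stage - never a candidate
    {"commit": "d4", "passed": True},
]

def newest_release_candidate(results):
    """Acceptance testing skips intermediate candidates and takes the latest good one,
    because each commit is additive and includes the changes of those before it."""
    good = [r for r in results if r["passed"]]
    return good[-1] if good else None

candidate = newest_release_candidate(commit_stage_results)
print("Acceptance stage will test:", candidate["commit"])   # d4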
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@melmartinez7002 Yes! I think that is true. What is the point of holding things in separate repos, and then testing them all together later? I think that it is simplest when you use the "scope of evaluation" to define the scope of a deployment pipeline, and the easiest, most accurate picture you get is when all of the stuff that you evaluate in your Pipeline is in a single repository. I'd ask the question the other way around, if you can't release, without testing the output of multiple repositories together, what does keeping that code in these separate repos do, other than slow you down and make things more complicated?
Google keep nearly all of their code in a single MASSIVE repo, 9.5 billion lines of code! Big repos aren't necessarily bad, small repos aren't necessarily good. The problem we should focus on is "is my change good to release", then do whatever it takes to get an answer to that after every tiny change.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
It is possible that you are missing my point. It is not that non-technical people need to care, or even have any direct stake in the readability of your code; rather it is that you could, with relative ease, explain it to them so that they can read it. This is a qualitative measure of its readability: if you need deep experience and extensive training in software development to read the code, that isn't good enough to qualify as "readable". So, sure, you want tech leaders and architects to be able to read it, but you get that FOR SURE when almost anyone can read it.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Which is helped by pair programming. Sure, if people have tried it and rejected it, that is fine. My point in general is that people should try these things. I have not seen a team really "try" pair programming and then reject it; if they do, I have no comment. Most orgs and teams and individuals reject it without trying it. They may have sat together on something for a few minutes, but it is not the same thing. My experience is that about 80% of people who try pairing, doing real work full time for at least a couple of weeks, not only don't reject it, but they love it. Then there is a small group, about 1 in 10, who really hate it and can't cope.
I don't advocate forcing anyone to do anything, not least because it is counter productive, but that doesn't mean that there aren't good ideas, that would benefit people that they reject anyway. Pair programming works very well indeed in strengthening team culture.
1
-
1
-
1
-
1
-
Sure, and retrofitting this stuff is more difficult than starting from scratch, but while I agree with the general aim of your comments, I disagree with some of your conclusions. Sure, retro-fitting TDD to an existing codebase is more difficult, so organise things so that new work is possible with TDD, but don't retro-fit to code that you aren't changing.
"CI you need to have reliable unit tests" well yes, but if you have 1 reliable test that stops you making a common mistake, that's better than none.
Pair programming is a choice, it is not difficult to adopt if people want to do it, so start discussing the reasons why the team might like to try it.
I agree that some of this stuff is difficult to change, but it is not impossible; in fact I make a living helping companies and teams do that, and we almost never get to start from a blank sheet.
Step 1 in solving any problem is identifying that there is a problem, step 2 is coming up with something that may address the problem, and step 3 is trying it out to see if you can make it work.
Here I certainly try and help people with steps 1 & 2, the trouble with step 3 is that it is a bit more contextual, but there are plenty of videos here that try to tackle step 3. Checkout my stuff on refactoring, or acceptance testing.
The only bit that I disagree with is the assumption behind "250% more effort is hard to sell" - So don't! Don't structure this as "250% more effort" find small changes that you can make that don't really add more effort, they just change where you apply the effort. Make the code that you are working on now, today, a little better. Write new code with tests, and do the work to isolate that new work from the big-balls-of-mud elsewhere so that you can. I think that you get to, what I concede can look like some fantasy Nirvana, by many small, practical steps, not by huge stop-the-world-efforts.
1
-
@erikf790 Such a codebase is never going to be pretty, but you can usually make it workable. Most organisations that we think of as good at CD started from a legacy codebase! Amazon used to be a 3-layer PHP & relational database application! Software is infinitely malleable, so the question isn't really "is it possible" - it's always possible - a better question is "is it worth the effort", and that depends on the system. If you are about to retire it, then "no!". If this is the core of your business, and your business is uncompetitive, then "yes!", and there are lots of shades of grey in between. I know it's a nice thesis, but I don't buy the idea that these ideas are coupled. Yes they are coupled at the limit, but if you have no tests and you add 1 then it's an improvement. It looks coupled when you see a highly optimised, effective version, but the journey is one of lots of independent, parallel, steps.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
I think that is probably true, but I am still sceptical because English and Java evolved to address VERY different problems. English, like all human languages, is vague and imprecise; Java is a technical language, designed explicitly to define things precisely. I don't think that they are doing the same job, and while I think I would be VERY limited trying to have a conversation in Java, I also think that I am very limited in trying to express a precise algorithm in English. So I am not sure that English will ever be a more efficient way to define code, unless that code is something that has been done before, and you can refer to it. Which is what the LLMs are really good at.
I am pretty certain that one day computers will write all the code, but I am also pretty sure that by the time they can do that, it will be because they can do everything that we can do, only better, because they will need to be as creative as us to solve the problems from limited info. As long as we are specifying the problems to them, I think some form of computer language will always be better at that than some form of natural human language.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
The "clone" approach is sometimes implemented as "branch by abstraction" you are right, this is a form of branching, but it is different. It branches at the level of behaviour, but not code. You create a common interface, develop a new version alongside the existing one, and eventually, when you are ready, switch traffic over to the new thing. This keeps the code in one place, meaning everyone can see what is going on and it doesn't get in the way of other refactoring, because the codebase isn't branched, only the behaviour.
I'd try that before creating a branch for the change!
You can do this at very large scale. Pinterest did it for a rewrite of their entire site a few years ago.
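A minimal sketch of branch by abstraction, with invented names: both implementations live on trunk behind one abstraction, and a switch decides which one handles the traffic until the new one is ready to take over completely.

# Branch by abstraction, sketched (illustrative names): the "branch" is in the design,
# not in version control - old and new implementations coexist on trunk.
from abc import ABC, abstractmethod

class PriceCalculator(ABC):
    @abstractmethod
    def price(self, item: str) -> float: ...

class LegacyPriceCalculator(PriceCalculator):
    def price(self, item: str) -> float:
        return 10.0                      # existing behaviour, untouched

class NewPriceCalculator(PriceCalculator):
    def price(self, item: str) -> float:
        return 10.0                      # new implementation, built incrementally

USE_NEW_CALCULATOR = False               # flip when the new path is complete and proven

def calculator() -> PriceCalculator:
    return NewPriceCalculator() if USE_NEW_CALCULATOR else LegacyPriceCalculator()

print(calculator().price("book"))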
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@damianmortimer2082 Software is strange stuff, and things that seem obvious in physical engineering aren't always so obvious in software. Plans matter to the degree that you need a sense of direction. Who is doing what on what day matters less. I agree with you that re-planning is vital as soon as the circumstances change. So you need to optimise so that you can see the circumstances change as quickly as you can, and then react and change your plan. That is how people work when situations are fluid.
Software development is always an exercise in learning, so as we learn new things we need to change our plan.
If we were constructing a building, what would you think if I created the foundations for it, but decided that I would leave it till later to decide if my foundations could support the whole building? That would be irresponsible. The trouble is that software is so flexible that it is easy to make this kind of mistake. It is also so variable that there is no strong agreement on what "able to support the building" means in any given context. A building is unlikely to start out being planned as a 5 story structure, and then unexpectedly, based only on its popularity, end up needing 5,000 floors. This happens in software! It is difficult to predict how the plan will fail, but it will always fail. So we need to be smarter and find ways to protect our assumptions. We build something that will work, given our assumptions, and limit it to that. Perhaps we build a game that works great on a PS4, and ensure that it works great on a PS4 at every step in its development.
The other big difference in SW is that it is malleable - we can change it at any time. If we adopt some engineering discipline, that means that we can grow it over time. So start with something that works well, and then enhance it.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
I think it depends on what you want, what your job is like, and how much you care about this stuff. Even as an employer, I don't feel it is my job to force people to spend their spare time on work. It is your spare time, so you should use it to do what you want to do.
I was lucky because I fell in love with coding, I still enjoy it, and still write code in my spare time, but not necessarily for work. If you like coding, then it doesn't feel like work.
I think that getting to be good at SW dev takes a lot, and you need to spend a lot of time doing, and trying different things to become good at it.
So if you really want to get good, I recommend having some side-projects, doing something separate from your work that you can learn different things from, but it is easy to burn out, so only do this while it is fun.
I think that the shortest route to learning is to learn together with other people but on real things, which is why I like pair programming so much.
1
-
1
-
1
-
1
-
1
-
Amazon, Google, Citibank, The Guardian, the UK Government, NASDAQ, Netflix, Facebook, Tesla, Ericsson, Microsoft (some teams), Volvo Trucks, Siemens Healthcare, LMAX, the US Air Force, and many many others would all disagree that this is unrealistic.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
My point was, what are you leveraging when you come to read it? Sure, if you know what these things mean, any code is easy to decipher. I can read assembler, but that knowledge is a bit specialist and, at least these days, a bit arcane. My point was that if you know algebra, the imperative code is leveraging that. To read the Haskell you need to know what 'foldl (+) 0' means. That doesn't read like maths, so you need to know specifically what it does mean. Sure, that is all subjective, but that was my point.
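To make the comparison concrete, here is the same sum written both ways, in Python rather than Haskell (my example, purely to illustrate the readability point): the imperative version leans on familiar algebra, the functional one leans on knowing what reduce/fold means.

# The same sum written two ways (illustrative of the readability point, not a judgement).
from functools import reduce

numbers = [1, 2, 3, 4]

# Imperative: leans on familiar algebra - "keep adding to a running total".
total = 0
for n in numbers:
    total = total + n

# Functional: equivalent to Haskell's foldl (+) 0 - you need to know what reduce/fold does.
total_functional = reduce(lambda acc, n: acc + n, numbers, 0)

assert total == total_functional == 10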
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Of course it is all contextual, but when I said "it was a fairly complex domain" I meant that. Although in the next breath I said "it took 25 minutes to build from scratch" you can't measure the complexity of the system based on that. Tesla do better than that for all of the systems in a car, Google do better than that for 9.5 billion lines of code.
My point is that you can optimise this stuff WAY beyond what most people think.
I mostly work with larger, more complex systems these days, and sometimes it is challenging to reduce build and test times, but there are lots of ways that you can. My preference is to avoid using copies of production data for testing, for example. I am only guessing, but if your test datasets have 700Gb of data, my bet is they are probably copies of prod data, and this is a very inefficient, and not very well focused, way of testing things, and it results in slow tests.
I completely accept I may be wrong, this is only a guess, but it is an example of problems that I see people face testing big systems all the time.
1
-
@tiagodagostini Sure, but most orgs I see don't think that feedback performance is important enough to work hard to minimise it, when it is. In your case I'd be looking at the value that these tests add, and how you could get that value in other ways. Testing is not the same as training for an AI, so what is this testing telling you that the training didn't? If you are doing this to get inputs into the rest of your system, so that you can test that, I'd be wondering about re-architecting the system so that I could isolate the AI parts more and then simulate their outputs, rather than the, presumably much richer, more complex, inputs.
Again, I don't assume that these are sensible in your context - I don't know your context - I am merely trying to give prototypical examples of other ways that you could cope with that problem.
1
-
1
-
1
-
1
-
1
-
Thanks, I am not aware of any books on this topic. I am wondering about writing a very simple game and doing a mini-series of videos to explore what it takes, but it is a matter of finding the time to do it.
I think that Manual testing is often used to cover up a failure in imagination for how to test, but there are some things that people are better at than computers.
If I were writing a game I would want to automate the testing to show that the game worked as expected functionally, bullets hit targets, scores increased, items were collected and so on and so on. I may want to do some graphical comparison testing verifying that, in controlled circumstances rendering was consistent, Gojko Adzic has talked about this, https://www.youtube.com/watch?v=S30QXoqLyig
Then use people to see if it is nice to use. Their job is not regression testing, but is focussed more on is it nice and does it make sense.
1
-
1
-
1
-
1
-
I know, but the data says that the ability of your "squad" or team to make such choices is a predictor of success. The fact that you can't make that choice says that you are, statistically, probably not doing as good a job as you could be. That's a big problem, and it is possible that you can't change it, but that's what the data says. If, as an individual, you care about these things, you now have a choice. To quote Martin Fowler, "You either decide to change the company that you work in, or you decide to change the company that you work in". It's a joke, but there is some truth in it.
The trouble with big corporations is that they can, and often do, create cultures that are not optimised for doing a good job and it is incredibly complex, sometimes (but not always) impossible, to change them.
The good news is that most orgs don't want to be in that position, and the good ones value people who try to change things for the better. So you could look to see how you could, first, make the situation for you, and then in your team, as good as you can - what things are in your direct control? Fix them first. Then figure out which things that are wrong you can influence other people to change. That is your next place to try and improve things. It is hard, but that is how org change works.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
I agree that team contribution matters, as I said in the video, but that is not the same thing as “individual productivity”. So yes, most appraisal systems are a pretty poor mechanism for measuring, and rewarding success. I was a tech leader in several orgs. In one I did a 6 monthly appraisal for the people that worked for me, and was told to change it because there wasn’t a normal distribution in the grades. If there was a normal distribution in the grades, that would mean that I was a crap manager, because I hadn’t influenced the team to improve its productivity or quality! So huh? If you are the best dev in a shit team, does that mean you are better than an average dev in a world class team? Of course not!
Appraisals are often useless, and so “yes” we should scrap the ones that are treated as form-filling exercises!
I don’t think this means that underperformance should be ignored or rewarded.
I have been lucky to work in a few world-class teams. When I was the boss it was my job to fire people who under-performed, and I did. But this was not on the basis of stupid, naive, measures of individual productivity. It was based on a much more complicated assessment of team contribution, and it was often initiated by the team communicating that this person wasn’t pulling their weight.
1
-
1
-
1
-
1
-
1
-
I haven't done much embedded code since I learned TDD, but I did quite a lot before, and I have worked with lots of teams writing embedded code with TDD. To be honest, I don't really see any difference to any other code. If anything embedded is usually easier for TDD, because the interface points are better defined.
I would certainly recommend that, for embedded systems, you work on testing in simulation a lot. You want to run your code in a simulation of the real silicon so that you can run lots more cycles, much more quickly. This is what the big players, like Tesla, do for their firmware.
A common mistake is to imagine these "simulators" as some big complex thing. Instead, build them on-demand as you need to test a feature, and build them as part of your test infrastructure. I have some stuff on acceptance testing, not specifically aimed at embedded devices, but the approach is the same.
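Here is a minimal sketch of the sort of thing I mean (Gpio, SimulatedGpio and DoorLock are invented names, just to illustrate the shape):

// A minimal sketch of an on-demand 'simulator': put the hardware access
// behind an interface and give the tests a simulated implementation.
interface Gpio {
    void setPin(int pin, boolean high);
    boolean readPin(int pin);
}

class SimulatedGpio implements Gpio {
    private final boolean[] pins = new boolean[64];
    public void setPin(int pin, boolean high) { pins[pin] = high; }
    public boolean readPin(int pin) { return pins[pin]; }
}

class DoorLock {
    private final Gpio gpio;
    DoorLock(Gpio gpio) { this.gpio = gpio; }
    void unlock() { gpio.setPin(7, true); }   // drives the real or simulated pin
}

// Tests construct DoorLock with a SimulatedGpio and can run millions of
// cycles far faster than on the target hardware.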
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Yes, but not really. It is certainly an analogy for similar approaches, not to say this stuff is new; the Transputer from the 1980s, a massively parallel CPU where the nodes communicated by internal asynchronous 'messages' on a chip, is a more direct antecedent.
Your Unix analogy breaks down at a number of places in comparison, the main one being that the messages aren't async, and your example at the end is cured by adding hysteresis to the propagation of errors, and back-pressure to messages to limit inputs when under stress, both a function of the messaging in Reactive Systems.
As an analogy it is reasonable though, and in part that is because we are talking about some fairly fundamental ideas, like comms and the separation of the messaging (pipes in Unix) from discrete processing units.
Oh, and one more thing: in Reactive Systems we are usually talking about message latencies of micro-seconds. The fastest messaging system in the world, Aeron, was built for these kinds of systems.
1
-
1
-
1
-
@JinnGuild Software engineering is not "purely in relation to code" though really, code is the output of a software engineering process in the same way that bridges are the output of a civil engineering process.
Did you watch the whole conversation? https://youtu.be/KG6bPVWBl5g
If not, I recommend that you do, you aren't going to get much detail in 9 minutes 😉
When it comes to genuinely high performance systems, I am not convinced that you can divide it in the way that you suggest, this is not about "Info tech" vs "Engineering". To push the limits of software you need to engineer every aspect. When I led a team that built one of the world's highest performance exchanges, we designed and tuned the entire stack, from hardware to every aspect of the software. We tuned Linux to run close to a real-time version, we built our own ultra-high-performance reliable messaging system and created tools for exchanging information between threads at close to the theoretical limits of the processor (LMAX Disruptor).
To push those things to the limits you need to think pretty holistically about the system. For example, in trading systems it is common to worry about things like minimising the bits in a message and optimising messages to match the packet size to maximise throughput and minimise latency. It also involves worrying about the speed of light, and minimising the physical distance between trading systems.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
They don't really belong to any paradigm, you have the freedom, and the challenge, to choose. I used to write fairly OO style things sometimes. For example, I wrote a small framework for building "Sprite-based games"; "Sprites" were true objects, data and logic that represented game elements. Quite OO in concept. Quite a lot was pretty functional, but the majority of my assembler code, at least, was procedural.
Most of my professional work in assembler was fairly early in my career, and so my views on design have evolved since then, but the main thing was still, with hindsight, trying to proceed in small steps, because it is VERY easy to lose your way in assembler programming. One of the things that I like about it is that it demands a laser focus; it is hard to return to after interruptions, for example.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
The idea of a user story is to be a "story". Neither of your examples is telling a story, they are describing a solution. I think that the real story is more like:
"The device overheats" - we could just say that!
or we could say what the user expects,
"As an IoT user, I'd like to be told when the system exceeds the safe operating temperature, so that I can shut it down" or
"As an IoT user, I'd like the system to shut down when it exceeds the safe operating temperature, so that my house doesn't burn down".
Your first point on distributed systems has two parts; this is more about org design, and system design, than requirements. There are two answers. Divide the system (and your teams) so that each part is independently deployable, and so loosely-coupled - Microservices. The boundaries between microservices are, by definition, Bounded Contexts, and so are natural translation points in your design, so you can always create requirements, at these boundaries, that represent natural conversations with the users of your service. So your example is too technically detailed, focused on implementation, to make a good requirement, and raising the level of abstraction "I don't want my house to burn" helps focus on what really matters - AND DOESN'T CHANGE WHEN YOU CHANGE THE DESIGN OF YOUR DEVICE.
The second approach is to treat the whole system as one thing, and test, and specify, it all together. There are nuances to all of this, of course.
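If it helps, here is a rough, invented sketch of testing the second version of that story at the level of intent, "shut down when the safe operating temperature is exceeded", with a toy model standing in for the real device:

// A rough sketch; OverheatProtection is a toy stand-in for the real system,
// the point is that the test says WHAT should happen, not how.
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.junit.jupiter.api.Assertions.assertFalse;

class OverheatProtection {
    private final double safeLimitCelsius;
    private boolean shutDown = false;
    OverheatProtection(double safeLimitCelsius) { this.safeLimitCelsius = safeLimitCelsius; }
    void temperatureReading(double celsius) { if (celsius > safeLimitCelsius) shutDown = true; }
    boolean hasShutDown() { return shutDown; }
}

class OverheatProtectionTest {
    @Test
    void shutsDownWhenTheSafeOperatingTemperatureIsExceeded() {
        OverheatProtection protection = new OverheatProtection(85.0);
        protection.temperatureReading(90.0);
        assertTrue(protection.hasShutDown());   // the outcome the user cares about
    }

    @Test
    void keepsRunningWithinTheSafeOperatingTemperature() {
        OverheatProtection protection = new OverheatProtection(85.0);
        protection.temperatureReading(60.0);
        assertFalse(protection.hasShutDown());
    }
}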
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Well, I don't like arguments from authority much, but you could ask the inventor of microservices why he invented them, or experts in microservices how people get them wrong, and they will all say largely what I said. Conway's law https://en.wikipedia.org/wiki/Conway%27s_law says we are doomed to mirror the organisational structures that we use to develop code in the design of the code. But this is less about the fact that microservices being distributed means that our teams must be too, and more that we want small autonomous teams, because ALL the data says "That's what works!", so given that constraint, the question is what architectural structures facilitate that. If you divide up the problem into a series of separate services in separate repos built by separate teams, and then force them to work in lock-step, you have lost ALL of the advantage that you paid for with that extra friction.
Microservices isn't really a design approach, by definition. "Services" are a design approach, and apart from "independent deployability", the core idea of microservices, you can get everything that microservices have to offer from service-based design.
1
-
1
-
1
-
Decoupling in the sense that you say is nothing to do with Microservices; I have built software that looked like that since the 1990s. I worked with the people that invented the concept of Microservices, and they did it for the reasons that I mention. If you read Sam Newman's stuff (he wrote the book that is most popularly used to define it), or watch some of his talks, he says "don't start with Microservices".
True Microservices, that is, independently deployable (you don't get to test them together before deployment), is a complex, sophisticated strategy.
What you are describing is Service Oriented Design, which pre-dates microservices by a LOT.
Microservices is NOT about REST APIs, they have been around for a lot longer too. Microservices is, very specifically, about independently developed components of a system. You do that when you want to scale up a dev organisation significantly. Watch my video on Microservices to see what I mean.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
I probably wouldn't use the term "bad practice", but yes, I think that this is a worse way to do things. The problem with an in-memory DB is that it is not the real thing, there are differences, and it is also tightly coupled to the code at the level of implementation, rather than at the level of behaviour. I don't really care that I need a particular form of "select" statement or "update" statement if I am processing orders; all I care about is that I can retrieve the order, do something useful to it, and save it again for later. So my preference is to create an abstraction of the Store and the Order and deal with those in most of my tests. For testing the actual translation of store.storeOrder(...), what I am really interested in is: can I talk to the real DB and is it all configured correctly? So once again, for those kinds of tests, I'd generally use Acceptance Tests for most of that, rather than unit tests (there may be unit tests for components of the 'Store' component).
That allows me to test the things that matter, in terms of my interaction with the 3rd party DB code, without having to deal with the complexity of all of that in my faster, lighter-weight, TDD code. So I have no real need for an in-memory DB for testing.
The time when I may resort to that is when dealing with some poorly designed legacy code, where I may use an in-memory DB as a pragmatic hack, to make some progress.
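Here is a minimal sketch of the kind of abstraction I mean (the names are invented); most tests talk to the abstraction, and only a few acceptance tests talk to the real database through the real implementation:

// A minimal sketch of the 'Store' abstraction (invented names).
interface OrderStore {
    void save(Order order);
    Order retrieve(String orderId);
}

record Order(String orderId, String item, int quantity) {}

// For fast TDD-level tests: a trivial fake of the abstraction,
// not an in-memory database engine pretending to be the real DB.
class FakeOrderStore implements OrderStore {
    private final java.util.Map<String, Order> orders = new java.util.HashMap<>();
    public void save(Order order) { orders.put(order.orderId(), order); }
    public Order retrieve(String orderId) { return orders.get(orderId); }
}

// A JdbcOrderStore implementing the same interface would be exercised by a
// handful of acceptance tests against the real, correctly configured DB.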
1
-
1
-
1
-
1
-
My point is not that AI can never do this, but that this seems like a limitation of LLMs to me, and this is probably one of the bigger barriers on the path to AGI because if they can work incrementally, then they are learning dynamically, and I am guessing that that will take more than a bigger “context window”. I am pretty sure that AIs will be doing all the programming one day, but not just yet, and not until they can work incrementally, allowing them to make mistakes, recognise them, step back to a known good point, and try again.
1
-
No, it is that while maintaining multiple versions is sometimes a strategy that businesses take, it is a very costly strategy in terms of effectiveness. This has nothing to do with the simplicity, or otherwise, of the software, or the delivery model SAAS vs Stand-alone, it is something fundamental about information. If you have information in multiple places, and you need to coordinate change to it in multiple places, that is a world-class complicated problem. The best solution is CI, as practiced by Tesla, SpaceX, Google, Facebook and many many many more.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@ajibolaoki5064 I can't speak for other orgs, but I have never set any age limits. I recognise that it may not look like it from your position, but in general there is a skill shortage in software development. So it should be a seller's market. The trouble is that the people doing the hiring aren't always great at it, and so to make their lives easier they often tend to "go by the numbers", looking at experience and counting skills on your resume. I, and many others, think that this is dumb, but it is how it often works.
So you either have to find a way to improve your skills (and I really don't recommend telling lies!), or you seek out orgs that think a little differently. Orgs that are looking for the right person rather than the right list of skills on a resume.
Increase your skills by writing more code! Find an open-source project and contribute, work on something that interests you, build your own stuff - play with code and do silly things. All of these will make you stand out a bit more from others.
In looking for orgs that are a bit more people-focussed, you can often make an initial guess based on job adverts, it doesn't always work, but it may be a good starting point. If the job is mainly just a list of skills and experience, not a great sign. If the job talks more about the problem that they are trying to solve and/or the type of people that they want or the type of team that they have - a better sign.
In general, smaller teams are, IMO, a better starting point than big orgs. I hope that some of this is helpful, and I wish you luck.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Well CD is used to build financial systems, embedded systems, medical devices, games, web commerce sites, cars, space rockets, mobile devices, where do you think it doesn't apply?
GitFlow slows down work, provides less insight into the development process, and gives an illusion of progress that may or may not be right, where do you think this works better?
1
-
1
-
1
-
Fair enough, let me put what I mean into context. This is a YouTube channel dedicated to “Software Engineering”, so I mean “harmful” in that context, but that context is quite broad. It means that orgs that follow this advice will build worse software more slowly than orgs that don’t. Orgs that score poorly on the DORA metrics, which measure “better SW faster”, have staff with self-reported worse “work-life balance”, worse staff retention, and higher rates of “burnout”, and the combination of these things means that, statistically, such poor-performing orgs make less money.
I think that in the context of this channel, based on these predictable outcomes, “harmful” is a reasonable word to use.
1
-
1
-
1
-
1
-
1
-
1
-
Sure, but as is often the case when it comes to Pair Programming, the bean counters are completely wrong. The data from studies of pair programming, I think, underplays its value, but even so, it consistently says that 2 devs completing the same task as 1 complete it in 60% of the time, but also with MUCH higher quality. So overall, if you include the time it takes to fix the bugs that you put into the code, Pair Programming is the significantly more efficient approach. I think it is more than that though, because as well as speed and quality, you don't need to spend extra time on code reviews and handovers, and in my experience pair programming is the most effective way to grow the capabilities of teams, whatever their mix of skills. I describe all this and more in this video https://youtu.be/aItVJprLYkg
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
I think the last part of your comment is the part that I am trying to talk to. Clearly you did the right thing here, and of course there are often commercial, and other types of, pressure applied to people in all sorts of jobs. I think that ultimately it is for each individual to stand up, as you did in your example, and say "no" to things that aren't right. Software development as a career would be in a better place if it were clearer that this is part of our responsibility, and if we provided training and better guidance on what "good" looks like, so that we would, as a profession, be in a stronger position to defend people when they were forced into the position of having to be brave and say "no". There will always be risks when you make that choice. The last time I did it, I assumed I would be fired. I wasn't, but I didn't know that when I said "no"; I just knew that saying "no" was what I thought was the right thing to do!
Well done!
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Yes, sure, but this isn't how the orgs that are best at software operate! @archi-mendel is much closer to that.
Software development is very broken in many orgs, this isn't any particular group's fault, we are all somewhat responsible, devs wanting perfect requirements, customers wanting instant results, product owners thinking that their job is to tell everyone, precisely, what to do.
We can do better than that, but it does take a change in mindset!
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
If you have no basis for measuring success, there is no basis for judging what works and what doesn't. If it is only down to "because I (or my team) like it" then it is pure guesswork, and prone to all of the very long, very well-known list of human failings (cognitive biases) that we are all subject to: "Decisions from authority", "Confirmation bias", "Egocentric bias" etc.
https://en.wikipedia.org/wiki/List_of_cognitive_biases
Science is what it takes to protect us from that. If you, and your team choose to use Feature Branches and Pull Requests, forgive me, but that is not innovation, you are more likely to be simply following a fashion.
What data we have says that fashion doesn't work as well as alternatives. Give me data, or a rationale for why it is better, that doesn't come down to "because we like it better", because that is a bad way to make decisions.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
That is not an uncommon reaction to pair programming, and it is entirely possible that you are right. As I think I said in the video, there is a minority of people who genuinely hate it. My experience has been, though, that while most people hate the idea before they have tried it, including me I might add, most people, by a considerable margin, prefer it once they have tried it. It is a much more social and supportive way to work, and if you enjoy learning, discovery, and that great feeling when you have a cool idea, all of these are amplified when you share them with someone else, which is why it turns most people around in their opinion of it, I think.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@brandonpearman9218 It's a problem. If I entered the Tennis championship at Wimbledon, without being good at tennis, and someone said "He's not skilled enough", no one would conclude that that meant the tennis commentator was saying that there was no way to criticise tennis.
Of course there are ways to critique TDD: it is more complex at the edges of a system, it is bad as a tool for retrofitting to pre-existing code without a lot of hard work and considerable skill, but I think that yours is a straw man argument. TDD does take some skill; people at the start, however good they are at SW dev, take a little time to learn it. It is MUCH easier to learn for people with very good design skills, because the reason that TDD is difficult is that it exposes you more quickly and more clearly to the consequences of your design choices than any other approach.
I did recognise where the quote of "TDD induced design damage" came from. I followed the discussion between Martin Fowler, Kent Beck and DHH pretty closely when it was released some years ago. At the time, I told my friends that I thought that Martin and Kent were being too kind, too equivocal, in the discussion. They didn't challenge some of DHH's ideas that I thought specious.
I don't agree with DHH. I have seen people doing a bad job of TDD and that resulted in poor code, but that has been unusual in the teams that I have worked with, where the reverse is much more commonly the case. I have been involved in teams that built, literally, award-winning software, some of which is VERY widely in use around the world as part of the infrastructure of several common frameworks and tools, and is widely regarded as an example of VERY GOOD DESIGN.
Part of the problem as I see it, is that TDD is a different way to design, so if you are experienced, and maybe good, at SW design it is a big deal to change your working habits, but even then, if you do, it works better in my opinion and experience.
1
-
@brandonpearman9218 I have no problem whatsoever with coming back and adding tests we didn't think of. In fact if you don't do that, I'd assume that you are missing something. I think that is linked to the spurious idea that TDD is about achieving some level of perfection. It is a tool that we use to do a better job. One of the reasons that I like TDD so much is that I always assume that I will miss things, but if I do TDD, at least when I do, I can reassert that everything that I had previously thought of was still correct, or identify where it wasn't.
I have no clue how good DHH was at TDD, he didn't sound like he was good at it to me in the debate. I never say, or even intentionally suggest, that people that don't do TDD are more stupid, or that the only way to write great SW is TDD, or anything else like that. My argument is the engineering argument, if you do this you have a much better chance of success, and the failure modes are not as serious.
For me TDD is a WHOLLY ITERATIVE process of learning and experimentation. I specify the experiment I am carrying out, as a test, and then carry out the experiment, the coding. This is, to me, closely related to how science works, and science is humanity's best problem solving technique, so I do think that although it is possible to hit the right answer by chance without it, it is MUCH less likely. TDD provides a more structured, but still free and informal, approach to exploring design, which is why I am such a fan.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Yup, I agree. I think that both your points are good, and important, but your first point, to me, is absolutely central to a better "engineering" approach to development. Starting by assuming that we are almost certainly wrong, and so will need to come back to this, is a MUCH healthier approach. Sometimes by accident we aren't wrong, but writing code that is readable, maintainable, extensible, flexible, modular, cohesive, etc etc is much the best strategy in the long haul of real-world development.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@deanschulze3129 Yes, Continuous Delivery - working so that your software is ALWAYS in a releasable state. That is the core, the other practices support that.
There is strong evidence for CD, both academic and empirical - Read the Accelerate book and/or the State of DevOps reports.
Here are some links to some TDD research.
http://ibiai.mg.gov.br/wp-content/uploads/2019/08/05750007-1.pdf
"Microsoft development teams were able to cut down their defect rates, as well, and the selected test-driven projects had a reduced defect density of sixty to ninety percent compared to similar projects"
http://ibiai.mg.gov.br/wp-content/uploads/2019/08/05750007-1.pdf
"These continuous and incremental iterations lead to a virtuous cycle [6] that eventually translates into “...simpler designs..., systems that reveal intent..., [and] extremely low-defect systems that start out robust, are robust at the end, and stay robust all the time...” [2].
Although this may sound too good to be true, there is no scarcity of empirical studies supporting these claims [9,12,31,35,37,42,49]"
https://www.researchgate.net/publication/346302627_A_Family_of_Experiments_on_Test-Driven_Development
1
-
1
-
1
-
1
-
Allan, my first thought is that this is a nice problem to have! What you are saying is that your throughput is so good that it is essentially background noise and that your stability is so good that you have few enough defects that they are individual events rather than useful, statistical signals. Congratulations! This is a great example of what is possible when you take this approach seriously!
I have worked in teams that were similarly close to optimal for those teams. These days I spend most of my time working on teams that are on the journey to this kind of destination, rather than having arrived.
The difficulty that I have in offering any suggestions is that I think it is VERY contextual from now on. You are already beyond the generic stuff.
So please treat these suggestions as just thoughts, I may miss the mark!
One of the BIG variables here is the complexity, or otherwise, of your pipeline. If 'releasability' for you involves various, multi-stage evaluations, Commit, Acceptance, Performance, Security, Data Migration, etc, etc, then you could think of using 'Throughput' & 'Stability' measures as technical measures between stages. "How often are bugs found in 'Commit' and how long to recover?". "How long in Acceptance?". That gives you more fine-grained data and can be useful in optimising the pipeline, and individual stages.
Where do you think the areas for improvement are? Throughput & Stability are each made up of two measures; bug rates may be low, but MTTR may still be useful? How about stretching the measure of Throughput from "Lead Time" to "Cycle Time"? It is harder to measure, but it includes a bit more of the human/cultural aspects.
Finally, in this state, you are probably so close to optimal for your team that it doesn't matter so much; you can play with ideas, and continue to track T & S but only to make sure that you don't make them way worse. If the team likes a change, or if some more business-focused metric can be used ("gained more customers" or "increased market share"), maybe those metrics are where your focus should be now?
Again congratulations to you and your team, this is a nice story to hear. I hope that some of these ideas may offer food for thought.
Dave
1
-
1
-
1
-
1
-
1
-
Ok, what do you think there is to stop an "uncontrollable intelligence expansion"? AIs are already advancing a lot faster than humans are. Compare the performance of an AI from 2 years ago with one from today. This pace is accelerating quite dramatically at the moment, so what and where is the pressure to stop it?
AI researchers used to think that there were several "walls" that would slow progress toward the singularity. Now, many of those "walls" have fallen. One of the last is "multimodality" that is being able to do lots of different things.
We already have AI that meets this criterion of multimodality, though still not always at the level of performance of the best human in all aspects.
I disagree with your point about "evidence": we are in a game where every time there is an advance in AI we move the goal posts. It used to be thought that an AI could never beat a human at Chess. It has been a very long time since the best human could beat the best AI at chess. So we said, "ah, chess is easier than we thought". Now computers are better at analysing medical scans, folding proteins, finding obscure case law, playing games, writing (at least in terms of speed), translation, drawing and controlling machinery. There may be some barrier, but I see no obvious evidence for where it is.
AI can now learn to play a game on its own and beat people at it, a recent AI Go champion (a more complex game than chess) was never programmed with the rules of the game, but it learned the game from playing it (millions of times, in minutes) and then it was better than a person.
We used to say an AI couldn't be a doctor or a lawyer, but AIs have passed Bar exams and the qualifying exams to be a doctor. They are still not good enough to do those jobs, but they are already better than people at many tasks.
People researching the social impact of AI say that in 5 years, AI will be able to do 1/2 of all jobs, and the jobs that are easiest for AI to replace people in are the jobs that are most highly paid. Bill Gates says there will be no programming jobs in 5 years time.
So I am not sure what counts as evidence that this can't happen. At the moment this *is happening*, so unless we can see what stops it, why is it sensible to assume that it won't happen?
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@romankonoval2190 Yes, if the branches live for a day or less, I have no real argument. At that point I think that they are waste, you are doing more typing than me, managing branches rather than just working on "main", but hey, no problem. If FBs last for longer than a day though, they mean that, by definition, you can't practice CI, and that is a BIG loss.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Good links, thanks. The Google study, to my reading, mostly found stuff related to what Belbin found in the 1980s (https://en.wikipedia.org/wiki/Team_Role_Inventories), namely that you don't create great teams simply. It is more complex than just getting a bunch of great individuals together (gross oversimplification!).
Team dynamics is really important and, on a more subjective level, I have seen this myself. I have worked in great teams of people that weren't all the "best ever developers" and I have worked in dysfunctional teams that were, probably, on average, better developers than in the "great teams".
I think that sometimes we overplay the differences in technical teams, the human factors still seem to outweigh, or at least significantly balance, the technical factors. I would rather work as part of a team of inexperienced people working in smart ways, than a team of veterans working in dumb ways, and I'd bet on the inexperienced group producing a better result.
1
-
1
-
1
-
1
-
Yes I saw Bob give an early version of that talk. We are both talking about "Ethics" really. I think that your point of us all being fallible humans is an important one, this is not about eliminating mistakes and misunderstandings, and lack of knowledge, the sign of good teams and good orgs, in my experience, is what they do when things don't go well or to plan. My view is that software development is almost entirely about "learning" and so we should structure our teams, working habits and organisations, to be good at learning, and part of that is accepting that we will get things wrong sometimes, so try to find ways to allow mistakes to happen, but when they do, for their consequences to be safe, as far as we are able to achieve that.
1
-
1
-
1
-
1
-
1
-
1
-
I don't see that as much of a problem. I guess it depends on scale; if you are talking about lots of web-scale environments, where you want economies of scale/reuse for your infra definitions, then sure, better tools help with this. But that isn't usually the case when we are talking about automating the deployment of a Legacy System.
It is more likely to be a one-off, so custom is less of an issue I think.
Having said that, sure, I'd start with an off the shelf tool, as I said in the video. In most cases I'd see if I could sensibly containerise things, then look to Chef, Puppet, et al, and only if none of those options make sense would I do my own thing.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
I think that DDD is regaining popularity, it is quite an old idea, but Eric captured it in more detail than before in his book, which significantly pre-dated the birth of the cloud.
I think though that the cloud has given it new focus, because if you are commoditising infrastructural concerns, which to some extent the cloud does, then it allows you to start thinking more clearly in terms of separating "essential" from "accidental" complexity. Which is some of what DDD talks about, in a slightly different way.
I think that there is a next step beyond that too. I have done some work on async, event-based systems, and I was one of the authors of something called "The Reactive Manifesto" https://www.reactivemanifesto.org/ which describes them. With those you can achieve an almost pure separation of accidental and essential; your services become almost pure domain model. A lovely way to work. I think that moves towards something called "stateful serverless" may be one to watch in that space.
1
-
Sorry but I think that you are wrong 😉.
It hasn't yet reached the point where marketeers are trying to sell us products. I think that the noise that you hear on this topic is mostly from engineers and scientists who are excited by its potential. Clearly there's a long way to go, but maybe a better analogy, for me at least, is with AI, which is obviously a long way further down the track, but I think that few people would disagree that AI is a disruptive tech. It will either kill us all, or revolutionise the world, with not much grey area in between. I think that QC is like that, not the kill-us part, but by its fundamental nature, harnessing all those universes and timelines, it will allow us to solve problems that we simply couldn't before. My take is that, where we are now, this remains an engineering challenge, but we are solving these problems fast.
1
-
1
-
1
-
1
-
I think that agile is more than one of the useful tools in the box, even more so if I am correct in thinking of agile as an informal adoption of some scientific fundamentals. None of that means that I disagree with what I take to be the thrust of your comment, sure, no process allows us to take our brains out of gear. If there was a cookie-cutter recipe for writing software, we could automate it, and do ourselves out of a job in the process.
Software development is a complex, difficult task. At the limits, it is one of the more difficult things that we as a species do. Sure, most SW dev isn't that hard, but some of it is, and one of the difficult things is that even when you are doing something simple, like a serverless function or a web page, you are skating on the surface of some genuinely difficult, maybe even profound, problems. Ideas like coupling and concurrency, for example, are world-class difficult, and impact teams even when doing relatively simple things. So having some discipline, some organising structure around which we shape our ingenuity and creativity, seems more important than just picking tools from the tool box. It helps keep the beginners away from the deep end, and it helps the experienced people to build on, and enhance, their own work as their learning deepens.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
I think that is a reasonable summary of how I think about it. Zealotry either way, is almost always wrong.
My guiding principle in picking what solution to use where is always readability. I will optimise to make my code as simple to understand as I can. That's it. I use TDD to drive design-decision-making, and that helps me to prefer certain kinds of designs (modular, cohesive, good separation of concerns, nice abstraction between parts, and appropriate levels of coupling).
I never intentionally prefer code that is more difficult to read. I mostly use functional structures in OO code to avoid repetition, using small functions to make a particular use of something generic more specific.
My style, even in OO, tends to prefer minimising side-effects anyway.
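As a small, made-up example of what I mean by using small functions to make a generic thing specific (the names here are invented):

// The repetition (a retry loop) is written once; each call site supplies
// only the small function that makes the generic code specific.
import java.util.function.Supplier;

class Retry {
    static <T> T upTo(int attempts, Supplier<T> action) {
        RuntimeException last = new IllegalArgumentException("attempts must be at least 1");
        for (int i = 0; i < attempts; i++) {
            try {
                return action.get();     // the specific behaviour is passed in
            } catch (RuntimeException e) {
                last = e;                // remember the failure and try again
            }
        }
        throw last;
    }
}

// Hypothetical usage: String quote = Retry.upTo(3, () -> priceService.quote("GBPUSD"));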
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Sure, all complex systems have bugs, but that doesn't give us a free pass to just create crap systems. Around 60% of defects are down to the simple, common mistakes that all developers make, and simple testing eradicates those, so if you aren't operating a development process built on simple testing (TDD) then you are building lower quality code. I worked on a high-consequence, highly complex financial system that was in production for over 13 months before anyone noticed a bug. That doesn't mean that it was bug-free, but it is a measure of the quality of the system. Yes, all devs write bugs, but this wasn't just code with bugs, this was code that didn't work. That is not an inevitable side-effect; that is close to, if not actually, negligence.
1
-
1
-
1
-
@engineeringoyster6243 Do you think that is how it works at Google, Facebook, Microsoft, Apple, Amazon, Tesla or SpaceX?
Elon Musk is quoted as saying "Everyone at SpaceX is the 'Chief Engineer'".
There are alternative approaches to the one that you describe, and they are more effective. They are different though, less hierarchical, based more on distributing decision-making. Of course experience counts, I am an old, experienced developer so I am not going to discount the value that I can add to a team, but the most scalable, most efficient way to produce things is to distribute that production. The most effective way to build high-quality products is to foster a culture of innovation and ownership.
In my experience you don't get to build world-class products unless the team really understand and to some extent own responsibility for them.
You certainly need those teams to be guided in some way towards "organisational goals" but the commonest mode of failure that I see in big orgs is them trying to effectively run SW dev as an exercise in "remote-control-programming", with managers telling devs and dev teams what features to build and how to build them.
1
-
@engineeringoyster6243 Thank you too for your reasoned debate 😎
I certainly agree that you need leadership to create great products. In agile development I think that it works best when that leadership is embedded in the work, rather than distant from it.
I have experienced both approaches, and, obviously, much prefer the agile approach. Fundamentally, it seems to me that no complex system, of any kind, springs fully-formed into existence. Great engineering is a process of trial and error, a process of discovery. Agile is structured for precisely that end.
One of the ideas that I think is commonly missed is that good agile development (not the process-religion kind) starts out from the assumption that we are probably wrong. So we will work in a way that will limit the cost of our mistakes to a manageable level, and allow us to learn, and so improve, when we make a mistake.
For me this is what real engineering is about. The creation and evolution of more and more effective solutions to the problems that face us. For me that is what agile thinking should deliver - when you take it seriously, rather than treat it as some kind of process-cult.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
I do have a Discord channel, it's available to Patreon supporters: https://www.patreon.com/continuousdelivery
I answer questions here too, though. If you are adding things to code only to support your tests then I am afraid that you are doing it wrong! TDD is ALWAYS test first. You drive the development from the tests. One of the advantages of that is that it forces you to design for testability, and testability shares a lot of properties with what we generally consider to be "good design". You should NEVER build back-doors into your code to support testing. Never compromise access (make something public that should be private), and never attempt to access private or protected things from a test.
Forgive me, but I would argue that if you have to do this to test it, then it probably isn't "very good production code". Testable code is modular, cohesive, has a strong separation of concerns, good lines of abstraction, and manages coupling carefully. It has to be these things, or we can't test it effectively. I'd say that if your code is more modular, cohesive, etc. than mine, it is better than mine, so that is why I suggest that there is probably room to improve your prod code if it doesn't exhibit these properties.
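A tiny, made-up illustration of what I mean: the test asserts on observable behaviour through the public interface, rather than opening up private state just so the test can peek at it.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class ShoppingCart {
    private final java.util.List<Integer> pricesInPence = new java.util.ArrayList<>();
    public void add(int priceInPence) { pricesInPence.add(priceInPence); }
    public int totalInPence() {                 // public, observable behaviour...
        return pricesInPence.stream().mapToInt(Integer::intValue).sum();
    }
    // ...no getter exposing the internal list purely for the benefit of a test.
}

class ShoppingCartTest {
    @Test
    void totalsThePricesOfAddedItems() {
        ShoppingCart cart = new ShoppingCart();
        cart.add(250);
        cart.add(175);
        assertEquals(425, cart.totalInPence());
    }
}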
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Programming languages are "languages", they express the intent more clearly than natural human languages, that is what they are designed to do. We seem to assume that the ability to organise our thoughts into the simpler, more restricted grammar of a programming language is the skill at the heart of our profession. I think this is wrong, it is how we organise our thoughts to solve the problem that is the real skill, and if we can't express that clearly in the simpler, more precise terms of a programming language, then we aren't doing a great job.
This has nothing to do with how technical a job the code is doing; that's an excuse. I have worked on teams working on some esoterically complex things, but who still wrote nice readable code.
The team that Trish and I worked together on, at LMAX, was one of those.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
That's rather a silly statement when you don't know me. Just because you disagree with my ideas doesn't mean that I have never written any code. My guess is that I have probably written a lot more code than you, because I am probably a lot older than you - just based on statistics in our industry, but I have no means of knowing how much code you have written.
I have built most kinds of code that you can think of. I used to work for PC manufacturers and wrote OS extensions, additions to the BIOS, and device drivers. I wrote simple games for early home computers and later got interested in graphics programming in general and built a ray-tracing animation system from scratch - this was before there were such things as graphics co-processors, or even graphics libraries. I got interested in distributed systems and wrote what we would now call a data mesh, a platform for micro-service-like systems, in 1990. Later I worked on big commercial systems, and worked at a company that built some of the very early commercial systems on the web. I took over the lead of one of the world's biggest Agile projects when I was a tech principal at ThoughtWorks. Tech principal at TW was always a hands-on role. Amongst other things during that time, I helped to build a point of sale system that, if you lived in the UK, you would almost certainly have used. I led a team, again hands-on in the code, as head of software engineering for LMAX, where we built one of the world's highest performance financial exchanges. Average latency of a trade was 80 microseconds.
This is just a sample, you don't have to agree with me, but a more sensible response would be to show where my arguments are wrong, rather than simply resorting to what I assume you think of as a personal attack. I am a very experienced programmer, doesn't mean I am right. I may have had ideas that I hadn't tried (that is not true, but I may) that doesn't necessarily mean that they are bad ideas. You should learn to evaluate ideas on their merits, what people say is more important than who says it!
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@stephan553 Sorry, I misunderstood. My experience has been that timezones obviously can present some barriers, but that you don't have to be "in the same room" for pairing to work very well. For about 2 years I paired with a team based in Chicago, while I was based in London, our timezones usually overlapped by only about 3 hours, so we paired during the overlap and didn't for the rest of the time. The remote thing is pretty easy, you need to be able to talk, and share a screen, and a repo.
Even given these limitations, we found that pairing was better than not, and the remote part was the least of the problems.
1
-
@stephan553 to be honest that doesn't sound like a pair programming issue to me? It sounds more like the team lost a bit of team cohesion as a result of the more distributed working forced on them by COVID. Which is fair enough, it was a tough time. Pairing alone doesn't solve that, but I don't think it adds to it either.
In fact in my experience PP helps to strengthen team cohesion. But left to themselves people will often fall into old habits, and let's be honest, working alone is easier to organise. So the team needs to inject some energy into the process to keep it alive. My preference in general, for lots of reasons, is to practice PP alongside regular, daily, pair rotation. With this we are forced into the pairing a little, not against our will, but we are reminded at the start of the day, in stand-up, to decide who is pairing with whom. This makes it a little bit harder for people to actively say "no" rather than to go with the flow, and pair.
1
-
The reason that I started writing books, and created this channel is because I agree very strongly with your last statement - there is something wrong with the way that we do software development. I have seen, and know, that there are better ways, and that is what I describe. So yes, you are right, most devs don't refactor enough, maybe don't understand the importance and costs of ideas like modularity, separation of concerns and coupling, but all of them should, and when they do, they aren't average programmers any more, they're good programmers.
I don't think that good programmers ask for permission from managers to do a good job, they do it anyway. My ambition is to help some people to see that there is a better way to do things, and then help them to understand what, in my experience, and with what data we have, works better and why.
We get to change the industry only by changing one mind at a time. Hopefully some of those minds are influential and can help change other minds 😉
1
-
1
-
1
-
1
-
1
-
@rmworkemail6507 How do you know, I don't describe the science bit here? Science is an approach to discovering new knowledge, one take on modern engineering is that it is a practical application of scientific approaches to learning how to solve problems.
The scientific method is:
Characterisation: Make a guess based on experience and observation.
Hypothesis: Propose an explanation.
Deduction: Make a prediction from the hypothesis.
Experiment: Test the prediction.
You can apply this approach to development in a wide variety of ways.
C: The user has this problem
H: I think this test describes something that represents that problem.
D: When I run this test, I expect it to fail with this error message, 'cos I don't have any code to make it pass yet.
E: run the test and see if it matches the results.
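To make that C/H/D/E loop concrete, here is a minimal sketch in code (all of the names are invented):

// C: users complain that addresses without an '@' get accepted.
// H: this test captures the problem.
// D: before EmailValidator existed, I predicted a compile/assert failure.
// E: run the test, see the predicted failure, then add just enough code to pass.
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertFalse;

class EmailValidator {
    boolean isValid(String address) {
        return address.contains("@");   // just enough code to make the test pass
    }
}

class EmailValidatorTest {
    @Test
    void rejectsAnAddressWithNoAtSign() {
        assertFalse(new EmailValidator().isValid("not-an-email"));
    }
}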
There's a lot more that we can take from science...
Start by assuming that your guesses, designs, and understanding are wrong, rather than right, and figure out how to falsify your ideas rather than prove them.
Control the variables, so that you can clearly see the results of your experiments.
So I think it valid to talk about applying scientific style reasoning to SW and when we do, calling it "Engineering".
1
-
1
-
1
-
I think that this may help describe how I divide up the implementation of Acceptance Testing https://youtu.be/JDD5EEJgpHU
In it I describe my preferred 4-layer architecture: Test Cases at the top, written in terms of a DSL, which capture WHAT the system should do without saying anything about HOW it should do it. The next layer down is the DSL, which is shared between many test cases. Then the protocol driver layer, which I think is what you are asking about, and finally the System Under Test (SUT).
The protocol drivers take biz-level concepts, captured in the DSL, and translate them into something that drives the app. So a test case says "placeOrder", and the DSL sorts the parameters, helps with abstraction, and passes 'placeOrder' on to the protocol layer, which translates that idea into "complete this field with this value, and this other field with this other value, and press this button" or "create a message with these values, and send it here".
This video also describes some aspects of this separation: https://youtu.be/zYj70EsD7uI
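Here is a bare-bones sketch of that separation (all of the names are invented, just to show the shape): the test case speaks business language, the DSL tidies parameters, and the protocol driver knows how to drive the system under test.

class Dsl {
    private final ProtocolDriver driver;
    Dsl(ProtocolDriver driver) { this.driver = driver; }

    void placeOrder(String item, int quantity) {
        // defaulting/aliasing of parameters would live here
        driver.placeOrder(item, quantity);
    }
}

interface ProtocolDriver {
    void placeOrder(String item, int quantity);
}

class WebUiDriver implements ProtocolDriver {
    public void placeOrder(String item, int quantity) {
        // "complete this field, complete that field, press this button"
    }
}

class MessagingDriver implements ProtocolDriver {
    public void placeOrder(String item, int quantity) {
        // "create a message with these values and send it here"
    }
}

// The test case just says dsl.placeOrder("coffee", 2) and never knows
// which driver, or which channel, is underneath.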
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Surveying users is not the best, or only, way to do it; you gather feedback by measuring something that defines it. Maybe up-time, latency, money earned, customers recruited, usage of a new version vs an old, and so on. For each feature, you define what measurement will demonstrate the success of that feature and then you measure that in production. Many orgs do this on a fully automated basis, including Amazon, Netflix, Meta etc.
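A hypothetical sketch of the idea (invented names, not how any of those companies actually implement it): each feature declares the measure that defines its success and records it in production.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

class FeatureMetrics {
    private final Map<String, LongAdder> counters = new ConcurrentHashMap<>();

    void recordUse(String feature, String version) {
        counters.computeIfAbsent(feature + ":" + version, k -> new LongAdder())
                .increment();
    }

    long usesOf(String feature, String version) {
        LongAdder counter = counters.get(feature + ":" + version);
        return counter == null ? 0 : counter.sum();
    }
}

// Comparing usesOf("checkout", "v2") with usesOf("checkout", "v1") in
// production is one simple, automatable measure of a feature's success.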
1
-
1
-
1
-
@michaelmorris4515 I am afraid that none of your assumptions here are true. First, this is not a theoretical approach, I, and many others, have used this on big complex real-world systems. One of my clients used this approach to test medical devices in hospitals. Another uses it to test scientific instruments. I built one of the world's highest-performance financial exchanges using this approach, and I found out this week, that the tests are still working and providing value 13 years later.
I think that your example focuses on the technicalities rather than the behaviour. "I expect the transaction table to show these values" sounds to me like you are leaking implementation detail into your test cases, and that is why they are fragile. What is it that the user really wants? Do they really care about "transaction tables" when they walked up to the computer to do a job, were they thinking "what I need to do is make sure that the transaction table shows these entries"? I doubt it.
I can't give you a real example, because I don't know what your app does, but I'd try and capture the intent that the user had. So forgive me for making something up, but let's say that in your case a "transaction" represents selling something, and your "transaction table" represents a list of things, or services, sold. Then I can think of a few scenarios that matter to a user. "I want to be able to buy something and see that I have bought it" (it ends up in the "transaction table"). "I'd like to be able to buy a few things and see a list of the things that I bought" (they all end up in the transaction table). And so on.
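Continuing that made-up example, a test for the first scenario might look something like this (ShopDsl is an invented stand-in for a real test DSL and driver); the point is that it captures the user's intent, not the "transaction table" that happens to implement it today:

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

class ShopDsl {
    private final java.util.List<String> purchases = new java.util.ArrayList<>();
    void buy(String item) { purchases.add(item); }   // would delegate to a protocol driver in a real suite
    boolean purchasesInclude(String item) { return purchases.contains(item); }
}

class PurchasesTest {
    private final ShopDsl shop = new ShopDsl();

    @Test
    void aBuyerCanSeeWhatTheyHaveBought() {
        shop.buy("coffee");

        assertTrue(shop.purchasesInclude("coffee"));  // however it happens to be displayed
    }
}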
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Sorry, but it can't. Yes, you can run the build and some tests on a feature branch, but what are you "integrating"? If you and I are both working on separate branches, I can write code that breaks your assumptions, and your code, but you don't get to see it until I am finished and merge my changes to our shared version of the "truth" - "trunk", "origin/master", "origin/main" whatever you call it.
CI is by definition about evaluating our code together AT LEAST ONCE PER DAY, so if your FBs last for longer than a day, then you can't practice CI.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
I am not a big fan of "Quality Attributes" either; it too is a meaningless phrase, designed, I assume, to hide the complexities of software development away from those sensitive souls, the project managers. Well, tough! Software is a complex thing to do well, get over it and recognise that some valuable features of the system don't easily fit into a pigeon-hole, and take more work, organised differently. And this work is cross-cutting, on-going and hard to plan. But "plan-ability" isn't the goal, good software is the goal.
Just my 2c! 😉
1
-
1
-
@rmworkemail6507 Really? How is it "ethically bankrupt"? It may or may not be dumb, I'll argue that case in a minute, but unless you are doing something intentionally wrong it is not a question of "ethics" - Definition: "moral principles that govern a person's behaviour or the conducting of an activity.". This is not a moral choice, it is an engineering decision and it should be on the basis of what delivers the best outcome - highest quality SW fastest.
Onto that.
How do you imagine complex things are designed? Take a look at how SpaceX are currently working on their goal of allowing humans to live on Mars. They work incrementally in small steps, they have a vision of where they want to go, but they don't have a design. They are currently on to v26 of their StarShip and v7 of their launcher, and they haven't got to orbit with it yet. Every version so far, and for a VERY long time to come, is an experiment, and different from the last. Sometimes, they even change the same ship when they get a better idea. This is how REAL ENGINEERING works, it is an incremental process of trial and error, or more formally, experimentation.
SpaceX is also heavily SW driven, in manufacturing and, more obviously, in flight control systems. They are updating their SW, on a space craft that can carry people to the International Space Station, 45 minutes before launch. This is the safer approach.
So I am afraid that you are wrong on both counts.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
For me it is all about scope of evaluation. If your mini-services, from which you construct "independently deployable services" are all in one repo, and all evaluated in the same pipeline prior to release, then yes, I think that is the most efficient way to organise things. If each of your mini-services is in a separate repo, and then you test them all together when composed as an "independently deployable service" or even worse "your mini services are used in multiple 'independently deployable service" then IMO you have an inefficient distributed monolith, either at the level of each of your "independently deployable services" or in the second, worst, case at the level of your whole system which is coupled through the re-use of the mini-services.
if your mini services are in separate repos, the question is, what do you have to test after changing a mini service?
1
-
1
-
1
-
1
-
1
-
1
-
@mister-kay
Certainly if your examples above are in order, you jumped in too soon; you needed simpler tests to begin with. I prefer to start with the simplest case that I can think of, often that is a null-test - what happens if the inputs are wrong? I am not sure if I would have started with the null-test of {} or the simplest range, in this case a single integer, but certainly one of those two.
I think I would have picked these, in this order, but without writing the code, I may have thought of other tests along the way (there's a sketch of the first couple of tests after the list)...
{}=''
{1}= '1'
{1,3}='1,3'
{1,2}='1-2'
{1,2,4}='1-2,4'
{1,2,3,4}='1-4'
{ 1, 2, 3, 5 } = '1-3, 5'
{ 1, 3, 4, 5 } = '1, 3-5'
{ 1, 2, 4, 5 } = '1-2, 4-5'
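A sketch of how I might start, with the first couple of tests from the list above (RangeFormatter is an invented name for the code under test, and it grows only as far as the tests demand):

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class RangeFormatter {
    String format(int[] numbers) {
        // just enough to pass the first two tests; each new test drives more behaviour
        if (numbers.length == 0) return "";
        return String.valueOf(numbers[0]);
    }
}

class RangeFormatterTest {
    private final RangeFormatter formatter = new RangeFormatter();

    @Test
    void emptySetFormatsAsEmptyString() {
        assertEquals("", formatter.format(new int[] {}));
    }

    @Test
    void singleNumberFormatsAsItself() {
        assertEquals("1", formatter.format(new int[] {1}));
    }
}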
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
I think that you are "misreading" what is being said here very seriously. This is nothing to do with "hiding within a group" or ducking responsibility in any way. This is about the impracticality, and so irrelevance, of attempting to measure "individual dev productivity". This is not about whether "dev productivity" matters or not, it is about the fact that you can't measure it. The measures outlined in the article mostly don't even attempt to measure it, and where they do, the article is imprecise and unclear about what to measure and how. Individual devs DO make a difference, and on the kinds of high-functioning teams that I have occasionally been lucky to work on, including in start-ups and in big, well-funded corporations, ALL of them take their responsibilities very seriously; that is a large part of what made them good. But you don't get that through metrics targeted at measuring individual performance. You have to cope with the lazy and incompetent people in a different way, it is a more cultural process. The idea that metrics do this is simply a mistake, but it is an alluring mistake that easily misleads poor, or inexperienced, managers.
Would it be "good" if there was a way to measure individual performance, maybe, but ultimately SW dev at the professional level, certainly in the "large, well-funded corporations" is a team game, so you measure team performance.
1
-
You pair them with more experienced devs, so that they learn to be trustworthy, and so become productive, faster. Pair programming works fine remotely, and only the most extreme time differences rule it out completely. In those circumstances, why are you hiring junior, non-trustworthy team members that you can't work with effectively? Find an experienced person in their timezone to keep an eye on their work.
Yes, if someone commits a failure, their change is rejected and they fix it before trying again.
Mostly I try to avoid "code owners", and it is ok for people to commit changes to other parts of the codebase. Usually there is some etiquette around this, in that you talk to the person or team who usually work on some code before changing it.
It is amazing in this world. This is not an idealistic fantasy, this is how real, complex, sometimes world-class software is written, sometimes at the largest of scales. SpaceX are updating the software on a rocket 45 minutes before a launch. This is based on TBD and deployment pipelines. Very far indeed from GitFlow.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
The data that we have says what I say it does: that "high stability and high throughput", the measures that the State of DevOps research is based on, are strongly predicted by working so that "everyone merges their changes at least once per day". This was confirmed over several years of the study. As I said, you can question the study, but that is what it found. Why is this "far fetched"?
The Accelerate book defines the approach taken, in an attempt to give a more scientific validation of the results. The approach is valid; it is a widely used approach, in sociology, to carrying out surveys. Sociology is always messier than other forms of science, but this is the most rigorous study of our discipline that I am aware of. So again, why "far fetched"?
1
-
1
-
1
-
1
-
1
-
1
-
@kennethgee2004 Sorry, still disagree, but as you say, that doesn't mean that I am right. I too have had job titles like "architect", "enterprise architect" and "principal architect", for more years than I care to recall, but none of that means that I am right either. However, I do think it earns me the right to hold an opinion.
The division between "architect", "engineer" and "developer" that you mention at the end is NOT a set of "principles"; it is the kind of division that some types of companies tend to apply to job roles, and these are usually not the kinds of company that are building great software. For example, one of Elon Musk's sayings in the context of Tesla and SpaceX (let's not mention Twitter) is that "everyone is chief engineer", which means that EVERYONE is responsible for everything, and everyone is encouraged to "take part" anywhere that their interest and experience takes them.
I would say that if you separate the roles in the way that you describe, you will pretty much always get a sub-par result. What you describe is what I would call the "ivory tower model" of software architecture. Everyone, whatever their job, does a better job when they are close to the results of their decisions. I want to see where my ideas fail, and how, and where they succeed, and how. If architects are NOT working alongside engineers and developers on a frequent basis, they will make mistakes by skating over complexity that invalidates their ideas. This is probably the commonest form of "SW architecture" in our industry, in my experience.
1
-
1
-
1
-
@arpysemlac I try to make examples, but I think that I see the problem differently from you. I don't believe that this is a problem about mobile vs web, or web vs game dev, or back-end vs front-end; I think that this is a problem of a focus on design, whatever you are building. I have employed TDD for all of these things, and many more, and I am convinced that it works, but it changes how you organise your code. So for me the difference between a mobile app and something else is pretty much irrelevant, because I am interested in testing the behaviour of my code, and I can choose how to present that to make it easy to test, through the choices that I make in design.
I don't particularly like the look of the tools being used here, I prefer xUnit style code for TDD, but this video seems to have a decent focus on the right sorts of things to do: https://youtu.be/lqelzovTPhY?si=0lVwLu2SFIEEUwxy (I haven't watched it all the way through so don't hold me to it!)
I have a video that tries to explain a bit more of what I mean here: https://youtu.be/ESHn53myB88
This is sometimes called "the Humble Object Pattern", but the idea is that you isolate, by design, the I/O from your code, and then you can test everything else thoroughly.
The key idea in TDD for me, is that we change the design of our code to MAKE IT TESTABLE and as a side effect of working like that we usually end up with better quality designs.
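A minimal sketch of that idea in Python; the names (SensorGateway, describe_temperature) are invented for illustration and are not tied to any particular framework:
# The 'humble' part: a thin wrapper around real I/O, kept so simple it barely needs testing.
class SensorGateway:
    def read_celsius(self):
        raise NotImplementedError  # in real life this would talk to hardware or an HTTP API

# The behaviour we actually care about is plain code, with no I/O, so it is easy to test.
def describe_temperature(celsius):
    if celsius < 0:
        return "freezing"
    if celsius < 25:
        return "mild"
    return "hot"

# Tests exercise the interesting logic without touching any real device or network.
assert describe_temperature(-5) == "freezing"
assert describe_temperature(10) == "mild"
assert describe_temperature(30) == "hot"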
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Unfortunately, it isn't that simple. It really depends on the nature of your code, much more than on the difference between Functional and OO. Functional code incurs more cost copying things, or providing the illusion of copying things in order to sustain immutability; OO tends to create more transient state, so, as you say, garbage collection is more of an issue, but how those things play out in different bits of code is very specific to those bits of code. The real secret to high-performance code is to understand what is going on, so, for example, learn how garbage collection works in your tech, learn how to profile and tune it to meet the needs of the system. I used to write ultra-high-performance financial systems, and two spring to mind here. In one we tuned the garbage collection so that the really costly stop-the-world kind of sweep would happen less than once per day, and then we reset the system daily, so in practice it never happened. In the other we wrote our OO code so that it was immutable and allocated on the stack, so no GC at all. Neither of these was written as a Functional system.
Functional systems in general are not high performance by default, because of all the work that the languages and compilers do behind the scenes, like enforcing immutability, but I am sure that there are ways of using them and tuning them to do better than the default. It did cross my mind to implement something high performance both ways and see which worked better, but it would be a lot of work, and even if I did that I don't think it would help. Performance is more about what we called "Mechanical Sympathy" - understanding how the underlying system works (hardware, OS, language, frameworks etc.) and using those things efficiently.
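As a trivial, back-of-the-envelope illustration of the copying cost mentioned above (not a serious benchmark, and the numbers will vary a lot between languages and runtimes):
import timeit

# Immutable style: every 'update' builds a new tuple, copying the old contents.
def immutable_updates(n=1000):
    data = ()
    for i in range(n):
        data = data + (i,)
    return data

# Mutable style: the same updates applied in place, with no copying.
def mutable_updates(n=1000):
    data = []
    for i in range(n):
        data.append(i)
    return data

print("immutable:", timeit.timeit(immutable_updates, number=200))
print("mutable:  ", timeit.timeit(mutable_updates, number=200))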
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
It depends what you are looking for. Are you interested just because you like learning, or because you are thinking of starting a new career?
Starting by learning JavaScript is a popular route, but I think that the JavaScript ecosystem is so diverse that it is sometimes hard to find any reasonable definition of "what does good JS code look like". As a beginner, that can be a problem. Java is more constrained, and these days, though still VERY popular in the job market, is seen as a bit old-fashioned. I think that this is a mistake, and that learning Java is a better place to start than JS if you want to get good at coding. It is a bit more strongly structured, and so "what is good" is a bit easier to answer.
Python is a good choice too, and probably, at least in my opinion, a better teaching language than JS or Java.
If you are looking at this from a job perspective, then these are the 3 most popular languages by most counts. If you are a Windows user, another strong candidate is C#.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@CarlosVera-jy7iy I'd think of this as general defensive design. There is a difference between the service that a service provides and the API to that service, so a good separation of concerns means that we have code to deal with the API calls and different code to deal with the service that those calls represent.
If you send a service a message, maybe including an item, a quantity and an account number, I could crack the message (the API call) in-line with creating the order, or I could extract the parameters that I am interested in:
item = getStringParam(msg, "order/item")
qty = getLongParam(msg, "order/quantity")
accountId = getLongParam(msg, "order/accountId")
and then call placeOrder(item, qty, accountId).
This is better code than in-lining the cracking of the parameters with the placing of orders. Good design says each part of the code should be focused on doing one thing; here we have two, cracking params and placing orders, and these two things are at VERY different levels of abstraction, so combining them will very often lead to problems.
As far as testing goes, the param-cracking helpers, getStringParam and getLongParam in my example, would have been built with TDD in the abstract, which means that for cracking this specific message there is little testing left to do. Does "order/item" map to item, etc.? I may test that with TDD or integration tests, depending on my design and the rest of the system.
The really interesting bit, though, is the logic in placeOrder, which should now be perfectly testable.
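A rough sketch of that shape in Python; the message layout, the helper implementations and the placeOrder behaviour are all illustrative assumptions:
# Thin 'cracking' layer: pulls typed parameters out of the raw API message.
def get_string_param(msg, path):
    value = msg
    for key in path.split('/'):
        value = value[key]
    return str(value)

def get_long_param(msg, path):
    return int(get_string_param(msg, path))

# Domain logic, at a completely different level of abstraction, and easy to test on its own.
def place_order(item, qty, account_id):
    if qty <= 0:
        raise ValueError("quantity must be positive")
    return {"item": item, "qty": qty, "account": account_id}

# The API handler just glues the two together.
def handle_order_message(msg):
    item = get_string_param(msg, "order/item")
    qty = get_long_param(msg, "order/quantity")
    account_id = get_long_param(msg, "order/accountId")
    return place_order(item, qty, account_id)

assert handle_order_message({"order": {"item": "ABC", "quantity": 5, "accountId": 42}}) == {"item": "ABC", "qty": 5, "account": 42}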
1
-
1
-
1
-
1
-
1
-
1
-
Of course you can, it is just that you will have wasted a lot of time on the big up-front design. I have built several pretty large, successful systems from scratch. It has been my experience that it is really not very long, once you have started coding, before your opinions change. So why waste time going into too much detail early on? My recollection of when we built one of the world's highest-performance financial exchanges is that we spent 2 days on the initial design before we started writing code, and then designed continually for the next 5 years; the first release was about 7 months after we started.
Look at what SpaceX are doing now: their Starship hasn't got to orbit yet, and it is intended to go to Mars and further; they are at version 26, and every one is a refinement of earlier versions. This is what real engineering, whether SW or not, looks like.
1
-
1
-
1
-
There is some research on this, though it is based on small numbers of people being studied, but there have been several studies that tend to agree. In general a pair of people finish a task in 60% of the time of an individual working alone, but the work of the pairs produces significantly higher-quality code. I know of multiple studies that say much the same thing, but the real saving in time is less about the effort and more about the dwell times in the process when nothing is happening. In a PR-driven org, most PRs spend a significant amount of time waiting to be reviewed. One of my clients says that on average their PRs take about a week to be processed. I don't have data to know whether that is uncommonly long or normal, but I do know that it is certainly NOT unusual. The other saving of effort in pair programming is that there is no catch-up or context-switching time. Because the pair is working together, while they may spend time debating a solution, there is no time spent bringing a "reviewer" up to speed with the problem and the solution. My own, subjective, experience of pair programming is that it is significantly more efficient and effective than working with Pull Requests. Most orgs that I have seen that operate a PR approach have low-quality reviews, because they are done asynchronously and off-line, and so the developer doesn't get useful direct feedback. Of course it is "possible" to do a better job than that, but the orgs that I know that practice PRs don't, while the orgs that I know, and have been a part of, that practice pair programming do.
1
-
1
-
1
-
1
-
1
-
Well, clearly we differ, but I don't think that you can reasonably say some of the things that you have said. "Of course you need oversight" - no, you don't. This isn't theory or dogma, this is how some of the most successful companies in the world work. You may prefer "oversight", but you don't "need" it. The protection that you are implicitly worrying about comes from CI itself. In Continuous Delivery you don't get to release your changes if any test fails. So the "oversight" is automated: a test fails, your change doesn't get to production. I have worked on big, complex systems that were developed this way and had almost no bugs. The financial exchange that we built this way was in production, and in heavy use, for over 13 months before the first defect was noticed by a user.
Finally, have you tried working in the way that I describe here? I have tried both approaches, so I don't think that you can claim that I am being dogmatic. I have tried both and am reporting what I, and the research data, say is the more effective strategy.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@ellingtonjp As I said to @Aman S., as long as you don't fix the causes of the problem, you are stuck with the problem, and I think that this is a big problem for the effectiveness and efficiency of a team. However difficult, I'd try to find ways to improve the situation. If the team is remote, but the timezones overlap for some time, pair during that time, and teach them to be more responsible. If they are literally on the other side of the world, then bring team members to you, or travel to them, to try to do the same thing. I have done all of these things with remote teams in the past, and I am not very convinced that it is possible to make off-shore working work effectively without them.
1
-
Good argument!
I think of myself as a somewhat idealistic pragmatist. I think that, while there is no "one true way", there are ideas that we should rule out because they are dumb (the idealism part), but that humans are fallible, non-rational and biologically programmed to jump to conclusions based on guesswork rather than apply science, rationality and maths (the pragmatism part).
So programming is not maths, because maths is too hard for most of us to do well, and it is a social activity that we need to adapt to be "easy enough" for most of us to do well.
One of the problems with programming is that it is such a slippery slope. You can teach young children to write simple code, but it doesn't take much to break simple code. To build systems that lots of people can use for important things is world-class difficult. I don't mind that, in fact I kind of like that it is difficult, but I think it is a mistake to always be looking for trivial answers, when sometimes the answers are hard.
There is no way to make concurrency simple! You can limit how damaging it can be by adopting certain disciplines or approaches, but it is always a world-class difficult problem. Information in different places, changing, is up there with quantum physics (in fact it may be the same problem) in my view.
So I think it important that any programming paradigm should, ideally, be helping to protect us from some of the more damaging excesses of the slippery slope of programming. Also, there is no simple "XX is best" answer, ever.
1
-
Yes, feature branching is not CI, and the data says that feature branching doesn't work as well when measured by "Stability", which is a measure of the quality of the software that we produce (based on defect rate and mean time to recover from a defect), and "Throughput", which measures the efficiency with which we can produce software at that quality (based on the time from commit to releasable and the frequency of deployment into production).
Which is a bit less "OK".
Show me some similar data that explains the benefits of feature branching and we have room for a good debate.
I am not trying to be argumentative here, but my thing, the focus of this channel, is to figure out how we can start to think more like engineers. For that we need measurements that allow us to figure out which are the good ideas, worth pursuing, and which are not. I think that we are bad at getting rid of bad ideas. I think that long-lived (longer than a day) feature branches are a candidate for an approach to development, as is CI, but when we look at the data, FB doesn't do very well against CI, so we should, if we are being rational, prefer CI until we find something that works better.
You can read more about the data that my CI conclusion is based on in the Accelerate book: https://amzn.to/2YYf5Z8
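Just to make those two measures concrete, here is a sketch of how they could be computed from deployment and incident records; the field names and data shape are invented for this illustration, they are not how the State of DevOps research gathers its data (that is survey-based):
from datetime import timedelta

# Stability: how often changes fail, and how quickly we recover when they do.
def change_failure_rate(deployments):
    return sum(1 for d in deployments if d["caused_incident"]) / len(deployments)

def mean_time_to_restore(incidents):
    total = sum((i["restored_at"] - i["started_at"] for i in incidents), timedelta())
    return total / len(incidents)

# Throughput: how quickly and how often changes reach production.
def mean_lead_time(deployments):
    total = sum((d["deployed_at"] - d["committed_at"] for d in deployments), timedelta())
    return total / len(deployments)

def deployment_frequency(deployments, period_days):
    return len(deployments) / period_days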
1
-
1
-
1
-
1
-
1
-
1
-
The QA team often, not always, wrote the top level script for acceptance tests, this was written as BDD scenarios in the language of the problem domain. So they were relatively easy to write. Devs wrote, and maintained everything else about tests, unit tests, performance tests, as well as the plumbing that makes it easy for anyone, including QAs, to write acceptance tests.
The usual ratio on teams in our org was 1 QA to 4-6 devs, but it is important to re-state: testing was NOT a QA's responsibility. Testing is the team's responsibility, and some parts of it, sometimes, a QA could help with. If the QA was busy when the team needed acceptance tests, someone else on the team wrote the acceptance tests.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
My point is, I think, a bit more technical than that. If you want to test something, then to test it well, you MUST control the variables. This is generic; it is not even only about software.
If you rely on E2E testing then you relax control of the variables, and so the quality of the testing is reduced. So E2E testing is a less accurate, more complex way to test things.
There are other ways to do a better job, and that is what I am describing. This is not about "stakeholder interest" really, though that certainly informs our testing, this is about verifying that our SW does what it needs to do, and to do that, we need to exert control over it when we are testing it. The more control we can exert, the more sure we are that the results are meaningful. This may be an "ideal situation", but not in the sense that it never happens, only in the sense that this is the best way to do things. I and many others have done this many times for real-world, complex, systems. It works and it works better than any of the alternatives that I have seen or tried.
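Here is a small Python sketch of what "controlling the variables" can look like in practice; the SessionChecker example and the injected clock are invented for illustration:
# The collaborator (here, a clock) is injected, so a test decides exactly what 'now' is
# instead of depending on the real environment, as an end-to-end test would.
class SessionChecker:
    def __init__(self, clock):
        self._clock = clock

    def is_expired(self, started_at, ttl_seconds):
        return self._clock() - started_at > ttl_seconds

# In the test, time is fully under our control, so the result is deterministic.
fixed_now = 1_000_000
checker = SessionChecker(clock=lambda: fixed_now)
assert checker.is_expired(started_at=fixed_now - 120, ttl_seconds=60)
assert not checker.is_expired(started_at=fixed_now - 30, ttl_seconds=60)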
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@thescourgeofathousan Sure you can do it badly, but you can do anything badly. I have consulted with many companies that have implemented the strategy that you describe. Their commonest release schedule was measured in months because of the difficulty of assembling a collection of pieces that work together. They have thrown out all the advantages of CI.
Sure, different collections of components, services, sub-systems have different levels of coupling, so I think that the best strategy is to define your scope of evaluation to align with "independently deployable units of software" and do CD at that level.
I am sorry, I don't mean to be rude, but I don't buy the idea that "citing Google is a call to authority". I don't put Google on a pedestal; we are talking about ways of working, and you said "The best way to manage relationships between SW elements is via CICD pipelines that trigger each other due to events that happen to each related element."
Google is a real-world example of not doing that, and of succeeding at a massive scale. So by what measure do you judge "best"? Not scale, because you reject my Google example. Maybe not speed either, because I can give you examples of orgs working much faster than you can with your strategy - Tesla can change the design of the car, and the factory that produces it, in under 3 hours - but is that a call to authority too?
How about quality? The small team that I led built one of, if not the, highest-performance financial exchanges in the world. We could get the answer to "is our SW releasable?" in under 1 hour for our entire enterprise system, for any change whatever its nature, and we were in prod for 13 months and 5 days before the first defect was noticed by a user.
Finally there is data: read the State of DevOps reports and the Accelerate book. They describe the most scientifically justifiable approach to analysing performance in our industry, based on over 33k respondents so far. They measure Stability and Throughput, and can predict outcomes, like whether your company will make more money or not, based on its approach to SW dev. They say that if you can't determine the releasability of your SW at least once per day, then your SW will statistically be of lower quality (measured by Stability) and you will produce it more slowly (measured by Throughput).
If your system is small and very simple, it is possible that you can build a system like you described, with chained pipelines, that can answer the question "is my change releasable?" for everyone on the team once per day. But I don't believe that this approach scales to SW more complex than the very simple while still achieving that. I have not seen it work so far. The longer it takes to get that answer, the more difficult it is to stay on top of failures and to understand what "the current version" of your system is.
I really don't mean to be rude, but this really isn't the "best way"; at best it is "sometimes survivable", in my experience. The best way that I have seen working so far is to match the scope of evaluation to "independently deployable units of software", and the easiest way to do that is to have everything that constitutes that "deployable unit" in the same repo, whatever its scale.
1
-
1
-
Actually you are wrong; that is exactly how the Apollo programme progressed. It was an incremental process of discovery. It started with Mercury (can you get a man into space?), then Gemini (can you orbit? can you dock two spacecraft? can you space-walk - do spacesuits work?), and then Apollo (can you get 3 people into orbit? can you get to the moon and back? can you get to the moon and back carrying a spaceship that can land on the moon? can you land on the moon?).
Another interesting version is the Ranger programme (part of the Apollo programme, to see if you could hit the moon with a spaceship) - watch this from about 27:40: https://youtu.be/ephM9Nw9pAA?si=IvNPYhNilP63uXIe
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
It is an interesting myth in our industry that this is a young person's profession. The reason that most people are young is that our profession has grown so fast. There are lots of people over 50 writing very good software, but they are in a tiny minority, because when we were in our 20s the whole industry was a lot smaller.
Speaking as someone in that 'old-programmer' category, what you are good at certainly changes as you grow older. My memory for some things is worse; I used to know every feature of the languages and tools that I used. Now I have to look more of those things up if I don't use them regularly. That is a function of my memory getting a bit worse, but also of the growth in complexity of tools and languages. My experience is much broader now, though, so that I am confident that I can write software to solve any problem solvable with software, and do a decent job, because I feel that I know what the fundamental principles are, and trust myself to be able to work through a problem. That doesn't mean that I claim to know all the answers, but I do know how to go about finding the answers. I can design bigger, more complex systems than I used to be able to, because I am much better at design now, and know what it takes to evolve a great design.
I think that I have gained a more holistic view of software development over the years.
So I wouldn't worry that this is a limited-time career.
Having said that, at some point AI will be better at this stuff than us humans, but we will have more to worry about than just job-security at that point.😳
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
I think that it is an understandable, but wrong, response to the complexity of things. But in my experience of working in and around other heavily regulated industries (Finance and Healthcare mostly), more bureaucracy doesn't lead to better outcomes. Margaret Hamilton, head of development for the NASA flight control systems, and inventor of the idea of software engineering, said that NASA went from lax and uninterested in software to bureaucratic overkill. The really innovative, and safe, work was done while they didn't think software was important. These days SpaceX, also heavily regulated and closer to the defence industries, operates a full-blown Continuous Delivery process with TDD, Trunk-Based Development and much of the stuff that I generally recommend, to great success: they launch more rockets and more payload into space than any other group, ever!
There are some parts of the US Air Force that are doing CD for fighter jet software, and the US Army operates a CD programme too.
It seems to me that the problem is more about the culture in particular groups than the domain, or even the regulatory framework in which they operate.
Often, if you are stuck in one culture, another seems so alien as to be impossible or inapplicable - Kent Beck's "Forest and Desert":
https://tidyfirst.substack.com/p/forest-and-desert
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@diederickbrauer3938 My point is that if you work so that your software is always in a releasable state, in terms of production readiness, not necessarily functional completeness, then your software will, er, always be releasable.
This is extrapolating too far, but what it looks like to me is that they started out fixing scope, then panicked as that dragged on and on, and ended up fixing time. The crunch for over a year, and the quality-cutting that went into the release, all point to that.
Waterfall is not inevitable, it is a choice and a bad one. I have worked on many projects of comparable scale, and worked, as a consultant, with many teams much larger than this. Waterfall is not a scalable approach, if you want to scale you have to distribute. If you want to control a release, you have to work incrementally.
Here is one of my takes on scaling: https://youtu.be/cpVLzcjCB-s
1
-
1
-
1
-
1
-
1
-
1
-
@warvariuc Sure, I think that you are right, we are all human after all. I think that Science and Engineering are primarily there to protect us from the often fairly severe limits of our humanity. We all give deference to the opinions of experts, and to some extent we should value the opinions of experts more highly than those of less well-informed people, but the healthy, scientific mindset is to still always evaluate your own understanding against what they are saying. I went to see physicist Brian Cox talking about Cosmology this week. This is a subject that I am very interested in, but very far from being an expert in. He said some things about the Inflation field before the Big Bang that I didn't know, but they fit with everything that I did know, and although staggering (the distance between two points in that field doubled every 10 to the -35 seconds!!!! 🤯), I was wowed. But if he had turned around and said "the Earth is flat" I would have said "Blimey, Brian Cox has gone mad"; I wouldn't have assumed that he was correct. Given that he is well known as someone smart and thoughtful, and an expert in Physics, I may have paused to check that I hadn't misunderstood what he was saying, but then "Brian's barmy!". Healthy scepticism matters, whatever the source of the idea.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
I guess I am more of a Popperian. I don't think that corroboration is on the same level as falsification, and so while I agree that good TDD is about falsification, I think that bad unit testing is about attempts to prove an implementation good.
The difference, for me, is that if all my tests pass, I don't assume that my system works, but I know that it works to the degree to which I thought of ways that I could check that it works. So that corroboration is valuable, but is not definitive. When a test fails there are two reasons. My test was wrong, or it is falsifying my system. Both of these are good things to know, and to explore. So, for me, Popper wins 😉
1
-
1
-
1
-
1
-
1
-
1
-
1